CN109961426B - Method for detecting skin of human face - Google Patents


Info

Publication number
CN109961426B
CN109961426B (application CN201910181670.0A)
Authority
CN
China
Prior art keywords
small
skin
image block
color
low
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910181670.0A
Other languages
Chinese (zh)
Other versions
CN109961426A (en)
Inventor
卢朝阳 (Lu Zhaoyang)
黄舒婷 (Huang Shuting)
李静 (Li Jing)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xidian University
Original Assignee
Xidian University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xidian University filed Critical Xidian University
Priority to CN201910181670.0A priority Critical patent/CN109961426B/en
Publication of CN109961426A publication Critical patent/CN109961426A/en
Application granted granted Critical
Publication of CN109961426B publication Critical patent/CN109961426B/en

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis › G06T 7/0002 Inspection of images, e.g. flaw detection › G06T 7/0012 Biomedical image inspection
    • G06T 7/10 Segmentation; Edge detection › G06T 7/11 Region-based segmentation
    • G06T 7/10 Segmentation; Edge detection › G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/40 Analysis of texture › G06T 7/41 Analysis of texture based on statistical description of texture
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement › G06T 2207/20 Special algorithmic details › G06T 2207/20081 Training; Learning
    • G06T 2207/20 Special algorithmic details › G06T 2207/20084 Artificial neural networks [ANN]
    • G06T 2207/30 Subject of image; Context of image processing › G06T 2207/30004 Biomedical image processing › G06T 2207/30088 Skin; Dermal

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Radiology & Medical Imaging (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for detecting the skin condition of a human face. The method is implemented in the following steps: (1) establish a human face skin sample library; (2) acquire a training sample set and a sample set to be identified; (3) extract texture features and RGB color information from the small color image blocks; (4) train a VGG16 classifier with the training sample set; (5) classify the small color image blocks in the sample set to be identified; (6) count the proportion of each category in the color skin image. The invention improves the accuracy of detecting complicated and variable round skin problems such as acne, spots, moles and large pores, can analyze the severity of each skin problem, and improves the reliability of facial skin-quality analysis. The method can be ported to mobile phones, so that the skin condition can be checked conveniently and intuitively.

Description

Method for detecting skin of human face
Technical Field
The invention belongs to the technical field of image processing, and further relates to a method for detecting facial skin in the technical field of image detection. By detecting the content of large pores, acne, spots, wrinkles and moles in a facial skin image, the invention can give a comprehensive skin-quality analysis result, so that the skin condition can be known intuitively.
Background
In recent years, with the improvement of living standards and the rapid development of medical cosmetology, people pay more and more attention to the health of their skin. For skin care, a reasonable evaluation of the facial skin must first be made. The main facial skin problems include wrinkles, spots, acne, large pores and moles. These problems attract the most attention and have clear identification criteria in medicine; their types can be judged by the naked eye, but judgments of their severity are strongly affected by subjective observation. At present, facial skin detection products at home and abroad fall mainly into two classes: 1. metal-probe skin detectors based on bioelectrical impedance measurement, which are cheap but have a single detection function and low precision; 2. mechanical detectors based on image processing, which are fully functional but expensive, bulky and unsuitable for carrying. In conclusion, it is feasible to evaluate skin conditions with an image-analysis method.
A facial skin quality evaluation method is disclosed in the patent document "A facial skin quality evaluation method" filed by Northeastern University (application No. 201810698035.5, publication No. CN108932493A). First, a face image is acquired and converted to gray scale, face coordinates are obtained, and the face image is segmented; then preprocessing with uniform-brightness and Gamma correction improves the contrast of the image, and skin texture features are extracted as LBP features; finally, the skin is classified with a support vector machine (SVM) classifier, and the proportions of acne and wrinkles in the face are counted to obtain the analysis result. Although this method can detect acne and wrinkle problems, it has the following shortcoming: it extracts only texture features of the skin and no color features, so complicated and variable round skin problems, such as spots, acne, large pores and moles, are difficult to distinguish.
Wu Liang discloses a method for identifying and detecting the skin type and skin problems of a human face in the patent document "A method for identifying and detecting skin type and skin problems based on facial image recognition" (application No. 201410537110.1, publication No. CN104299011A). First, a face photo is input for face recognition and divided into 20 facial image blocks, and hair and skin are identified in each block with a local adaptive threshold method and connected-component analysis; then skin color and skin greasiness are calculated in the Lab color space, and a skin smoothness value and the skin problems are calculated with the gray-level co-occurrence matrix method, the skin problems covering six types, including acne, red blood streaks, moles, freckles and large pores; finally, the skin attributes and skin problems in the facial image blocks are classified with a support vector machine (SVM) classifier, and the analysis result is obtained from the classification types. Although this method can identify and detect the skin color, greasiness and skin problems of the face, it has the following shortcoming: facial skin problems are complex, and the method detects only whether a skin problem exists, not its severity, so it lacks flexibility and adaptability.
Disclosure of Invention
The invention aims to provide a method for detecting the facial skin condition that overcomes the above shortcomings of the prior art. The invention improves the accuracy of detecting complicated and variable round skin problems such as acne, spots, moles and large pores, can analyze the severity of each skin problem, and improves the reliability of facial skin-quality analysis.
The specific idea for realizing the purpose of the invention is as follows: a ground-truth label for a whole facial skin image is easily influenced by subjective factors and can hardly reflect the severity of skin problems. The acquired facial skin image is therefore divided into small color image blocks; a classifier is then trained on and used to detect these blocks, and the analysis of the facial skin image is finally realized by counting the proportion of each category in the color skin image. In the feature-extraction stage, each small color image block is converted from the RGB color space to the HSV color space, the S channel is separated out, and wavelet decomposition is performed on the S channel to obtain its low-frequency component, which enhances the contrast of the S-channel small image block. The low-frequency small image block is binarized with the maximum between-class variance (Otsu) method to obtain the texture features of the skin image, and the binarized image is multiplied by the corresponding RGB color image to obtain the RGB color information of the texture area. In the training and detection stage, a VGG16 classifier is trained on the extracted texture and color features, and the trained classifier is used to detect the small color image blocks.
The method comprises the following steps:
(1) establishing a human face skin sample library:
(1a) acquiring color skin images of at least 100 persons through a high-definition camera, wherein each person acquires 5 parts of a face and 10 images of each part;
(1b) dividing each color skin image into 100 small color image blocks;
(1c) dividing the skin states into 6 types, wherein each small color image block corresponds to one skin state;
(1d) traversing each small color image block, and determining the type of the skin state corresponding to each small color image block;
(1e) forming a human face skin sample library by all the small color image blocks and the corresponding types of each small color image block;
(2) acquiring a training sample set and a sample to be identified:
(2a) randomly selecting 70% of small color image blocks and corresponding types thereof from a human face skin sample library to form a training sample set;
(2b) forming a sample set to be identified by all the residual small color image blocks in the human face skin sample library;
(3) extracting texture features and RGB color information of the small color image blocks:
(3a) converting each small color image block from an RGB color space to an HSV color space by using a conversion formula, and separating an S channel from the HSV color space;
(3b) performing wavelet decomposition on the S channel by using a wavelet decomposition formula of the low-frequency image to obtain a low-frequency small image block corresponding to each color small image block;
(3c) calculating the maximum inter-class variance of each low-frequency small image block by using a maximum inter-class variance formula;
(3d) taking a segmentation threshold corresponding to the maximum inter-class variance as an optimal threshold;
(3e) judging whether the optimal threshold of each low-frequency small image block is larger than the average gray value of the block; if so, judging that the block contains hair and executing step (3f), otherwise judging that it does not contain hair and executing step (3g);
(3f) adding 40 to the optimal threshold of each low-frequency small image block to obtain an updated optimal threshold, and performing binarization processing on the low-frequency small image blocks containing hairs by using the updated optimal threshold to obtain binary small image blocks without skin and hair interference;
(3g) carrying out binarization processing on each low-frequency small image block without hair;
(3h) taking the distribution of white pixel points in each binary small image block as the texture characteristics of the corresponding color small image block;
(3i) multiplying each binarized small image block by the corresponding RGB color small image block to obtain RGB color information of the color small image block;
(4) training the VGG16 classifier:
(4a) using the VGG16 classification model without the top layer as a classifier;
(4b) inputting texture features and RGB color information of all the small color image blocks in the training sample set and the corresponding category of each small color image block into a classifier for training to obtain a trained VGG16 classifier;
(5) classifying the small color image blocks in the sample set to be identified:
simultaneously inputting texture characteristics and RGB color information of each small color image block in a sample set to be recognized into a trained VGG16 classifier for classification to obtain a classification result of each small color image block;
(6) counting the proportion of various categories in the color skin image:
counting the proportion of the number of the small color image blocks in each category to the total number of the small color image blocks.
Compared with the prior art, the invention has the following advantages:
First, each binarized small image block is multiplied by the corresponding RGB color small image block to obtain the RGB color information of the block, and the extracted features are trained with the VGG16 classification model of a convolutional neural network. This overcomes the prior-art shortcoming that complicated and variable round skin problems are hard to distinguish for lack of color features, so that spots, acne, large pores and moles can all be detected.
Second, each color skin image is divided into 100 small color image blocks, and the ratio of the number of blocks in each category to the total number of blocks is counted. This overcomes the prior-art shortcoming that the severity of skin problems is not detected, and the skin condition can be presented intuitively.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a schematic diagram of the present invention for collecting samples of a human face skin area;
fig. 3 is a flow chart of extracting texture features and RGB color information of a color small image block according to the present invention.
Detailed description of the preferred embodiments
The invention is further described below with reference to the accompanying drawings.
The specific steps of the method of the present invention are described below with reference to FIG. 1:
step 1, establishing a human face skin sample library.
Color skin images of at least 100 persons are acquired with a high-definition camera.
The process of collecting 5 parts of each person's face, excluding the facial features, is further described with reference to fig. 2.
Fig. 2(a) is an overall schematic view of the 5 collected parts of the face; fig. 2(b) shows collection of the forehead, fig. 2(c) the upper part of the right cheek, fig. 2(d) the upper part of the left cheek, fig. 2(e) the lower part of the right cheek, and fig. 2(f) the lower part of the left cheek.
10 images are collected at each part; during collection the light source and shooting distance are kept consistent and the camera is kept steady.
Each color skin image is divided into 100 small color image blocks, and the skin states are divided into 6 types: normal skin, large pores, acne, spots, wrinkles and moles. Each small color image block corresponds to one skin state; every small color image block is traversed to determine the type of skin state it corresponds to.
All the small color image blocks and the type corresponding to each block form the human face skin sample library.
Step 2, acquiring a training sample set and a sample set to be identified.
70% of the small color image blocks, with their corresponding types, are randomly selected from the human face skin sample library to form the training sample set, and the remaining small color image blocks form the sample set to be identified.
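As an illustrative sketch only (not the patent's exact procedure), the random 70%/30% split can be implemented as follows; the `(block_id, label)` tuples are placeholders for real image blocks and their skin-state types:

```python
import random

def split_samples(samples, train_frac=0.7, seed=0):
    """Randomly split (sample, label) pairs into a training set holding
    train_frac of the data and a to-be-identified set with the rest."""
    rng = random.Random(seed)          # fixed seed for a reproducible split
    shuffled = samples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# 100 dummy (block_id, label) samples standing in for the sample library
samples = [(i, i % 6) for i in range(100)]
train_set, test_set = split_samples(samples)
print(len(train_set), len(test_set))  # 70 30
```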
Step 3, extracting texture features and RGB color information of the small color image blocks.
The process of extracting the texture features and RGB color information of a small color image block is further described with reference to fig. 3.
Step 1: input a small color image block.
Step 2: convert each small color image block from the RGB color space to the HSV color space with the conversion formula below, and separate out the S channel from the HSV color space.
S(i, j) = [max(r, g, b) − min(r, g, b)] / max(r, g, b)
wherein S(i, j) represents the saturation channel value of the HSV image pixel with abscissa i and ordinate j, max and min represent the maximum-value and minimum-value operations, and r, g and b represent the values of the red, green and blue channels of the RGB color space, with r, g, b ∈ {0, 1, …, 255}.
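A minimal numpy sketch of the saturation-channel extraction described above; the example pixel values are illustrative only, and S = 0 is assumed for pure black pixels (the usual HSV convention when max(r, g, b) = 0):

```python
import numpy as np

def saturation_channel(rgb):
    """Per-pixel HSV saturation S = (max(r,g,b) - min(r,g,b)) / max(r,g,b)
    for an H x W x 3 uint8 RGB image; S = 0 where max(r,g,b) = 0."""
    x = rgb.astype(np.float64)
    cmax = x.max(axis=2)
    cmin = x.min(axis=2)
    s = np.zeros_like(cmax)
    nz = cmax > 0
    s[nz] = (cmax[nz] - cmin[nz]) / cmax[nz]
    return s

# One skin-like pixel and one black pixel (illustrative values)
block = np.array([[[200, 150, 120], [0, 0, 0]]], dtype=np.uint8)
s = saturation_channel(block)
print(s[0, 0], s[0, 1])  # (200 - 120) / 200 = 0.4, and 0.0
```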
Step 3: perform wavelet decomposition on the S channel with the wavelet decomposition formula of the low-frequency image to obtain the low-frequency small image block corresponding to each small color image block.
D_j = L_r L_c C_j
wherein D_j represents the low-frequency image after the jth wavelet decomposition, L represents the one-dimensional low-pass mirror wavelet filter operator, the subscripts r and c indicate filtering along the rows and columns of the S-channel image, and C_j represents the S-channel image before the jth wavelet decomposition.
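The wavelet step keeps only the low-frequency approximation D_j. As a hedged sketch, assuming the Haar filter as the one-dimensional low-pass operator L and a simple averaging normalisation (an illustrative choice, not necessarily the patent's exact filter), one decomposition level can be written as:

```python
import numpy as np

def haar_lowpass(img):
    """One level of the separable low-pass (approximation) part of a
    Haar wavelet decomposition: the 1-D filter is applied first along
    the columns (L_c) and then along the rows (L_r)."""
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
    x = img[:h, :w].astype(np.float64)
    cols = (x[0::2, :] + x[1::2, :]) / 2.0        # filter along columns
    return (cols[:, 0::2] + cols[:, 1::2]) / 2.0  # filter along rows

img = np.arange(16, dtype=float).reshape(4, 4)
low = haar_lowpass(img)
print(low)  # 2x2 approximation: [[2.5, 4.5], [10.5, 12.5]]
```

Each output pixel is the mean of a 2×2 neighbourhood, which halves the resolution and smooths high-frequency noise, matching the contrast-enhancement role described above.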
Step 4: calculate the maximum between-class variance of each low-frequency small image block with the maximum between-class variance formula, and take the segmentation threshold corresponding to the maximum between-class variance as the optimal threshold.
g_T = w_0(u_0 − u)^2 / (1 − w_0)
wherein T represents the segmentation threshold of the image, which divides the low-frequency small image block into a target part and a background part according to pixel value; g_T represents the between-class variance of the low-frequency small image block under threshold T; w_0 represents the ratio of the number of target pixels to the total number of pixels in the block; u_0 represents the average gray level of the target pixels; and u represents the average gray level of the low-frequency small image block.
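A minimal sketch of the exhaustive threshold search, using the single-class form of the between-class variance, g_T = w0·(u0 − u)² / (1 − w0), which is algebraically equivalent to the standard Otsu criterion; the toy block and the "pixels below T are the target" convention are illustrative assumptions:

```python
import numpy as np

def otsu_threshold(gray):
    """Exhaustive search for the threshold T maximising the
    between-class variance g_T = w0*(u0 - u)**2 / (1 - w0),
    where pixels below T form the 'target' class."""
    flat = gray.ravel().astype(np.float64)
    total = flat.size
    u = flat.mean()                      # mean gray of the whole block
    best_t, best_g = 0, -1.0
    for t in range(1, 256):
        target = flat[flat < t]
        w0 = target.size / total
        if w0 == 0.0 or w0 == 1.0:       # one class empty: variance undefined
            continue
        u0 = target.mean()
        g = w0 * (u0 - u) ** 2 / (1.0 - w0)
        if g > best_g:
            best_t, best_g = t, g
    return best_t

# A clearly bimodal toy block: dark "target" pixels vs. bright background
gray = np.array([10] * 50 + [200] * 50, dtype=np.uint8)
t = otsu_threshold(gray)
print(t)  # 11: the first threshold that cleanly separates the two modes
```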
Step 5: judge whether the optimal threshold of each low-frequency small image block is larger than the average gray level of that block; if so, judge that the block contains hair and execute step 6, otherwise judge that it does not contain hair and execute step 7.
Step 6: add 40 to the optimal threshold of each hair-containing low-frequency small image block to obtain an updated optimal threshold, and binarize those blocks with the updated threshold to obtain binary small image blocks free of hair interference.
Step 7: binarize each low-frequency small image block that contains no hair, using its optimal threshold.
Step 8: multiply each binarized small image block by the corresponding RGB color small image block to obtain the RGB color information of the small color image block.
A binarized small image block consists of only black and white: black pixels have value 0 and white pixels have value 1, while the RGB color small image block has pixel values from 0 to 255. When each binarized small image block is multiplied by its corresponding RGB color small image block, black pixels multiplied by the corresponding color pixels remain black, and white pixels multiplied by the corresponding color pixels keep their color.
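The mask-by-color multiplication of step 8 can be sketched as follows, assuming the binary block stores 0/1 values as described above (the pixel values are illustrative):

```python
import numpy as np

def masked_color(binary, rgb):
    """Multiply a 0/1 binary texture mask by the matching RGB block:
    black (0) pixels stay black, white (1) pixels keep their colour."""
    return rgb * binary[:, :, np.newaxis]

binary = np.array([[0, 1]], dtype=np.uint8)                     # texture mask
rgb = np.array([[[10, 20, 30], [40, 50, 60]]], dtype=np.uint8)  # colour block
out = masked_color(binary, rgb)
print(out.tolist())  # [[[0, 0, 0], [40, 50, 60]]]
```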
Step 4, training the VGG16 classifier with the training sample set.
The VGG16 classification model without the top (fully connected) layers is used as the classifier; its network structure is shown in Table 1. The texture features and RGB color information of all small color image blocks in the training sample set, together with the category of each block, are input simultaneously into the classifier for training to obtain a trained VGG16 classifier.
Step 5, classifying the small color image blocks in the sample set to be identified.
The texture features and RGB color information of each small color image block in the sample set to be identified are input simultaneously into the trained VGG16 classifier for classification to obtain the classification result of each block.
Step 6, counting the proportion of each category in the color skin image.
The ratio of the number of small color image blocks in each category to the total number of blocks is counted, and the proportion of each skin state is analyzed to give an overall assessment of the facial skin: the higher the proportion of the normal-skin state, the better the facial skin; the higher the proportions, and the more numerous the kinds, of the other skin states, the more serious the facial skin problems.
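The statistics of step 6 amount to counting labels; a minimal sketch, where the category names and label counts are illustrative placeholders for the 6 skin states:

```python
from collections import Counter

CATEGORIES = ("normal", "pores", "acne", "spots", "wrinkles", "moles")

def category_proportions(labels, categories=CATEGORIES):
    """Ratio of the number of blocks in each category to the total."""
    counts = Counter(labels)
    total = len(labels)
    return {c: counts.get(c, 0) / total for c in categories}

# e.g. 100 classified blocks from one colour skin image
labels = ["normal"] * 80 + ["acne"] * 15 + ["spots"] * 5
props = category_proportions(labels)
print(props["normal"], props["acne"], props["spots"])  # 0.8 0.15 0.05
```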
Table 1 VGG16 network structure table without top layer
Network layer Feature dimension
Input layer (100×100 RGB color image)
Convolutional layer Block1Conv1 100×100×64
Convolutional layer Block1Conv2 100×100×64
Pooling layer Block1Pool 50×50×64
Convolutional layer Block2Conv1 50×50×128
Convolutional layer Block2Conv2 50×50×128
Pooling layer Block2Pool 25×25×128
Convolutional layer Block3Conv1 25×25×256
Convolutional layer Block3Conv2 25×25×256
Convolutional layer Block3Conv3 25×25×256
Pooling layer Block3Pool 13×13×256
Convolutional layer Block4Conv1 13×13×512
Convolutional layer Block4Conv2 13×13×512
Convolutional layer Block4Conv3 13×13×512
Pooling layer Block4Pool 7×7×512
Convolutional layer Block5Conv1 7×7×512
Convolutional layer Block5Conv2 7×7×512
Convolutional layer Block5Conv3 7×7×512
Pooling layer Block5Pool 4×4×512
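As a consistency check, the feature-map side lengths in Table 1 (100, 50, 25, 13, 7, 4) follow from five 2×2, stride-2 max-pooling stages, assuming ceil rounding (i.e. "same"-style pooling); a small sketch under that assumption:

```python
import math

def vgg16_spatial_sizes(input_size=100, num_pools=5):
    """Side length of the feature map after each 2x2 stride-2 pooling
    stage, with ceil rounding at odd sizes."""
    sizes = [input_size]
    for _ in range(num_pools):
        sizes.append(math.ceil(sizes[-1] / 2))
    return sizes

sizes = vgg16_spatial_sizes()
print(sizes)  # [100, 50, 25, 13, 7, 4]
```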

Claims (6)

1. A method for detecting the skin of a human face, characterized in that texture features and RGB color information of small color image blocks are extracted and a VGG16 classifier is trained with a training sample set, the method specifically comprising the following steps:
(1) establishing a human face skin sample library:
(1a) acquiring color skin images of at least 100 persons through a high-definition camera, wherein each person acquires 5 parts of a face and 10 images of each part;
(1b) dividing each color skin image into 100 small color image blocks;
(1c) dividing the skin states into 6 types, wherein each small color image block corresponds to one skin state;
(1d) traversing each small color image block, and determining the type of the skin state corresponding to each small color image block;
(1e) forming a human face skin sample library by all the small color image blocks and the corresponding types of each small color image block;
(2) acquiring a training sample set and a sample to be identified:
(2a) randomly selecting 70% of small color image blocks and corresponding types thereof from a human face skin sample library to form a training sample set;
(2b) forming a sample set to be identified by all the residual small color image blocks in the human face skin sample library;
(3) extracting texture features and RGB color information of the color small image blocks:
(3a) converting each small color image block from an RGB color space to an HSV color space by using a conversion formula, and separating an S channel from the HSV color space;
(3b) performing wavelet decomposition on the S channel by using a wavelet decomposition formula of the low-frequency image to obtain a low-frequency small image block corresponding to each color small image block;
(3c) calculating the maximum inter-class variance of each low-frequency small image block by using a maximum inter-class variance formula;
(3d) taking a segmentation threshold corresponding to the maximum inter-class variance as an optimal threshold;
(3e) judging whether the optimal threshold of each low-frequency small image block is larger than the average gray value of the block; if so, judging that the block contains hair and executing step (3f), otherwise judging that it does not contain hair and executing step (3g);
(3f) adding 40 to the optimal threshold of each low-frequency small image block to obtain an updated optimal threshold, and performing binarization processing on the low-frequency small image blocks containing hairs by using the updated optimal threshold to obtain binary small image blocks without skin and hair interference;
(3g) carrying out binarization processing on each low-frequency small image block without hair;
(3h) taking the distribution of white pixel points in each binary small image block as the texture characteristics of the corresponding color small image block;
(3i) multiplying each binarized small image block by the corresponding RGB color small image block to obtain RGB color information of the color small image block;
(4) training the VGG16 classifier:
(4a) using the VGG16 classification model without the top layer as a classifier;
(4b) inputting texture features and RGB color information of all the small color image blocks in the training sample set and the corresponding category of each small color image block into a classifier for training to obtain a trained VGG16 classifier;
(5) classifying the small color image blocks in the sample set to be identified:
simultaneously inputting texture characteristics and RGB color information of each small color image block in a sample set to be recognized into a trained VGG16 classifier for classification to obtain a classification result of each small color image block;
(6) counting the proportion of various categories in the color skin image:
counting the proportion of the number of the small color image blocks in each category to the total number of the small color image blocks; the higher the proportion of the small image blocks in the normal skin state is, the better the skin of the human face is, and the higher the proportion of the small image blocks in the other 5 states is, the worse the skin of the human face is.
2. The method for detecting the skin type of the human face according to claim 1, wherein the 5 parts of the human face in the step (1a) are: the forehead, the upper and lower parts of the left cheek and the upper and lower parts of the right cheek except the five sense organs of the human face.
3. The method for detecting the skin types of the human faces according to claim 1, wherein the classifying the skin states into 6 types in the step (1c) refers to: normal skin, large pores, acne, spots, wrinkles, moles.
4. The method for detecting the skin type of the human face according to claim 1, wherein the conversion formula in the step (3a) is as follows:
S(i, j) = [max(r, g, b) − min(r, g, b)] / max(r, g, b)
wherein S(i, j) represents the saturation channel value of the HSV image pixel with abscissa i and ordinate j, max and min represent the maximum-value and minimum-value operations, and r, g and b represent the values of the red, green and blue channels of the RGB color space, with r, g, b ∈ {0, 1, …, 255}.
5. The method for detecting the skin type of the human face according to claim 1, wherein the wavelet decomposition formula of the low-frequency image in the step (3b) is as follows:
D_j = L_r L_c C_j
wherein D_j represents the low-frequency image after the jth wavelet decomposition, L represents the one-dimensional low-pass mirror wavelet filter operator, the subscripts r and c indicate filtering along the rows and columns of the S-channel image, and C_j represents the S-channel image before the jth wavelet decomposition.
6. The method for detecting the skin type of the human face according to claim 1, wherein the maximum between-class variance formula in the step (3c) is as follows:
g_T = w_0(u_0 − u)^2 / (1 − w_0)
wherein T represents the segmentation threshold of the image, which divides the low-frequency small image block into a target part and a background part according to pixel value; g_T represents the between-class variance of the low-frequency small image block under threshold T; w_0 represents the ratio of the number of target pixels to the total number of pixels in the block; u_0 represents the average gray level of the target pixels; and u represents the average gray level of the low-frequency small image block.
CN201910181670.0A 2019-03-11 2019-03-11 Method for detecting skin of human face Active CN109961426B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910181670.0A CN109961426B (en) 2019-03-11 2019-03-11 Method for detecting skin of human face

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910181670.0A CN109961426B (en) 2019-03-11 2019-03-11 Method for detecting skin of human face

Publications (2)

Publication Number Publication Date
CN109961426A CN109961426A (en) 2019-07-02
CN109961426B (en) 2021-07-06

Family

ID=67024131

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910181670.0A Active CN109961426B (en) 2019-03-11 2019-03-11 Method for detecting skin of human face

Country Status (1)

Country Link
CN (1) CN109961426B (en)

Families Citing this family (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112396573A (en) * 2019-07-30 2021-02-23 纵横在线(广州)网络科技有限公司 Facial skin analysis method and system based on image recognition
CN110956623B (en) * 2019-11-29 2023-11-07 深圳数联天下智能科技有限公司 Wrinkle detection method, wrinkle detection device, wrinkle detection equipment and computer-readable storage medium
CN113129250A (en) * 2019-12-27 2021-07-16 华为技术有限公司 Skin detection method and device, terminal equipment and computer storage medium
CN111401463B (en) * 2020-03-25 2024-04-30 维沃移动通信有限公司 Method for outputting detection result, electronic equipment and medium
CN112053344A (en) * 2020-09-02 2020-12-08 杨洋 Skin detection method system and equipment based on big data algorithm
CN112837304B (en) * 2021-02-10 2024-03-12 姜京池 Skin detection method, computer storage medium and computing device
CN113128375B (en) * 2021-04-02 2024-05-10 西安融智芙科技有限责任公司 Image recognition method, electronic device, and computer-readable storage medium
CN113554623A (en) * 2021-07-23 2021-10-26 江苏医像信息技术有限公司 Intelligent quantitative analysis method and system for human face skin
CN113723310B (en) * 2021-08-31 2023-09-05 平安科技(深圳)有限公司 Image recognition method and related device based on neural network
CN115119897A (en) * 2022-06-17 2022-09-30 上海食未生物科技有限公司 3D printing meat printing method and system
KR102495889B1 (en) * 2022-07-13 2023-02-06 주식회사 룰루랩 Method for detecting facial wrinkles using deep learning-based wrinkle detection model trained according to semi-automatic labeling and apparatus for the same
CN116993714A (en) * 2023-08-30 2023-11-03 深圳伯德睿捷健康科技有限公司 Skin detection method, system and computer readable storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106407915A (en) * 2016-08-31 2017-02-15 广州精点计算机科技有限公司 SVM (support vector machine)-based face recognition method and device
CN107680128A (en) * 2017-10-31 2018-02-09 广东欧珀移动通信有限公司 Image processing method, device, electronic equipment and computer-readable recording medium
CN107862695A (en) * 2017-12-06 2018-03-30 电子科技大学 A kind of modified image segmentation training method based on full convolutional neural networks

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9886758B2 (en) * 2016-03-31 2018-02-06 International Business Machines Corporation Annotation of skin image using learned feature representation


Also Published As

Publication number Publication date
CN109961426A (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN109961426B (en) Method for detecting skin of human face
CN106983493B (en) Skin image processing method based on three spectrums
Barata et al. A system for the detection of pigment network in dermoscopy images using directional filters
CN110097034A (en) A kind of identification and appraisal procedure of Intelligent human-face health degree
CN110363088B (en) Self-adaptive skin inflammation area detection method based on multi-feature fusion
CN110189383B (en) Traditional Chinese medicine tongue color and fur color quantitative analysis method based on machine learning
CN110210448B (en) Intelligent face skin aging degree identification and evaluation method
CN112396573A (en) Facial skin analysis method and system based on image recognition
CN104000593B (en) Skin test method
CN103034838A (en) Special vehicle instrument type identification and calibration method based on image characteristics
AU2020103260A4 (en) Rice blast grading system and method
KR20140112046A (en) Method and device for detecting and quantifying cutaneous signs on an area of skin
Al-Tarawneh An empirical investigation of olive leave spot disease using auto-cropping segmentation and fuzzy C-means classification
CN110070024B (en) Method and system for identifying skin pressure injury thermal imaging image and mobile phone
CN116849612B (en) Multispectral tongue picture image acquisition and analysis system
Achakanalli et al. Statistical analysis of skin cancer image–a case study
Sigit et al. Identification of leukemia diseases based on microscopic human blood cells using image processing
WO2006113979A1 (en) Method for identifying guignardia citricarpa
Patki et al. Cotton leaf disease detection & classification using multi SVM
CN107506713A (en) Living body faces detection method and storage device
CN110874572B (en) Information detection method and device and storage medium
Srinivasan et al. A probabilistic approach to segmentation and classification of neoplasia in uterine cervix images using color and geometric features
CN111709305A (en) Face age identification method based on local image block
Madhankumar et al. Characterization of skin lesions
DE112019004112T5 (en) SYSTEM AND PROCEDURE FOR ANALYSIS OF MICROSCOPIC IMAGE DATA AND FOR GENERATING A NOTIFIED DATA SET FOR TRAINING THE CLASSIFICATORS

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant