CN113486768A - Image recognition method for skin - Google Patents

Image recognition method for skin

Info

Publication number
CN113486768A
CN113486768A (application CN202110744338.8A)
Authority
CN
China
Prior art keywords
skin
image
typical
image recognition
training
Prior art date
Legal status
Pending
Application number
CN202110744338.8A
Other languages
Chinese (zh)
Inventor
林强
苗苗
Current Assignee
Chengdu Jiuzhang Lixin Technology Co ltd
Original Assignee
Chengdu Jiuzhang Lixin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Chengdu Jiuzhang Lixin Technology Co ltd
Priority to CN202110744338.8A
Publication of CN113486768A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/30 Information retrieval; Database structures therefor; File system structures therefor of unstructured textual data
    • G06F16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Abstract

The invention provides a skin-type image recognition method comprising the following steps. S1: establish a typical image library for different skin types. S2: acquire facial skin target images and build an identification image library; the target images comprise a white-light image, a polarized-light image and a UV-light image of each target area; target areas are obtained by extracting feature points at equal intervals along the contours of the hairline, eyebrows, nose, eyes, mouth and chin and dividing them into mutually non-intersecting triangles of 3 feature points each, each resulting triangular region being a target area. S3: establish a skin-type recognition model with a convolutional neural network algorithm and perform the corresponding image processing with the Otsu algorithm; then adopt ResNet50 as the training model and train the classification network by transfer learning. The invention can quickly identify skin types and skin defects, providing convenience for skin beautification, care and repair.

Description

Image recognition method for skin
Technical Field
The invention belongs to the field of computer technology, relates to image processing and recognition, and particularly relates to an intelligent image recognition method for skin.
Background
With the continuous development of artificial intelligence, it has become increasingly common to use big-data techniques to collect and analyze images and to identify or predict information from them as needed. Under this trend, image recognition technology has been widely applied in fields such as agricultural irrigation, weather forecasting, mechanical fault diagnosis, security and telemedicine.
Artificial intelligence has made progress in skin image recognition; the main models adopted are the convolutional neural network (CNN) and the support vector machine (SVM). A CNN can automatically extract and classify features when the sample size is sufficient, while an SVM can classify accurately from small samples once image features have been extracted by a computer-vision algorithm. Both have been widely applied to skin-disease diagnosis: for example, the Stanford University team's CNN-based skin-cancer diagnosis work was published in Nature in January 2017, and the Guangdong University of Technology team's SVM-based automatic labeling of pathological images was published in the Journal of Computer Research and Development in 2015. As clinical sample sizes grow, CNNs are becoming the more popular technique, but problems such as low computational efficiency, limited pixel-block size and large receptive fields in skin image segmentation still require deeper study.
In addition to these main model techniques, other approaches have also been studied and have achieved satisfactory results when applying artificial intelligence to the skin field. For example, patent CN201610085988.5 provides a method for segmenting affected areas in skin images, which combines a morphological closing operation to segment lesions in various skin images; patent CN202010860172.1 provides a skin-disease lesion segmentation method based on a deep convolutional neural network, which uses extensive dilated convolutions and a spatial pyramid structure to capture rich detail, acquire edge information better and obtain more accurate lesion segmentation; patent CN201811524821.X provides a deep-learning-based classification method for skin-disease lesion types, which can classify and diagnose seven kinds of skin lesions: melanoma, nevi, basal cell carcinoma, actinic keratosis, benign keratosis, dermatofibroma and hemangioma.
However, in the prior art, artificial intelligence techniques for skin-type identification are less developed than those for skin-disease identification. This is because the imaging criteria for diagnosing skin diseases are usually clear, with typical image features and a large number of clinical samples available for training. By contrast, there is no clear, uniform standard for judging skin type, nor are there professionally annotated image samples available for training, so research on artificial intelligence for skin-type identification has progressed relatively slowly.
Even so, there have been some encouraging research results in skin-type recognition. For example, patent CN201910181670.0 provides a method for detecting facial skin types, which trains on extracted features using a VGG16 convolutional-neural-network classification model and can detect spots, acne, pores and moles; patent CN201510130557.1 provides a quantitative skin-detection method based on facial image recognition, which can classify the neutral, dry, oily and combination skin types commonly distinguished in skin care; patent CN201910806679.6 provides an automatic skin-type identification method based on data augmentation and a Mask R-CNN model, which improves the efficiency and accuracy of skin-type identification; CN201610930032.0 provides a facial skin analysis system based on image recognition, which can analyze whiteness, roughness and the amount of mottling, letting people clearly see changes in their skin.
In the existing skin-type identification technology, full-face photographic detection struggles to evaluate microscopic skin problems, while dermoscope detection reflects only local problems of the skin, so the prior art lacks an effective overall method for skin-type identification. Although the results above improve the convenience of skin identification to some extent, the problems of the prior art have not been fully overcome.
A good skin-identification technology must accurately distinguish different skin types and provide a reliable reference for corresponding beauty and skin-care plans, and even for solving skin problems. In view of this, the present invention develops a technical solution that can satisfy these needs.
Disclosure of Invention
In view of the shortcomings of the prior art, the invention aims to provide a skin-type image identification method that can quickly and accurately identify the skin type and its problem defects, with recognition precision essentially reaching the level of a professional physician.
To achieve this purpose, the technical scheme provided by the invention is as follows:
a method of image recognition of skin type, the method comprising the steps of:
S1: establishing a typical image library according to set typical images corresponding to different skin characteristics; the skin characteristics comprise typical features of eight categories, namely sensitivity, moisture, oil, acne, sebum, pores, pigmentation and wrinkles;
S2: acquiring facial skin target images and establishing an identification image library; the target images comprise a white-light image, a polarized-light image and a UV-light image of each target area; the target areas are determined as follows: feature points are extracted at equal intervals along the contours of the hairline, eyebrows, nose, eyes, mouth and chin and divided into triangles of 3 feature points each, the triangles being mutually non-intersecting; the area inside each triangle is a target area;
S3: establishing a skin-type identification model from the typical image library and the identification image library by using a convolutional neural network algorithm; the images are processed sequentially with edge-preserving smoothing filtering and Wiener filtering, then with formula (1), and then with the Otsu algorithm; ResNet50 is then adopted as the training model, and the classification network is trained by transfer learning; during training, the softmax layer of the training network is replaced and its output is set to 8;
(1) [Formula (1) survives only as an image in the original document; from the variable definitions below it is an iterative neighborhood-diffusion update of each pixel.]
where a(x) is the diffusion-coefficient function, (i, j) denotes the pixel coordinates, D denotes the neighborhood of pixel (i, j), γ is a constant smoothing coefficient, n is the number of pixels in the neighborhood, and t denotes the iteration number.
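As a concrete illustration of the processing that formula (1) describes, the following minimal Python sketch implements a Perona-Malik-style anisotropic diffusion, which matches the variable definitions above (a diffusion-coefficient function a(x), a neighborhood D of n pixels, a smoothing constant γ, and an iteration count t). Since the formula itself survives only as an image, the exact coefficient function and neighborhood are assumptions; the Perona-Malik exponential coefficient is used here as a common choice.

```python
import numpy as np

def anisotropic_diffusion(img: np.ndarray, iterations: int = 10,
                          gamma: float = 0.2, kappa: float = 30.0) -> np.ndarray:
    """Iterative neighborhood-diffusion smoothing in the spirit of formula (1).

    gamma plays the role of the constant smoothing coefficient; the
    4-neighborhood stands in for D (so n = 4); kappa parameterizes the
    assumed Perona-Malik diffusion-coefficient function a(x).
    """
    img = img.astype(np.float64)
    for _ in range(iterations):  # t: iteration number
        # Differences toward the 4 neighbors of each pixel (i, j).
        # np.roll wraps at the borders, which is acceptable for a sketch.
        grads = [np.roll(img, shift, axis) - img
                 for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1))]
        # a(x) = exp(-(x / kappa)^2) damps diffusion across strong edges.
        img = img + (gamma / 4) * sum(np.exp(-(g / kappa) ** 2) * g
                                      for g in grads)
    return img
```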
Further, in step S1, the typical sensitivity features include features of red areas in the polarized-light images of the forehead, cheeks and nasal alae; the typical moisture features include the roughness of the skin texture and the area ratio of reflective regions in the white-light image; the typical oil features include the area ratio of reflective regions in the white-light images of the forehead, cheeks and nasal alae; the typical acne features include the numbers of post-acne erythema, depressed scars, papules, pustules, cysts and nodules; the typical sebum features include the numbers of whiteheads and blackheads; the typical pore features include the number and area ratio of pores; the typical pigmentation features include the color depth and area of pigmented spots in the polarized-light images of the forehead, cheeks, eye corners, nasal alae and chin; and the typical wrinkle features include the number, length and depth of wrinkles in the white-light images of the forehead, cheeks and eye corners.
Further, in step S2, the numbers of feature points on the contours of the hairline, eyebrows, nose, eyes, mouth and chin are 10, 12, 19, 37, 14 and 8, respectively.
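To make the target-area construction of step S2 concrete, the sketch below triangulates the 100 contour feature points (10 + 12 + 19 + 37 + 14 + 8) into mutually non-intersecting triangles. The patent does not name a specific triangulation scheme, so Delaunay triangulation is used here as one standard way of guaranteeing non-intersecting triangles; the landmark detector is likewise left open and stubbed with random points.

```python
import numpy as np
from scipy.spatial import Delaunay

def target_regions(landmarks: np.ndarray) -> list:
    """landmarks: (N, 2) array of (x, y) feature points on the facial contours.
    Returns a list of (3, 2) arrays, one triangle of 3 feature points each."""
    tri = Delaunay(landmarks)  # Delaunay triangles never overlap each other
    return [landmarks[s] for s in tri.simplices]

# Stand-in landmarks; a real pipeline would take them from a face-landmark
# detector (e.g. dlib or MediaPipe) along the hairline, eyebrows, nose,
# eyes, mouth and chin.
points = np.random.rand(100, 2) * 512  # 100 = 10 + 12 + 19 + 37 + 14 + 8
triangles = target_regions(points)     # each triangle interior: a target area
```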
Further, in step S3, the edge-preserving smoothing filter is given by:
[The filter formula is reproduced only as an image in the original document.]
where I_sr is the source image; α is the retention weight controlling the degree of color preservation, with value range [0, 2]; σ_c is the value-range parameter that preserves edge strength; λ_c is the smoothing weight controlling the degree of smoothing in the spatial domain of the image; γ_G is the Gaussian filter radius; σ_G is the Gaussian standard deviation; ⊗ is the convolution operator; and I_s is the retained image.
Further, in step S3, the contour of each typical-feature region is obtained by applying the Otsu algorithm to the typical feature.
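A minimal sketch of this Otsu step: the typical-feature image is binarized with Otsu's automatically chosen threshold and the contours of the resulting regions are extracted (OpenCV is assumed here purely for illustration).

```python
import cv2
import numpy as np

def feature_contours(feature_map: np.ndarray):
    """feature_map: single-channel uint8 image of one typical feature.
    Returns the contours of the thresholded typical-feature regions."""
    # Otsu's method picks the threshold that maximizes between-class variance.
    _, binary = cv2.threshold(feature_map, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # External contours of the segmented regions (OpenCV 4 return signature).
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return contours
```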
Further, in step S3, after the training, classification adjustment is performed using the following function:
[The adjustment function is reproduced only as an image in the original document; the variable definitions below are consistent with a focal-loss form.]
where t indexes the output results; C is the number of categories; p_t is the prediction probability; y_t is the true class value; the suppression parameter is 2; and the weight a_t of each class ranges from 0 to 1 and is set according to sample size, smaller for classes with more samples and larger for classes with fewer samples, taking the value 1 when all classes have the same number of samples.
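The adjustment function is given only as an image, but its variable definitions (prediction probability p_t, true class y_t, suppression parameter 2, per-class weights a_t in [0, 1] set inversely to class sample size) match the standard focal-loss form, so the sketch below implements that form as an assumption:

```python
import torch

def focal_adjustment(p: torch.Tensor, y: torch.Tensor,
                     a: torch.Tensor, gamma: float = 2.0) -> torch.Tensor:
    """p: predicted probabilities, shape (batch, C); y: one-hot true classes,
    shape (batch, C); a: per-class weights, shape (C,), all 1.0 once every
    class has the same number of samples; gamma: suppression parameter (2)."""
    eps = 1e-7
    # (1 - p)^gamma suppresses easy, well-classified examples; a reweights
    # classes inversely to their sample counts.
    loss = -a * y * (1 - p) ** gamma * torch.log(p.clamp(min=eps))
    return loss.sum(dim=1).mean()
```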
Further, the loss function used by the model during training is the binary cross-entropy loss binary_cross_entropy.
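Putting the training choices of step S3 together, the following PyTorch sketch sets up ResNet50 for transfer learning with the final layer replaced by an 8-way output, one per skin-feature category. The description pairs the 8-way output with a binary cross-entropy loss; the sketch uses the common sigmoid-plus-BCEWithLogitsLoss pairing, and the optimizer and learning rate are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

# Transfer learning: start from ImageNet-pretrained ResNet50 weights.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)

# Replace the final classification layer; output set to the 8 categories
# (sensitivity, moisture, oil, acne, sebum, pores, pigmentation, wrinkles).
model.fc = nn.Linear(model.fc.in_features, 8)

# binary_cross_entropy over per-category sigmoids; BCEWithLogitsLoss fuses
# the sigmoid and the loss for numerical stability.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # assumed settings
```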
As an embodiment of the invention, the photographs of each typical feature are drawn from 200,000 face pictures taken with an intelligent photographing device and annotated by dermatologists.
The invention can accurately and quickly identify the skin type and its defects, greatly saving the time needed to identify skin problems and providing convenience for skin beautification, care and the repair of related skin defects.
It should be noted that, because of the advantages described above, the method of the invention can combine a skin questionnaire, intelligent photographing software and dermoscope detection to perform comprehensive skin detection and identification. This working mode further improves the guiding value of the invention in practical work.
As an example:
In oil detection, the following method can be adopted:
questionnaires: oil scores were obtained from a sixteen-type skin questionnaire.
Intelligent photographing device: a full-face image is acquired with the intelligent photographing device and identified with the identification method of the invention, and a score is output according to the set scoring rule;
skin mirror: the method comprises the following steps of acquiring images of the forehead, the cheek and the nasal alar part by using a skin mirror, identifying by using the identification method, outputting scores according to a set scoring rule, and outputting average scores according to a formula: average score a + cheek score B + alar score C. The value range of A is 10-40%, the value range of B is 10-40%, and the value range of C is 10-30%;
The comprehensive oil score = questionnaire score × A + intelligent-photographing score × B + dermoscope score × C, where A ranges from 10% to 30%, B from 10% to 35%, and C from 10% to 35%.
Score intervals corresponding to the degree of oiliness: severe oiliness (0-40 points), moderate oiliness (41-60 points), mild oiliness (61-85 points) and good (86-100 points).
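As a worked illustration of the scoring arithmetic above, the sketch below combines the three oil scores with weights A, B and C chosen from within their stated ranges (the exact values are left open by the text) and maps the result onto the stated grade intervals:

```python
def oil_score(questionnaire: float, photo: float, dermoscope: float,
              A: float = 0.30, B: float = 0.35, C: float = 0.35) -> float:
    """Comprehensive oil score; A, B, C are assumed values inside the
    ranges 10-30%, 10-35% and 10-35% given above."""
    return questionnaire * A + photo * B + dermoscope * C

def oil_grade(score: float) -> str:
    if score <= 40:
        return "severe oiliness (0-40 points)"
    if score <= 60:
        return "moderate oiliness (41-60 points)"
    if score <= 85:
        return "mild oiliness (61-85 points)"
    return "good (86-100 points)"

# Example: oil_grade(oil_score(70, 65, 60)) -> "mild oiliness (61-85 points)"
```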
Drawings
FIG. 1 is a schematic flow chart of the skin-type image identification method of the present invention;
FIG. 2 is an illustrative picture from the sensitivity detection and identification process in embodiment 1 of the present invention;
FIG. 3 is an illustrative picture from the oil detection and identification process in embodiment 1 of the present invention;
FIG. 4 is an illustrative picture from the acne detection and identification process in embodiment 1 of the present invention;
FIG. 5 is an illustrative picture from the sebum detection and identification process in embodiment 1 of the present invention;
FIG. 6 is an illustrative picture from the pore detection and identification process in embodiment 1 of the present invention;
FIG. 7 is an illustrative picture from the pigmentation detection and identification process in embodiment 1 of the present invention;
FIG. 8 is a chart of the dermoscope sensitivity scoring criteria in embodiment 1 of the present invention;
FIG. 9 is a chart of the dermoscope sebum scoring criteria in embodiment 1 of the present invention;
FIG. 10 is a chart of the dermoscope pore scoring criteria in embodiment 1 of the present invention;
FIG. 11 is a chart of the dermoscope pigmented-spot scoring criteria in embodiment 1 of the present invention.
Detailed Description
The present invention is described in detail below by way of examples. It should be noted that the following examples are merely illustrative and should not be construed as limiting the scope of the invention.
Example 1
Facial skin is classified along eight dimensions: sensitivity, moisture, oil, acne, sebum, pores, pigmentation and wrinkles. 200,000 face pictures graded and annotated by physicians are taken as samples, and the samples are processed by the following method:
S1: establishing a typical image library according to set typical images corresponding to different skin characteristics; the skin characteristics comprise typical features of eight categories, namely sensitivity, moisture, oil, acne, sebum, pores, pigmentation and wrinkles;
S2: acquiring facial skin target images and establishing an identification image library; the target images comprise a white-light image, a polarized-light image and a UV-light image of each target area; the target areas are determined as follows: feature points are extracted at equal intervals along the contours of the hairline, eyebrows, nose, eyes, mouth and chin and divided into triangles of 3 feature points each, the triangles being mutually non-intersecting; the area inside each triangle is a target area;
S3: establishing a skin-type identification model from the typical image library and the identification image library by using a convolutional neural network algorithm; the images are processed sequentially with edge-preserving smoothing filtering and Wiener filtering, then with formula (1), and then with the Otsu algorithm; ResNet50 is then adopted as the training model, and the classification network is trained by transfer learning; during training, the softmax layer of the training network is replaced and its output is set to 8;
(1) [Formula (1) survives only as an image in the original document; from the variable definitions below it is an iterative neighborhood-diffusion update of each pixel.]
where a(x) is the diffusion-coefficient function, (i, j) denotes the pixel coordinates, D denotes the neighborhood of pixel (i, j), γ is a constant smoothing coefficient, n is the number of pixels in the neighborhood, and t denotes the iteration number.
In step S1, the typical sensitivity features include features of red areas in the polarized-light images of the forehead, cheeks and nasal alae; the typical moisture features include the roughness of the skin texture and the area ratio of reflective regions in the white-light image; the typical oil features include the area ratio of reflective regions in the white-light images of the forehead, cheeks and nasal alae; the typical acne features include the numbers of post-acne erythema, depressed scars, papules, pustules, cysts and nodules; the typical sebum features include the numbers of whiteheads and blackheads; the typical pore features include the number and area ratio of pores; the typical pigmentation features include the color depth and area of pigmented spots in the polarized-light images of the forehead, cheeks, eye corners, nasal alae and chin; and the typical wrinkle features include the number, length and depth of wrinkles in the white-light images of the forehead, cheeks and eye corners.
In step S2, the numbers of feature points on the contours of the hairline, eyebrows, nose, eyes, mouth and chin are 10, 12, 19, 37, 14 and 8, respectively.
In step S3, the edge-preserving smoothing filter is given by:
[The filter formula is reproduced only as an image in the original document.]
where I_sr is the source image; α is the retention weight controlling the degree of color preservation, with value range [0, 2]; σ_c is the value-range parameter that preserves edge strength; λ_c is the smoothing weight controlling the degree of smoothing in the spatial domain of the image; γ_G is the Gaussian filter radius; σ_G is the Gaussian standard deviation; ⊗ is the convolution operator; and I_s is the retained image.
In step S3, the typical features are computed with the Otsu algorithm to obtain the contours of the typical-feature regions.
In step S3, after the training, classification adjustment is performed using the following function:
[The adjustment function is reproduced only as an image in the original document; the variable definitions below are consistent with a focal-loss form.]
where t indexes the output results; C is the number of categories; p_t is the prediction probability; y_t is the true class value; the suppression parameter is 2; and the weight a_t of each class ranges from 0 to 1 and is set according to sample size, smaller for classes with more samples and larger for classes with fewer samples, taking the value 1 when all classes have the same number of samples.
The loss function used by the model during training is the binary cross-entropy loss binary_cross_entropy.
In this example, the cases reflected in FIGS. 8 to 11 were classified into four degrees of severity, namely severe (0-29 points), moderate (30-59 points), mild (60-79 points) and good (80-100 points), as criteria for the degree of skin defects, and skin images scoring 20, 40, 60 and 80 points are listed in the drawings for reference.
After the training, another 120 facial dermoscope photographs were selected and evaluated independently by the model and by a professional physician using the same criteria, and the accuracy was judged; FIGS. 2-7 are pictures from the identification process. The common models DenseNet201, InceptionV4, VGG16BN, Xception, ResNet50+Xception and VGG19+ResNet50 were subjected to the same identification and evaluation test by the same steps, and their accuracy was examined. The experimental results are shown in Table 1:
TABLE 1
[Table 1 is reproduced only as an image in the original document; it reports the recognition accuracy of the proposed model against the comparison models listed above.]

Claims (8)

1. A method of image recognition of skin type, the method comprising the steps of:
S1: establishing a typical image library according to set typical images corresponding to different skin characteristics; the skin characteristics comprise typical features of eight categories, namely sensitivity, moisture, oil, acne, sebum, pores, pigmentation and wrinkles;
S2: acquiring facial skin target images and establishing an identification image library; the target images comprise a white-light image, a polarized-light image and a UV-light image of each target area; the target areas are determined as follows: feature points are extracted at equal intervals along the contours of the hairline, eyebrows, nose, eyes, mouth and chin and divided into triangles of 3 feature points each, the triangles being mutually non-intersecting; the area inside each triangle is a target area;
S3: establishing a skin-type identification model from the typical image library and the identification image library by using a convolutional neural network algorithm; the images are processed sequentially with edge-preserving smoothing filtering and Wiener filtering, then with formula (1), and then with the Otsu algorithm; ResNet50 is then adopted as the training model, and the classification network is trained by transfer learning; during training, the softmax layer of the training network is replaced and its output is set to 8;
(1) [Formula (1) survives only as an image in the original document; from the variable definitions below it is an iterative neighborhood-diffusion update of each pixel.]
where a(x) is the diffusion-coefficient function, (i, j) denotes the pixel coordinates, D denotes the neighborhood of pixel (i, j), γ is a constant smoothing coefficient, n is the number of pixels in the neighborhood, and t denotes the iteration number.
2. The image recognition method according to claim 1, wherein in step S1, the typical sensitivity features include features of red areas in the polarized-light images of the forehead, cheeks and nasal alae; the typical moisture features include the roughness of the skin texture and the area ratio of reflective regions in the white-light image; the typical oil features include the area ratio of reflective regions in the white-light images of the forehead, cheeks and nasal alae; the typical acne features include the numbers of post-acne erythema, depressed scars, papules, pustules, cysts and nodules; the typical sebum features include the numbers of whiteheads and blackheads; the typical pore features include the number and area ratio of pores; the typical pigmentation features include the color depth and area of pigmented spots in the polarized-light images of the forehead, cheeks, eye corners, nasal alae and chin; and the typical wrinkle features include the number, length and depth of wrinkles in the white-light images of the forehead, cheeks and eye corners.
3. The image recognition method according to claim 1, wherein in step S2, the numbers of feature points on the contours of the hairline, eyebrows, nose, eyes, mouth and chin are 10, 12, 19, 37, 14 and 8, respectively.
4. The image recognition method according to claim 1, wherein in step S3, the edge-preserving smoothing filter is given by:
[The filter formula is reproduced only as an image in the original document.]
where I_sr is the source image; α is the retention weight controlling the degree of color preservation, with value range [0, 2]; σ_c is the value-range parameter that preserves edge strength; λ_c is the smoothing weight controlling the degree of smoothing in the spatial domain of the image; γ_G is the Gaussian filter radius; σ_G is the Gaussian standard deviation; ⊗ is the convolution operator; and I_s is the retained image.
5. The image recognition method according to claim 1, wherein in step S3, the contours of the typical-feature regions are obtained by applying the Otsu algorithm to the typical features.
6. The image recognition method according to claim 5, wherein in step S3, after the training, classification adjustment is performed using a function:
[The adjustment function is reproduced only as an image in the original document; the variable definitions below are consistent with a focal-loss form.]
where t indexes the output results; C is the number of categories; p_t is the prediction probability; y_t is the true class value; the suppression parameter is 2; and the weight a_t of each class ranges from 0 to 1 and is set according to sample size, smaller for classes with more samples and larger for classes with fewer samples, taking the value 1 when all classes have the same number of samples.
7. The image recognition method according to claim 6, wherein the loss function used by the model in training is the binary cross-entropy loss binary_cross_entropy.
8. The image recognition method according to claim 1, wherein the pictures of the respective typical features are drawn from 200,000 face pictures taken with a photographing device and annotated by a dermatologist.
CN202110744338.8A (filed 2021-07-01, priority date 2021-07-01): Image recognition method for skin; status: Pending; published as CN113486768A

Priority Applications (1)

Application Number: CN202110744338.8A; Priority Date: 2021-07-01; Filing Date: 2021-07-01; Title: Image recognition method for skin

Publications (1)

Publication Number: CN113486768A; Publication Date: 2021-10-08

Family

ID: 77937436

Family Applications (1)

Application Number: CN202110744338.8A; Title: Image recognition method for skin; Status: Pending

Country Status (1)

Country: CN; Publication: CN113486768A

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114305334A (en) * 2021-12-09 2022-04-12 深圳贵之族生科技有限公司 Intelligent beauty method, device, equipment and storage medium
CN114376526A (en) * 2022-01-12 2022-04-22 广东药科大学 Skin state analysis method and skin care mirror


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050063582A1 (en) * 2003-08-29 2005-03-24 Samsung Electronics Co., Ltd. Method and apparatus for image-based photorealistic 3D face modeling
CN108932493A (en) * 2018-06-29 2018-12-04 东北大学 A kind of facial skin quality evaluation method
CN109730637A (en) * 2018-12-29 2019-05-10 中国科学院半导体研究所 A kind of face face-image quantified system analysis and method
CN111814520A (en) * 2019-04-12 2020-10-23 虹软科技股份有限公司 Skin type detection method, skin type grade classification method, and skin type detection device
CN110378234A (en) * 2019-06-20 2019-10-25 合肥英威晟光电科技有限公司 Convolutional neural networks thermal imagery face identification method and system based on TensorFlow building
CN111860169A (en) * 2020-06-18 2020-10-30 北京旷视科技有限公司 Skin analysis method, device, storage medium and electronic equipment



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination