CN113592851B - Pore detection method based on full-face image - Google Patents

Pore detection method based on full-face image

Info

Publication number
CN113592851B
CN113592851B (application CN202110924995.0A)
Authority
CN
China
Prior art keywords
face
image
black
pore
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110924995.0A
Other languages
Chinese (zh)
Other versions
CN113592851A (en
Inventor
陈冰凌
杨磊
Current Assignee
Beijing Deepexi Technology Co Ltd
Original Assignee
Beijing Deepexi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Deepexi Technology Co Ltd filed Critical Beijing Deepexi Technology Co Ltd
Priority to CN202110924995.0A priority Critical patent/CN113592851B/en
Publication of CN113592851A publication Critical patent/CN113592851A/en
Application granted granted Critical
Publication of CN113592851B publication Critical patent/CN113592851B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0012: Biomedical image inspection
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00: Image enhancement or restoration
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/10: Segmentation; Edge detection
    • G06T 7/11: Region-based segmentation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30196: Human being; Person
    • G06T 2207/30201: Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Medical Informatics (AREA)
  • Quality & Reliability (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Analysis (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Measuring And Recording Apparatus For Diagnosis (AREA)

Abstract

The invention discloses a pore detection method based on a full-face image, relating to the technical field of medical cosmetology. For the problem of pore detection on full-face images, the invention provides a comparatively systematic scheme that is more robust and practical to deploy. The method simplifies pore detection by splitting it into four sub-problems, so that relatively stable and reliable results are obtained without resorting to deep learning. The differences in hue and brightness caused by illumination of the face picture are mitigated by methods such as highlight processing and using skin percentiles as dynamic thresholds, and robust positioning of each facial part across the varied angles of face pictures is effectively achieved through semantic segmentation.

Description

Pore detection method based on full-face image
Technical Field
The invention relates to the technical field of medical cosmetology, in particular to a pore detection method based on a full-face image.
Background
Facial pore detection plays an important role in scenarios such as medical cosmetology, skin care products, and skin state detection: based on the severity and distribution of pores, suitable medical products, skin care products, or maintenance advice can be recommended to the user. Existing pore detection methods basically depend on professional equipment such as a dermatoscope; such methods are costly and site-restricted, which brings much inconvenience to daily skin detection, cosmetic development, and dermatology research. Technical schemes that accurately identify pores from ordinary images are very few and generally immature, and face the following difficulties:
1. Pore features are fine; a deep-learning scheme would incur very high labeling costs, and no relevant public dataset exists;
2. Face photos are affected by environmental factors such as illumination, so hue and brightness vary widely, making image processing difficult;
3. Face photos are taken at various angles, and robust positioning of each facial part is a problem that existing schemes rarely consider.
Prior art schemes are generally only suitable for locally flat skin pictures without reflection or shadow; they have low robustness and poor practical effect. Most schemes perform pore detection on locally flat skin pictures and cannot accurately locate specific pore positions, and comparatively systematic schemes based on a complete picture are lacking. Research into a pore detection method based on a full-face image is therefore of great significance.
Disclosure of Invention
In order to overcome the defects in the prior art, the invention provides a pore detection method based on a full-face image, aiming to solve the following technical problems: existing pore detection methods basically depend on professional equipment such as a dermatoscope, which is costly and site-restricted and brings much inconvenience to daily skin detection, cosmetic development, and dermatology research; technical schemes that accurately identify pores from ordinary images are very few and generally immature; and most schemes either cannot accurately locate specific pore positions or are only suitable for locally flat skin pictures without reflection or shadow, resulting in poor practical effect.
In order to achieve the above purpose, the present invention provides the following technical solutions: a pore detection method based on a full-face image comprises the following steps:
s1, face detection:
A human face is detected, and after detection the face is resized to a uniform width.
S2, human face part segmentation:
The deep-learning BiSeNet algorithm is adopted to perform semantic segmentation, extracting each part of the face separately. Through semantic segmentation, the model can learn the characteristics of each facial part and fuse the semantic information of all parts for a comprehensive judgment, reducing interference from light, background information, skin color, ethnicity, and other factors.
S3, extracting key areas:
First, on the basis of S2, a black-and-white mask of each part is extracted; then the maximum contour of each mask is extracted and wrapped in a bounding box, yielding the position and length-width information of the bounding box corresponding to each part, which is equivalent to obtaining the position of every facial part. Four key areas are then defined from the part positions, respectively: forehead, left cheek, right cheek, and nose.
S4, pore detection:
After the key areas are extracted in S3, interference from background information and large facial structures has been eliminated. Pore detection is then performed on the key areas of S3 using statistical methods combined with image processing, in the following steps:
a. extracting a blue channel;
b. sharpening the image;
c. performing highlight treatment;
d. local binarization;
e. performing open operation;
f. contour extraction and screening;
g. bounding box extraction and screening.
As a further aspect of the invention: the S1 face detection is realized by adopting mtcnn, and interference of background information is eliminated as much as possible through an mtcnn mature face detection scheme.
As a further aspect of the invention: in the process of adjusting the uniform width of the face in the S1 face detection, the face is uniformly adjusted to 1350 pixels.
As a further aspect of the invention: and the positioning of the forehead, the left cheek, the right cheek and the nose in the S3 is respectively as follows:
forehead: x1 coordinates = face x1+10%;
x2 coordinates = face x1+face w-10% >;
y1 coordinates = face y1+5% > face h;
y2 coordinates= (left eyebrow y1+right eyebrow y 1)/2-5%. Facial h;
left cheek: x1 coordinates = face x1+10% face w;
x2 coordinates = nose x1-10% face w;
y1 coordinates = left eye y2+5% face h;
y2 coordinates= (nose y2+upper lip y 1)/2;
the definition of the right cheek and the nose is the same and is not repeated;
wherein x1 represents the starting position of the site on the abscissa; x2 represents the ending position of the site on the abscissa; y1 represents the starting position of the part on the ordinate; y2 represents the ending position of the part on the ordinate; w represents the width of the site; h represents the height of the site.
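The coordinate formulas can be written out directly. The dictionary layout for part boxes and the toy pixel values below are assumptions for illustration; the percentages follow the patent:

```python
def forehead_box(face, left_brow, right_brow):
    """Forehead key area computed from the part bounding boxes, following
    the patent's formulas; boxes are dicts of pixel values x1, y1, w, h."""
    x1 = face["x1"] + 0.10 * face["w"]
    x2 = face["x1"] + face["w"] - 0.10 * face["w"]
    y1 = face["y1"] + 0.05 * face["h"]
    y2 = (left_brow["y1"] + right_brow["y1"]) / 2 - 0.05 * face["h"]
    return x1, y1, x2, y2

# Toy part boxes (pixel units, hypothetical values)
face = {"x1": 100, "y1": 50, "w": 1000, "h": 1200}
brow = {"y1": 400}
x1, y1, x2, y2 = forehead_box(face, brow, brow)  # -> (200.0, 110.0, 1000.0, 340.0)
```

The left cheek, right cheek, and nose regions follow the same pattern with their respective part boxes.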
As a further aspect of the invention: the specific steps of image sharpening in the step S4 are as follows: performing traversal calculation on the skin of each region by customizing a 3*3 filter to obtain image sharpening treatment so as to strengthen the contrast of pore characteristics, wherein the filter is [ [0, -1,0], [ -1,5, -1], [0, -1,0] ];
as a further aspect of the invention: the specific steps of the highlight treatment in the S4 are as follows: and respectively acquiring a 50 th percentile value, an 80 th percentile value and an 87 th percentile value of the skin image of the area, regarding pixels larger than the 87 th percentile value as pixels of a high-light area, regarding original values 0.95 as new pixel values for the pixels of the high-light area, and performing targeted dimming treatment on the brightness of the high-light area by using the method, so that the light contrast of the whole area is reduced under the condition that the normal illumination area information is not influenced.
As a further aspect of the invention: the specific steps of the local binarization in the S4 are as follows: and (3) carrying out binarization processing on the brightness in the range of 15 x 15 by taking the current pixel as the center to obtain a black-and-white image, wherein pores are black and other areas are white on the black-and-white image.
As a further aspect of the invention: the specific steps of contour extraction and screening in the step S4 are as follows: the black part outline of the black-and-white map is extracted and filtered by area.
As a further aspect of the invention: the concrete steps of extracting and screening the binding box in the S4 are as follows: based on the extracted contours, a bounding box of each contour is obtained and filtered from multiple aspects as follows:
first filtering from the bounding box aspect ratio limit: limiting the aspect ratio to between 0.5 and 1.5, eliminating interference of characteristics such as wrinkles, and then filtering from the R channel value limit: theoretically, though pores appear slightly dark on the skin surface, the primary color is still skin color, and the color is not dark like nevi and hair, that is, the R value of the pores is not too low, but in practical situations, the environmental information such as light and the like and the skin color of an individual are not uniform, so that a fixed value is not suitable, in order to make the result more stable, a statistical method is used for judging, a dynamic value is used, the value of the 20 th percentile of the facial skin R channel is calculated as the bottom line value of the skin color in the R channel, and the noise of the characteristics such as nevi and hair is eliminated by filtering out the blondin boxes with the R channel value lower than the bottom line value.
The invention has the beneficial effects that:
1. For the problem of pore detection on full-face images, the invention provides a comparatively systematic scheme that is more robust and practical to deploy. The method simplifies pore detection by splitting it into four sub-problems, so that relatively stable and reliable results are obtained without deep learning; hue and brightness differences caused by illumination are mitigated by highlight processing and by using skin percentiles as dynamic thresholds; and robust positioning of each facial part across varied face angles is effectively achieved through semantic segmentation;
2. The deep-learning BiSeNet algorithm is adopted for semantic segmentation of each facial part; through semantic segmentation, the model learns the characteristics of each part and fuses the semantic information of the whole face for a comprehensive judgment, reducing interference from light, background information, skin color, and ethnicity, making the method more stable and more accurate in practical application than traditional image-processing schemes;
3. Two steps, blue channel extraction and the open operation, are added to the pore detection process: observation shows that pore features are most obvious in the blue channel, so the blue channel is extracted as the image basis for pore detection, making pores on the face surface easier to detect clearly, and applying the open operation to the binarization result makes the binarization contours smoother;
4. The face is uniformly resized to 1350 pixels wide during width normalization, ensuring the stability of the subsequent pore recognition steps. Because illumination across the facial parts may be uneven, an ordinary (global) binarization scheme would be very unstable: under unbalanced illumination it tends to binarize toward the lit or dark side of the face. A local binarization scheme is therefore adopted to ensure a stable binarization result.
Drawings
FIG. 1 is a flow chart of a pore detection method of the present invention;
FIG. 2 is a flow chart of the key region definition of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely. It is obvious that the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
As shown in fig. 1-2, a pore detection method based on a full-face image includes the following steps:
s1, face detection:
A human face is detected, and after detection the face is resized to a uniform width of 1350 pixels, which ensures the stability of the subsequent pore recognition steps.
S2, human face part segmentation:
The invention adopts the deep-learning BiSeNet algorithm to segment and extract each part of the face. Through semantic segmentation, the model can learn the characteristics of each facial part and fuse the semantic information of all parts for a comprehensive judgment, reducing interference from light, background information, skin color, and ethnicity.
S3, extracting key areas:
First, on the basis of S2, a black-and-white mask of each part is extracted; then the maximum contour of each mask is extracted and wrapped in a bounding box, yielding the position and length-width information of the bounding box corresponding to each part, which is equivalent to obtaining the position of every facial part. Four key areas are then defined from the part positions, respectively: forehead, left cheek, right cheek, and nose.
S4, pore detection:
After the key areas are extracted in S3, interference from background information and large facial structures has been eliminated. Pore detection is then performed on the key areas of S3 using statistical methods combined with image processing, in the following steps:
a. extracting a blue channel;
b. sharpening the image;
c. performing highlight treatment;
d. local binarization;
e. performing the open operation;
f. contour extraction and screening;
g. bounding box extraction and screening.
Two steps, blue channel extraction and the open operation, are added to the pore detection process. Observation shows that the pore features of the human face are most obvious in the blue channel, so the blue channel is extracted as the image basis for pore detection, making pores on the face surface easier to detect clearly; applying the open operation to the binarization result makes the binarization contours smoother.
The S1 face detection is implemented with MTCNN; this mature face detection scheme eliminates interference from background information as much as possible.
In the S1 face detection, the face is uniformly resized to 1350 pixels wide during width normalization, which ensures the stability of the subsequent pore recognition steps.
The positioning of the forehead, left cheek, right cheek, and nose in S3 is respectively as follows:
forehead: x1 coordinate = face x1 + 10% * face w;
x2 coordinate = face x1 + face w - 10% * face w;
y1 coordinate = face y1 + 5% * face h;
y2 coordinate = (left eyebrow y1 + right eyebrow y1) / 2 - 5% * face h;
left cheek: x1 coordinate = face x1 + 10% * face w;
x2 coordinate = nose x1 - 10% * face w;
y1 coordinate = left eye y2 + 5% * face h;
y2 coordinate = (nose y2 + upper lip y1) / 2;
the definitions for the right cheek and the nose are analogous and are not repeated;
wherein x1 represents the starting position of the part on the abscissa; x2 its ending position on the abscissa; y1 its starting position on the ordinate; y2 its ending position on the ordinate; w the width of the part; and h its height.
The specific steps of image sharpening in S4 are as follows: a custom 3*3 filter is applied by traversal (convolution) over the skin of each region to sharpen the image and strengthen the contrast of pore features; the filter is [[0, -1, 0], [-1, 5, -1], [0, -1, 0]].
The specific steps of the highlight treatment in S4 are as follows: the 50th-, 80th- and 87th-percentile values of the region's skin image are obtained respectively; pixels greater than the 87th-percentile value are regarded as highlight pixels, and for those pixels the original value multiplied by 0.95 is taken as the new pixel value. This dims the brightness of the highlight area in a targeted manner, reducing the light contrast of the whole region without affecting normally illuminated areas.
The specific steps of local binarization in S4 are as follows: for each pixel, binarization is performed using the brightness within the 15 x 15 window centered on the current pixel, producing a black-and-white map on which pores are black and other areas are white. Because illumination across the facial parts may be uneven, an ordinary (global) binarization scheme yields unstable results: under unbalanced illumination it tends to binarize toward the lit or dark side of the face. The local binarization scheme therefore ensures a stable binarization result.
The specific steps of contour extraction and screening in S4 are as follows: the contours of the black parts of the black-and-white map are extracted and filtered by area.
The specific steps of bounding box extraction and screening in S4 are as follows: based on the extracted contours, the bounding box of each contour is obtained and filtered from multiple aspects:
First, filtering by bounding box aspect ratio: the aspect ratio is limited to between 0.5 and 1.5, eliminating interference from wrinkle-like features. Then, filtering by R channel value: in theory, although pores appear slightly dark on the skin surface, their primary color is still the skin color, not as dark as moles or hair, so the R value of a pore should not be too low. In practice, however, lighting conditions and individual skin colors vary, so a fixed threshold is unsuitable. To make the result more stable, a statistical, dynamic value is used: the 20th-percentile value of the facial skin's R channel is computed as the bottom-line skin-tone value in the R channel, and bounding boxes whose R channel value is below this bottom line are filtered out, eliminating noise from features such as moles and hair.
For the problem of pore detection on full-face images, the invention provides a comparatively systematic scheme that is more robust and practical to deploy. The method simplifies pore detection by splitting it into four sub-problems, so that relatively stable and reliable results are obtained without resorting to deep learning. The differences in hue and brightness caused by illumination of the face picture are mitigated by methods such as highlight processing and using skin percentiles as dynamic thresholds, and robust positioning of each facial part across the varied angles of face pictures is effectively achieved through semantic segmentation.
Finally, it should be noted that although the invention has been described in detail with reference to the foregoing embodiments, those embodiments merely illustrate, and do not limit, the technical solutions of the invention. Those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may be modified, or some or all of their technical features may be replaced by equivalents, without departing from the spirit of the invention.

Claims (6)

1. The pore detection method based on the full-face image is characterized by comprising the following steps of:
s1, face detection: detecting a human face, and after detection resizing the face to a uniform width;
s2, human face part segmentation: performing semantic segmentation with the deep-learning BiSeNet algorithm to segment and extract each part of the face, enabling the model to learn the characteristics of each facial part and to fuse the semantic information of all parts of the whole face for a comprehensive judgment, so as to reduce interference from light, background information, skin color and ethnicity;
s3, extracting key areas: on the basis of S2, extracting a black-and-white mask of each part, then extracting the maximum contour of each mask and wrapping it in a bounding box to obtain the position and length-width information of the bounding box corresponding to each part, which is equivalent to obtaining the position of every facial part, and then defining four key areas from the part positions, respectively: forehead, left cheek, right cheek and nose;
s4, pore detection: after the key areas are extracted in S3, interference from background information and large facial structures has been eliminated; pore detection is then performed on the key areas of S3 using statistical methods combined with image processing, in the following steps:
extracting a blue channel of the key region;
sharpening the extracted blue channel and performing high-gloss treatment;
performing local binarization on the blue channel subjected to highlight treatment to obtain a black-and-white image corresponding to the blue channel, and performing open operation on the black-and-white image;
carrying out contour extraction and screening on the black-and-white image;
filtering the extracted and screened contours using the aspect ratio limit of the bounding box; and
filtering according to the corresponding R channel value for the filtered contour, thereby extracting the pores, and wherein
for the filtered contours, performing the filtering operation according to the corresponding R channel value, including: calculating the 20th-percentile value of the facial skin R channel as the bottom-line skin-tone value in the R channel, and filtering out those bounding boxes whose R channel value is below the bottom-line value, and wherein,
the specific steps of image sharpening for the extracted blue channel in the step S4 are as follows: the sharpening of the image is obtained by performing a traversal calculation on the skin of each region by customizing a 3*3 filter, which is [ [0, -1,0], [ -1,5, -1], [0, -1,0] ], to enhance the contrast of the pore features, and wherein
the specific steps of the highlight treatment in S4 are as follows: obtaining the 50th-, 80th- and 87th-percentile values of the region's skin image respectively; regarding pixels greater than the 87th-percentile value as highlight pixels; and taking the original value multiplied by 0.95 as the new pixel value for those pixels, thereby dimming the brightness of the highlight area in a targeted manner and reducing the light contrast of the whole region without affecting normally illuminated areas.
2. The pore detection method based on the full-face image as claimed in claim 1, wherein: the S1 face detection is implemented with MTCNN, whose mature face detection scheme eliminates interference from background information.
3. The pore detection method based on full-face image as claimed in claim 1, wherein: in the process of adjusting the uniform width of the face in the S1 face detection, the face is uniformly adjusted to 1350 pixels.
4. The pore detection method based on full-face image as claimed in claim 1, wherein: the specific steps of the local binarization in the S4 are as follows: and (3) carrying out binarization processing on the brightness in the range of 15 x 15 by taking the current pixel as the center to obtain a black-and-white image, wherein pores are black and other areas are white on the black-and-white image.
5. The pore detection method based on full-face image as claimed in claim 4, wherein: the specific steps of contour extraction and screening in the step S4 are as follows: the black part outline of the black-and-white map is extracted and filtered by area.
6. The pore detection method based on full-face image as claimed in claim 5, wherein:
the filtering operation on the extracted and screened contours using the aspect ratio limit of the bounding box includes: limiting the aspect ratio to between 0.5 and 1.5, eliminating interference from wrinkle-like features.
CN202110924995.0A 2021-08-12 2021-08-12 Pore detection method based on full-face image Active CN113592851B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110924995.0A CN113592851B (en) 2021-08-12 2021-08-12 Pore detection method based on full-face image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110924995.0A CN113592851B (en) 2021-08-12 2021-08-12 Pore detection method based on full-face image

Publications (2)

Publication Number Publication Date
CN113592851A CN113592851A (en) 2021-11-02
CN113592851B true CN113592851B (en) 2023-06-20

Family

ID=78257682

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110924995.0A Active CN113592851B (en) 2021-08-12 2021-08-12 Pore detection method based on full-face image

Country Status (1)

Country Link
CN (1) CN113592851B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117152099A (en) * 2023-09-05 2023-12-01 深圳伯德睿捷健康科技有限公司 Skin pore or blackhead detection method, system and computer readable storage medium

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2008293325A (en) * 2007-05-25 2008-12-04 Noritsu Koki Co Ltd Face image analysis system
WO2013098512A1 (en) * 2011-12-26 2013-07-04 Chanel Parfums Beaute Method and device for detecting and quantifying cutaneous signs on an area of skin
CN109844804A (en) * 2017-08-24 2019-06-04 华为技术有限公司 A kind of method, apparatus and terminal of image detection
CN110147728A (en) * 2019-04-15 2019-08-20 深圳壹账通智能科技有限公司 Customer information analysis method, system, equipment and readable storage medium storing program for executing
CN111862285A (en) * 2020-07-10 2020-10-30 完美世界(北京)软件科技发展有限公司 Method and device for rendering figure skin, storage medium and electronic device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101339612B (en) * 2008-08-19 2010-06-16 陈建峰 Face contour checking and classification method
CN104299011A (en) * 2014-10-13 2015-01-21 吴亮 Skin type and skin problem identification and detection method based on facial image identification
CN111832475B (en) * 2020-07-10 2022-08-12 电子科技大学 Face false detection screening method based on semantic features
CN113160036B (en) * 2021-04-19 2022-09-20 金科智融科技(珠海)有限公司 Face changing method for image keeping face shape unchanged

Similar Documents

Publication Publication Date Title
CN103914699B (en) A kind of method of the image enhaucament of the automatic lip gloss based on color space
CN108932493B (en) Facial skin quality evaluation method
CN106506901B (en) A kind of hybrid digital picture halftoning method of significance visual attention model
CN111476849B (en) Object color recognition method, device, electronic equipment and storage medium
CN111860538A (en) Tongue color identification method and device based on image processing
CN111667400A (en) Human face contour feature stylization generation method based on unsupervised learning
CN113344836B (en) Face image processing method and device, computer readable storage medium and terminal
CN113592851B (en) Pore detection method based on full-face image
CN106469300B (en) A kind of color spot detection recognition method
CN106373096A (en) Multi-feature weight adaptive shadow elimination method
CN109543518A (en) A kind of human face precise recognition method based on integral projection
CN110796648A (en) Facial chloasma area automatic segmentation method based on melanin extraction
CN104392211A (en) Skin recognition method based on saliency detection
CN110956184A (en) Abstract diagram direction determination method based on HSI-LBP characteristics
US20240020843A1 (en) Method for detecting and segmenting the lip region
CN109583330A (en) A kind of pore detection method for human face photo
US7221780B1 (en) System and method for human face detection in color graphics images
KR100439377B1 (en) Human area detection for mobile video telecommunication system
Hua et al. Image segmentation algorithm based on improved visual attention model and region growing
CN104573743A (en) Filtering method for facial image detection
US10909351B2 (en) Method of improving image analysis
CN111414960A (en) Artificial intelligence image feature extraction system and feature identification method thereof
CN114155569B (en) Cosmetic progress detection method, device, equipment and storage medium
CN111507995B (en) Image segmentation method based on color image pyramid and color channel classification
CN109657544A (en) A kind of method for detecting human face and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant