CN113592851A - Pore detection method based on full-face image
- Publication number
- CN113592851A (application CN202110924995.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- pore
- value
- full
- pore detection
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0012—Biomedical image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Medical Informatics (AREA)
- Quality & Reliability (AREA)
- Radiology & Medical Imaging (AREA)
- Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Image Analysis (AREA)
- Measuring And Recording Apparatus For Diagnosis (AREA)
- Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
Abstract
The invention discloses a pore detection method based on a full-face image, and relates in particular to the technical field of medical cosmetology. The invention provides a systematic, robust, and practically deployable solution to the problem of pore detection from a full-face image. By splitting pore detection into four sub-problems, the method simplifies the task, so that stable and reliable results can be obtained even without deep learning. The hue and brightness variations caused by illumination of the face image are mitigated by highlight processing and by using skin-value percentiles as dynamic thresholds, and semantic segmentation makes the localization of each facial part robust to the various angles at which face images are taken.
Description
Technical Field
The invention relates to the technical field of medical cosmetology, in particular to a pore detection method based on a full-face image.
Background
Facial pore detection plays an important role in scenarios such as medical cosmetology, skin-care products, and skin-condition assessment: the severity and location of enlarged pores can be used to recommend suitable medical-cosmetic products, skin-care products, or maintenance advice. Existing pore detection methods basically rely on professional equipment such as dermatoscopes, which is costly and place-restricted, and is inconvenient for daily skin assessment, cosmetics development, and dermatological research. Few technical schemes accurately identify pores from ordinary images, and those that exist are generally immature. The difficulties are:
1. Pore features are fine-grained; a deep-learning scheme would incur very high labeling cost, and no related public dataset exists;
2. Face photographs are affected by environmental factors such as illumination, so hue and lightness vary widely, making image processing difficult;
3. Face photographs are taken at various angles, and robust localization of each facial part is a problem that existing schemes rarely consider.
Moreover, existing schemes are generally applicable only to locally flat skin images without reflection or shadow; their robustness is low and their practical performance is poor. Most schemes detect pores only in locally flat skin patches and cannot accurately locate individual pores, and the lack of a systematic solution for complete face images limits practical use. Research on a pore detection method based on a full-face image is therefore of great significance.
Disclosure of Invention
In order to overcome the defects of the prior art, the invention provides a pore detection method based on a full-face image. The technical problems to be solved are: existing pore detection methods basically rely on professional equipment such as dermatoscopes, which is costly, place-restricted, and inconvenient for daily skin assessment, cosmetics development, and dermatological research; few schemes accurately identify pores from ordinary images, and those that exist are generally immature, cannot accurately locate individual pores, or are applicable only to locally flat skin images without reflection or shadow, giving poor practical results.
In order to achieve this purpose, the invention provides the following technical scheme: a pore detection method based on a full-face image, comprising the following steps:
S1, face detection:
The face is detected and, after detection, resized to a uniform width.
S2, segmenting the human face:
A deep-learning BiSeNet algorithm performs semantic segmentation to extract each part of the human face. Through semantic segmentation, the model learns the features of each facial part and can make a comprehensive judgment by fusing the semantic information of all parts of the whole face, reducing interference from lighting, background, skin color, and ethnicity.
S3, key area extraction:
First, on the basis of S2, the black-and-white mask of each part is extracted; the maximum contour of each mask is then extracted and wrapped in a bounding box to obtain the position and size (width and height) of the bounding box for each part, i.e. the position information of each facial part. Four key regions are then defined from this position information: forehead, left cheek, right cheek and nose.
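The mask-to-bounding-box step can be sketched with numpy. `mask_to_bbox` is a hypothetical helper that boxes all white pixels directly; the patent first extracts the maximum contour, which in practice would use a contour routine such as OpenCV's `findContours` followed by `boundingRect`:

```python
import numpy as np

def mask_to_bbox(mask):
    """Return (x1, y1, w, h) of the tightest box around the white pixels
    of a black-and-white mask; None if the mask is empty."""
    ys, xs = np.nonzero(mask)
    if len(xs) == 0:
        return None
    x1, y1 = xs.min(), ys.min()
    return (int(x1), int(y1), int(xs.max() - x1 + 1), int(ys.max() - y1 + 1))

# a toy 6x6 mask with a white 2x3 blob
mask = np.zeros((6, 6), dtype=np.uint8)
mask[2:4, 1:4] = 255
print(mask_to_bbox(mask))  # (1, 2, 3, 2)
```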
S4, pore detection:
After the key regions are extracted in S3, the interference of background information and large-scale facial structure is removed. Pores are then detected within the S3 key regions by a combination of statistical and image-processing methods, in the following steps:
a. blue channel extraction;
b. image sharpening;
c. highlight processing;
d. local binarization;
e. opening operation;
f. contour extraction and screening;
g. bounding box extraction and screening.
As a further scheme of the invention: the S1 face detection is implemented with MTCNN, and this mature face detection scheme eliminates the interference of background information as far as possible.
As a further scheme of the invention: in the process of adjusting the uniform width of the human face in the S1 human face detection, the human face is uniformly adjusted to 1350 pixels.
As a further scheme of the invention: the positions of the forehead, the left cheek, the right cheek and the nose in S3 are respectively as follows:
forehead: x1 = face x1 + 10% × face w;
x2 = face x1 + face w - 10% × face w;
y1 = face y1 + 5% × face h;
y2 = (left eyebrow y1 + right eyebrow y1)/2 - 5% × face h;
left cheek: x1 = face x1 + 10% × face w;
x2 = nose x1 - 10% × face w;
y1 = left eye y2 + 5% × face h;
y2 = (nose y2 + upper lip y1)/2;
the right cheek and the nose are defined analogously and are omitted here;
where x1 denotes the start position of the part on the abscissa; x2 the end position on the abscissa; y1 the start position on the ordinate; y2 the end position on the ordinate; w the width of the part; and h the height of the part.
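Under this reading of the region formulas, the forehead region can be computed as plain arithmetic. The dict layout and names (`face`, `left_brow`, `right_brow`) are illustrative assumptions, not from the patent:

```python
def forehead_region(face, left_brow, right_brow):
    """Forehead box from the face box and the two eyebrow boxes.
    Each part is a dict with x1, y1 (top-left corner), w, h as needed."""
    x1 = face["x1"] + 0.10 * face["w"]
    x2 = face["x1"] + face["w"] - 0.10 * face["w"]
    y1 = face["y1"] + 0.05 * face["h"]
    y2 = (left_brow["y1"] + right_brow["y1"]) / 2 - 0.05 * face["h"]
    return x1, y1, x2, y2

face = {"x1": 0, "y1": 0, "w": 1000, "h": 1200}
brow = {"y1": 400}  # both eyebrows at the same height in this toy example
print(forehead_region(face, brow, brow))  # (100.0, 60.0, 900.0, 340.0)
```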
As a further scheme of the invention: the image sharpening in S4 specifically comprises traversing the skin of each region with a custom 3×3 filter to sharpen the image and enhance the contrast of pore features, the filter being [[0, -1, 0], [-1, 5, -1], [0, -1, 0]];
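The 3×3 sharpening can be sketched directly in numpy; this is a minimal illustration of the kernel above (valid region only), where production code would typically use OpenCV's `filter2D` with border handling:

```python
import numpy as np

KERNEL = np.array([[0, -1, 0],
                   [-1, 5, -1],
                   [0, -1, 0]])

def sharpen(img):
    """Convolve a grayscale image with the patent's 3x3 sharpening kernel,
    returning the valid (h-2, w-2) region, clipped to uint8."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.int32)
    for dy in range(3):
        for dx in range(3):
            out += KERNEL[dy, dx] * img[dy:dy + h - 2, dx:dx + w - 2].astype(np.int32)
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((5, 5), 100, dtype=np.uint8)
img[2, 2] = 120  # one slightly brighter pixel
print(sharpen(img)[1, 1])  # 200: the kernel amplifies the local contrast
```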
as a further scheme of the invention: the highlight processing in S4 is as follows: obtain the 50th, 80th, and 87th percentile values of the regional skin image; pixels above the 87th percentile value are treated as highlight pixels, and for each such pixel the original value × 0.95 is taken as the new pixel value; this selectively dims the highlight regions, reducing the lighting contrast of the whole region without affecting normally illuminated areas.
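The highlight dimming can be sketched as below. Reading the patent's "original value 0.95" as a multiplication by 0.95 is an assumption of this sketch, as is operating on a single grayscale channel:

```python
import numpy as np

def dim_highlights(gray, pct=87, factor=0.95):
    """Scale pixels above the pct-th percentile by `factor`,
    leaving normally lit pixels untouched."""
    thresh = np.percentile(gray, pct)
    out = gray.astype(np.float64)
    out[out > thresh] *= factor
    return out.astype(np.uint8)

gray = np.arange(100, dtype=np.uint8).reshape(10, 10)
dimmed = dim_highlights(gray)
print(gray.max(), dimmed.max())  # 99 94: only the brightest pixels change
```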
As a further scheme of the invention: the local binarization in S4 comprises binarizing the brightness within a 15×15 window centered on the current pixel to obtain a black-and-white image in which pores are black and other areas are white.
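A minimal sketch of the 15×15 local binarization follows. The patent does not specify the rule applied inside the window, so thresholding against the local mean is an assumption here; OpenCV's `adaptiveThreshold` with `ADAPTIVE_THRESH_MEAN_C` implements the same idea efficiently:

```python
import numpy as np

def local_binarize(gray, win=15):
    """Threshold each pixel against the mean of its win x win neighborhood;
    darker-than-local-mean pixels (pores) become black (0)."""
    h, w = gray.shape
    r = win // 2
    padded = np.pad(gray.astype(np.float64), r, mode="edge")
    out = np.empty((h, w), dtype=np.uint8)
    for y in range(h):
        for x in range(w):
            local_mean = padded[y:y + win, x:x + win].mean()
            out[y, x] = 0 if gray[y, x] < local_mean else 255
    return out

gray = np.full((20, 20), 200, dtype=np.uint8)
gray[10, 10] = 50  # a dark pore on bright skin
binary = local_binarize(gray)
print(binary[10, 10], binary[0, 0])  # 0 255
```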
As a further scheme of the invention: the contour extraction and screening in S4 comprise extracting the contours of the black parts of the black-and-white image and filtering them by area.
As a further scheme of the invention: the bounding box extraction and screening in S4 comprise obtaining the bounding box of each extracted contour and filtering the boxes in several ways, as follows:
First, filter by aspect ratio: limiting the aspect ratio to between 0.5 and 1.5 eliminates interference from elongated features such as wrinkles. Then filter by R-channel value: in theory, although pores appear slightly dark on the skin surface, their underlying color is still the skin color, not as dark as moles or hair; that is, the R value of a pore is generally not too low. In practice, however, lighting conditions and individual skin colors vary, so a fixed threshold is unsuitable. To make the result more stable, a statistical method with a dynamic value is used: the 20th-percentile value of the R channel of the facial skin is taken as the skin-color baseline in the R channel, and bounding boxes whose R-channel value is below this baseline are filtered out. In this way, interference from features such as moles and hair is eliminated.
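The two box filters can be sketched as follows. How the patent reduces a box's R values to a single number is not stated, so taking the mean over the box is an assumption of this sketch:

```python
import numpy as np

def filter_boxes(boxes, r_channel):
    """Keep boxes whose aspect ratio lies in [0.5, 1.5] and whose mean R
    value is at or above the 20th-percentile skin baseline."""
    baseline = np.percentile(r_channel, 20)
    kept = []
    for (x, y, w, h) in boxes:
        if not (0.5 <= w / h <= 1.5):
            continue  # elongated, wrinkle-like shape
        if r_channel[y:y + h, x:x + w].mean() < baseline:
            continue  # too dark in R: likely a mole or hair
        kept.append((x, y, w, h))
    return kept

r = np.full((50, 50), 180, dtype=np.uint8)
r[5:10, 5:10] = 40           # dark mole region
boxes = [(5, 5, 5, 5),       # dark -> rejected
         (20, 20, 4, 4),     # pore-like -> kept
         (30, 30, 12, 3)]    # elongated -> rejected
print(filter_boxes(boxes, r))  # [(20, 20, 4, 4)]
```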
The invention has the beneficial effects that:
1. The invention provides a systematic, robust, and practically deployable solution to the problem of pore detection from a full-face image. By splitting pore detection into four sub-problems, the method simplifies the task, so stable and reliable results can be obtained even without deep learning; the hue and brightness variations caused by illumination are mitigated by highlight processing and by using skin-value percentiles as dynamic thresholds, and semantic segmentation makes the localization of each facial part robust to the various angles of face images;
2. The invention uses the deep-learning BiSeNet algorithm for semantic segmentation to extract each facial part; through semantic segmentation the model learns the features of each part and makes a comprehensive judgment by fusing the semantic information of all parts of the whole face, reducing interference from lighting, background, skin color, and ethnicity, and making the method more stable and more accurate in practice than traditional image-processing schemes;
3. The invention adds two steps, blue channel extraction and an opening operation, to the pore detection process. Observation shows that facial pore features are most distinct in the blue channel, so the blue channel is extracted as the image basis for pore detection, allowing pores on the face surface to be detected more clearly; applying an opening operation to the binarization result makes the binarized contours smoother;
4. In the process of adjusting the face to a uniform width, the face is uniformly resized to 1350 pixels, ensuring the stability of the subsequent pore identification step. Because illumination across the face may be uneven, an ordinary global binarization scheme yields a very unstable result, tending to split the image into the lit side and the dark side of the face; the local binarization scheme adopted therefore ensures a stable binarization result.
Drawings
FIG. 1 is a schematic flow chart of a pore detection method according to the present invention;
FIG. 2 is a flow chart illustrating the definition of key regions according to the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1-2, a pore detection method based on a full-face image includes the following steps:
S1, face detection:
The face is detected and, after detection, resized to a uniform width.
S2, segmenting the human face:
In S2, a deep-learning BiSeNet algorithm performs semantic segmentation to extract each part of the human face; through semantic segmentation the model learns the features of each facial part and makes a comprehensive judgment by fusing the semantic information of all parts of the whole face, reducing interference from lighting, background, skin color, and ethnicity.
S3, key area extraction:
First, on the basis of S2, the black-and-white mask of each part is extracted; the maximum contour of each mask is then extracted and wrapped in a bounding box to obtain the position and size (width and height) of the bounding box for each part, i.e. the position information of each facial part. Four key regions are then defined from this position information: forehead, left cheek, right cheek and nose.
S4, pore detection:
After the key regions are extracted in S3, the interference of background information and large-scale facial structure has been removed. Pores are then detected within the S3 key regions by a combination of statistical and image-processing methods, in the following steps:
a. blue channel extraction;
b. image sharpening;
c. highlight processing;
d. local binarization;
e. opening operation;
f. contour extraction and screening;
g. bounding box extraction and screening.
According to the invention, two steps, blue channel extraction and an opening operation, are added to the pore detection process. Observation shows that facial pore features are most distinct in the blue channel, so the blue channel is extracted as the image basis for pore detection, allowing pores on the face surface to be detected more clearly; applying an opening operation to the binarization result makes the binarized contours smoother.
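A 3×3 morphological opening (erosion then dilation) can be sketched in numpy. OpenCV's `morphologyEx(..., cv2.MORPH_OPEN, ...)` is the usual implementation; this toy version shows the speck-removal and smoothing effect on a binary image with a white foreground (the patent's pores are black on white, so in practice the roles of min and max would be swapped or the image inverted):

```python
import numpy as np

def erode(img):
    """3x3 erosion: each pixel becomes the minimum of its neighborhood."""
    p = np.pad(img, 1, mode="edge")
    stacks = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)]
    return np.min(stacks, axis=0)

def dilate(img):
    """3x3 dilation: each pixel becomes the maximum of its neighborhood."""
    p = np.pad(img, 1, mode="edge")
    stacks = [p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(3) for dx in range(3)]
    return np.max(stacks, axis=0)

def opening(img):
    """Erosion followed by dilation: removes isolated specks, smooths contours."""
    return dilate(erode(img))

img = np.zeros((9, 9), dtype=np.uint8)
img[2:7, 2:7] = 255   # a solid white block survives opening
img[0, 8] = 255       # a lone speck is removed
opened = opening(img)
print(opened[4, 4], opened[0, 8])  # 255 0
```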
The S1 face detection is implemented with MTCNN; this mature face detection scheme eliminates the interference of background information as far as possible.
In the process of adjusting the face to a uniform width in the S1 face detection, the face is uniformly resized to 1350 pixels, which ensures the stability of the subsequent pore identification step.
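The uniform-width resize is simple arithmetic; a sketch (the 1350-pixel target is from the patent, while the function name and aspect-ratio-preserving height are illustrative assumptions):

```python
def target_size(w, h, target_w=1350):
    """Scale an image of size (w, h) so it is exactly target_w pixels wide,
    preserving the aspect ratio."""
    scale = target_w / w
    return target_w, round(h * scale)

print(target_size(900, 1200))  # (1350, 1800)
```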
The forehead, left cheek, right cheek and nose regions in S3 are respectively defined as follows:
forehead: x1 = face x1 + 10% × face w;
x2 = face x1 + face w - 10% × face w;
y1 = face y1 + 5% × face h;
y2 = (left eyebrow y1 + right eyebrow y1)/2 - 5% × face h;
left cheek: x1 = face x1 + 10% × face w;
x2 = nose x1 - 10% × face w;
y1 = left eye y2 + 5% × face h;
y2 = (nose y2 + upper lip y1)/2;
the right cheek and the nose are defined analogously and are omitted here;
where x1 denotes the start position of the part on the abscissa; x2 the end position on the abscissa; y1 the start position on the ordinate; y2 the end position on the ordinate; w the width of the part; and h the height of the part.
The image sharpening in S4 specifically includes: the method comprises the steps of (1) performing traversal calculation on skin of each region by self-defining a 3-by-3 filter to obtain sharpening processing of an image so as to enhance the contrast of pore characteristics, wherein the filter is [ [0, -1,0], [ -1,5, -1], [0, -1,0] ];
The highlight processing in S4 is as follows: obtain the 50th, 80th, and 87th percentile values of the regional skin image; pixels above the 87th percentile value are treated as highlight pixels, and for each such pixel the original value × 0.95 is taken as the new pixel value; this selectively dims the highlight regions, reducing the lighting contrast of the whole region without affecting normally illuminated areas.
The local binarization in S4 comprises binarizing the brightness within a 15×15 window centered on the current pixel to obtain a black-and-white image in which pores are black and other areas are white. Because illumination across the face may be uneven, an ordinary global binarization scheme yields a very unstable result, tending to split the image into the lit side and the dark side of the face; the local binarization scheme adopted here therefore ensures a stable binarization result.
The contour extraction and screening in S4 comprise extracting the contours of the black parts of the black-and-white image and filtering them by area.
The bounding box extraction and screening in S4 comprise obtaining the bounding box of each extracted contour and filtering the boxes in several ways, as follows:
First, filter by aspect ratio: limiting the aspect ratio to between 0.5 and 1.5 eliminates interference from elongated features such as wrinkles. Then filter by R-channel value: in theory, although pores appear slightly dark on the skin surface, their underlying color is still the skin color, not as dark as moles or hair; that is, the R value of a pore is generally not too low. In practice, however, lighting conditions and individual skin colors vary, so a fixed threshold is unsuitable. To make the result more stable, a statistical method with a dynamic value is used: the 20th-percentile value of the R channel of the facial skin is taken as the skin-color baseline in the R channel, and bounding boxes whose R-channel value is below this baseline are filtered out. In this way, interference from features such as moles and hair is eliminated.
In summary, the invention provides a systematic, robust, and practically deployable solution to the problem of pore detection from a full-face image. By splitting pore detection into four sub-problems, the method simplifies the task, so stable and reliable results can be obtained even without deep learning; the hue and brightness variations caused by illumination are mitigated by highlight processing and by using skin-value percentiles as dynamic thresholds, and semantic segmentation makes the localization of each facial part robust to the various angles of face images.
Finally, it should be noted that the above embodiments are intended only to illustrate the technical solutions of the invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described therein may still be modified, or some or all of their technical features equivalently replaced, without departing from the scope of the technical solutions of the embodiments of the invention.
Claims (9)
1. A pore detection method based on a full-face image, characterized by comprising the following steps:
S1, face detection: detect the face and, after detection, resize the face to a uniform width;
S2, face segmentation: a deep-learning BiSeNet algorithm performs semantic segmentation to extract each part of the human face; through semantic segmentation the model learns the features of each facial part and makes a comprehensive judgment by fusing the semantic information of all parts of the whole face, reducing interference from lighting, background, skin color and ethnicity information;
S3, key region extraction: on the basis of S2, extract the black-and-white mask of each part, then extract the maximum contour of each mask and wrap it in a bounding box to obtain the position and size of the bounding box for each part, i.e. the position information of each facial part; then define four key regions from this position information: forehead, left cheek, right cheek and nose;
S4, pore detection: after the key regions are extracted in S3, the interference of background information and large-scale facial structure has been removed; pores are then detected within the S3 key regions by a combination of statistical and image-processing methods, in the following steps: blue channel extraction; image sharpening; highlight processing; local binarization; opening operation; contour extraction and screening; bounding box extraction and screening.
2. The pore detection method based on the full-face image according to claim 1, characterized in that: the S1 face detection is implemented with MTCNN, and this mature face detection scheme eliminates the interference of background information.
3. The pore detection method based on the full-face image according to claim 1, characterized in that: when adjusting the face to a uniform width in the S1 face detection, the face is uniformly resized to 1350 pixels.
4. The pore detection method based on the full-face image according to claim 1, characterized in that the positions of the forehead, the left cheek, the right cheek and the nose in S3 are respectively as follows:
forehead: x1 = face x1 + 10% × face w;
x2 = face x1 + face w - 10% × face w;
y1 = face y1 + 5% × face h;
y2 = (left eyebrow y1 + right eyebrow y1)/2 - 5% × face h;
left cheek: x1 = face x1 + 10% × face w;
x2 = nose x1 - 10% × face w;
y1 = left eye y2 + 5% × face h;
y2 = (nose y2 + upper lip y1)/2;
the right cheek and the nose are defined analogously and are omitted here;
where x1 denotes the start position of the part on the abscissa; x2 the end position on the abscissa; y1 the start position on the ordinate; y2 the end position on the ordinate; w the width of the part; and h the height of the part.
5. The pore detection method based on the full-face image according to claim 1, characterized in that: the image sharpening in S4 specifically comprises traversing the skin of each region with a custom 3×3 filter to sharpen the image and enhance the contrast of pore features, the filter being [[0, -1, 0], [-1, 5, -1], [0, -1, 0]].
6. The pore detection method based on the full-face image according to claim 5, characterized in that the highlight processing in S4 is as follows: obtain the 50th, 80th, and 87th percentile values of the regional skin image; pixels above the 87th percentile value are treated as highlight pixels, and for each such pixel the original value × 0.95 is taken as the new pixel value; this selectively dims the highlight regions, reducing the lighting contrast of the whole region without affecting normally illuminated areas.
7. The pore detection method based on the full-face image according to claim 6, characterized in that the local binarization in S4 comprises binarizing the brightness within a 15×15 window centered on the current pixel to obtain a black-and-white image in which pores are black and other areas are white.
8. The pore detection method based on the full-face image according to claim 7, characterized in that the contour extraction and screening in S4 comprise extracting the contours of the black parts of the black-and-white image and filtering them by area.
9. The pore detection method based on the full-face image according to claim 8, characterized in that the bounding box extraction and screening in S4 comprise obtaining the bounding box of each extracted contour and filtering the boxes in several ways, as follows:
First, filter by aspect ratio: limiting the aspect ratio to between 0.5 and 1.5 eliminates interference from elongated features such as wrinkles. Then filter by R-channel value: although pores appear slightly dark on the skin surface, their underlying color is still the skin color, i.e. the R value of a pore is not too low; however, because lighting and individual skin colors vary, a fixed threshold is unsuitable. To make the result more stable, a statistical method with a dynamic value is used: the 20th-percentile value of the R channel of the facial skin is taken as the skin-color baseline in the R channel, and bounding boxes whose R-channel value is below this baseline are filtered out. In this way, interference from mole and hair features is eliminated.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110924995.0A CN113592851B (en) | 2021-08-12 | 2021-08-12 | Pore detection method based on full-face image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113592851A true CN113592851A (en) | 2021-11-02 |
CN113592851B CN113592851B (en) | 2023-06-20 |
Family
ID=78257682
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110924995.0A Active CN113592851B (en) | 2021-08-12 | 2021-08-12 | Pore detection method based on full-face image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113592851B (en) |
Citations (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008293325A (en) * | 2007-05-25 | 2008-12-04 | Noritsu Koki Co Ltd | Face image analysis system |
CN101339612A (en) * | 2008-08-19 | 2009-01-07 | 陈建峰 | Face contour checking and classification method |
WO2013098512A1 (en) * | 2011-12-26 | 2013-07-04 | Chanel Parfums Beaute | Method and device for detecting and quantifying cutaneous signs on an area of skin |
CN104299011A (en) * | 2014-10-13 | 2015-01-21 | 吴亮 | Skin type and skin problem identification and detection method based on facial image identification |
CN109844804A (en) * | 2017-08-24 | 2019-06-04 | 华为技术有限公司 | A kind of method, apparatus and terminal of image detection |
CN110147728A (en) * | 2019-04-15 | 2019-08-20 | 深圳壹账通智能科技有限公司 | Customer information analysis method, system, equipment and readable storage medium storing program for executing |
CN111832475A (en) * | 2020-07-10 | 2020-10-27 | 电子科技大学 | Face false detection screening method based on semantic features |
CN111862285A (en) * | 2020-07-10 | 2020-10-30 | 完美世界(北京)软件科技发展有限公司 | Method and device for rendering figure skin, storage medium and electronic device |
CN113160036A (en) * | 2021-04-19 | 2021-07-23 | 金科智融科技(珠海)有限公司 | Face changing method for image keeping face shape unchanged |
- 2021-08-12: Application CN202110924995.0A filed in China (CN); patent CN113592851B granted, status Active
Non-Patent Citations (2)
Title |
---|
段光奎 (Duan Guangkui), 西南交通大学出版社 (Southwest Jiaotong University Press), pages 192-193 *
段光奎 (Duan Guangkui): "Photoshop Fundamentals and Image Creativity Cases" (《PHOTOSHOP基础与图像创意案例》), 31 August 2018, 西南交通大学出版社 (Southwest Jiaotong University Press), page 193 *
Also Published As
Publication number | Publication date |
---|---|
CN113592851B (en) | 2023-06-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108932493B (en) | Facial skin quality evaluation method | |
CN103914699B (en) | A kind of method of the image enhancement of the automatic lip gloss based on color space | |
CA2678551C (en) | Method and apparatus for simulation of facial skin aging and de-aging | |
CN104299011A (en) | Skin type and skin problem identification and detection method based on facial image identification | |
CN109345480B (en) | Face automatic acne removing method based on image restoration model | |
CN113344836B (en) | Face image processing method and device, computer readable storage medium and terminal | |
CN101650782A (en) | Method for extracting front human face outline based on complexion model and shape constraining | |
CN110688962B (en) | Face image processing method, user equipment, storage medium and device | |
CN103714225A (en) | Information system with automatic make-up function and make-up method of information system | |
US20240020843A1 (en) | Method for detecting and segmenting the lip region | |
CN108710883A (en) | A kind of complete conspicuousness object detecting method using contour detecting | |
CN107194870A (en) | A kind of image scene reconstructing method based on conspicuousness object detection | |
JP4076777B2 (en) | Face area extraction device | |
CN113592851A (en) | Pore detection method based on full-face image | |
CN109583330A (en) | A kind of pore detection method for human face photo | |
CN113256673A (en) | Intelligent wrinkle removing system based on infrared laser | |
CN114663574A (en) | Three-dimensional face automatic modeling method, system and device based on single-view photo | |
KR20020085669A (en) | The Apparatus and Method for Abstracting Peculiarity of Two-Dimensional Image & The Apparatus and Method for Creating Three-Dimensional Image Using Them | |
CN114155569B (en) | Cosmetic progress detection method, device, equipment and storage medium | |
US20190347469A1 (en) | Method of improving image analysis | |
CN110458012A (en) | Multi-angle human face recognition method and device, storage medium and terminal | |
CN110293684A (en) | Dressing Method of printing, apparatus and system based on three-dimensional printing technology | |
JP2007299113A (en) | Hair coloring and makeup simulation system | |
CN114463814A (en) | Rapid certificate photo glasses detection method based on image processing | |
JP3578321B2 (en) | Image normalizer |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||