CN110890156A - Human face glossiness classification device, method and computer storage medium - Google Patents


Info

Publication number
CN110890156A
CN110890156A (application CN201811047851.6A)
Authority
CN
China
Prior art keywords
face
image
glossiness
gloss
skin color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811047851.6A
Other languages
Chinese (zh)
Inventor
张贯京
葛新科
谭敦
王海荣
谢伟
高伟明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Qianhai AnyCheck Information Technology Co Ltd
Shenzhen E Techco Information Technology Co Ltd
Original Assignee
Shenzhen Qianhai AnyCheck Information Technology Co Ltd
Shenzhen E Techco Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Qianhai AnyCheck Information Technology Co Ltd, Shenzhen E Techco Information Technology Co Ltd filed Critical Shenzhen Qianhai AnyCheck Information Technology Co Ltd
Priority to CN201811047851.6A priority Critical patent/CN110890156A/en
Priority to PCT/CN2019/104985 priority patent/WO2020052525A1/en
Publication of CN110890156A publication Critical patent/CN110890156A/en
Pending legal-status Critical Current

Classifications

    • G — PHYSICS
    • G06F 18/00 — Pattern recognition
    • G06T 7/00 — Image analysis
    • G06V 10/30 — Image preprocessing; noise filtering
    • G06V 10/40 — Extraction of image or video features
    • G06V 10/56 — Extraction of features relating to colour
    • G06V 40/172 — Human faces; classification, e.g. identification
    • G16H 50/30 — ICT for medical diagnosis; calculating health indices; individual health risk assessment

Abstract

The invention provides a human face glossiness classification device and method and a computer storage medium. The method comprises the following steps: segmenting the face image and extracting a binary image of the face skin color area; removing the eye and lip regions from that binary image; converting the face image into the HSV image space, extracting the V-channel feature vector, and multiplying it by the binary image with the eye and lip regions removed to obtain a V-channel gray image without the eye and lip regions; locating the face gloss area in the V-channel gray image based on an OTSU binary cycle method; screening the connected areas of the face gloss area and extracting features from the two largest white connected areas to obtain the feature vector of the face image; and performing face glossiness classification training on the feature vectors of all face images with an SVM (support vector machine) classifier to generate a face glossiness classifier. The method and device can improve the accuracy of face glossiness classification.

Description

Human face glossiness classification device, method and computer storage medium
Technical Field
The invention relates to the technical field of traditional Chinese medicine face image processing, in particular to a face glossiness classification device and method and a computer storage medium applied to traditional Chinese medicine face diagnosis.
Background
Inspection of the face is an important component of traditional Chinese medicine diagnosis. The "Su Wen · Mai Yao Jing Wei Lun" (Plain Questions: On the Essentials and Subtleties of the Pulse) observes that the brightness of the complexion and the five colors are the outward flowering of qi: the essence of qi and blood expresses itself in different colors, and the gloss of the face helps traditional Chinese medicine assess the essential qi of the viscera, which plays an important role in estimating the state of an illness. Traditional Chinese medicine diagnosis therefore needs to be carried out with image recognition technology, and how to judge the gloss of the face with image recognition has become a key point of traditional Chinese medicine face inspection.
As for applying computer vision theory and image recognition technology to traditional Chinese medicine face diagnosis, classification and diagnostic analysis are currently performed mainly on complexion and lip color, and few algorithms have been proposed for analyzing facial glossiness in traditional Chinese medicine inspection diagnosis. Since facial gloss is a main component of "spirit" assessed in traditional Chinese medicine inspection, a new facial glossiness analysis method is needed that can effectively judge facial gloss, improve the robustness of face glossiness classification, and thereby improve its accuracy.
Disclosure of Invention
The invention mainly aims to provide a human face glossiness classification device, method and computer storage medium which are applied to traditional Chinese medicine face diagnosis and can improve the robustness of face glossiness classification, thereby improving its accuracy.
To achieve the above object, the present invention provides a human face glossiness classification apparatus, comprising a processor adapted to implement various computer program instructions and a memory adapted to store a plurality of computer program instructions, the computer program instructions being loaded by the processor and executing the steps of: receiving an input face image; segmenting the face image based on the elliptical clustering skin color model and extracting to obtain a binary image of a face skin color area; removing eye lip regions from the binary image of the face skin color region based on a projection method; converting the face image into an HSV image space, extracting a V-channel characteristic vector, and multiplying the V-channel characteristic vector by the binary image with the eye lip region removed to obtain a V-channel gray image with the eye lip region removed; positioning a face gloss area in the V-channel gray level image based on an OTSU binary cycle method; screening connected areas of white areas represented by the face gloss areas, and screening out the largest two white connected areas to perform feature extraction to obtain feature vectors of the face image; and carrying out face glossiness classification training on the feature vectors of all face images by using an SVM (support vector machine) classifier to generate a final face glossiness classifier and storing the final face glossiness classifier in a memory.
Further, the step of segmenting the face image based on the elliptical cluster skin color model and extracting a binary image of the face skin color region comprises the following steps: converting the three RGB channels of the face image into the YCbCr three-dimensional color space; converting the YCbCr three-dimensional color space into the YCb'Cr' three-dimensional coordinate space; projecting the YCb'Cr' three-dimensional coordinate space onto the Cb'-Cr' two-dimensional subspace; and judging approximately, in the Cb'-Cr' two-dimensional subspace, that a point belongs to the face skin color region if its coordinates fall inside an ellipse.
Preferably, the analytical expression of the ellipse is as follows:

(x - ecx)^2 / a^2 + (y - ecy)^2 / b^2 ≤ 1

wherein

x = cosθ · (Cb' - cx) + sinθ · (Cr' - cy)
y = -sinθ · (Cb' - cx) + cosθ · (Cr' - cy)

wherein the parameters of the analytical expression of the ellipse are defined as follows: cx = 109.38, cy = 152.02, θ = 2.53 rad, ecx = 1.60, ecy = 2.41, a = 25.39 and b = 14.03.
Further, the step of locating the face gloss area in the V-channel gray-scale image based on the OTSU binary cycle method comprises the following steps: step 1: binarizing the gray part of the V-channel gray image based on OTSU to obtain a binarized image of the V-channel gray image; step 2: multiplying the V-channel gray image by its binarized image to remove the matte skin color area and obtain the remaining V-channel gray image; step 3: counting the ratio of the area of the remaining V-channel gray image to the area of the input V-channel gray image; step 4: judging whether the ratio is greater than a set empirical threshold; if so, repeating steps 1 to 4; otherwise, ending the loop.
On the other hand, the invention also provides a face glossiness classification method, which is applied to the face glossiness classification device and comprises the following steps: receiving an input face image; segmenting the face image based on the elliptical clustering skin color model and extracting to obtain a binary image of a face skin color area; removing eye lip regions from the binary image of the face skin color region based on a projection method; converting the face image into an HSV image space, extracting a V-channel characteristic vector, and multiplying the V-channel characteristic vector by the binary image with the eye lip region removed to obtain a V-channel gray image with the eye lip region removed; positioning a face gloss area in the V-channel gray level image based on an OTSU binary cycle method; screening connected areas of white areas represented by the face gloss areas, and screening out the largest two white connected areas to perform feature extraction to obtain feature vectors of the face image; and carrying out face glossiness classification training on the feature vectors of all face images by using an SVM (support vector machine) classifier to generate a final face glossiness classifier and storing the final face glossiness classifier in a memory.
Further, the step of receiving the input face image comprises: receiving, through the camera unit, a face image captured of the face of the subject, or reading a pre-stored face image from the memory.
Further, the step of segmenting the face image based on the elliptical cluster skin color model and extracting a binary image of the face skin color region comprises the following steps: converting the three RGB channels of the face image into the YCbCr three-dimensional color space; converting the YCbCr three-dimensional color space into the YCb'Cr' three-dimensional coordinate space; projecting the YCb'Cr' three-dimensional coordinate space onto the Cb'-Cr' two-dimensional subspace; and judging approximately, in the Cb'-Cr' two-dimensional subspace, that a point belongs to the face skin color region if its coordinates fall inside an ellipse.
Preferably, the analytical expression of the ellipse is as follows:

(x - ecx)^2 / a^2 + (y - ecy)^2 / b^2 ≤ 1

wherein

x = cosθ · (Cb' - cx) + sinθ · (Cr' - cy)
y = -sinθ · (Cb' - cx) + cosθ · (Cr' - cy)

wherein the parameters of the analytical expression of the ellipse are defined as follows: cx = 109.38, cy = 152.02, θ = 2.53 rad, ecx = 1.60, ecy = 2.41, a = 25.39 and b = 14.03.
Further, the step of locating the face gloss area in the V-channel gray-scale image based on the OTSU binary cycle method comprises the following steps: step 1: binarizing the gray part of the V-channel gray image based on OTSU to obtain a binarized image of the V-channel gray image; step 2: multiplying the V-channel gray image by its binarized image to remove the matte skin color area and obtain the remaining V-channel gray image; step 3: counting the ratio of the area of the remaining V-channel gray image to the area of the input V-channel gray image; step 4: judging whether the ratio is greater than a set empirical threshold; if so, repeating steps 1 to 4; otherwise, ending the loop.
In another aspect, the present invention further provides a computer readable storage medium storing a plurality of computer program instructions, wherein the computer program instructions are loaded by a processor of a computer device and execute the method steps of the face glossiness classification method.
Compared with the prior art, the human face glossiness classification device, method and computer storage medium of the invention can be applied to traditional Chinese medicine face diagnosis. Unlike existing image-based facial classification algorithms, they can effectively analyze facial glossiness: a classifier for distinguishing face gloss is trained on a large number of face samples, which improves the robustness of face glossiness classification and thereby its accuracy.
Drawings
FIG. 1 is a block diagram of a preferred embodiment of the face gloss classification apparatus of the present invention;
FIG. 2 is a flowchart of a method of a preferred embodiment of the face gloss classification method of the present invention;
FIG. 3 is a detailed sub-flowchart of step S25 in FIG. 2.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
To further explain the technical means and effects of the present invention adopted to achieve the predetermined objects, the following detailed description of the embodiments, structures, features and effects of the present invention will be given with reference to the accompanying drawings and preferred embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Referring to FIG. 1, FIG. 1 is a block diagram of a preferred embodiment of the face glossiness classification apparatus according to the present invention. In this embodiment, the face glossiness classification device 1 is installed with a face glossiness classification system 10, and may be a computer device with data processing and image processing functions, such as a personal computer, a workstation, a traditional Chinese medicine face imaging apparatus or a traditional Chinese medicine four-diagnosis apparatus on which the system 10 is installed. In this embodiment, the face glossiness classification device 1 includes, but is not limited to, the face glossiness classification system 10, a camera unit 11, a memory 12 adapted to store a plurality of computer program instructions, and a processor 13 executing various computer program instructions. The camera unit 11 is an image input device such as a high-definition camera, used for inputting face images into the device 1. The memory 12 may be a read-only memory (ROM), a random access memory (RAM), an electrically erasable programmable memory (EEPROM), a FLASH memory, a magnetic disk or an optical disk. The processor 13 is a central processing unit (CPU), a microcontroller (MCU), a data processing chip, or an information processing unit with data processing functions. It should be noted that the gloss classification of the face in the present invention refers to classifying a face into two cases, glossy and non-glossy, rather than assigning a specific gloss level.
In the present embodiment, the facial glossiness classification system 10 is composed of program modules composed of a plurality of computer program instructions, including but not limited to a facial image input module 101, a facial image processing module 102, a glossy region location module 103, a glossy feature extraction module 104, and a facial glossiness classification module 105. The module referred to in the present invention refers to a series of computer program instruction segments capable of being executed by the processor 13 of the human face glossiness classification apparatus 1 and performing a fixed function, which are stored in the memory 12, and the specific function of each module is specifically described below with reference to fig. 2 and 3.
Referring to fig. 2, it is a flowchart of a preferred embodiment of the gloss classification method for a human face according to the present invention. In this embodiment, the various method steps of the face glossiness classification method are implemented by a computer software program stored in a computer-readable storage medium (e.g., the memory 12) in the form of computer program instructions, and the computer-readable storage medium may include: read-only memory, random access memory, magnetic or optical disk, etc., which can be loaded by a processor (e.g., the processor 13) and which performs the following steps S21 through S28.
Step S21, receiving the input face image. Specifically, the face image input module 101 receives a clear face image of the subject's face captured by the camera unit 11 (e.g., a high-definition camera), or reads a pre-stored face image from the memory 12; the face image is a color RGB image composed of the three channels R, G and B.
Step S22, segmenting the face image based on the elliptical cluster skin color model and extracting a binary image of the face skin color area. Specifically, the face image processing module 102 segments the face skin color region based on an elliptical cluster skin color model (the elliptical skin color model proposed by Hsu, Abdel-Mottaleb and Anil K. Jain): the three RGB channels of the face image are converted into the YCbCr three-dimensional color space, the YCbCr space is converted into the YCb'Cr' three-dimensional coordinate space, the YCb'Cr' space is projected onto the Cb'-Cr' two-dimensional subspace, and a point is judged to belong to the face skin color region if its coordinates fall inside an ellipse in the Cb'-Cr' subspace, where the analytical expression of the ellipse is as follows:

(x - ecx)^2 / a^2 + (y - ecy)^2 / b^2 ≤ 1

wherein

x = cosθ · (Cb' - cx) + sinθ · (Cr' - cy)
y = -sinθ · (Cb' - cx) + cosθ · (Cr' - cy)

wherein the parameters of the analytical expression of the ellipse are defined as follows: cx = 109.38, cy = 152.02, θ = 2.53 rad, ecx = 1.60, ecy = 2.41, a = 25.39 and b = 14.03. These parameter values are empirical values defined according to empirical characteristics of the face; in other embodiments, they may be defined according to the specific characteristics of the faces concerned.
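The ellipse membership test of step S22 can be sketched directly from the parameters above. A minimal NumPy version follows; the function name and the "inside means skin" convention with ≤ 1 are assumptions, not part of the patent:

```python
import numpy as np

# Parameters of the elliptical skin-color model, as listed above
CX, CY = 109.38, 152.02   # shift applied to (Cb', Cr') before rotation
THETA = 2.53              # rotation angle in radians
ECX, ECY = 1.60, 2.41     # ellipse center in the rotated frame
A, B = 25.39, 14.03       # semi-axes

def is_skin(cb, cr):
    """True if the (Cb', Cr') point falls inside the rotated ellipse."""
    c, s = np.cos(THETA), np.sin(THETA)
    x = c * (cb - CX) + s * (cr - CY)     # rotate into the ellipse frame
    y = -s * (cb - CX) + c * (cr - CY)
    return (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2 <= 1.0
```

Applied per pixel over the Cb'-Cr' plane, this yields the binary skin color mask.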
Step S23, removing the eye and lip regions from the binary image of the face skin color area based on a projection method. In this embodiment, the face image processing module 102 first removes the background other than the face skin color to obtain a binary image, then removes the eye and lip regions from it. Because the gloss areas of a face are generally concentrated on the forehead, cheeks, nose and chin, the face image processing module 102 projects, along the height direction, the pixel counts of the non-skin-color areas in the binary face image obtained by skin color segmentation; this generally yields two maximum peaks, located at the eye/eyebrow band and the lip band respectively, so that the eye area and the lip area can be removed, giving the binary image with the eye and lip regions removed.
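The projection idea can be sketched as below. This is illustrative only: it returns the two single rows with the strongest non-skin response, whereas a real implementation would locate peak bands and erase whole row ranges:

```python
import numpy as np

def eye_lip_rows(skin_mask):
    """Project non-skin pixels along the image height; inside a face
    bounding box the two largest peaks normally fall on the eye/brow
    band and the lip band."""
    profile = (~skin_mask.astype(bool)).sum(axis=1)  # non-skin count per row
    top_two = np.argsort(profile)[-2:]               # two strongest peak rows
    return sorted(int(r) for r in top_two)
```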
Step S24, converting the face image into the HSV image space, extracting the V-channel feature vector, and multiplying it by the binary image with the eye and lip regions removed to obtain the V-channel gray image without the eye and lip regions. In this embodiment, the glossy part of a face generally arises from reflection of oil on otherwise normal skin, so its lightness differs noticeably from normal skin color; H, S and V in the HSV color space represent hue, saturation and lightness respectively. To better analyze the gloss area, the face image processing module 102 analyzes the V channel, which best distinguishes skin lightness in the HSV color space, extracts its feature information into a V-channel feature vector, and multiplies this vector by the binary image with the eye and lip regions removed, thereby removing the background and the eye and lip parts and obtaining a V-channel gray image covering the forehead, cheeks, nose tip and chin.
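Since the V channel of HSV is simply the per-pixel maximum of R, G and B, the extraction and masking of step S24 can be sketched without an image library (a simplified illustration; the patent does not prescribe an implementation):

```python
import numpy as np

def v_channel(rgb):
    """HSV value channel: per-pixel max over the R, G, B channels (0-255)."""
    return rgb.max(axis=2)

def masked_v(rgb, mask):
    """Zero out everything outside the eye/lip-removed skin mask."""
    return v_channel(rgb) * mask
```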
Step S25, locating the face gloss area in the V-channel gray image based on the OTSU binary cycle method. In this embodiment, the face gloss area appears as a white area. OTSU (Otsu's method) is a maximum between-class variance algorithm for choosing the binarization threshold of an image. To effectively locate and analyze the glossy skin color region of the face, the gloss region locating module 103 locates the face gloss region in the V-channel gray image by cycling OTSU binarization; referring to FIG. 3, which is a detailed sub-flowchart of step S25 in FIG. 2, the specific steps are as follows:
step S251, binarizing the gray part of the V-channel gray image based on the OTSU to obtain a binarized image of the V-channel gray image;
step S252, multiplying the V-channel gray image by its binarized image to remove the matte skin color area and obtain the remaining V-channel gray image;
step S253, counting the ratio of the area of the residual V-channel gray level image to the area of the input V-channel gray level image;
step S254, determining whether the ratio is greater than a set empirical threshold; if the ratio is greater than the set empirical threshold, loop from step S251 to step S254; if the ratio is less than or equal to the set empirical threshold, the loop step is ended.
In this embodiment, the gloss region locating module 103 binarizes the V-channel gray image based on OTSU, removes the dark region in each cycle, and repeats the OTSU binarization on the remaining gray image until the white region occupies no more than a set empirical threshold of the area of the whole V-channel gray image; for example, the empirical threshold may be set to 20%, or to another proportion according to the required accuracy of face gloss classification.
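The cycle of steps S251-S254 can be sketched in pure NumPy. The 20% default and the extra termination guard (stopping when no pixel is removed) are assumptions consistent with the description:

```python
import numpy as np

def otsu_threshold(values):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    hist, _ = np.histogram(values, bins=256, range=(0, 256))
    total = hist.sum()
    levels = np.arange(256)
    w_bg = np.cumsum(hist)                          # pixels at or below level t
    w_fg = total - w_bg
    sum_bg = np.cumsum(hist * levels)
    mean_bg = sum_bg / np.maximum(w_bg, 1)
    mean_fg = (sum_bg[-1] - sum_bg) / np.maximum(w_fg, 1)
    between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
    return int(np.argmax(between))

def locate_gloss(v_gray, face_mask, ratio=0.20):
    """Re-threshold the remaining bright pixels until they cover at most
    `ratio` of the original face area; the survivors are the gloss region."""
    face_area = int(face_mask.sum())
    current = face_mask.astype(bool)
    while True:
        t = otsu_threshold(v_gray[current])
        bright = current & (v_gray > t)
        if bright.sum() <= ratio * face_area or bright.sum() == current.sum():
            return bright
        current = bright
```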
Step S26, performing connected-region screening on the white regions representing the face gloss area, and extracting features from the two largest white connected areas to obtain the feature vector of the face image. In this embodiment, the gloss feature extraction module 104 performs connected-region analysis on the white areas of the obtained face gloss region and selects the two white connected regions with the largest area for feature extraction. For each connected region, the components of the feature vector are: the R, G and B mean values of the region; the difference between the region's V-channel gray mean and the V-channel gray mean of the face with the background and the eye and lip regions removed; the area of the region; and the ratio of the region's area to the area of the whole face region. Each connected region thus yields a 6-dimensional feature vector, giving a 12-dimensional feature vector in total.
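The 6-dimensional per-region vector described above can be sketched as follows; the function name and argument layout are illustrative, since the patent fixes only the components:

```python
import numpy as np

def region_features(rgb, v_gray, region, face):
    """6-D feature vector of one white connected region: R/G/B means,
    V-mean difference against the whole retained face area, region area,
    and the region-to-face area ratio."""
    r, g, b = (rgb[..., i][region].mean() for i in range(3))
    v_diff = v_gray[region].mean() - v_gray[face].mean()
    area = float(region.sum())
    return np.array([r, g, b, v_diff, area, area / face.sum()])
```

Concatenating the vectors of the two largest regions gives the 12-D descriptor of one face image.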
Step S27, judging whether the number of input face images has reached the predetermined number of faces. In this embodiment, to improve the accuracy of face glossiness classification, sample training must use face images of a large number of different subjects, so the face glossiness classification module 105 judges whether the input face images have reached the predetermined number; if not, the flow returns to step S21, and the camera unit 11 captures a clear face image of the next subject, or the face image of the next subject is read from the memory 12. If the predetermined number of faces has been reached, the flow proceeds to step S28.
Step S28, performing face glossiness classification training on the feature vectors of all input face images using an SVM classifier to generate the final face glossiness classifier, which is stored in the memory 12. In this embodiment, the face glossiness classification module 105 obtains the feature vectors of all acquired face images through the feature extraction algorithm above, then performs supervised SVM classifier training on all feature vectors together with pre-classified glossy and matte face skin color samples to obtain the final face glossiness classifier, which is stored in the memory 12 for subsequent face gloss classification analysis. When a face image is input into the trained classifier, it distinguishes the two cases of a glossy face and a non-glossy face, rather than outputting a specific gloss level. When a doctor performs traditional Chinese medicine face diagnosis on a patient and needs to analyze the patient's facial gloss, the acquired face image is input, the trained classifier judges the face gloss, and the glossy or non-glossy result is output, thereby supporting the traditional Chinese medicine diagnosis of the patient's visceral essence, improving the diagnostic result, and assisting the doctor's traditional Chinese medicine inspection of the patient's health condition.
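Training in step S28 follows the usual supervised SVM recipe. A hedged sketch on synthetic 12-D vectors is below; the library (scikit-learn), the RBF kernel, and the placeholder data are all assumptions — the patent specifies only that an SVM classifier is trained:

```python
import numpy as np
from sklearn.svm import SVC  # assumed library; the patent names only "SVM"

rng = np.random.default_rng(0)
glossy = rng.normal(loc=1.0, scale=0.1, size=(40, 12))   # stand-in glossy faces
matte = rng.normal(loc=-1.0, scale=0.1, size=(40, 12))   # stand-in matte faces
X = np.vstack([glossy, matte])
y = np.array([1] * 40 + [0] * 40)   # 1 = glossy face, 0 = non-glossy face

clf = SVC(kernel="rbf").fit(X, y)   # the trained face-gloss classifier
```

In use, the 12-D vector of a new face image is passed to `clf.predict`, giving the glossy/non-glossy decision.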
The present invention further provides a computer readable storage medium storing a plurality of computer program instructions, which are loaded by a processor of a computer apparatus to perform the steps of the face glossiness classification method of the present invention. Those skilled in the art will understand that all or part of the steps of the methods in the above embodiments may be implemented by program instructions, and the program may be stored in a computer-readable storage medium, which may include: read-only memory, random access memory, magnetic disk, optical disk, and the like.
Compared with existing image-based facial classification algorithms, the face glossiness classification device, method and computer storage medium of the present invention can be applied to traditional Chinese medicine diagnosis and can effectively analyze facial glossiness; a classifier for judging face gloss is trained on a large number of face samples, improving the robustness of face glossiness classification and thereby its accuracy.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A face glossiness classification device, comprising a processor adapted to execute computer program instructions and a memory adapted to store a plurality of computer program instructions, wherein the computer program instructions are loaded by the processor to perform the following steps:
receiving an input face image;
segmenting the face image based on an elliptical clustering skin color model and extracting a binary image of the face skin color region;
removing the eye and lip regions from the binary image of the face skin color region based on a projection method;
converting the face image into the HSV color space, extracting a V-channel feature vector, and multiplying it by the binary image with the eye and lip regions removed to obtain a V-channel gray image without the eye and lip regions;
locating the face gloss region in the V-channel gray image based on an iterative OTSU binarization method;
screening the white connected regions representing the face gloss region, selecting the two largest white connected regions, and extracting features from them to obtain the feature vector of the face image;
performing face glossiness classification training on the feature vectors of all face images with a support vector machine (SVM) classifier to generate a final face glossiness classifier, and storing it in the memory.
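The projection-based eye/lip removal step in the claim above can be illustrated with a minimal sketch. The idea is that eyes and lips leave holes in the skin-color mask, so rows containing them show dips in the horizontal projection (per-row count of skin pixels). The dip criterion and the `dip_ratio` value below are illustrative assumptions, not the patent's exact rule:

```python
# Hedged sketch of the projection method: rows whose skin-pixel count dips
# well below the peak projection are treated as eye/lip bands and zeroed out.
# The masks are plain lists of lists of 0/1 for illustration.

def horizontal_projection(mask):
    """Per-row count of foreground (1) pixels in a binary mask."""
    return [sum(row) for row in mask]

def remove_dip_rows(mask, dip_ratio=0.6):
    """Zero out rows whose projection dips below dip_ratio * peak projection."""
    proj = horizontal_projection(mask)
    peak = max(proj) if proj else 0
    out = []
    for row, p in zip(mask, proj):
        if peak and p < dip_ratio * peak:
            out.append([0] * len(row))   # assumed eye/lip band: remove
        else:
            out.append(list(row))
    return out
```

A practical implementation would restrict the dips to plausible eye and lip row bands of the face; this sketch only shows the projection-and-threshold mechanism.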
2. The face glossiness classification device according to claim 1, wherein the step of segmenting the face image based on the elliptical clustering skin color model and extracting a binary image of the face skin color region comprises the following steps:
converting RGB three channels of the face image into a YCbCr three-dimensional color space;
converting the YCbCr three-dimensional color space into a YCb 'Cr' three-dimensional coordinate space;
projecting the YCb 'Cr' three-dimensional coordinate space to a Cb '-Cr' two-dimensional subspace;
approximately judging whether a pixel belongs to the face skin color region according to whether its coordinate point falls inside an ellipse in the Cb'-Cr' two-dimensional subspace.
3. The face gloss classification apparatus according to claim 2, wherein the analytical expression of the ellipse is as follows:
(x - ecx)^2 / a^2 + (y - ecy)^2 / b^2 = 1

where

x = cos(θ) · (Cb' - cx) + sin(θ) · (Cr' - cy)
y = -sin(θ) · (Cb' - cx) + cos(θ) · (Cr' - cy)
wherein the parameters of the analytical expression of the ellipse are defined as follows: cx = 109.38, cy = 152.02, θ = 2.53, ecx = 1.6, ecy = 2.41, a = 25.39, and b = 14.03.
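With these parameters, the ellipse membership test of claims 2-3 can be sketched directly. The values match the widely used Hsu et al. CbCr elliptical skin-color model; the function name below is a hypothetical illustration, not the patent's own code:

```python
import math

# Elliptical skin-color model parameters as given in claim 3.
CX, CY = 109.38, 152.02   # cluster center in the Cb'-Cr' plane
THETA = 2.53              # rotation angle (radians)
ECX, ECY = 1.6, 2.41      # ellipse center in the rotated coordinates
A, B = 25.39, 14.03       # semi-major and semi-minor axes

def is_skin(cb, cr):
    """Return True if the chroma pair (Cb', Cr') lies inside the skin ellipse."""
    # Translate to the cluster center and rotate by theta.
    x = math.cos(THETA) * (cb - CX) + math.sin(THETA) * (cr - CY)
    y = -math.sin(THETA) * (cb - CX) + math.cos(THETA) * (cr - CY)
    # Standard ellipse inclusion test against the analytical expression.
    return (x - ECX) ** 2 / A ** 2 + (y - ECY) ** 2 / B ** 2 <= 1.0
```

Applying `is_skin` to every pixel's chroma yields the binary skin-color image of claim 1; pixels near the cluster center (around Cb ≈ 110, Cr ≈ 152) fall inside the ellipse, while strongly non-skin chroma values fall outside.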
4. The face glossiness classification device according to claim 1, wherein the step of locating the face gloss region in the V-channel gray image based on the iterative OTSU binarization method comprises the following steps:
step 1: binarizing the gray part of the V-channel gray image with the OTSU method to obtain a binarized image of the V-channel gray image;
step 2: multiplying the V-channel gray image by its binarized image to remove the matte skin color region and obtain the remaining V-channel gray image;
step 3: computing the ratio of the area of the remaining V-channel gray image to the area of the input V-channel gray image;
step 4: judging whether the ratio is greater than a set empirical threshold;
if the ratio is greater than the set empirical threshold, repeating steps 1 to 4; if the ratio is less than or equal to the set empirical threshold, ending the loop.
5. A face glossiness classification method, applied to a face glossiness classification device, characterized by comprising the following steps:
receiving an input face image;
segmenting the face image based on an elliptical clustering skin color model and extracting a binary image of the face skin color region;
removing the eye and lip regions from the binary image of the face skin color region based on a projection method;
converting the face image into the HSV color space, extracting a V-channel feature vector, and multiplying it by the binary image with the eye and lip regions removed to obtain a V-channel gray image without the eye and lip regions;
locating the face gloss region in the V-channel gray image based on an iterative OTSU binarization method;
screening the white connected regions representing the face gloss region, selecting the two largest white connected regions, and extracting features from them to obtain the feature vector of the face image;
performing face glossiness classification training on the feature vectors of all face images with a support vector machine (SVM) classifier to generate a final face glossiness classifier, and storing it in the memory.
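The connected-region screening step of the method above can be sketched with a simple 4-connected flood fill. Keeping the two largest white regions matches the claim; the list-of-lists image representation is an illustrative choice:

```python
# Hedged sketch of connected-region screening: label 4-connected white
# regions in a binary image and keep the two largest.

def label_regions(img):
    """Return regions as lists of (row, col) white pixels, 4-connectivity."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if img[r][c] and not seen[r][c]:
                stack, region = [(r, c)], []
                seen[r][c] = True
                while stack:
                    y, x = stack.pop()
                    region.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if 0 <= ny < h and 0 <= nx < w \
                                and img[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                regions.append(region)
    return regions

def two_largest_regions(img):
    """Keep only the two largest white connected regions."""
    return sorted(label_regions(img), key=len, reverse=True)[:2]
```

In the claimed pipeline, features (for example, area or mean brightness of each kept region) would then be extracted from these two regions to form the feature vector.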
6. The face glossiness classification method according to claim 5, wherein the step of receiving an input face image comprises: receiving a face image of the person under test captured by a camera unit, or reading a pre-stored face image from a memory.
7. The face glossiness classification method according to claim 5, wherein the step of segmenting the face image based on the elliptical clustering skin color model and extracting a binary image of the face skin color region comprises the following steps:
converting RGB three channels of the face image into a YCbCr three-dimensional color space;
converting the YCbCr three-dimensional color space into a YCb 'Cr' three-dimensional coordinate space;
projecting the YCb 'Cr' three-dimensional coordinate space to a Cb '-Cr' two-dimensional subspace;
approximately judging whether a pixel belongs to the face skin color region according to whether its coordinate point falls inside an ellipse in the Cb'-Cr' two-dimensional subspace.
8. The method of classifying facial gloss of a human face according to claim 7, wherein said analytical expression of ellipse is as follows:
(x - ecx)^2 / a^2 + (y - ecy)^2 / b^2 = 1

where

x = cos(θ) · (Cb' - cx) + sin(θ) · (Cr' - cy)
y = -sin(θ) · (Cb' - cx) + cos(θ) · (Cr' - cy)
wherein the parameters of the analytical expression of the ellipse are defined as follows: cx = 109.38, cy = 152.02, θ = 2.53, ecx = 1.6, ecy = 2.41, a = 25.39, and b = 14.03.
9. The face glossiness classification method according to claim 5, wherein the step of locating the face gloss region in the V-channel gray image based on the iterative OTSU binarization method comprises the following steps:
step 1: binarizing the gray part of the V-channel gray image with the OTSU method to obtain a binarized image of the V-channel gray image;
step 2: multiplying the V-channel gray image by its binarized image to remove the matte skin color region and obtain the remaining V-channel gray image;
step 3: computing the ratio of the area of the remaining V-channel gray image to the area of the input V-channel gray image;
step 4: judging whether the ratio is greater than a set empirical threshold;
if the ratio is greater than the set empirical threshold, repeating steps 1 to 4; if the ratio is less than or equal to the set empirical threshold, ending the loop.
10. A computer-readable storage medium storing a plurality of computer program instructions, wherein the computer program instructions are loaded by a processor of a computer apparatus to perform the steps of the face glossiness classification method according to any one of claims 5 to 9.
CN201811047851.6A 2018-09-10 2018-09-10 Human face glossiness classification device, method and computer storage medium Pending CN110890156A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201811047851.6A CN110890156A (en) 2018-09-10 2018-09-10 Human face glossiness classification device, method and computer storage medium
PCT/CN2019/104985 WO2020052525A1 (en) 2018-09-10 2019-09-09 Facial glossiness classification device and method, and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811047851.6A CN110890156A (en) 2018-09-10 2018-09-10 Human face glossiness classification device, method and computer storage medium

Publications (1)

Publication Number Publication Date
CN110890156A true CN110890156A (en) 2020-03-17

Family

ID=69744868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811047851.6A Pending CN110890156A (en) 2018-09-10 2018-09-10 Human face glossiness classification device, method and computer storage medium

Country Status (2)

Country Link
CN (1) CN110890156A (en)
WO (1) WO2020052525A1 (en)

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10281267B2 (en) * 2014-11-10 2019-05-07 Shiseido Company, Ltd. Method for evaluating flow of skin, method for examining skin glow improvers, and skin glow improver
CN104573723B (en) * 2015-01-08 2017-11-14 上海中医药大学 A kind of feature extraction and classifying method and system of " god " based on tcm inspection
CN105426816A (en) * 2015-10-29 2016-03-23 深圳怡化电脑股份有限公司 Method and device of processing face images
CN105868735B (en) * 2016-04-25 2019-03-26 南京大学 A kind of preprocess method of track human faces and wisdom health monitor system based on video
CN108451500B (en) * 2017-12-27 2021-01-12 浙江大学台州研究院 A face colour detects and face type identification equipment for traditional chinese medical science inspection diagnosis

Also Published As

Publication number Publication date
WO2020052525A1 (en) 2020-03-19

Similar Documents

Publication Publication Date Title
KR101853006B1 (en) Recognition of Face through Detecting Nose in Depth Image
EP2737434B1 (en) Gait recognition methods and systems
Boehnen et al. A fast multi-modal approach to facial feature detection
CN110866932A (en) Multi-channel tongue edge detection device and method and storage medium
Monwar et al. Pain recognition using artificial neural network
Fernando et al. Low cost approach for real time sign language recognition
Rani et al. Image processing techniques to recognize facial emotions
CN110598574A (en) Intelligent face monitoring and identifying method and system
CN111340052A (en) Tongue tip red detection device and method for tongue diagnosis in traditional Chinese medicine and computer storage medium
CN111222371A (en) Sublingual vein feature extraction device and method
Youlian et al. Face detection method using template feature and skin color feature in rgb color space
Szczepański et al. Pupil and iris detection algorithm for near-infrared capture devices
Campadelli et al. A face recognition system based on local feature characterization
CN110890156A (en) Human face glossiness classification device, method and computer storage medium
Subban et al. Human skin segmentation in color images using gaussian color model
JP2011141799A (en) Object detection recognition apparatus, object detection recognition method, and program
JP2010033221A (en) Skin color detection apparatus, method, and program
Mariappan et al. A labVIEW design for frontal and non-frontal human face detection system in complex background
Mandhala et al. Face detection using image morphology–a review
Bhagirathi et al. Human face, eye and iris detection in real-time using image processing
Huang et al. Eye detection based on skin color analysis with different poses under varying illumination environment
JP2004013768A (en) Individual identification method
RU2735629C1 (en) Method of recognizing twins and immediate family members for mobile devices and mobile device realizing thereof
Mishra et al. Face detection for video summary using enhancement-based fusion strategy under varying illumination conditions
Semary et al. A proposed framework for robust face identification system

Legal Events

Date Code Title Description
PB01 Publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20200317