CN109919030B - Black eye type identification method and device, computer equipment and storage medium - Google Patents

Black eye type identification method and device, computer equipment and storage medium


Publication number
CN109919030B
Authority
CN
China
Prior art keywords
target
image
eye
normal skin
black eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910095118.XA
Other languages
Chinese (zh)
Other versions
CN109919030A (en)
Inventor
王晶
周桂文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Hetai Intelligent Home Appliance Controller Co ltd
Original Assignee
Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Het Data Resources and Cloud Technology Co Ltd filed Critical Shenzhen Het Data Resources and Cloud Technology Co Ltd
Priority to CN201910095118.XA priority Critical patent/CN109919030B/en
Publication of CN109919030A publication Critical patent/CN109919030A/en
Application granted granted Critical
Publication of CN109919030B publication Critical patent/CN109919030B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The application relates to a black eye type identification method and device, computer equipment and a storage medium. The method comprises the following steps: acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space; performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features; inputting the target characteristics into a preset SVM model for processing, and determining the type of the black eye in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples. By using the SVM model, the method can output a more accurate black eye type and improves the accuracy of black eye type identification.

Description

Black eye type identification method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing, and in particular, to a method and an apparatus for identifying a black eye type, a computer device, and a storage medium.
Background
With the improvement of living standards, people pay more and more attention to their health, and the black eye, as a manifestation of sub-health, has attracted increasing attention. To treat and care for a person with black eyes appropriately, it is necessary to accurately identify whether the person has black eyes and to accurately judge the type of the black eye.
However, the conventional technology is still inadequate for determining the type of the black eye.
Disclosure of Invention
Based on this, it is necessary to provide a black eye type identification method, apparatus, computer device and storage medium for solving the problem that the conventional technology is deficient in determining the black eye type.
In a first aspect, an embodiment of the present application provides a black eye type identification method, where the method includes:
acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features;
inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
In one embodiment, the target feature includes a first mean set of each pixel point in the target periocular region image, a second mean set of each pixel point in the target normal skin region image, and a difference set of corresponding components in the first mean set and the second mean set; the first mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target periocular region image, and the second mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target normal skin region image.
In one embodiment, the acquiring the target periocular region image and the target normal skin region image from the face image to be recognized includes:
acquiring the face image to be recognized;
carrying out face key point detection on the face image to be recognized to obtain a plurality of face key points;
and determining a target eye surrounding area image and a target normal skin area image according to the plurality of face key points.
In one embodiment, the determining the target periocular region image and the target normal skin region image according to the plurality of face key points includes:
selecting part of face key points from the plurality of face key points as target face key points, and determining an initial region based on the target face key points;
dividing at least one periocular subregion from the initial region according to a preset first selection rule, and taking the at least one periocular subregion as a target periocular region image;
and determining a target normal skin area image from the initial area according to a preset second selection rule.
In one embodiment, the black eye type includes a vascular type black eye, a pigmented type black eye, and no black eye.
In one embodiment, when the black eye type is the vascular black eye or the pigmented black eye, the method further comprises:
determining the color difference between the target eye periphery area image and the target normal skin area image according to the difference value set;
and determining the severity of the black eye in the target eye periphery area image according to the comparison result of the color difference and a preset color difference threshold.
In one embodiment, the determining the severity of the black eye in the target eye periphery region image according to the comparison result between the color difference and a preset color difference threshold includes:
when the color difference is larger than the color difference threshold value, determining the severity of the black eye in the target eye periphery area image as serious;
when the color difference is not larger than the color difference threshold value, determining that the severity of the black eye in the target eye periphery area image is slight.
In a second aspect, an embodiment of the present application provides a black eye type identification device, including:
the processing module is used for acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
the extraction module is used for carrying out feature extraction on the target periocular region image and the target normal skin region image to obtain target features;
the black eye type determining module is used for inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model which is obtained by training according to a plurality of face image samples and can identify the type of the black eye.
In a third aspect, an embodiment of the present application provides a computer device, where the computer device includes a memory and a processor, the memory stores a computer program, and the processor implements the following steps when executing the computer program:
acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features;
inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
In a fourth aspect, an embodiment of the present application provides a readable storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements the following steps:
acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features;
inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
According to the black eye type identification method, the black eye type identification device, the computer equipment and the storage medium, the computer equipment can acquire the target periocular region image and the target normal skin region image from the face image to be identified, and convert the target periocular region image and the target normal skin region image from the initial color space to the LAB color space; further, feature extraction is carried out on the target eye surrounding area image and the target normal skin area image which are converted into the LAB color space to obtain target features; and the target features are input into a preset SVM model for processing so as to output the type of the black eye in the target eye surrounding area image. If the face image to be recognized contains a black eye, the skin color of the target eye surrounding area image differs obviously from that of the target normal skin area image. Because the SVM model is a trained model capable of outputting the correct black eye type, inputting the target features extracted from the target eye surrounding area image and the target normal skin area image into the preset SVM model ensures that the correct black eye type is output when the trained SVM model is used to recognize the target eye surrounding area image, which improves the accuracy of black eye type identification.
Drawings
FIG. 1 is a schematic diagram of a computer device according to an embodiment;
fig. 2 is a schematic flow chart of a black eye type identification method according to an embodiment;
fig. 3 is a schematic flow chart of a black eye type identification method according to another embodiment;
FIG. 4 is a schematic diagram of a face image to be recognized including an initial region according to an embodiment;
FIG. 5 is a schematic illustration of an initial region provided by one embodiment;
fig. 6 is a schematic flow chart of a black eye type identification method according to yet another embodiment;
fig. 7 is a schematic flowchart of a black eye type identification method according to yet another embodiment;
fig. 8 is a schematic structural diagram of a black eye type identification apparatus according to an embodiment;
fig. 9 is a schematic structural diagram of a black eye type identification device according to another embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
The black eye type identification method provided by the embodiment can be applied to a computer device shown in fig. 1, wherein the computer device comprises a processor, a memory and a network interface which are connected through a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. Optionally, the computer device may be a mobile phone, a tablet computer, a personal digital assistant, and the like, and the specific form of the computer device is not limited in this embodiment.
It should be noted that, in the black eye type identification method provided in the embodiments of the present application, the execution subject of the method may be a black eye type identification device, and the black eye type identification device may be implemented as part or all of a computer device by software, hardware, or a combination of software and hardware. The following method embodiments are described by taking a computer device as the execution subject.
Fig. 2 is a schematic flow chart of a black eye type identification method according to an embodiment. This embodiment relates to an implementation in which the computer device performs feature extraction on the target eye surrounding area image and the target normal skin area image converted into the LAB color space to obtain target features, and inputs the target features into a preset Support Vector Machine (SVM) model to determine the type of the black eye in the target eye surrounding area image. As shown in fig. 2, the method may include:
s202, acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space.
Specifically, the initial color space may be an RGB color space or an HSV color space, which is not limited in this embodiment. Correspondingly, the face image to be recognized may be an image in the RGB color space, an image in the HSV color space, or the like. Taking the face image to be recognized as an image in the RGB color space as an example, the RGB color space includes the three primary colors red (Red), green (Green) and blue (Blue), from which a rich and wide range of colors can be produced. Optionally, the face image to be recognized may be an image directly generated after being shot by a mobile phone, a camera, or the like, or may be a face image in an image library of the computer device; the source of the face image to be recognized is not limited in this embodiment. In this case, the target periocular region image and the target normal skin region image are also images in the RGB color space.
The LAB color space consists of three components: L is the luminance (lightness) component, and A and B are two color channels. The A channel ranges from dark green (low A value) through gray (medium A value) to bright pinkish red (high A value); the B channel ranges from bright blue (low B value) through gray (medium B value) to yellow (high B value). The value range of L is 0 to 100, and the value ranges of A and B are -128 to +127.
Taking the face image to be recognized as an image in the RGB color space as an example, the obtained target periocular region image and target normal skin region image need to be converted from the RGB color space to the LAB color space. Optionally, the computer device may first convert the target periocular region image and the target normal skin region image from the RGB color space to the CIE XYZ color space, and then convert them from the XYZ space to the LAB color space, that is, perform an RGB-XYZ-LAB conversion. It should be noted that, if the conversion matrix from the RGB color space to the LAB color space is known, the target periocular region image and the target normal skin region image in the RGB color space may be converted into the LAB color space directly according to the conversion matrix.
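As an illustration of this conversion step, the following is a minimal sketch using OpenCV and NumPy; the patent does not prescribe any particular library, so the use of cv2.cvtColor and the rescaling of the 8-bit output back to the conventional L in [0, 100] and A/B in [-128, 127] ranges are assumptions.

```python
import cv2
import numpy as np

def to_lab(region_bgr: np.ndarray) -> np.ndarray:
    """Convert an 8-bit BGR region (as loaded by OpenCV) to LAB.

    cv2.cvtColor performs the RGB -> XYZ -> LAB conversion internally; for
    uint8 inputs it maps L to 0..255 and shifts A/B by +128, so the result
    is rescaled here to the conventional LAB value ranges.
    """
    lab = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    lab[..., 0] *= 100.0 / 255.0   # L back to [0, 100]
    lab[..., 1:] -= 128.0          # A, B back to roughly [-128, 127]
    return lab
```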
And S204, performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features.
Specifically, after the conversion into the LAB color space, the coordinate values of the pixel points of the target periocular region image and the target normal skin region image are both expressed by L, A and B; for example, the coordinates of a certain pixel point are (L, A, B). After performing feature extraction on the target eye surrounding area image and the target normal skin area image converted into the LAB color space, the computer device can obtain the target features.
Optionally, the target feature may be a target pixel point, and may also include a first average set of each pixel point in the target eye periphery region image, a second average set of each pixel point in the target normal skin region image, and a difference set of corresponding components in the first average set and the second average set; the first mean value set comprises the mean value of the L coordinate components, the mean value of the A coordinate components and the mean value of the B coordinate components of all the pixel points in the target periocular region image, and the second mean value set comprises the mean value of the L coordinate components, the mean value of the A coordinate components and the mean value of the B coordinate components of all the pixel points in the target normal skin region image.
When extracting features from the target periocular region image and the target normal skin region image converted into the LAB color space, the computer device may calculate the average value of the L coordinate components, the average value of the A coordinate components, and the average value of the B coordinate components of all pixel points in the target periocular region image according to the coordinates of each pixel point in the target periocular region image, and use the obtained averages as the first mean value set. For example, if the target periocular region image includes three pixel points, where the coordinates of the first pixel point are (100, 10, 20), the coordinates of the second pixel point are (60, 20, 0), and the coordinates of the third pixel point are (100, 120, 40), then the average value of the L coordinate components, the average value of the A coordinate components, and the average value of the B coordinate components of all pixel points in the target periocular region image are approximately 86.7 ((100+60+100)/3), 50 ((10+20+120)/3), and 20 ((20+0+40)/3), respectively; the first mean value set then consists of the L-component average 86.7, the A-component average 50, and the B-component average 20 of all pixel points in the target eye circumference region image.
In the same way, the computer device may calculate an average value of the L coordinate component, an average value of the a coordinate component, and an average value of the B coordinate component in the target normal skin region image according to coordinates of each pixel point in the target normal skin region image, and the second average set includes the average value of the L coordinate component, the average value of the a coordinate component, and the average value of the B coordinate component in the target normal skin region image.
The target feature is then formed by the nine values corresponding to the first mean value set obtained from the coordinates of all pixel points in the target eye surrounding area image, the second mean value set obtained from the coordinates of all pixel points in the target normal skin area image, and the difference set of the corresponding components of the first mean value set and the second mean value set.
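A minimal sketch of this feature construction is shown below; the function name and the ordering of the nine values are illustrative choices, not mandated by the text.

```python
import numpy as np

def target_feature(periocular_lab: np.ndarray, skin_lab: np.ndarray) -> np.ndarray:
    """Nine-value target feature from two H x W x 3 LAB regions.

    Returns [L1, A1, B1, L2, A2, B2, L1-L2, A1-A2, B1-B2], i.e. the first
    mean set, the second mean set and the difference set of their components.
    """
    first_means = periocular_lab.reshape(-1, 3).mean(axis=0)   # first mean set
    second_means = skin_lab.reshape(-1, 3).mean(axis=0)        # second mean set
    diffs = first_means - second_means                         # difference set
    return np.concatenate([first_means, second_means, diffs])
```

For the three example pixels above, first_means would evaluate to approximately (86.7, 50, 20).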
S206, inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
Specifically, the preset SVM model is a model which is obtained by training according to a plurality of face image samples and can recognize the type of the black eye; after the target features are input into the preset SVM model, the computer device can output the type of the black eye in the target eye surrounding area image. Optionally, the output black eye type includes at least one of a blood vessel type black eye, a pigment type black eye, and no black eye.
Vascular black eyes mainly appear on the lower eyelid and may be red or purplish red. The skin around the eyes is only about 1/10 as thick as normal skin and lacks fat and sweat glands, so once the subcutaneous blood flow is slightly stagnated, the blood vessels become visible. A vascular black eye becomes more visible when the skin at the black eye is stretched by hand; a red triangular area remains below the orbit and is often accompanied by edema.
Pigmented black eyes range in color from light blue, cyan and light brown to dark brown, and appear on the upper and lower eyelids; most commonly they surround the whole eye circumference (that is, both the upper eyelid and the lower eyelid have black eyes). When the skin at the black eye is stretched by hand, a tea-brown pigmented black eye does not change color.
When training the SVM model, the computer device may input the target features corresponding to the plurality of face image samples into the initial SVM model to train the initial SVM model, so as to obtain the preset SVM model. Each face image sample has a corresponding label, for example, the label may be a blood vessel type black eye, a pigment type black eye or no black eye. Because there are three types of black eye, three SVM classifiers need to be trained, and each classifier is used for judging whether the type of the black eye is one of the blood vessel type black eye, the pigment type black eye or the non-black eye. Optionally, when the initial SVM model is trained, the selected kernel function may be a gaussian kernel function, or may be other kernel functions such as a linear kernel function, a polynomial kernel function, and the like.
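As a sketch of this training step, the snippet below uses scikit-learn with an RBF (Gaussian) kernel and a one-classifier-per-class scheme as described above; the library, the label encoding and the hyperparameter values are assumptions rather than details fixed by the text.

```python
import numpy as np
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

def train_black_eye_svm(features: np.ndarray, labels: np.ndarray):
    """Train three one-vs-rest SVM classifiers on N x 9 target features.

    labels: e.g. 0 = blood vessel type black eye, 1 = pigment type black eye,
    2 = no black eye (an illustrative encoding).
    """
    model = make_pipeline(
        StandardScaler(),
        OneVsRestClassifier(SVC(kernel="rbf", C=1.0, gamma="scale")),
    )
    model.fit(features, labels)
    return model

# Prediction for one face: model.predict(feature_vector.reshape(1, -1))
```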
In the black eye type identification method provided by this embodiment, the computer device may acquire the target eye surrounding area image and the target normal skin area image from the face image to be identified, and convert the target eye surrounding area image and the target normal skin area image from the initial color space to the LAB color space; further, feature extraction is carried out on the target eye surrounding area image and the target normal skin area image which are converted into the LAB color space, and target features are obtained; and inputting the target characteristics into a preset SVM model for processing so as to output the type of the black eye in the target eye surrounding area image. If the face image to be recognized has black eyes, the skin color of the target eye surrounding area image and the skin color of the target normal skin area image are obviously different, after target features extracted from the target eye surrounding area image and the target normal skin area image are input into a preset SVM model, the SVM model is a trained model capable of outputting a correct black eye type, so that the fact that the correct black eye type can be output when the trained SVM model is used for recognizing the target eye surrounding area image is guaranteed, and the accuracy of black eye type recognition is improved.
Fig. 3 is a schematic flowchart of a black eye type identification method according to another embodiment. This embodiment relates to a process in which the computer device detects face key points in the face image to be recognized and obtains the target periocular region image and the target normal skin region image according to the obtained plurality of face key points. On the basis of the embodiment shown in fig. 2, optionally, the step S202 of acquiring the target periocular region image and the target normal skin region image from the face image to be recognized may include:
and S302, acquiring the face image to be recognized.
S304, carrying out face key point detection on the face image to be recognized to obtain a plurality of face key points.
Specifically, the computer device may perform face key point detection on the face image to be recognized by using a preset face key point detection model to obtain a plurality of face key points. The preset face key point detection model may be any one of a model based on an Active Shape Model (ASM), a model based on an Active Appearance Model (AAM), a model based on Cascaded Pose Regression (CPR), or a model based on deep learning.
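Any of the listed detectors can supply the key points; the sketch below uses dlib's 68-point landmark predictor merely as one concrete, assumed choice, and the model file path is a placeholder.

```python
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# Placeholder path to the pre-trained 68-point landmark model.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def detect_face_keypoints(gray_image: np.ndarray) -> np.ndarray:
    """Return an array of (x, y) face key points for the first detected face."""
    faces = detector(gray_image, 1)
    if len(faces) == 0:
        return np.empty((0, 2), dtype=int)
    shape = predictor(gray_image, faces[0])
    return np.array([(p.x, p.y) for p in shape.parts()], dtype=int)
```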
S306, determining a target eye surrounding area image and a target normal skin area image according to the plurality of face key points.
Through the obtained plurality of human face key points, the computer equipment can position key region positions of the human face, including eyebrow, eyes, nose, mouth, face contour and other regions, so that a target periocular region image and a target normal skin region image are determined according to the obtained plurality of human face key points. Optionally, the computer device may further determine the target periocular region image and the target normal skin region image according to the following steps:
s3042, selecting part of face key points from the face key points as target face key points, and determining an initial region based on the target face key points.
Specifically, the detected face key points are numerous and distributed over the whole face. Taking the right side of the face as an example, as shown in fig. 4, the computer device may select, as the target face key points, four key points under the right eye (key points 31, 32, 33 and 34 in fig. 4) and one key point at the right nose wing (key point 35 in fig. 4) from the plurality of detected key points. Each key point has its own abscissa and ordinate, and when determining the initial region, the computer device may determine the abscissa range and ordinate range of the initial region according to the selected target face key points. Optionally, the initial region may be a rectangular frame: the computer device may use the abscissa of key point 31 at the outer eye corner, among the four key points under the right eye, as the first abscissa of the rectangular frame, use the abscissa of key point 34 at the inner eye corner as the second abscissa of the rectangular frame, use the ordinate of key point 33 (that is, the key point closest to the eye, having the largest ordinate among key points 31, 32, 33 and 34) as the first ordinate of the rectangular frame, and use the ordinate of key point 35 at the nose wing as the second ordinate of the rectangular frame. It should be noted that the target face key points are not limited to the five selected above and may be other numbers of key points or key points at other positions, as long as a region containing the black eye area can be determined. The initial region may also have other shapes, such as a triangular frame or a trapezoidal frame; the shape of the initial region is not limited in this embodiment. Furthermore, the initial region of the left face is selected in the same way as that of the right face. Optionally, the initial region may be the rectangular frame of the right face selected in this way, the rectangular frame of the left face selected in this way, or both the right-face and left-face rectangular frames selected in this way.
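The rectangle construction described above can be sketched as follows; the key-point indices are passed in rather than hard-coded because the numbering (31-35 in fig. 4) is specific to the figure, and the assumption that the ordinate grows downward (image coordinates) is noted in the comments.

```python
import numpy as np

def initial_region(landmarks: np.ndarray, under_eye_idx, nose_wing_idx):
    """Rectangular initial region (x_left, y_top, x_right, y_bottom) for one eye.

    under_eye_idx: indices of the four key points under the eye, ordered from
    the outer eye corner to the inner eye corner (key points 31-34 in fig. 4).
    nose_wing_idx: index of the nose-wing key point (key point 35 in fig. 4).
    """
    under_eye = landmarks[list(under_eye_idx)]
    x_left = int(under_eye[0, 0])      # abscissa of the outer eye corner
    x_right = int(under_eye[-1, 0])    # abscissa of the inner eye corner
    # In image coordinates y grows downward, so the under-eye key point
    # closest to the eye has the smallest y and gives the top edge.
    y_top = int(under_eye[:, 1].min())
    y_bottom = int(landmarks[nose_wing_idx, 1])   # ordinate of the nose wing
    return min(x_left, x_right), y_top, max(x_left, x_right), y_bottom
```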
S3044, dividing at least one periocular subregion from the initial region according to a preset first selection rule, and using the at least one periocular subregion as the target periocular region image.
Specifically, the first selection rule may be defined by a first set of ratios relative to the width and height of the initial region. Optionally, the periocular subregions may include three small rectangular frames near the eye selected from the initial region. Taking fig. 5 as an example, with the initial region having width w and height h: for periocular subregion 40, the range from (h/9-0.025h) to (h/9+0.05h) may be selected as its height range and the range from (w/6) to (w/6+0.06w) as its width range, so that periocular subregion 40 is determined from the selected height range and width range; similarly, for periocular subregion 41, the range from (h/6-0.05h) to (h/6+0.05h) may be selected as its height range and the range from (w/4) to (w/4+0.06w) as its width range, so that periocular subregion 41 is determined from the selected height range and width range; for periocular subregion 42, the range from (2×h/9-0.08h) to (2×h/9+0.05h) may be selected as its height range and the range from (w/3) to (w/3+0.08w) as its width range, so that periocular subregion 42 is determined from the selected height range and width range. Together, periocular subregions 40, 41 and 42 constitute the target periocular region image.
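A sketch of the first selection rule follows, applying the height and width fractions quoted above to an initial region of width w and height h; rounding the fractional bounds to integer pixel indices is an added assumption.

```python
def periocular_subregions(w: int, h: int):
    """Three sub-rectangles (y_start, y_end, x_start, x_end) of the initial region,
    corresponding to periocular subregions 40, 41 and 42 in fig. 5."""
    rules = [
        (h / 9 - 0.025 * h, h / 9 + 0.05 * h, w / 6, w / 6 + 0.06 * w),          # 40
        (h / 6 - 0.05 * h, h / 6 + 0.05 * h, w / 4, w / 4 + 0.06 * w),           # 41
        (2 * h / 9 - 0.08 * h, 2 * h / 9 + 0.05 * h, w / 3, w / 3 + 0.08 * w),   # 42
    ]
    return [tuple(int(round(v)) for v in r) for r in rules]
```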
S3046, according to a preset second selection rule, determining a target normal skin area image from the initial area.
Specifically, the second selection rule may likewise be defined by a second set of ratios relative to the width and height of the initial region. With continued reference to fig. 5, where the initial region has width w and height h, the computer device may select a range determined by (3×h/4) and (h/10) as the height range of the target normal skin region image and the range from (w/3) to (2×w/3) as its width range, thereby determining the target normal skin region image 43 from the selected height range and width range.
Optionally, the target normal skin area image may also be selected from skin areas at other positions in the face image to be recognized outside the initial area, or may be selected from images outside the face image to be recognized, or may be a preset normal skin image, which only may be an image that can be distinguished from a black eye area, and this embodiment does not limit the source of the target normal skin area image.
Through the above steps S3042 to S3046, by means of the first selection rule and the second selection rule, the computer device may determine the target periocular region image, represented by the three small rectangular frames 40, 41 and 42, and the target normal skin region image, represented by the large rectangular frame 43. Since the skin color of the target periocular region image near the inner canthus contrasts distinctly with that of the target normal skin region image near the cheek, selecting three small rectangular frames from the initial region to represent the target periocular region image narrows the recognition range of the black eye and further improves the recognition accuracy of the black eye type.
In the black eye type identification method provided by this embodiment, the computer device can acquire the face image to be recognized, perform face key point detection on the face image to be recognized to obtain a plurality of face key points, and determine the target eye surrounding area image and the target normal skin area image according to the plurality of face key points. Because the skin color of the target eye surrounding area image near the inner canthus contrasts distinctly with that of the target normal skin area image near the cheek, the computer device can recognize the black eye type in the face image to be recognized well by using the preset SVM model, which improves the recognition accuracy of the black eye type.
Fig. 6 is a schematic flow chart of a black eye type identification method according to yet another embodiment. The embodiment relates to an implementation process in which when a target eye surrounding area image has a black eye, computer equipment determines a color difference between the target eye surrounding area image and the target normal skin area image according to the difference set, and determines the severity of the black eye according to a comparison result between the color difference and a preset color difference threshold. On the basis of the foregoing embodiment, optionally, the foregoing method may further include:
s602, determining the color difference between the target periocular region image and the target normal skin region image according to the difference value set.
Specifically, when the black eye type is a blood vessel type black eye or a pigment type black eye, the computer device may use the formula ΔE = √(ΔL² + ΔA² + ΔB²) to determine the color difference ΔE between the target eye circumference area image and the target normal skin area image, where ΔL is the difference between the average value of the L coordinate components in the first mean value set and the average value of the L coordinate components in the second mean value set, ΔA is the difference between the average value of the A coordinate components in the first mean value set and the average value of the A coordinate components in the second mean value set, and ΔB is the difference between the average value of the B coordinate components in the first mean value set and the average value of the B coordinate components in the second mean value set.
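The color-difference computation reduces to a single Euclidean norm over the three values of the difference set; a one-function sketch:

```python
import math

def color_difference(delta_l: float, delta_a: float, delta_b: float) -> float:
    """Color difference between the first and second mean sets, i.e. the
    Euclidean norm of the difference set (delta_l, delta_a, delta_b)."""
    return math.sqrt(delta_l ** 2 + delta_a ** 2 + delta_b ** 2)
```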
S604, determining the severity of the black eye in the target eye surrounding area image according to the comparison result of the color difference and a preset color difference threshold value.
Specifically, the severity of the black eye may include, for example, more severe, severe and slight, with three corresponding color difference thresholds; or the severity may include only severe and slight, with two corresponding color difference thresholds. Of course, the severity levels of the black eye may be set as required, which is not limited in this embodiment. Optionally, the computer device may determine the severity of the black eye in the target eye surrounding area image according to whether the color difference falls within a preset threshold range, or according to whether the color difference is greater than a preset color difference threshold.
When the severity of the black eye includes severe and slight, determining the severity of the black eye in the target eye surrounding area image according to the comparison result between the color difference and the preset color difference threshold may include: when the color difference is greater than the color difference threshold, determining that the severity of the black eye in the target eye surrounding area image is severe; and when the color difference is not greater than the color difference threshold, determining that the severity of the black eye in the target eye surrounding area image is slight.
Optionally, the severity may be represented by a numerical value, for example, when the color difference is greater than the color difference threshold, the severity may be set to 3, which indicates that the severity is severe; when the color difference is not greater than the color difference threshold, a severity of 2 may be set, indicating that the severity is slight. It should be noted that the numerical value corresponding to the severity may be set according to actual situations, and this embodiment is not limited to this.
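A sketch of this thresholding step is given below; the default threshold value is a placeholder, since the text only requires that a color difference threshold be preset.

```python
def black_eye_severity(delta_e: float, threshold: float = 10.0) -> int:
    """Return 3 (severe) if the color difference exceeds the preset threshold,
    otherwise 2 (slight), following the numeric convention described above."""
    return 3 if delta_e > threshold else 2
```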
In the black eye type identification method provided by this embodiment, the computer device may determine the color difference between the target eye periphery region image and the target normal skin region image according to the difference set, and then determine the severity of the black eye in the target eye surrounding area image according to the comparison result of the color difference and the preset color difference threshold. This makes it convenient for a user to formulate a corresponding treatment and care scheme according to the specific situation, and improves the convenience and application range of black eye type identification.
To facilitate understanding by those skilled in the art, the black eye type identification method provided in the present application is described in detail below with reference to fig. 7:
s702, the computer equipment acquires the face image to be recognized.
And S704, the computer equipment detects the face key points of the face image to be recognized to obtain a plurality of face key points.
S706, the computer equipment selects part of face key points from the plurality of face key points as target face key points, and determines an initial region based on the target face key points.
S708, the computer device marks out at least one periocular subregion from the initial region according to a preset first selection rule, and takes the at least one periocular subregion as a target periocular region image.
And S710, determining a target normal skin area image from the initial area by the computer equipment according to a preset second selection rule.
S712, the computer device converts the target periocular region image and the target normal skin region image from an initial color space to an LAB color space.
And S714, carrying out feature extraction on the target periocular region image and the target normal skin region image converted into the LAB color space by the computer equipment to obtain target features.
S716, inputting the target characteristics into a preset SVM model by computer equipment for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
S718, the computer equipment judges whether the type of the black eye is the blood vessel type black eye or the pigment type black eye. If so, go to step S720, otherwise, end the process.
S720, the computer equipment determines the color difference between the target periocular region image and the target normal skin region image according to the difference value set.
S722, the computer device determines whether the color difference is greater than a preset color difference threshold, if so, performs S724, and if not, performs S726.
S724, the computer device determines that the severity of the black eye in the target eye surrounding area image is severe.
S726, the computer device determines that the severity of the black eye in the target periocular region image is slight.
The working principle and technical effect of the black eye type identification method provided by this embodiment are as described in the above embodiments, and are not described herein again.
It should be understood that, although the steps in the flowcharts of fig. 2 to 7 are shown in sequence as indicated by the arrows, the steps are not necessarily performed in sequence as indicated by the arrows. The steps are not performed in the exact order shown and described, and may be performed in other orders, unless explicitly stated otherwise. Moreover, at least some of the steps in fig. 2-7 may include multiple sub-steps or multiple stages that are not necessarily performed at the same time, but may be performed at different times, and the order of performing the sub-steps or stages is not necessarily sequential, but may be performed in turn or alternately with other steps or at least some of the sub-steps or stages of other steps.
Fig. 8 is a schematic structural diagram of a black eye type identification device according to an embodiment. As shown in fig. 8, the apparatus may include a processing module 802, an extraction module 804, and a black eye type determination module 806.
Specifically, the processing module 802 is configured to obtain a target periocular region image and a target normal skin region image from a face image to be recognized, and convert the target periocular region image and the target normal skin region image from an initial color space to an LAB color space;
an extracting module 804, configured to perform feature extraction on the target periocular region image and the target normal skin region image to obtain a target feature;
a black eye type determining module 806, configured to input the target feature into a preset SVM model for processing, and determine a type of a black eye in the target eye surrounding area image; the SVM model is a model which is obtained by training according to a plurality of face image samples and can identify the type of the black eye.
Optionally, the target feature includes a first average set of each pixel in the target eye periphery region image, a second average set of each pixel in the target normal skin region image, and a difference set of corresponding components in the first average set and the second average set; the first mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target periocular region image, and the second mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target normal skin region image.
Optionally, the black eye type includes a blood vessel type black eye, a pigment type black eye and no black eye.
The black eye type identification apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
In another embodiment of the black eye type identification apparatus provided in the present application, on the basis of the embodiment shown in fig. 8, optionally, the processing module 802 may include an obtaining unit, a face key point determining unit, and a target area determining unit.
Specifically, the acquiring unit is used for acquiring the face image to be recognized;
the face key point determining unit is used for detecting face key points of the face image to be recognized to obtain a plurality of face key points;
and the target area determining unit is used for determining a target eye surrounding area image and a target normal skin area image according to the plurality of face key points.
Optionally, the target region determining unit is specifically configured to: select a part of the face key points from the plurality of face key points as target face key points, and determine an initial region based on the target face key points; divide at least one periocular subregion from the initial region according to a preset first selection rule, and take the at least one periocular subregion as the target periocular region image; and determine a target normal skin area image from the initial area according to a preset second selection rule.
The black eye type identification apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
Fig. 9 is a schematic structural diagram of a black eye type identification device according to yet another embodiment. Based on the above embodiment, optionally, the apparatus may further include a color difference determining module 808 and a severity determining module 810.
Specifically, the color difference determining module 808 is configured to determine a color difference between the target periocular region image and the target normal skin region image according to the difference set;
and a severity determining module 810, configured to determine a severity of a black eye in the target eye surrounding area image according to a comparison result between the color difference and a preset color difference threshold.
Optionally, the severity determining module 810 is specifically configured to determine that the severity of the black eye in the target eye surrounding area image is severe when the color difference is greater than the color difference threshold; when the color difference is not larger than the color difference threshold value, determining that the severity of the black eye in the target eye periphery area image is slight.
The black eye type identification apparatus provided in this embodiment may implement the method embodiments described above, and the implementation principle and the technical effect are similar, which are not described herein again.
In one embodiment, a computer device is provided, which may be a terminal, and its internal structure diagram may be as shown in fig. 1. The computer device includes a processor, a memory, a network interface, a display screen, and an input device connected by a system bus. Wherein the processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device comprises a nonvolatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of an operating system and computer programs in the non-volatile storage medium. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a black eye type recognition method. The display screen of the computer equipment can be a liquid crystal display screen or an electronic ink display screen, and the input device of the computer equipment can be a touch layer covered on the display screen, a key, a track ball or a touch pad arranged on the shell of the computer equipment, an external keyboard, a touch pad or a mouse and the like.
Those skilled in the art will appreciate that the architecture shown in fig. 1 is merely a block diagram of some of the structures associated with the disclosed aspects and is not intended to limit the computing devices to which the disclosed aspects apply, as particular computing devices may include more or less components than those shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having a computer program stored therein, the processor implementing the following steps when executing the computer program:
acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features;
inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
Optionally, the target feature includes a first average set of each pixel in the target eye periphery region image, a second average set of each pixel in the target normal skin region image, and a difference set of corresponding components in the first average set and the second average set; the first mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target periocular region image, and the second mean value set comprises an average value of L coordinate components, an average value of A coordinate components and an average value of B coordinate components of all pixel points in the target normal skin region image.
Optionally, the black eye type includes a blood vessel type black eye, a pigment type black eye and no black eye.
In one embodiment, the processor, when executing the computer program, further performs the steps of: acquiring the face image to be recognized; carrying out face key point detection on the face image to be recognized to obtain a plurality of face key points; and determining a target eye surrounding area image and a target normal skin area image according to the plurality of face key points.
In one embodiment, the processor, when executing the computer program, further performs the steps of: selecting part of face key points from the plurality of face key points as target face key points, and determining an initial region based on the target face key points; dividing at least one periocular subregion from the initial region according to a preset first selection rule, and taking the at least one periocular subregion as a target periocular region image; and determining a target normal skin area image from the initial area according to a preset second selection rule.
In one embodiment, the processor, when executing the computer program, further performs the steps of: determining the color difference between the target eye periphery area image and the target normal skin area image according to the difference value set; and determining the severity of the black eye of the target eye surrounding area image according to the comparison result of the color difference and a preset color difference threshold value.
In one embodiment, the processor, when executing the computer program, further performs the steps of: when the color difference is larger than the color difference threshold value, determining the severity of the black eye in the target eye periphery area image as serious; when the color difference is not larger than the color difference threshold value, determining that the severity of the black eye in the target eye periphery area image is slight.
The implementation principle and technical effect of the computer device provided by the above embodiment are similar to those of the above method embodiment, and are not described herein again.
In one embodiment, a computer-readable storage medium is provided, having a computer program stored thereon, which when executed by a processor, performs the steps of:
acquiring a target eye surrounding area image and a target normal skin area image from a face image to be recognized, and converting the target eye surrounding area image and the target normal skin area image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image which are converted into the LAB color space to obtain target features;
inputting the target characteristics into a preset SVM model for processing, and determining the type of black eyes in the target eye surrounding area image; the SVM model is a model obtained by training according to a plurality of face image samples.
In one embodiment, the computer program when executed by the processor further performs the steps of: acquiring the face image to be recognized; carrying out face key point detection on the face image to be recognized to obtain a plurality of face key points; and determining a target eye surrounding area image and a target normal skin area image according to the plurality of face key points.
In one embodiment, the computer program when executed by the processor further performs the steps of: selecting part of face key points from the plurality of face key points as target face key points, and determining an initial region based on the target face key points; dividing at least one periocular subregion from the initial region according to a preset first selection rule, and taking the at least one periocular subregion as a target periocular region image; and determining a target normal skin area image from the initial area according to a preset second selection rule.
In one embodiment, the computer program when executed by the processor further performs the steps of: determining the color difference between the target eye periphery area image and the target normal skin area image according to the difference value set; and determining the severity of the black eye in the target eye periphery area image according to the comparison result of the color difference and a preset color difference threshold.
In one embodiment, the computer program when executed by the processor further performs the steps of: when the color difference is larger than the color difference threshold value, determining the severity of the black eye in the target eye periphery area image as serious; when the color difference is not larger than the color difference threshold value, determining that the severity of the black eye in the target eye periphery area image is slight.
The implementation principle and technical effect of the computer-readable storage medium provided by the above embodiments are similar to those of the above method embodiments, and are not described herein again.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program instructing related hardware; the computer program can be stored in a non-volatile computer-readable storage medium and, when executed, can include the processes of the embodiments of the methods described above. Any reference to memory, storage, database, or other medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in a variety of forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of these technical features are described; however, as long as such combinations are not contradictory, they should be considered to fall within the scope of this specification.
The above embodiments express only several implementations of the present application, and their description is specific and detailed, but they should not therefore be construed as limiting the scope of the invention patent. It should be noted that a person skilled in the art can make several variations and improvements without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.

Claims (9)

1. A black eye type identification method, the method comprising:
acquiring a target periocular region image and a target normal skin region image from a face image to be recognized, and converting the target periocular region image and the target normal skin region image from an initial color space to an LAB color space;
performing feature extraction on the target periocular region image and the target normal skin region image converted into the LAB color space to obtain target features; the target features comprise a first mean value set of all pixel points in the target periocular region image, a second mean value set of all pixel points in the target normal skin region image, and a difference value set of corresponding components in the first mean value set and the second mean value set; wherein the difference value set is used for judging the severity of the black eye in the target periocular region image;
inputting the target features into a preset SVM model for processing, and determining the type of black eye in the target periocular region image; the SVM model is a model which is obtained by training according to a plurality of face image samples and which can identify the type of the black eye; wherein the types of black eye comprise: vascular black eye, pigmented black eye, and no black eye;
when the black eye type is the vascular black eye or the pigmented black eye, the method further comprises:
determining the color difference between the target periocular region image and the target normal skin region image according to the difference value set;
determining the severity of the black eye in the target periocular region image according to the result of comparing the color difference with a preset color difference threshold;
wherein determining the color difference between the target periocular region image and the target normal skin region image according to the difference value set comprises:
determining the color difference ΔE between the target periocular region image and the target normal skin region image according to the formula
ΔE = √(ΔL² + ΔA² + ΔB²);
wherein ΔL is the difference between the average value of the L coordinate components in the first mean value set and the average value of the L coordinate components in the second mean value set, ΔA is the difference between the average value of the A coordinate components in the first mean value set and the average value of the A coordinate components in the second mean value set, and ΔB is the difference between the average value of the B coordinate components in the first mean value set and the average value of the B coordinate components in the second mean value set.
2. The method of claim 1, wherein the first mean value set comprises the average value of the L coordinate components, the average value of the A coordinate components, and the average value of the B coordinate components of the pixel points in the target periocular region image, and the second mean value set comprises the average value of the L coordinate components, the average value of the A coordinate components, and the average value of the B coordinate components of the pixel points in the target normal skin region image.
3. The method according to claim 1, wherein the obtaining of the target periocular region image and the target normal skin region image from the face image to be recognized comprises:
acquiring the face image to be recognized;
carrying out face key point detection on the face image to be recognized to obtain a plurality of face key points;
and determining the target periocular region image and the target normal skin region image according to the plurality of face key points.
4. The method of claim 3, wherein determining the target periocular region image and the target normal skin region image from the plurality of face key points comprises:
selecting some of the plurality of face key points as target face key points, and determining an initial region based on the target face key points;
dividing at least one periocular sub-region from the initial region according to a preset first selection rule, and taking the at least one periocular sub-region as the target periocular region image;
and determining the target normal skin region image from the initial region according to a preset second selection rule.
5. The method according to claim 1, wherein determining the severity of the black eye in the target periocular region image according to the result of comparing the color difference with the preset color difference threshold comprises:
when the color difference is greater than the color difference threshold, determining the severity of the black eye in the target periocular region image as severe;
and when the color difference is not greater than the color difference threshold, determining the severity of the black eye in the target periocular region image as slight.
6. A black eye type identification device, characterized in that the device comprises:
the processing module is used for acquiring a target periocular region image and a target normal skin region image from a face image to be recognized, and converting the target periocular region image and the target normal skin region image from an initial color space to an LAB color space;
the extraction module is used for performing feature extraction on the target periocular region image and the target normal skin region image to obtain target features; the target features comprise a first mean value set of all pixel points in the target periocular region image, a second mean value set of all pixel points in the target normal skin region image, and a difference value set of corresponding components in the first mean value set and the second mean value set; wherein the difference value set is used for judging the severity of the black eye in the target periocular region image;
the black eye type determining module is used for inputting the target features into a preset SVM model for processing, and determining the type of black eye in the target periocular region image; the SVM model is a model which is obtained by training according to a plurality of face image samples and which can identify the type of the black eye; wherein the types of black eye comprise: vascular black eye, pigmented black eye, and no black eye;
when the black eye type is the vascular black eye or the pigmented black eye, the device further comprises:
the color difference determining module is used for determining the color difference between the target periocular region image and the target normal skin region image according to the difference value set;
the severity determining module is used for determining the severity of the black eye in the target periocular region image according to the result of comparing the color difference with a preset color difference threshold;
the color difference determination module is specifically used for determining the color difference according to a formula
Figure FDA0002944783630000041
Determining a color difference delta E between the target periocular region image and the target normal skin region image; wherein Δ L is a difference between an average value of L coordinate components in the first mean set and an average value of L coordinate components in the second mean set, Δ a is a difference between an average value of a coordinate components in the first mean set and an average value of a coordinate components in the second mean set, and Δ B is a difference between an average value of B coordinate components in the first mean set and an average value of B coordinate components in the second mean set.
7. The apparatus according to claim 6, wherein the severity determining module is specifically used for: when the color difference is greater than the color difference threshold, determining the severity of the black eye in the target periocular region image as severe; and when the color difference is not greater than the color difference threshold, determining the severity of the black eye in the target periocular region image as slight.
8. A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor implements the steps of the method according to any one of claims 1 to 5 when executing the computer program.
9. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the steps of the method of any one of claims 1 to 5.
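For orientation, the claimed pipeline could be exercised end to end roughly as follows, reusing the hypothetical helpers sketched in the embodiments above (detect_face_keypoints, select_regions, extract_target_features, classify_black_eye_type, color_difference, black_eye_severity). The landmark indices and label encoding are placeholders, and the SVM model is assumed to have been fitted beforehand with train_svm.

```python
import cv2

def recognize_black_eye(image_path, svm_model):
    """End-to-end illustration reusing the hypothetical helpers sketched above."""
    image = cv2.imread(image_path)                # face image to be recognized
    keypoints = detect_face_keypoints(image)
    left_eye_idx = range(36, 42)                  # 68-landmark convention, assumed
    subregions, normal_skin = select_regions(image, keypoints, left_eye_idx)

    periocular = subregions[1]                    # one periocular sub-region as an example
    label = classify_black_eye_type(svm_model, periocular, normal_skin)

    if label in (1, 2):                           # vascular or pigmented (assumed encoding)
        features = extract_target_features(periocular, normal_skin)
        delta_e = color_difference(features[:3], features[3:6])
        return label, black_eye_severity(delta_e)
    return label, None
```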
CN201910095118.XA 2019-01-31 2019-01-31 Black eye type identification method and device, computer equipment and storage medium Active CN109919030B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910095118.XA CN109919030B (en) 2019-01-31 2019-01-31 Black eye type identification method and device, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN109919030A CN109919030A (en) 2019-06-21
CN109919030B true CN109919030B (en) 2021-07-13

Family

ID=66961121

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910095118.XA Active CN109919030B (en) 2019-01-31 2019-01-31 Black eye type identification method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109919030B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111428552B (en) * 2019-12-31 2022-07-15 深圳数联天下智能科技有限公司 Black eye recognition method and device, computer equipment and storage medium
CN111428553B (en) * 2019-12-31 2022-07-15 深圳数联天下智能科技有限公司 Face pigment spot recognition method and device, computer equipment and storage medium
CN112468792B (en) * 2020-11-05 2023-03-28 Oppo广东移动通信有限公司 Image recognition method and device, electronic equipment and storage medium
CN112541394A (en) * 2020-11-11 2021-03-23 上海诺斯清生物科技有限公司 Black eye and rhinitis identification method, system and computer medium
CN113128377A (en) * 2021-04-02 2021-07-16 西安融智芙科技有限责任公司 Black eye recognition method, black eye recognition device and terminal based on image processing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104299011A (en) * 2014-10-13 2015-01-21 吴亮 Skin type and skin problem identification and detection method based on facial image identification
CN108269290A (en) * 2018-01-19 2018-07-10 厦门美图之家科技有限公司 Skin complexion recognition methods and device
CN108830184A (en) * 2018-05-28 2018-11-16 厦门美图之家科技有限公司 Black eye recognition methods and device

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2045775A4 (en) * 2006-07-25 2017-01-18 Nikon Corporation Image processing method, image processing program, and image processing device
JP6265640B2 (en) * 2013-07-18 2018-01-24 キヤノン株式会社 Image processing apparatus, imaging apparatus, image processing method, and program
CN104168478B (en) * 2014-07-29 2016-06-01 银江股份有限公司 Based on the video image color cast detection method of Lab space and relevance function
CN105608722B (en) * 2015-12-17 2018-08-31 成都品果科技有限公司 It is a kind of that pouch method and system are gone based on face key point automatically
JP6421794B2 (en) * 2016-08-10 2018-11-14 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
CN108921128B (en) * 2018-07-19 2020-09-01 厦门美图之家科技有限公司 Cheek sensitive muscle identification method and device


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CP03 Change of name, title or address
Address after: 518000 Guangdong science and technology innovation and Research Institute, Shenzhen, Shenzhen, Nanshan District No. 6, science and technology innovation and Research Institute, Shenzhen, D 10, 1004, 10
Patentee after: Shenzhen Hetai intelligent home appliance controller Co.,Ltd.
Address before: 518051 1004, 10th floor, block D, Shenzhen Institute of aerospace technology innovation building, no.6, South Keji Road, high tech Zone, Nanshan District, Shenzhen City, Guangdong Province
Patentee before: SHENZHEN H&T DATA RESOURCES AND CLOUD TECHNOLOGY Ltd.