CN108830184B - Black eye recognition method and device - Google Patents

Black eye recognition method and device

Info

Publication number
CN108830184B
CN108830184B
Authority
CN
China
Prior art keywords
region
euclidean distance
component
target skin
face image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810523336.4A
Other languages
Chinese (zh)
Other versions
CN108830184A (en)
Inventor
关明鑫
王喆
黄炜
许清泉
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Yifu Technology Co ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN201810523336.4A
Publication of CN108830184A
Application granted
Publication of CN108830184B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/90 Determination of colour characteristics
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10024 Color image
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G06V 40/165 Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V 40/168 Feature extraction; Face representation
    • G06V 40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

An embodiment of the invention provides a black eye recognition method and device. The method comprises the following steps: acquiring a periocular region and a target skin region from a face image to be detected; converting the periocular region and the target skin region to the LAB color space; calculating, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and calculating from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component; and identifying a black eye in the periocular region according to the obtained first Euclidean distance and second Euclidean distance. The periocular region is thus compared with the target skin region in the LAB color space, thereby identifying whether the periocular region includes a black eye.

Description

Black eye recognition method and device
Technical Field
The invention relates to the technical field of image processing, and in particular to a method and device for recognizing black eyes (that is, dark circles under the eyes).
Background
With the development of image processing technology comes the need to process the black eye regions of faces in images. The first step of such processing is identifying the black eyes of the face in the image; however, there has been no technique that identifies black eyes accurately.
Disclosure of Invention
To overcome the above deficiency in the prior art, embodiments of the present invention provide a black eye recognition method and device that compare a periocular region with a target skin region in the LAB color space, thereby identifying whether the periocular region includes a black eye.
An embodiment of the invention provides a black eye recognition method, comprising the following steps:
acquiring a periocular region and a target skin region from a face image to be detected;
converting the periocular region and the target skin region to the LAB color space;
calculating, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and calculating from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component;
and identifying a black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
Optionally, in the method, the step of calculating the two-dimensional coordinates corresponding to the L, A, and B components of the periocular region and the target skin region in the LAB color space, and calculating from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, includes:
calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region;
generating a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component, wherein the horizontal and vertical coordinates of the two-dimensional coordinate correspond to the mean and the variance;
obtaining a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region;
and obtaining a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
Optionally, in the method, the step of identifying the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance includes:
adding a first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component;
comparing the distance sum corresponding to each component with the corresponding first Euclidean distance;
if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determining that the periocular region includes a black eye;
and if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determining that the periocular region does not include a black eye.
Optionally, in the above method, the step of acquiring the periocular region and the target skin region from the face image to be detected includes:
obtaining a face image, and performing face key point detection on the face image to obtain face key points;
cropping the face image according to the obtained face key points to obtain the periocular region;
and obtaining a target skin region from the face image according to a skin segmentation model.
Optionally, in the above method, the step of obtaining the target skin region from the face image according to the skin segmentation model includes:
obtaining a skin region of the face image according to the skin segmentation model and the face image;
and performing threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
An embodiment of the invention also provides a black eye recognition device, comprising:
an obtaining module, configured to obtain a periocular region and a target skin region from a face image to be detected;
a conversion module, configured to convert the periocular region and the target skin region to the LAB color space;
a calculation module, configured to calculate, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and to calculate from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component;
and an identification module, configured to identify a black eye in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
Optionally, in the above device, the manner in which the calculation module calculates the two-dimensional coordinates corresponding to the L, A, and B components of the periocular region and the target skin region in the LAB color space, and calculates from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, includes:
calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region;
generating a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component, wherein the horizontal and vertical coordinates of the two-dimensional coordinate correspond to the mean and the variance;
obtaining a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region;
and obtaining a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
Optionally, in the above device, the manner of identifying the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance includes:
adding a first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component;
comparing the distance sum corresponding to each component with the corresponding first Euclidean distance;
if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determining that the periocular region includes a black eye;
and if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determining that the periocular region does not include a black eye.
Optionally, in the above device, the obtaining module includes:
a first obtaining sub-module, configured to obtain a face image and perform face key point detection on the face image to obtain face key points;
the first obtaining sub-module being further configured to crop the face image according to the obtained face key points to obtain the periocular region;
and a second obtaining sub-module, configured to obtain a target skin region from the face image according to the skin segmentation model.
Optionally, in the above device, the manner in which the second obtaining sub-module obtains the target skin region from the face image according to the skin segmentation model includes:
obtaining a skin region of the face image according to the skin segmentation model and the face image;
and performing threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
Compared with the prior art, the invention has the following beneficial effects:
A periocular region and a target skin region are obtained from the face image to be detected, and both are converted to the LAB color space. In the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and the target skin region are calculated, and from these coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region are calculated under each component. A black eye in the periocular region is then identified based on the first and second Euclidean distances. The periocular region is thus compared with the target skin region in the LAB color space, thereby identifying whether the periocular region includes a black eye. Moreover, obtaining the periocular region from the face key points ensures that it covers the area where a black eye may appear; and obtaining the target skin region from the skin segmentation model together with the second preset threshold ensures that the target skin region is skin free of black eyes, which further guarantees the accuracy of the identification result.
In order to make the aforementioned and other objects, features and advantages of the present invention comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a block diagram of an electronic device according to an embodiment of the present invention.
Fig. 2 is a schematic flow chart of a black eye recognition method according to an embodiment of the present invention.
Fig. 3 is a flowchart illustrating sub-steps included in step S110 in fig. 2.
Fig. 4 is a flowchart illustrating sub-steps included in sub-step S113 in fig. 3.
Fig. 5 is a flowchart illustrating sub-steps included in step S130 in fig. 2.
Fig. 6 is a flowchart illustrating sub-steps included in step S140 in fig. 2.
Fig. 7 is a block diagram illustrating a black eye recognition apparatus according to an embodiment of the present invention.
Reference numerals: 100 - electronic device; 110 - memory; 120 - memory controller; 130 - processor; 200 - black eye recognition device; 210 - obtaining module; 211 - first obtaining sub-module; 212 - second obtaining sub-module; 220 - conversion module; 230 - calculation module; 240 - identification module.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Some embodiments of the invention are described in detail below with reference to the accompanying drawings. The embodiments described below and the features of the embodiments can be combined with each other without conflict.
Referring to fig. 1, fig. 1 is a block diagram of an electronic device 100 according to an embodiment of the invention. The electronic device 100 may be, but is not limited to, a smart phone, a tablet computer, etc. The electronic device 100 may include: memory 110, memory controller 120, processor 130, and black eye recognition device 200.
The memory 110, the memory controller 120, and the processor 130 are electrically connected to one another, directly or indirectly, to enable data transmission and interaction; for example, these components may be electrically connected via one or more communication buses or signal lines. The memory 110 stores the black eye recognition device 200, which includes at least one software functional module stored in the memory 110 in the form of software or firmware. The processor 130 executes various functional applications and data processing by running the software programs and modules stored in the memory 110, such as the black eye recognition device 200 in the embodiment of the present invention, thereby implementing the black eye recognition method of the embodiment.
The memory 110 may be, but is not limited to, Random Access Memory (RAM), Read-Only Memory (ROM), Programmable Read-Only Memory (PROM), Erasable Programmable Read-Only Memory (EPROM), Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like. The memory 110 stores a program, and the processor 130 executes the program after receiving an execution instruction. Access to the memory 110 by the processor 130, and possibly by other components, may be under the control of the memory controller 120.
The processor 130 may be an integrated circuit chip with signal processing capability. It may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application-Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. It can implement or execute the methods, steps, and logic blocks disclosed in the embodiments of the present invention. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
It will be appreciated that the configuration shown in FIG. 1 is merely illustrative and that electronic device 100 may include more or fewer components than shown in FIG. 1 or have a different configuration than shown in FIG. 1. The components shown in fig. 1 may be implemented in hardware, software, or a combination thereof.
Referring to fig. 2, fig. 2 is a flowchart illustrating a black eye recognition method according to an embodiment of the present invention. The method is applied to the electronic device 100. The specific flow of the black eye recognition method is explained in detail below.
Step S110: obtain a periocular region and a target skin region from the face image to be detected.
Referring to fig. 3, fig. 3 is a flowchart illustrating sub-steps included in step S110 in fig. 2. Step S110 may include sub-step S111, sub-step S112, and sub-step S113.
Sub-step S111: obtain a face image, and perform face key point detection on the face image to obtain face key points.
In this embodiment, the face image to be detected is obtained first. It can be obtained in various ways: for example, from a gallery; by directly using an image just captured by the camera; or by using an image sent by another device. The obtained image is one that includes the face to be detected.
Face key point detection is then performed on the face image with a face alignment algorithm to obtain the face key points of the image to be detected. The face key points are points that can be used to locate the cheek contour, eyebrow regions, eye regions, nose region, mouth region, and so on of the face in the image. The more face key points there are, the more accurately these regions can be located.
Sub-step S112: crop the face image according to the obtained face key points to obtain the periocular region.
In this embodiment, the cheek contour of the face is determined from the obtained face key points, the eye regions, eyebrow regions, and mouth region are likewise determined from the key points, and the delineated face regions form a face key point layer. Within this layer, the periocular region can be cropped out of the face image to be detected according to the eye region. The periocular region obtained with the face alignment algorithm covers, as far as possible, the area where a black eye may appear, while excluding the "lying silkworm" region (the puffy roll directly under the lower eyelid, also known as aegyo sal), which would otherwise severely interfere with black eye recognition; this improves recognition accuracy.
Sub-step S113: obtain a target skin region from the face image according to the skin segmentation model.
Referring to fig. 4, fig. 4 is a flowchart illustrating sub-steps included in sub-step S113 in fig. 3. Sub-step S113 may include sub-step S1131 and sub-step S1132.
Sub-step S1131: obtain a skin region of the face image according to the skin segmentation model and the face image.
In this embodiment, a skin segmentation model based on skin color may be used to segment the face image, separating the areas close to facial skin color from the face image to obtain the skin region.
In one implementation of this embodiment, the obtained skin region may be used directly as the target skin region in the subsequent steps.
In another implementation of this embodiment, sub-step S1132 is performed on the obtained skin region.
Sub-step S1132: perform threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
In this embodiment, the second preset threshold may be set according to normal facial skin. Screening the target skin region out of the skin region by threshold segmentation with this second preset threshold removes the influence of highlights and shadows, further improving black eye recognition accuracy. The second preset threshold may be a range or take some other form; its specific value and representation can be set according to the actual situation.
Step S120: convert the periocular region and the target skin region to the LAB color space.
In this embodiment, the face image to be detected, the periocular region, and the target skin region are all images in the RGB color space, so the obtained periocular region and target skin region must be converted from the RGB color space to the LAB color space. The RGB color space comprises the three primary colors red (R), green (G), and blue (B). The LAB color space comprises a lightness component and two color components: L represents lightness (luminance) and ranges from 0 to 100, A represents the axis from magenta to green, B represents the axis from yellow to blue, and both A and B range from +127 to -128.
In one implementation of this embodiment, since RGB cannot be converted to LAB directly, the values are first converted to XYZ and then to LAB, that is: RGB → XYZ → LAB.
It is of course to be understood that other ways of converting the periocular region and the target skin region to LAB color space are also possible.
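For example, OpenCV chains this two-step conversion internally; a minimal sketch follows (the [0, 1] scaling convention is our choice, not the patent's).

```python
# Sketch of the RGB -> XYZ -> LAB route using OpenCV, which performs the two
# conversions internally. With float32 input scaled to [0, 1], the output has
# L in [0, 100] and A/B roughly in [-127, 127].
import cv2
import numpy as np

def to_lab(region_bgr: np.ndarray) -> np.ndarray:
    rgb = cv2.cvtColor(region_bgr, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    return cv2.cvtColor(rgb, cv2.COLOR_RGB2LAB)  # per-pixel (L, A, B)
```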
Step S130: calculate, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and calculate from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component.
Referring to fig. 5, fig. 5 is a flowchart illustrating sub-steps included in step S130 in fig. 2. Step S130 may include substep S131, substep S132, substep S133, and substep S134.
Sub-step S131: calculate the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculate the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region.
Sub-step S132: generate a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component.
In this embodiment, after the periocular region and the target skin region are converted to the LAB color space, the coordinates of each pixel point in the two regions are represented by the three LAB components; for example, the coordinate of a pixel point is (L, A, B). For the periocular region, the mean and variance are calculated over the L components of the pixel coordinates to obtain the mean and variance corresponding to the L component of the periocular region; likewise over the A components to obtain the mean and variance corresponding to the A component, and over the B components to obtain the mean and variance corresponding to the B component. With the mean and variance corresponding to each component of the periocular region obtained, the two-dimensional coordinate corresponding to each component is generated from that component's mean and variance.
The horizontal and vertical coordinates of each two-dimensional coordinate correspond to the mean and the variance: for example, the horizontal coordinate may correspond to the mean and the vertical coordinate to the variance, or the horizontal coordinate to the variance and the vertical coordinate to the mean.
Similarly, the two-dimensional coordinates corresponding to each component of the target skin region are calculated from the coordinates of its pixel points in the same way; for that process, refer to the description above for the periocular region.
Sub-step S133: obtain a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region.
Sub-step S134: obtain a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
In this embodiment, Euclidean distances can be calculated from the two-dimensional coordinates corresponding to each component of each region, yielding three first Euclidean distances for the periocular region and three second Euclidean distances for the target skin region, corresponding to the L, A, and B components respectively. The Euclidean distance is the true distance between two points in m-dimensional space. Optionally, the Euclidean distance may be calculated between the two-dimensional coordinate corresponding to each component and the origin of coordinates.
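A sketch of sub-steps S131 to S134 under the origin-based variant just mentioned; the function name and shape conventions are ours.

```python
# Sketch of sub-steps S131-S134 with the origin as the preset coordinate:
# each component's (mean, variance) pair is a 2-D point, and its Euclidean
# distance from the origin is that component's feature value.
import numpy as np

def component_distances(region_lab: np.ndarray) -> np.ndarray:
    """region_lab: float LAB pixels, any shape ending in 3. Returns 3 distances."""
    pixels = region_lab.reshape(-1, 3)
    mean = pixels.mean(axis=0)                # per-component mean
    var = pixels.var(axis=0)                  # per-component variance
    return np.hypot(mean, var)                # sqrt(mean^2 + var^2), per component
```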
Step S140: identify the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
In one implementation of this embodiment, since the target skin region does not include a black eye, the second Euclidean distances of the target skin region can be used directly as the judgment criterion. By comparing the first and second Euclidean distances corresponding to the same component, it can be judged whether the periocular region includes a black eye, completing the black eye identification.
Referring to fig. 6, fig. 6 is a flowchart illustrating sub-steps included in step S140 in fig. 2. Step S140 may include sub-step S141, sub-step S142, sub-step S143, and sub-step S144.
Sub-step S141: add the first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component.
Sub-step S142: compare the distance sum corresponding to each component with the corresponding first Euclidean distance.
Sub-step S143: if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determine that the periocular region includes a black eye.
Sub-step S144: if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determine that the periocular region does not include a black eye.
In this embodiment, a first preset threshold corresponding to each component may be preset to assist in identifying the black eye. The first preset thresholds may be set through data statistics or in other ways. Since there are three components, L, A, and B, three first preset thresholds are set correspondingly. Optionally, the three first preset thresholds may be the same or different, set according to the actual situation.
For the target skin region, the sum of the second Euclidean distance corresponding to each component and the first preset threshold corresponding to that component is calculated; that is, the second Euclidean distance and the first preset threshold of the same component are added to obtain the distance sum corresponding to each component.
The distance sum for each component is then compared with the first Euclidean distance for that component. After all three components are compared, if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, a black eye exists in the periocular region, so it can be determined that the periocular region includes a black eye. If the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, no black eye exists in the periocular region, so it can be determined that the periocular region does not include a black eye.
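A sketch of this decision rule; the three threshold values are illustrative placeholders, not statistics from the patent.

```python
# Sketch of sub-steps S141-S144: add the first preset threshold to each
# component's second (target-skin) distance and report a black eye when at
# least two distance sums fall below the corresponding first (periocular)
# distances. The threshold values below are assumptions.
import numpy as np

def has_black_eye(first_dist: np.ndarray, second_dist: np.ndarray,
                  thresholds=(5.0, 2.0, 2.0)) -> bool:
    dist_sum = second_dist + np.asarray(thresholds)  # distance sum per component
    return int(np.sum(dist_sum < first_dist)) >= 2   # at least two components
```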
In the above method, the skin segmentation model (based on deep learning) accurately segments the face image to be detected to obtain the skin region, and the target skin region is screened out of the skin region by the second preset threshold, which strengthens the robustness of the subsequent judgment. Meanwhile, because the lying-silkworm region severely interferes with black eye recognition, the face alignment algorithm keeps it out of the cropped periocular region, which greatly improves recognition accuracy. After the target skin region and the periocular region are converted to the LAB color space, the coordinates of their pixel points are used to model the three LAB components in two dimensions, from which the black eye is identified, further improving the accuracy of black eye recognition.
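Putting the pieces together, a hypothetical end-to-end sketch built from the illustrative helpers above; all names and values are our assumptions, not the patent's.

```python
# End-to-end sketch chaining the illustrative helpers defined above
# (crop_periocular, screen_target_skin, to_lab, component_distances,
# has_black_eye).
import numpy as np

def recognize_black_eye(image_bgr: np.ndarray, landmarks: np.ndarray,
                        skin_mask: np.ndarray) -> bool:
    eye_region = crop_periocular(image_bgr, landmarks)           # step S110
    mask = screen_target_skin(image_bgr, skin_mask)              # step S110
    skin_pixels = image_bgr[mask].reshape(-1, 1, 3)              # N x 1 x 3 for cvtColor
    eye_lab, skin_lab = to_lab(eye_region), to_lab(skin_pixels)  # step S120
    return has_black_eye(component_distances(eye_lab),           # steps S130-S140
                         component_distances(skin_lab))
```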
Referring to fig. 7, fig. 7 is a block diagram illustrating a black eye recognition device 200 according to an embodiment of the present invention. The black eye recognition device 200 may include an obtaining module 210, a conversion module 220, a calculation module 230, and an identification module 240.
The obtaining module 210 is configured to obtain a periocular region and a target skin region from the face image to be detected.
Further, the obtaining module 210 includes a first obtaining sub-module 211 and a second obtaining sub-module 212.
The first obtaining sub-module 211 is configured to obtain a face image and perform face key point detection on the face image to obtain face key points.
The first obtaining sub-module 211 is further configured to crop the face image according to the obtained face key points to obtain the periocular region.
The second obtaining sub-module 212 is configured to obtain a target skin region from the face image according to a skin segmentation model.
Further, the manner in which the second obtaining sub-module 212 obtains the target skin region from the face image according to the skin segmentation model includes:
obtaining a skin region of the face image according to the skin segmentation model and the face image;
and performing threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
In this embodiment, the obtaining module 210 is configured to execute step S110 in fig. 2; for details about the obtaining module 210, refer to the description of step S110 in fig. 2.
The conversion module 220 is configured to convert the periocular region and the target skin region to the LAB color space.
In this embodiment, the conversion module 220 is configured to execute step S120 in fig. 2; for details about the conversion module 220, refer to the description of step S120 in fig. 2.
The calculation module 230 is configured to calculate, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and to calculate from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component.
Further, the manner in which the calculation module 230 does so includes:
calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region;
generating a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component, wherein the horizontal and vertical coordinates of the two-dimensional coordinate correspond to the mean and the variance;
obtaining a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region;
and obtaining a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
In this embodiment, the calculation module 230 is configured to execute step S130 in fig. 2; for details about the calculation module 230, refer to the description of step S130 in fig. 2.
The identification module 240 is configured to identify the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
Further, the manner in which the identification module 240 identifies the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance includes:
adding the first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component;
comparing the distance sum corresponding to each component with the corresponding first Euclidean distance;
if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determining that the periocular region includes a black eye;
and if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determining that the periocular region does not include a black eye.
In this embodiment, the identification module 240 is configured to execute step S140 in fig. 2; for details about the identification module 240, refer to the description of step S140 in fig. 2.
In summary, the embodiments of the present invention provide a black eye recognition method and device. A periocular region and a target skin region are first obtained from the face image to be detected, and both are converted from the RGB color space to the LAB color space. Two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and the target skin region are then calculated in the LAB color space, and from these coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region are calculated under each component. Finally, whether the periocular region includes a black eye is judged from the first and second Euclidean distances, completing the black eye identification for the periocular region. The periocular region is thus compared with the target skin region in the LAB color space, thereby identifying whether it includes a black eye.
Moreover, obtaining the periocular region from the face key points ensures that it covers the area where a black eye may appear; and obtaining the target skin region from the skin segmentation model together with the second preset threshold ensures that the target skin region is skin free of black eyes, further guaranteeing the accuracy of the identification result.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus and method can be implemented in other ways. The apparatus embodiments described above are merely illustrative, and for example, the flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of apparatus, methods and computer program products according to various embodiments of the present invention. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
In addition, the functional modules in the embodiments of the present invention may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
It should be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprise", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element preceded by "comprises a ..." does not preclude the existence of additional identical elements in the process, method, article, or apparatus that comprises the element.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A black eye recognition method, the method comprising:
acquiring a periocular region and a target skin region from a face image to be detected;
converting the periocular region and the target skin region to the LAB color space;
calculating, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and calculating from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, wherein the horizontal and vertical coordinates of the two-dimensional coordinate corresponding to each component of a region correspond to the mean and variance of that component's values over the pixel points in the region; and, for each component, the first Euclidean distance represents the Euclidean distance between the two-dimensional coordinate corresponding to that component of the periocular region and a preset coordinate, and the second Euclidean distance represents the Euclidean distance between the two-dimensional coordinate corresponding to that component of the target skin region and the preset coordinate;
and identifying a black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
2. The method according to claim 1, wherein the step of calculating the two-dimensional coordinates corresponding to the L, A, and B components of the periocular region and the target skin region in the LAB color space, and calculating from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, comprises:
calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region;
generating a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component;
obtaining a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region;
and obtaining a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
3. The method according to claim 1 or 2, wherein the step of identifying the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance comprises:
adding a first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component;
comparing the distance sum corresponding to each component with the corresponding first Euclidean distance;
if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determining that the periocular region includes a black eye;
and if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determining that the periocular region does not include a black eye.
4. The method according to claim 1, wherein the step of acquiring the periocular region and the target skin region from the face image to be detected comprises:
obtaining a face image, and performing face key point detection on the face image to obtain face key points;
cropping the face image according to the obtained face key points to obtain the periocular region;
and obtaining a target skin region from the face image according to a skin segmentation model.
5. The method according to claim 4, wherein the step of obtaining the target skin region from the face image according to the skin segmentation model comprises:
obtaining a skin region of the face image according to the skin segmentation model and the face image;
and performing threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
6. A black eye recognition device, the device comprising:
an obtaining module, configured to obtain a periocular region and a target skin region from a face image to be detected;
a conversion module, configured to convert the periocular region and the target skin region to the LAB color space;
a calculation module, configured to calculate, in the LAB color space, two-dimensional coordinates corresponding to each of the L, A, and B components of the periocular region and of the target skin region, and to calculate from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, wherein the horizontal and vertical coordinates of the two-dimensional coordinate corresponding to each component of a region correspond to the mean and variance of that component's values over the pixel points in the region; and, for each component, the first Euclidean distance represents the Euclidean distance between the two-dimensional coordinate corresponding to that component of the periocular region and a preset coordinate, and the second Euclidean distance represents the Euclidean distance between the two-dimensional coordinate corresponding to that component of the target skin region and the preset coordinate;
and an identification module, configured to identify a black eye in the periocular region according to the obtained first Euclidean distance and second Euclidean distance.
7. The device according to claim 6, wherein the manner in which the calculation module calculates the two-dimensional coordinates corresponding to the L, A, and B components of the periocular region and the target skin region in the LAB color space, and calculates from the two-dimensional coordinates a first Euclidean distance of the periocular region and a second Euclidean distance of the target skin region under each component, comprises:
calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the periocular region, and calculating the mean and variance corresponding to each of the L, A, and B components from the coordinates of the pixel points in the target skin region;
generating a two-dimensional coordinate corresponding to each component from the mean and variance corresponding to that component;
obtaining a first Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the periocular region;
and obtaining a second Euclidean distance corresponding to each component from the two-dimensional coordinates corresponding to each component of the target skin region.
8. The device according to claim 6 or 7, wherein the manner of identifying the black eye located in the periocular region according to the obtained first Euclidean distance and second Euclidean distance comprises:
adding a first preset threshold to the second Euclidean distance corresponding to the same component, to obtain a distance sum corresponding to each component;
comparing the distance sum corresponding to each component with the corresponding first Euclidean distance;
if the distance sums corresponding to at least two components are smaller than the corresponding first Euclidean distances, determining that the periocular region includes a black eye;
and if the distance sums corresponding to at least two components are not smaller than the corresponding first Euclidean distances, determining that the periocular region does not include a black eye.
9. The device according to claim 6, wherein the obtaining module comprises:
a first obtaining sub-module, configured to obtain a face image and perform face key point detection on the face image to obtain face key points;
the first obtaining sub-module being further configured to crop the face image according to the obtained face key points to obtain the periocular region;
and a second obtaining sub-module, configured to obtain a target skin region from the face image according to the skin segmentation model.
10. The device according to claim 9, wherein the manner in which the second obtaining sub-module obtains the target skin region from the face image according to the skin segmentation model comprises:
obtaining a skin region of the face image according to the skin segmentation model and the face image;
and performing threshold segmentation on the skin region according to a second preset threshold, so as to screen the target skin region out of the skin region.
CN201810523336.4A 2018-05-28 2018-05-28 Black eye recognition method and device Active CN108830184B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810523336.4A CN108830184B (en) 2018-05-28 2018-05-28 Black eye recognition method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810523336.4A CN108830184B (en) 2018-05-28 2018-05-28 Black eye recognition method and device

Publications (2)

Publication Number Publication Date
CN108830184A CN108830184A (en) 2018-11-16
CN108830184B 2021-04-16

Family

ID=64146237

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810523336.4A Active CN108830184B (en) 2018-05-28 2018-05-28 Black eye recognition method and device

Country Status (1)

Country Link
CN (1) CN108830184B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111241889B (en) * 2018-11-29 2023-05-12 荣耀终端有限公司 Method and device for detecting and evaluating dark circles
CN109919030B (en) * 2019-01-31 2021-07-13 深圳和而泰数据资源与云技术有限公司 Black eye type identification method and device, computer equipment and storage medium
CN109919029A (en) * 2019-01-31 2019-06-21 深圳和而泰数据资源与云技术有限公司 Black eye kind identification method, device, computer equipment and storage medium
CN112541394A (en) * 2020-11-11 2021-03-23 上海诺斯清生物科技有限公司 Black eye and rhinitis identification method, system and computer medium
CN113128374A (en) * 2021-04-02 2021-07-16 西安融智芙科技有限责任公司 Sensitive skin detection method and sensitive skin detection device based on image processing

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103455790A (en) * 2013-06-24 2013-12-18 厦门美图网科技有限公司 Skin identification method based on skin color model
CN105678813A (en) * 2015-11-26 2016-06-15 乐视致新电子科技(天津)有限公司 Skin color detection method and device
KR101654287B1 (en) * 2015-05-18 2016-09-06 안양대학교 산학협력단 A Navel Area Detection Method Based on Body Structure
CN107392841A (en) * 2017-06-16 2017-11-24 广东欧珀移动通信有限公司 Livid ring around eye removing method, device and terminal in human face region
CN107730456A (en) * 2016-08-10 2018-02-23 卡西欧计算机株式会社 Image processing method and image processing apparatus


Also Published As

Publication number Publication date
CN108830184A (en) 2018-11-16

Similar Documents

Publication Publication Date Title
CN108830184B (en) Black eye recognition method and device
CN111028213A (en) Image defect detection method and device, electronic equipment and storage medium
JP4505362B2 (en) Red-eye detection apparatus and method, and program
US9652855B2 (en) Image processing apparatus that identifies image area, and image processing method
US10115189B2 (en) Image processing apparatus, image processing method, and recording medium
US9262690B2 (en) Method and device for detecting glare pixels of image
KR101631012B1 (en) Image processing apparatus and image processing method
WO2016107638A1 (en) An image face processing method and apparatus
CN107346419B (en) Iris recognition method, electronic device, and computer-readable storage medium
KR20130028610A (en) Apparatus and method for providing real-time lane detection, recording medium thereof
CN106331746B (en) Method and apparatus for identifying watermark location in video file
US10121260B2 (en) Orientation estimation method and orientation estimation device
CN112507767B (en) Face recognition method and related computer system
CN112149592A (en) Image processing method and device and computer equipment
US10140555B2 (en) Processing system, processing method, and recording medium
JP2015103188A (en) Image analysis device, image analysis method, and image analysis program
JP6340795B2 (en) Image processing apparatus, image processing system, image processing method, image processing program, and moving body control apparatus
EP2919149A2 (en) Image processing apparatus and image processing method
CN111667419A (en) Moving target ghost eliminating method and system based on Vibe algorithm
CN111767868A (en) Face detection method and device, electronic equipment and storage medium
Zou et al. Statistical analysis of signal-dependent noise: application in blind localization of image splicing forgery
US9842406B2 (en) System and method for determining colors of foreground, and computer readable recording medium therefor
JP2007219899A (en) Personal identification device, personal identification method, and personal identification program
JP2018109824A (en) Electronic control device, electronic control system, and electronic control method
KR20180111140A (en) Mehtod and apparatus for extracting architectural components information of building image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20211208

Address after: 361100 568, No. 942, tonglong Second Road, torch high tech Zone (Xiang'an) Industrial Zone, Xiang'an District, Xiamen City, Fujian Province

Patentee after: Xiamen Meitu Yifu Technology Co.,Ltd.

Address before: B1f-089, Zone C, Huaxun building, software park, torch high tech Zone, Xiamen City, Fujian Province

Patentee before: XIAMEN HOME MEITU TECHNOLOGY Co.,Ltd.
