CN114255491A - Eye occlusion judgment method and device, computer readable storage medium and equipment

Info

Publication number: CN114255491A
Application number: CN202010953190.4A
Authority: CN (China)
Other languages: Chinese (zh)
Prior art keywords: eye, image, feature map, light spot, eye region
Inventors: 周军, 王洋
Applicant/Assignee: Beijing Eyes Intelligent Technology Co Ltd; Beijing Eyecool Technology Co Ltd
Legal status: Pending

Classifications

    • G06F18/214: Pattern recognition; Analysing; Design or setup of recognition systems or techniques; Generating training patterns; Bootstrap methods, e.g. bagging or boosting (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING)
    • G06F18/2415: Pattern recognition; Analysing; Classification techniques relating to the classification model, based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate (G PHYSICS > G06 COMPUTING; CALCULATING OR COUNTING > G06F ELECTRIC DIGITAL DATA PROCESSING)

Abstract

The invention discloses an eye occlusion judgment method and device, a computer-readable storage medium, and equipment, belonging to the field of biometric recognition. The method includes: acquiring a face image and cropping it to obtain an eye region image; detecting the light spot area on the eye region image and calculating the proportion of the light spot area within the eye region image; and inputting eye region images whose proportion does not reach a set proportion threshold into a pre-trained convolutional neural network to judge whether the eye is occluded. The invention effectively solves the problem of misjudgment caused by light spots during eye occlusion judgment, and reduces time complexity while ensuring classification accuracy.

Description

Eye occlusion judgment method and device, computer readable storage medium and equipment
Technical Field
The present invention relates to the field of biometric identification, and in particular, to a method, an apparatus, a computer-readable storage medium, and a device for determining eye occlusion.
Background
The rise of deep learning has brought great progress to biometric recognition, especially to research directions such as face recognition and image understanding. Biometric features can not only distinguish individuals but also be used to judge an individual's state. In face recognition, a clear, complete, and sufficiently lit face image is first acquired, and data processing is then performed on it. In a face recognition task, incomplete or occluded face images reduce accuracy, and occlusion of the eyes and other facial features has a particularly large impact on algorithm performance, so judging eye occlusion has important research value.
Existing face occlusion detection proceeds roughly as follows: a face image to be processed is acquired; feature points of the facial features are marked in the acquired image; the image is imported into a pre-trained face occlusion model, which obtains occlusion information for the face from the feature points contained in the image; finally, the occlusion information is output.
When the prior-art face occlusion detection methods are used to judge eye occlusion, the following defects exist:
First, although judging eye occlusion from the whole face makes fuller use of texture and other information, the input dimension of the network is larger and the method is more time-consuming. In addition, the texture of the face region is more complex than that of the eye region and contains more interfering factors, which is unfavorable for optimizing the classification task.
Second, existing face occlusion detection methods easily misjudge images containing light spots, which makes detection results inaccurate for users wearing glasses: even if the light spot area is small, the light spots produced by lens reflections in the eye imaging region interfere with the eyes and easily cause the image to be misjudged as eye occlusion.
Disclosure of Invention
In order to solve the technical problems that the eye occlusion judgment methods in the prior art are time-consuming and easily affected by light spots, the present invention provides an eye occlusion judgment method and device, a computer-readable storage medium, and equipment, which effectively solve the problem of misjudgment caused by light spots during eye occlusion judgment and reduce time complexity while ensuring classification accuracy.
The technical solution provided by the present invention is as follows:
In a first aspect, the present invention provides an eye occlusion judgment method, including:
acquiring a face image and cropping it to obtain an eye region image;
detecting a light spot area on the eye region image and calculating the proportion of the light spot area within the eye region image;
inputting the eye region image whose proportion does not reach a set proportion threshold into a pre-trained convolutional neural network, and judging whether the eye is occluded.
Further, the detecting a light spot area on the eye region image includes:
graying the eye region image to obtain a grayscale image;
replacing the gray value of each pixel on the grayscale image with a statistical value of the gray values of all pixels in a neighborhood of a set size around that pixel, to obtain a statistical feature image;
dividing the statistical feature image into image blocks of a predetermined size, and calculating, for each image block, the proportion of pixels whose gray value is greater than a set gray threshold among all pixels in the block;
if the proportion exceeds a set proportion threshold, marking the image block as a light spot point, and connecting all light spot points to form the light spot area.
Further, the acquiring a face image and cropping it to obtain an eye region image includes:
acquiring a face image and locating face key points, wherein the face key points include eye key points;
calculating the tilt angle of the eyes according to the eye key points, and rotating the face image into alignment according to the tilt angle;
cropping an eye region image from the aligned face image according to the eye key points and performing a normalization operation.
Further, the inputting the eye region image whose proportion does not reach the set proportion threshold into a pre-trained convolutional neural network and judging whether the eye is occluded includes:
performing a first convolution operation, a first BN operation, a first ReLU operation, and a first pooling operation on the eye region image whose proportion does not reach the set proportion threshold, to obtain a first feature map;
expanding the edge of the first feature map by several pixels, and performing a second convolution operation, a second BN operation, a second ReLU operation, and a second pooling operation to obtain a second feature map;
expanding the edge of the second feature map by several pixels, and performing a third convolution operation, a third BN operation, a third ReLU operation, and a third pooling operation to obtain a third feature map;
performing a first fully connected operation, a fourth ReLU operation, a first dropout operation, a second fully connected operation, a fifth ReLU operation, and a second dropout operation on the third feature map to obtain a fourth feature map;
performing a third fully connected operation with an output dimension of 2 on the fourth feature map, and classifying with Softmax to obtain the probability that the eye region image is an eye and the probability that it is not an eye;
if the probability that the eye region image is an eye exceeds a set occlusion threshold, judging that the eye is not occluded; otherwise, judging that the eye is occluded.
Further, the convolutional neural network is obtained by training as follows:
constructing a training set, wherein the training set includes a plurality of training samples, the training samples include positive samples and negative samples, the positive samples are unoccluded eye images, and the negative samples are eye images occluded by non-eye images;
performing data expansion on the training set, the data expansion including translation, rotation, contrast modification, and/or noise addition on the training samples;
training the convolutional neural network with the training set after data expansion.
Further, the non-eye images include non-skin-color images and skin-color images from outside the eye region;
the data expansion further includes small light spot expansion of the training samples, the small light spot expansion including:
randomly acquiring a training sample from the training set with a certain probability and locating the eye region in the training sample;
mapping the eye region from RGB space to HSV space and reducing the value of the brightness channel V;
randomly adding a small white light spot to the eye region, wherein the proportion of the small light spot within the eye region is smaller than the proportion threshold.
In a second aspect, the present invention provides an eye occlusion judgment device, comprising:
an image acquisition module for acquiring a face image and cropping it to obtain an eye region image;
a light spot detection module for detecting a light spot area on the eye region image and calculating the proportion of the light spot area within the eye region image;
an occlusion judgment module for inputting the eye region image whose proportion does not reach a set proportion threshold into a pre-trained convolutional neural network and judging whether the eye is occluded.
Further, the light spot detection module includes:
a graying unit for graying the eye region image to obtain a grayscale image;
a first statistical unit for replacing the gray value of each pixel on the grayscale image with a statistical value of the gray values of all pixels in a neighborhood of a set size around that pixel, to obtain a statistical feature image;
a second statistical unit for dividing the statistical feature image into image blocks of a predetermined size and calculating, for each image block, the proportion of pixels whose gray value is greater than a set gray threshold among all pixels in the block;
a light spot judgment unit for marking the image block as a light spot point if the proportion exceeds a set proportion threshold, all light spot points being connected to form the light spot area.
Further, the image acquisition module includes:
an acquisition unit for acquiring a face image and locating face key points, wherein the face key points include eye key points;
a rotation unit for calculating the tilt angle of the eyes according to the eye key points and rotating the face image into alignment according to the tilt angle;
a cropping and normalization unit for cropping the eye region image from the aligned face image according to the eye key points and performing a normalization operation.
Further, the occlusion judgment module includes:
a first operation unit configured to perform a first convolution operation, a first BN operation, a first ReLU operation, and a first pooling operation on an eye region image whose proportion does not reach the set proportion threshold, to obtain a first feature map;
a second operation unit configured to expand the edge of the first feature map by several pixels and perform a second convolution operation, a second BN operation, a second ReLU operation, and a second pooling operation to obtain a second feature map;
a third operation unit configured to expand the edge of the second feature map by several pixels and perform a third convolution operation, a third BN operation, a third ReLU operation, and a third pooling operation to obtain a third feature map;
a fourth operation unit configured to perform a first fully connected operation, a fourth ReLU operation, a first dropout operation, a second fully connected operation, a fifth ReLU operation, and a second dropout operation on the third feature map to obtain a fourth feature map;
a classification unit configured to perform a third fully connected operation with an output dimension of 2 on the fourth feature map and classify with Softmax to obtain the probability that the eye region image is an eye and the probability that it is not an eye;
an occlusion judgment unit configured to judge that the eye is not occluded if the probability that the eye region image is an eye exceeds a set occlusion threshold, and otherwise judge that the eye is occluded.
Further, the convolutional neural network is obtained by training through the following modules:
a training set construction module for constructing a training set, wherein the training set includes a plurality of training samples, the training samples include positive samples and negative samples, the positive samples are unoccluded eye images, and the negative samples are eye images occluded by non-eye images;
a data expansion module for performing data expansion on the training set, the data expansion including translation, rotation, contrast modification, and/or noise addition on the training samples;
a training module for training the convolutional neural network with the training set after data expansion.
Further, the non-eye images include non-skin-color images and skin-color images from outside the eye region;
the data expansion further includes small light spot expansion of the training samples, the small light spot expansion including:
a sample acquisition unit for randomly acquiring training samples from the training set with a certain probability and locating the eye region in the training samples;
a brightness adjustment unit for mapping the eye region from RGB space to HSV space and reducing the value of the brightness channel V;
a light spot addition unit for randomly adding a small white light spot to the eye region, the proportion of the small light spot within the eye region being smaller than the proportion threshold.
In a third aspect, the present invention provides a computer-readable storage medium for eye occlusion judgment, comprising a memory for storing processor-executable instructions which, when executed by the processor, implement the steps of the eye occlusion judgment method of the first aspect.
In a fourth aspect, the present invention provides equipment for eye occlusion judgment, comprising at least one processor and a memory storing computer-executable instructions which, when executed by the processor, implement the steps of the eye occlusion judgment method of the first aspect.
The invention has the following beneficial effects:
according to the eye shading judgment method, the eye area images with large light spots are removed through light spot detection, then eye shading classification is carried out on the eye area images without the light spots or with small light spots through a convolutional neural network, and the problem of misjudgment caused by the light spots in the eye shading judgment process is effectively solved. The input of the convolutional neural network is an eye region image, compared with a face image input in the prior art, the input dimensionality is small, the time consumption of the convolutional neural network is low, the efficiency is high, the classification accuracy is guaranteed, and the time complexity is reduced. And eye region images are simpler than face images, interference factors are few, and the optimization of classification tasks is facilitated.
Drawings
FIG. 1 is a flowchart of the eye occlusion judgment method of the present invention;
FIG. 2 is a flowchart of S200 in the eye occlusion judgment method of the present invention;
FIG. 3 is a flowchart of S100 in the eye occlusion judgment method of the present invention;
FIG. 4 is a flowchart of S300 in the eye occlusion judgment method of the present invention;
FIG. 5 is a schematic diagram of the training process of the convolutional neural network in the present invention;
FIG. 6 is a schematic diagram of the eye key points among the face key points;
FIG. 7 is a schematic diagram of the process of eye region image acquisition and normalization;
FIG. 8 is a schematic diagram of all pixels in a neighborhood of size 3 × 3;
FIG. 9 is an example of the statistical feature image obtained in step S220 of the eye occlusion judgment method of the present invention;
FIG. 10 is a schematic diagram of the structure of the convolutional neural network in the present invention;
FIG. 11 is a schematic diagram of the eye occlusion judgment device of the present invention;
FIG. 12 is a schematic diagram of the light spot detection module in the eye occlusion judgment device of the present invention;
FIG. 13 is a schematic diagram of the image acquisition module in the eye occlusion judgment device of the present invention;
FIG. 14 is a schematic diagram of the occlusion judgment module in the eye occlusion judgment device of the present invention;
FIG. 15 is a schematic diagram of the training modules of the convolutional neural network of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings and specific embodiments. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
Example 1:
An embodiment of the present invention provides an eye occlusion judgment method. As shown in fig. 1, the method includes:
S100: acquiring a face image and cropping it to obtain an eye region image.
This step obtains the eye region image and can be implemented in various ways. For example, face detection and eye key point localization are performed after the face image is acquired, and the eye region image is cropped according to the located eye key points. Preprocessing operations such as rotation correction, blur correction, and normalization can also be performed on the face image or the eye region image.
S200: detecting a light spot area on the eye region image and calculating the proportion of the light spot area within the eye region image.
Although prior-art face occlusion detection methods can also be used to judge eye occlusion, they are sensitive to illumination. In particular, during the use of a face system, the glasses of a user wearing glasses can produce large light spots under illumination, and if a light spot heavily overlaps the eye region, the occlusion algorithm makes wrong judgments.
To solve the above problem, the present invention performs light spot detection on the eye region image and determines the overlap between a spot and the eye region, expressed as the proportion of the light spot area within the eye region image. If the proportion reaches a set proportion threshold (e.g., 20%), the spot is considered a large light spot. Large light spots affect occlusion judgment and degrade the accuracy of recognition such as face recognition or iris recognition, so images with large light spots are generally unusable: a face image with a large light spot can be directly regarded as having an illumination problem, the image is judged unqualified, and the person cannot log in.
S300: inputting the eye region image whose proportion does not reach the set proportion threshold into a pre-trained convolutional neural network, and judging whether the eye is occluded.
If the proportion does not reach the set proportion threshold, the image has no light spot or only a small one; since only large light spots cause the occlusion algorithm to make wrong judgments, eye region images with no or only small light spots can be input into the convolutional neural network for eye occlusion judgment.
Compared with the prior art, the present invention removes eye region images containing large light spots through light spot detection and then uses a convolutional neural network to perform eye occlusion classification on the eye region images with no light spots or only small ones, effectively solving the problem of misjudgment caused by light spots during eye occlusion judgment. The input to the convolutional neural network is an eye region image; compared with the face image input of the prior art, the input dimension is small, so the network is fast and efficient, ensuring classification accuracy while reducing time complexity. Moreover, an eye region image is simpler than a face image and has fewer interfering factors, which is favorable for optimizing the classification task.
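Purely for illustration, the overall flow of S100 to S300 can be sketched in Python as follows. The three callables (crop_eye_region, detect_spot_ratio, occlusion_net) are hypothetical placeholders for the steps described above, and the threshold values are the example figures used in this description.

```python
# Illustrative sketch of the S100-S300 flow; the three callables passed in
# are hypothetical placeholders for the steps described in this embodiment.
def judge_eye_occlusion(face_image, crop_eye_region, detect_spot_ratio, occlusion_net,
                        ratio_threshold=0.20, occlusion_threshold=0.5):
    # S100: crop an eye region image from the face image
    eye_image = crop_eye_region(face_image)

    # S200: detect light spots and compute their proportion of the eye region
    if detect_spot_ratio(eye_image) >= ratio_threshold:
        # large light spot: the image is treated as an illumination problem
        return "unqualified"

    # S300: the pre-trained CNN returns p, the probability that the image is an eye
    p = occlusion_net(eye_image)
    return "not occluded" if p > occlusion_threshold else "occluded"
```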
In the present invention, the light spot region can be detected in various ways; two specific examples are given below.
Example one:
First, the eye region image is grayed: the three-channel color image is converted into a grayscale image, and the gray value of each pixel is examined. Given the characteristics of eye region images, the gray values of pixels outside light spots should be below a gray threshold (the gray threshold may be, e.g., 200). Therefore, any pixel in the eye region image whose gray value is greater than or equal to 200 can be classified into the light spot region.
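A minimal sketch of Example one, assuming OpenCV and NumPy and the example gray threshold of 200 (the function name is hypothetical):

```python
import cv2
import numpy as np

def detect_spot_pixels(eye_bgr, gray_threshold=200):
    """Example one: every pixel with gray value >= the threshold is a spot pixel."""
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY)  # graying
    spot_mask = gray >= gray_threshold                # per-pixel spot mask
    ratio = float(spot_mask.mean())                   # spot proportion in the image
    return spot_mask, ratio
```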
Example two:
In Example one above, light spot detection is based on the gray value of each individual pixel; this approach has poor robustness and is prone to misdetection. To enhance the robustness of light spot detection and reduce misjudgment, this example detects the light spot region on the eye region image as follows (as shown in fig. 2):
S210: graying the eye region image to obtain a grayscale image.
S220: replacing the gray value of each pixel on the grayscale image with a statistical value of the gray values of all pixels in a neighborhood of a set size around that pixel, to obtain a statistical feature image.
In this step, the neighborhood size can be set; for example, it may be 3 × 3. For each pixel, the statistical value (for example, the mean) of the gray values of all pixels in the 3 × 3 neighborhood around it replaces the gray value of that pixel.
As illustrated in fig. 8, the gray value at position 0 is replaced using the gray values at the 8 surrounding positions. Let s(0) be the gray value at position 0 after replacement, and r1, r2, ..., r8 the gray values at positions 1 to 8 around position 0; taking the mean as the statistic:
s(0) = (r1 + r2 + ... + r8) / 8
Performing the above operation on every pixel of the eye region image yields the statistical feature image; the statistical value at each position (x, y) is denoted s(x, y), as shown in fig. 9.
S230: dividing the statistical feature image into image blocks of a predetermined size, and calculating, for each image block, the proportion of pixels whose statistical value is greater than a set gray threshold among all pixels in the block.
In this step, the image block size can be set; for example, it may be 3 × 3. Each 3 × 3 image block is voted on statistically by counting the number of pixels whose statistical value exceeds 200.
S240: if the proportion exceeds a set proportion threshold, marking the image block as a light spot point, and connecting all light spot points to form the light spot area.
For example, if the statistical values of more than 50% of the pixels in a block exceed 200, the block is judged to be a light spot point; otherwise it is not.
As shown in fig. 9, the 3 × 3 statistical values at position (i, j) are examined; if 4 or more of them exceed 200, the position is considered a light spot point, and all light spot points detected in this way are connected into the light spot region.
This light spot detection method can quickly detect the position and size of light spots, calculate the overlap between a spot and the eye region, and give the corresponding judgment.
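A sketch of Example two under stated assumptions: OpenCV's box filter stands in for the neighborhood statistic (it averages the full 3 × 3 neighborhood including the center pixel, close to the 8-neighbor mean above), blocks are taken on a non-overlapping grid, and the thresholds are the example values from the text.

```python
import cv2
import numpy as np

def detect_spot_region(eye_bgr, gray_threshold=200, vote_threshold=0.5, block=3):
    """Example two: statistical-feature-based light spot detection (S210-S240)."""
    # S210: graying
    gray = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)

    # S220: replace each pixel by the mean gray value of its 3x3 neighborhood
    stat = cv2.blur(gray, (3, 3))  # box filter approximates the neighborhood statistic

    # S230/S240: per 3x3 block, count pixels whose statistic exceeds the threshold;
    # if more than the vote threshold qualify, mark the block as a light spot point
    h, w = stat.shape
    spot_mask = np.zeros((h, w), dtype=bool)
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = stat[i:i + block, j:j + block]
            if (patch > gray_threshold).mean() > vote_threshold:
                spot_mask[i:i + block, j:j + block] = True

    # the connected spot points form the spot region; return its image proportion
    ratio = float(spot_mask.mean())
    return spot_mask, ratio
```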
As an improvement of the embodiment of the present invention, as shown in fig. 3, the foregoing S100 includes:
S110: acquiring a face image and locating face key points, wherein the face key points include eye key points.
This step does not limit how the face image is acquired. For example, it can be captured with a face camera; a single image can be captured, or a video can be captured and one or more frames extracted from it.
After the face image is acquired, face detection and key point localization need to be performed. For example, an SSD face detector can be used for face detection, followed by the TDCNN method for key point localization to obtain the coordinates of the face key points. The face detector is not limited to SSD; CRAFT, AdaBoost, and the like can also be used. Similarly, the key point localization method is not limited to TDCNN; SDM and the like can also be used, as long as the eye key points are obtained.
Different face key point localization methods produce different numbers of face key points, such as 68, 106, or 1000; in every case the key points need to include the eye key points.
S120: calculating the tilt angle of the eyes according to the eye key points, and rotating the face image into alignment according to the tilt angle.
In this step, the left-eye center coordinates (x1, y1) and the right-eye center coordinates (x2, y2) are preferably obtained from the eye key points, and the tilt angle of the eyes is calculated from them. The tilt angle of the eyes is the angle a between the line connecting the left and right eye centers and the horizontal direction, calculated as:
a = arctan((y2 - y1) / (x2 - x1))
Rotating the face image by the angle a aligns it and eliminates the tilt of the eyes.
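A sketch of S120 with OpenCV, assuming the rotation is performed about the midpoint between the two eye centers (the text does not specify the rotation center):

```python
import cv2
import numpy as np

def align_face(face_bgr, left_eye, right_eye):
    """S120: rotate so the line between the eye centers becomes horizontal."""
    (x1, y1), (x2, y2) = left_eye, right_eye
    angle = np.degrees(np.arctan2(y2 - y1, x2 - x1))  # tilt angle a in degrees

    h, w = face_bgr.shape[:2]
    center = ((x1 + x2) / 2.0, (y1 + y2) / 2.0)       # assumed rotation center
    M = cv2.getRotationMatrix2D(center, angle, 1.0)   # rotate by a to cancel the tilt
    return cv2.warpAffine(face_bgr, M, (w, h))
```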
S130: and intercepting eye region images from the aligned face images according to the eye key points and carrying out normalization operation.
When the eye region image is captured, one specific implementation manner is as shown in fig. 6 and 7: finding out the most marginal points of the left eye (or the right eye) in the positioned key points of the human face, namely four key points at the leftmost, the rightmost, the uppermost and the lowermost, respectively expanding the four key points outwards by 5 pixels to obtain new 4 marginal key points, taking the four marginal key points as the edges to obtain an eye region image, and scaling the image to an image with the width and the height of 80 pixels respectively.
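A sketch of this cropping and normalization, assuming the eye key points are given as (x, y) coordinates; the 5-pixel margin and the 80 × 80 size are the example values above.

```python
import cv2
import numpy as np

def crop_eye_region(aligned_bgr, eye_points, margin=5, size=80):
    """S130: crop the eye with a 5-pixel margin and scale to 80 x 80."""
    pts = np.asarray(eye_points, dtype=np.int32)      # eye key points, shape (N, 2)
    x_min, y_min = pts.min(axis=0) - margin           # extreme points, expanded outward
    x_max, y_max = pts.max(axis=0) + margin

    h, w = aligned_bgr.shape[:2]                      # clamp to the image borders
    x_min, y_min = max(x_min, 0), max(y_min, 0)
    x_max, y_max = min(x_max, w), min(y_max, h)

    eye = aligned_bgr[y_min:y_max, x_min:x_max]
    return cv2.resize(eye, (size, size))              # normalize to 80 x 80 pixels
```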
In the present invention, the structure of the convolutional neural network is shown in fig. 10, and its specific processing of the eye region image is shown in fig. 4. The processing includes:
S310: performing a first convolution operation, a first BN operation, a first ReLU operation, and a first pooling operation on the eye region image whose proportion does not reach the set proportion threshold, to obtain a first feature map.
For example, the eye region image (data) is a 3-channel image 80 pixels in width and height. The kernel (kernel size) of the first convolution operation (conv1) is 7 × 7 with a stride of 4, and the first convolution operation turns the eye region image into a 36-channel feature map of size 19 × 19. Performing the first BN operation (batch1) and the first ReLU operation (ReLU1) on this feature map again yields a 36-channel feature map of size 19 × 19.
A first pooling operation (pool1) is then performed on the 36-channel feature map of size 19 × 19: a max pooling operation of size 3 × 3 with stride 2, yielding a 36-channel sampling result of size 9 × 9, which is the first feature map.
S320: expanding the edge of the first feature map by several pixels, and performing a second convolution operation, a second BN operation, a second ReLU operation, and a second pooling operation to obtain a second feature map.
For example, the edge of the 36-channel first feature map of size 9 × 9 is expanded by 2 pixels, and a second convolution operation (conv2) with a 5 × 5 kernel and a stride of 1 yields a 96-channel feature map of size 9 × 9. Performing the second BN operation (batch2) and the second ReLU operation (ReLU2) again yields a 96-channel feature map of size 9 × 9.
A second pooling operation (pool2) is then performed: a max pooling operation of size 3 × 3 with stride 2, yielding a 96-channel sampling result of size 4 × 4, which is the second feature map.
S330: expanding the edge of the second feature map by several pixels, and performing a third convolution operation, a third BN operation, a third ReLU operation, and a third pooling operation to obtain a third feature map.
For example, the edge of the 96-channel second feature map of size 4 × 4 is expanded by 1 pixel, and a third convolution operation (conv3) with a 3 × 3 kernel and a stride of 1 yields a 144-channel feature map of size 4 × 4. Performing the third BN operation (batch3) and the third ReLU operation (ReLU3) again yields a 144-channel feature map of size 4 × 4.
A third pooling operation (pool5) is then performed: a max pooling operation of size 3 × 3 with stride 2, yielding a 144-channel sampling result of size 2 × 2, which is the third feature map.
S340: performing a first fully connected operation, a fourth ReLU operation, a first dropout operation, a second fully connected operation, a fifth ReLU operation, and a second dropout operation on the third feature map to obtain a fourth feature map.
For example, a first fully connected operation (fc6) on the 144-channel third feature map of size 2 × 2 yields a feature vector of dimension 512, to which a fourth ReLU operation (ReLU6) and a first dropout operation (drop6) with a rate of 0.5 are applied.
A second fully connected operation (fc7) on the result of the first dropout operation yields a feature vector of dimension 512, to which a fifth ReLU operation (ReLU7) and a second dropout operation (drop7) with a rate of 0.5 are applied, yielding the fourth feature map.
S350: performing a third fully connected operation (fc8) with an output dimension of 2 on the fourth feature map, and classifying with Softmax to obtain the probability that the eye region image is an eye and the probability that it is not an eye.
The goal of the convolutional neural network classification in the present invention is to judge whether an eye region image is an eye. The output of the network is a two-dimensional vector: the first dimension is the probability that the image is not an eye, and the second dimension is the probability that it is an eye.
S360: if the probability that the eye region image is an eye exceeds a set occlusion threshold, judging that the eye is not occluded; otherwise, judging that the eye is occluded.
This step takes the output of the second dimension (the probability that the image is an eye), denoted p, as the likelihood that the eye is not occluded. An occlusion threshold T is set; if p > T, the eye is not occluded; otherwise it is occluded.
The network used in the present invention is effective with few parameters and captures well the features relevant to whether the eye is occluded. It ensures classification accuracy while controlling the depth and width of the network, reducing time complexity, so the classification is both accurate and fast.
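A sketch of this network in PyTorch, reconstructed from the layer sizes above. The framework choice is not specified in the text, and ceil-mode pooling is an assumption needed to reproduce the 2 × 2 output of pool5.

```python
import torch
import torch.nn as nn

class EyeOcclusionNet(nn.Module):
    """Sketch of the network in fig. 10, reconstructed from the sizes in the text."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 36, kernel_size=7, stride=4),              # conv1: 80 -> 19
            nn.BatchNorm2d(36), nn.ReLU(inplace=True),              # batch1, ReLU1
            nn.MaxPool2d(3, stride=2, ceil_mode=True),              # pool1: 19 -> 9
            nn.Conv2d(36, 96, kernel_size=5, stride=1, padding=2),  # conv2 (edge +2): 9 -> 9
            nn.BatchNorm2d(96), nn.ReLU(inplace=True),              # batch2, ReLU2
            nn.MaxPool2d(3, stride=2, ceil_mode=True),              # pool2: 9 -> 4
            nn.Conv2d(96, 144, kernel_size=3, stride=1, padding=1), # conv3 (edge +1): 4 -> 4
            nn.BatchNorm2d(144), nn.ReLU(inplace=True),             # batch3, ReLU3
            nn.MaxPool2d(3, stride=2, ceil_mode=True),              # pool5: 4 -> 2
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(144 * 2 * 2, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),  # fc6
            nn.Linear(512, 512), nn.ReLU(inplace=True), nn.Dropout(0.5),          # fc7
            nn.Linear(512, 2),                                                    # fc8
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        return torch.softmax(logits, dim=1)  # [p(not eye), p(eye)]

# usage: p_eye = EyeOcclusionNet()(torch.randn(1, 3, 80, 80))[0, 1]
```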
The convolutional neural network needs to be trained in advance before use. The training method, shown in fig. 5, includes:
S100': constructing a training set, wherein the training set includes a plurality of training samples, the training samples include positive samples and negative samples, the positive samples are unoccluded eye images, and the negative samples are eye images occluded by non-eye images.
The positive samples of the training set are normal, unoccluded eye images; the negative samples can be constructed by occluding eye images with a variety of non-eye images.
S200': performing data expansion on the training set, the data expansion including translation, rotation, contrast modification, and/or noise addition on the training samples.
To obtain a better training effect, data expansion needs to be performed on both the positive and negative samples to enhance the diversity of the data.
S300': training the convolutional neural network with the training set after data expansion.
The non-eye images include non-skin-color images and skin-color images from outside the eye region; the non-skin-color images may be, for example, cardboard images or black color blocks, and the skin-color images from outside the eye region may be skin images of hands, faces, and the like.
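A sketch of the S200' expansion with OpenCV and NumPy; the shift range, rotation angle range, contrast factors, and noise level are illustrative choices, not values from this description.

```python
import cv2
import numpy as np

def expand_sample(img, rng=np.random):
    """S200': produce one randomly expanded copy of a training sample."""
    h, w = img.shape[:2]

    # translation: shift by a few pixels (illustrative range)
    tx, ty = rng.randint(-5, 6, size=2)
    M = np.float32([[1, 0, tx], [0, 1, ty]])
    out = cv2.warpAffine(img, M, (w, h))

    # rotation: small random angle (illustrative range)
    R = cv2.getRotationMatrix2D((w / 2, h / 2), rng.uniform(-10, 10), 1.0)
    out = cv2.warpAffine(out, R, (w, h))

    # contrast modification and additive Gaussian noise (illustrative magnitudes)
    out = cv2.convertScaleAbs(out, alpha=rng.uniform(0.8, 1.2), beta=0)
    noise = rng.normal(0, 5, out.shape)
    return np.clip(out.astype(np.float32) + noise, 0, 255).astype(np.uint8)
```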
It should be noted that the light spot detection method of the present invention detects large light spots, which solves the misjudgment caused by images being covered by large spots. Images occluded together with small light spot areas still need to be classified and recognized by the convolutional neural network. Therefore, the data expansion of the present invention also includes small light spot expansion of the training samples: adding small light spots to the positive and negative training samples helps the network classify small spots better.
The small light spot expansion proceeds as follows:
S210': randomly acquiring training samples from the training set with a certain probability and locating the eye region in the training samples.
In this step, the probability can be set to 0.5, so that training samples are drawn at random with this probability. Face detection and feature point localization are performed on these samples (with the same methods as described above), and the eye regions are obtained from the localization results.
S220': mapping the eye region from RGB space to HSV space and reducing the value of the brightness channel V.
This step adjusts the brightness of the eye region so that its overall brightness is reduced.
S230': randomly adding a small white light spot to the eye region, wherein the proportion of the small light spot within the eye region is smaller than the proportion threshold.
The small light spot should be smaller than a large light spot; that is, its proportion within the eye region should be below the set proportion threshold (20%). The small spot may be, for example, 3 × 3, 9 × 9, or 27 × 27 pixels.
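A sketch of the small light spot expansion; the dimming factor of 0.6 and the square spot shape are illustrative assumptions (in OpenCV's HSV representation, the brightness value V is channel 2).

```python
import cv2
import numpy as np

def add_small_spot(eye_bgr, spot_size=9, rng=np.random):
    """S210'-S230': darken the eye region in HSV, then paste a small white spot."""
    # S220': map to HSV and reduce the brightness channel V to dim the region
    hsv = cv2.cvtColor(eye_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[..., 2] *= 0.6                            # illustrative dimming factor
    out = cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)

    # S230': place a small white spot at a random position
    # (assumes the image is larger than the spot, e.g. 3x3, 9x9, or 27x27)
    h, w = out.shape[:2]
    y = rng.randint(0, h - spot_size + 1)
    x = rng.randint(0, w - spot_size + 1)
    out[y:y + spot_size, x:x + spot_size] = 255   # white square as the spot
    return out
```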
Example 2:
An embodiment of the present invention provides an eye occlusion judgment device. As shown in fig. 11, the device includes:
an image acquisition module 10 for acquiring a face image and cropping it to obtain an eye region image;
a light spot detection module 20 for detecting a light spot area on the eye region image and calculating the proportion of the light spot area within the eye region image;
an occlusion judgment module 30 for inputting the eye region image whose proportion does not reach a set proportion threshold into a pre-trained convolutional neural network and judging whether the eye is occluded.
Compared with the prior art, the device removes eye region images containing large light spots through light spot detection and then uses a convolutional neural network to perform eye occlusion classification on the eye region images with no light spots or only small ones, effectively solving the problem of misjudgment caused by light spots during eye occlusion judgment. The input to the convolutional neural network is an eye region image; compared with the face image input of the prior art, the input dimension is small, so the network is fast and efficient, ensuring classification accuracy while reducing time complexity. Moreover, an eye region image is simpler than a face image and has fewer interfering factors, which is favorable for optimizing the classification task.
As shown in fig. 12, the light spot detection module 20 of the present invention includes:
a graying unit 21 for graying the eye region image to obtain a grayscale image;
a first statistical unit 22 for replacing the gray value of each pixel on the grayscale image with a statistical value of the gray values of all pixels in a neighborhood of a set size around that pixel, to obtain a statistical feature image;
a second statistical unit 23 for dividing the statistical feature image into image blocks of a predetermined size and calculating, for each image block, the proportion of pixels whose gray value is greater than a set gray threshold among all pixels in the block;
a light spot judgment unit 24 for marking the image block as a light spot point if the proportion exceeds a set proportion threshold, all light spot points being connected to form the light spot area.
As an improvement of the embodiment of the present invention, the image acquisition module 10, shown in fig. 13, includes:
an acquisition unit 11 for acquiring a face image and locating face key points, wherein the face key points include eye key points;
a rotation unit 12 for calculating the tilt angle of the eyes according to the eye key points and rotating the face image into alignment according to the tilt angle;
a cropping and normalization unit 13 for cropping the eye region image from the aligned face image according to the eye key points and performing a normalization operation.
In the present invention, the structure of the convolutional neural network is shown in fig. 10; correspondingly, the occlusion judgment module 30, shown in fig. 14, includes:
a first operation unit 31 for performing a first convolution operation, a first BN operation, a first ReLU operation, and a first pooling operation on the eye region image whose proportion does not reach the set proportion threshold, to obtain a first feature map;
a second operation unit 32 for expanding the edge of the first feature map by several pixels and performing a second convolution operation, a second BN operation, a second ReLU operation, and a second pooling operation to obtain a second feature map;
a third operation unit 33 for expanding the edge of the second feature map by several pixels and performing a third convolution operation, a third BN operation, a third ReLU operation, and a third pooling operation to obtain a third feature map;
a fourth operation unit 34 for performing a first fully connected operation, a fourth ReLU operation, a first dropout operation, a second fully connected operation, a fifth ReLU operation, and a second dropout operation on the third feature map to obtain a fourth feature map;
a classification unit 35 for performing a third fully connected operation with an output dimension of 2 on the fourth feature map and classifying with Softmax to obtain the probability that the eye region image is an eye and the probability that it is not an eye;
an occlusion judgment unit 36 for judging that the eye is not occluded if the probability that the eye region image is an eye exceeds the set occlusion threshold, and otherwise judging that the eye is occluded.
The convolutional neural network needs to be trained in advance before use. As shown in fig. 15, it is trained by the following modules:
a training set construction module 10' for constructing a training set, wherein the training set includes a plurality of training samples, the training samples include positive samples and negative samples, the positive samples are unoccluded eye images, and the negative samples are eye images occluded by non-eye images;
a data expansion module 20' for performing data expansion on the training set, the data expansion including translation, rotation, contrast modification, and/or noise addition on the training samples;
a training module 30' for training the convolutional neural network with the training set after data expansion.
The aforementioned non-eye images include non-skin-color images and skin-color images from outside the eye region.
The data expansion further includes small light spot expansion of the training samples, the small light spot expansion including:
a sample acquisition unit for randomly acquiring training samples from the training set with a certain probability and locating the eye region in the training samples;
a brightness adjustment unit for mapping the eye region from RGB space to HSV space and reducing the value of the brightness channel V;
a light spot addition unit for randomly adding a small white light spot to the eye region, the proportion of the small light spot within the eye region being smaller than the proportion threshold.
The device provided by this embodiment of the present invention has the same implementation principle and technical effects as method embodiment 1; for brevity, where this device embodiment is silent, reference may be made to the corresponding content of method embodiment 1. Those skilled in the art will clearly understand that, for convenience and brevity of description, the specific working processes of the device and units described above may all refer to the corresponding processes in method embodiment 1 and are not repeated here.
Example 3:
The method described in embodiment 1 above can implement its business logic as a computer program recorded on a storage medium; the storage medium can be read and executed by a computer to achieve the effects of the solution described in embodiment 1 of this specification. Accordingly, the present invention also provides a computer-readable storage medium for eye occlusion judgment, comprising a memory for storing processor-executable instructions which, when executed by a processor, implement the steps of the eye occlusion judgment method of embodiment 1.
According to the eye occlusion judgment method, eye region images containing large light spots are removed through light spot detection, and a convolutional neural network then performs eye occlusion classification on the eye region images with no light spots or only small ones, effectively solving the problem of misjudgment caused by light spots during eye occlusion judgment. The input to the convolutional neural network is an eye region image; compared with the face image input of the prior art, the input dimension is small, so the network is fast and efficient, ensuring classification accuracy while reducing time complexity. Moreover, an eye region image is simpler than a face image and has fewer interfering factors, which is favorable for optimizing the classification task.
The storage medium may include a physical device for storing information; typically, the information is digitized and then stored in electrical, magnetic, or optical media. The storage medium may include: devices that store information electrically, such as various types of memory, e.g., RAM and ROM; devices that store information magnetically, such as hard disks, floppy disks, tapes, core memories, bubble memories, and USB disks; and devices that store information optically, such as CDs or DVDs. Of course, there are other kinds of readable storage media, such as quantum memory and graphene memory.
The device described above may also include other implementations in accordance with the description of method embodiment 1. The specific implementation manner may refer to the description of the related method embodiment 1, and is not described in detail here.
Example 4:
The present invention also provides equipment for eye occlusion judgment. The equipment may be a single computer, or may comprise an actual operating device using one or more of the methods or one or more of the embodiment devices of this specification. The equipment for eye occlusion judgment may comprise at least one processor and a memory storing computer-executable instructions which, when executed by the processor, implement the steps of the eye occlusion judgment method of any one or more of the embodiments 1 above.
According to the eye occlusion judgment method, eye region images containing large light spots are removed through light spot detection, and a convolutional neural network then performs eye occlusion classification on the eye region images with no light spots or only small ones, effectively solving the problem of misjudgment caused by light spots during eye occlusion judgment. The input to the convolutional neural network is an eye region image; compared with the face image input of the prior art, the input dimension is small, so the network is fast and efficient, ensuring classification accuracy while reducing time complexity. Moreover, an eye region image is simpler than a face image and has fewer interfering factors, which is favorable for optimizing the classification task.
The description of the above-described device according to the method or apparatus embodiment may also include other embodiments, and a specific implementation may refer to the description of related method embodiment 1, which is not described in detail herein.
It should be noted that, the above-mentioned apparatus or system in this specification may also include other implementation manners according to the description of the related method embodiment, and a specific implementation manner may refer to the description of the method embodiment, which is not described herein in detail. The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments. In particular, for the hardware + program class, storage medium + program embodiment, since it is basically similar to the method embodiment, the description is simple, and the relevant points can be referred to the partial description of the method embodiment.
The foregoing description has been directed to specific embodiments of this disclosure. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The systems, devices, modules or units illustrated in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. One typical implementation device is a computer. In particular, the computer may be, for example, a personal computer, a laptop computer, a vehicle human interaction device, a cellular telephone, a camera phone, a smart phone, a personal digital assistant, a media player, a navigation device, an email device, a game console, a tablet computer, a wearable device, or a combination of any of these devices.
For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. However, when implementing one or more of the present description, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a plurality of sub-modules or sub-units, and so on. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and the actual implementation may have another division, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a/an" does not exclude the presence of additional identical elements in the process, method, or apparatus that comprises that element.
As will be appreciated by one skilled in the art, one or more embodiments of the present description may be provided as a method, system, or computer program product. Accordingly, one or more embodiments of the present description may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, one or more embodiments of the present description may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
One or more embodiments of the specification may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. One or more embodiments of the present specification can also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
The embodiments in this specification are described in a progressive manner; identical or similar parts among the embodiments may be cross-referenced, and each embodiment focuses on its differences from the others. In particular, the system embodiment is described relatively briefly because it is substantially similar to the method embodiment; for relevant details, reference may be made to the description of the method embodiment. In this specification, reference to "one embodiment," "some embodiments," "an example," "a specific example," or "some examples" means that a particular feature, structure, material, or characteristic described in connection with the embodiment or example is included in at least one embodiment or example of the specification. Such terms do not necessarily refer to the same embodiment or example, and the particular features, structures, materials, or characteristics described may be combined in any suitable manner in one or more embodiments or examples. Moreover, those skilled in the art may combine the various embodiments or examples described in this specification, and the features of different embodiments or examples, provided they do not contradict each other.
Finally, it should be noted that the above embodiments are merely specific implementations of the present invention, intended to illustrate its technical solutions rather than to limit them, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that modifications or changes may still be made to the technical solutions described in the foregoing embodiments, or equivalent substitutions may be made for some of their technical features, within the technical scope of the present disclosure; such modifications, changes or substitutions do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the present invention, and are intended to be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An eye occlusion determination method, comprising:
acquiring a face image and intercepting to obtain an eye region image;
detecting a light spot region on the eye region image and calculating the proportion of the light spot region in the eye region image;
and inputting the eye region image with the proportion not reaching the set proportion threshold into a pre-trained convolutional neural network, and judging whether the eyes are occluded.
2. The eye occlusion determination method according to claim 1, wherein the detecting a light spot region on the eye region image comprises:
graying the eye region image to obtain a gray image;
replacing the gray value of each pixel point on the gray image with a statistical value of the gray values of all pixel points in a neighborhood of a set size around the pixel point, to obtain a statistical characteristic image;
dividing the statistical characteristic image into image blocks of a preset size, and calculating, for each image block, the proportion of pixel points whose gray values are greater than a set gray threshold among all pixel points in the image block;
and if the proportion exceeds a set proportion threshold, marking the image block as a light spot point, and connecting all light spot points to form the light spot region.
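By way of illustration only, the light spot detection of claim 2 can be sketched in Python with OpenCV and NumPy. This is a minimal sketch, not the patented implementation: the neighborhood size, gray threshold, block size and per-block fraction are hypothetical placeholders, and the maximum is used as the neighborhood statistic (the claim only requires some statistical value).

```python
import cv2
import numpy as np

def light_spot_ratio(eye_img, k=5, gray_thresh=220, block=8, block_frac=0.5):
    # Graying: convert the eye region image to a single-channel gray image.
    gray = cv2.cvtColor(eye_img, cv2.COLOR_BGR2GRAY)
    # Statistical characteristic image: replace each pixel with a statistic of
    # its k x k neighborhood (here the maximum, computed via dilation).
    stat = cv2.dilate(gray, np.ones((k, k), np.uint8))
    h, w = stat.shape
    spot_blocks, total_blocks = 0, 0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = stat[y:y + block, x:x + block]
            total_blocks += 1
            # A block becomes a "light spot point" when enough of its pixels
            # exceed the gray threshold.
            if (patch > gray_thresh).mean() > block_frac:
                spot_blocks += 1
    # The fraction of spot blocks approximates the proportion of the light
    # spot region in the eye region image (spot points are not explicitly
    # connected into a region in this sketch).
    return spot_blocks / max(total_blocks, 1)
```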
3. The eye occlusion determination method according to claim 2, wherein the acquiring a face image and intercepting to obtain an eye region image comprises:
acquiring a face image and locating face key points, wherein the face key points comprise eye key points;
calculating the inclination angle of the eyes according to the eye key points, and rotationally aligning the face image according to the inclination angle;
and intercepting an eye region image from the aligned face image according to the eye key points, and performing a normalization operation.
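A minimal sketch of the alignment and interception of claim 3, assuming the eye key points are the two eye centers; the crop margins, output size and [0, 1] scaling are illustrative choices, not taken from the patent.

```python
import cv2
import numpy as np

def intercept_eye_region(face_img, left_eye, right_eye, out_size=(96, 48)):
    # Inclination angle of the line through the two eye key points.
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    angle = np.degrees(np.arctan2(dy, dx))
    center = ((left_eye[0] + right_eye[0]) / 2.0,
              (left_eye[1] + right_eye[1]) / 2.0)
    # Rotationally align the face image so the eye line becomes horizontal.
    M = cv2.getRotationMatrix2D(center, angle, 1.0)
    h, w = face_img.shape[:2]
    aligned = cv2.warpAffine(face_img, M, (w, h))
    # Intercept a box around the eye line; the margin factors are guesses.
    d = np.hypot(dx, dy)
    x0, x1 = int(center[0] - d), int(center[0] + d)
    y0, y1 = int(center[1] - 0.5 * d), int(center[1] + 0.5 * d)
    crop = aligned[max(y0, 0):y1, max(x0, 0):x1]
    # Normalization: fixed size and pixel values scaled to [0, 1].
    return cv2.resize(crop, out_size).astype(np.float32) / 255.0
```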
4. The eye occlusion determination method according to any one of claims 1 to 3, wherein the inputting the eye region image with the proportion not reaching the set proportion threshold into a pre-trained convolutional neural network and judging whether the eyes are occluded comprises:
performing a first convolution operation, a first BN operation, a first ReLU operation and a first pooling operation on the eye region image with the proportion not reaching the set proportion threshold, to obtain a first feature map;
expanding the edge of the first feature map by a plurality of pixels, and performing a second convolution operation, a second BN operation, a second ReLU operation and a second pooling operation to obtain a second feature map;
expanding the edge of the second feature map by a plurality of pixels, and performing a third convolution operation, a third BN operation, a third ReLU operation and a third pooling operation to obtain a third feature map;
performing a first full connection operation, a fourth ReLU operation, a first dropout operation, a second full connection operation, a fifth ReLU operation and a second dropout operation on the third feature map to obtain a fourth feature map;
performing a third full connection operation with an output dimension of 2 on the fourth feature map, and classifying by using Softmax to obtain the probability that the eye region image is an eye and the probability that it is not an eye;
and if the probability that the eye region image is an eye exceeds a set occlusion threshold, judging that the eyes are not occluded; otherwise, judging that the eyes are occluded.
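The operation sequence of claim 4 maps directly onto a small convolutional network. The PyTorch sketch below fixes only what the claim fixes (the order of operations and the two-way Softmax output); the channel counts, kernel sizes, padding widths, hidden dimensions and dropout rates are assumptions.

```python
import torch
import torch.nn as nn

class EyeOcclusionNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # First convolution + BN + ReLU + pooling (no edge expansion).
            nn.Conv2d(1, 16, 3), nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            # "Expanding the edge by a plurality of pixels" is modeled as
            # zero-padding before the second and third convolutions.
            nn.Conv2d(16, 32, 3, padding=1), nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.BatchNorm2d(64), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(128), nn.ReLU(), nn.Dropout(0.5),  # 1st FC + ReLU + dropout
            nn.Linear(128, 64), nn.ReLU(), nn.Dropout(0.5),  # 2nd FC + ReLU + dropout
            nn.Linear(64, 2),                                # 3rd FC, output dimension 2
        )

    def forward(self, x):
        logits = self.classifier(self.features(x))
        # Softmax yields [P(eye), P(not eye)] for each input image.
        return torch.softmax(logits, dim=1)
```

With probabilities probs = model(batch), the final decision of the claim is then "not occluded" when probs[:, 0], the eye probability, exceeds the set occlusion threshold.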
5. The eye occlusion determination method of claim 4, wherein the convolutional neural network is trained by:
constructing a training set, wherein the training set comprises a plurality of training samples, the training samples comprise positive samples and negative samples, the positive samples are unoccluded eye images, and the negative samples are occluded eye images and non-eye images;
performing data expansion on the training set, the data expansion including translation, rotation, contrast modification and/or noise addition on the training samples;
and training the convolutional neural network by using the training set after data expansion.
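The expansion of claim 5 corresponds to standard augmentation tooling. One possible torchvision pipeline is sketched below; the magnitudes of the translation, rotation, contrast change and noise are guesses.

```python
import torch
from torchvision import transforms

# One possible realization of the claim-5 data expansion (applied to PIL images).
expand = transforms.Compose([
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),  # rotation + translation
    transforms.ColorJitter(contrast=0.3),                       # contrast modification
    transforms.ToTensor(),
    # Additive Gaussian noise, clamped back to the valid pixel range.
    transforms.Lambda(lambda t: (t + 0.02 * torch.randn_like(t)).clamp(0.0, 1.0)),
])
```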
6. The eye occlusion determination method of claim 5, wherein the non-eye images comprise non-skin-color images and skin-color images outside the eye region;
the data expansion further comprises small spot expansion of the training samples, the small spot expansion comprising:
randomly acquiring training samples from the training set with a certain probability and locating the eye regions in the training samples;
mapping the eye region from RGB space to HSV space and reducing the value of the brightness channel V;
and randomly adding small white light spots to the eye region, wherein the proportion of the small light spots in the eye region is smaller than the proportion threshold.
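A sketch of the small spot expansion of claim 6, assuming OpenCV; the application probability, dimming factor and spot size bound are placeholders. Note that brightness in HSV is the V channel, which is what this sketch reduces.

```python
import cv2
import numpy as np

def small_spot_expand(eye_img, ratio_thresh=0.05, p=0.3, dim=0.8, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    # Apply the expansion only with a certain probability, per claim 6.
    if rng.random() > p:
        return eye_img
    # Map to HSV and reduce the brightness (V) channel so the added spot
    # stands out against a slightly darkened eye region.
    hsv = cv2.cvtColor(eye_img, cv2.COLOR_BGR2HSV)
    hsv[..., 2] = np.clip(hsv[..., 2].astype(np.float32) * dim, 0, 255).astype(np.uint8)
    out = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
    # Choose a spot radius small enough that the spot's proportion of the
    # eye region stays below the proportion threshold.
    h, w = out.shape[:2]
    r_max = int(np.sqrt(ratio_thresh * h * w / np.pi))
    r = max(1, int(r_max * rng.random()))
    if w <= 2 * r or h <= 2 * r:
        return out
    cx = int(rng.integers(r, w - r))
    cy = int(rng.integers(r, h - r))
    cv2.circle(out, (cx, cy), r, (255, 255, 255), -1)  # small white light spot
    return out
```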
7. An eye occlusion determination device, comprising:
the image acquisition module is used for acquiring a face image and intercepting an eye region image;
the light spot detection module is used for detecting a light spot region on the eye region image and calculating the proportion of the light spot region in the eye region image;
and the occlusion judgment module is used for inputting the eye region image with the proportion not reaching the set proportion threshold value into a pre-trained convolutional neural network and judging whether the eyes are occluded or not.
8. The eye occlusion determination device of claim 7, wherein the occlusion determination module comprises:
a first operation unit, configured to perform a first convolution operation, a first BN operation, a first ReLU operation and a first pooling operation on the eye region image with the proportion not reaching the set proportion threshold, to obtain a first feature map;
a second operation unit, configured to expand the edge of the first feature map by a plurality of pixels, and perform a second convolution operation, a second BN operation, a second ReLU operation and a second pooling operation to obtain a second feature map;
a third operation unit, configured to expand the edge of the second feature map by a plurality of pixels, and perform a third convolution operation, a third BN operation, a third ReLU operation and a third pooling operation to obtain a third feature map;
a fourth operation unit, configured to perform a first full connection operation, a fourth ReLU operation, a first dropout operation, a second full connection operation, a fifth ReLU operation, and a second dropout operation on the third feature map to obtain a fourth feature map;
a classification unit, configured to perform a third full join operation with an output dimension of 2 on the fourth feature map, and perform classification using Softmax, so as to obtain a probability that the eye region image is an eye and a probability that the eye region image is not an eye;
and the occlusion judging unit is used for judging that the eyes are not occluded if the probability that the eye region image is an eye exceeds the set occlusion threshold, and otherwise judging that the eyes are occluded.
9. A computer-readable storage medium for eye occlusion determination, comprising a memory for storing processor-executable instructions which, when executed by a processor, implement the steps of the eye occlusion determination method of any one of claims 1 to 6.
10. An apparatus for eye occlusion determination, comprising at least one processor and a memory storing computer-executable instructions, wherein the processor implements the steps of the eye occlusion determination method of any one of claims 1 to 6 when executing the instructions.
CN202010953190.4A 2020-09-11 2020-09-11 Eye occlusion judgment method and device, computer readable storage medium and equipment Pending CN114255491A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010953190.4A CN114255491A (en) 2020-09-11 2020-09-11 Eye occlusion judgment method and device, computer readable storage medium and equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010953190.4A CN114255491A (en) 2020-09-11 2020-09-11 Eye occlusion judgment method and device, computer readable storage medium and equipment

Publications (1)

Publication Number Publication Date
CN114255491A true CN114255491A (en) 2022-03-29

Family

ID=80788053

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010953190.4A Pending CN114255491A (en) 2020-09-11 2020-09-11 Eye occlusion judgment method and device, computer readable storage medium and equipment

Country Status (1)

Country Link
CN (1) CN114255491A (en)

Similar Documents

Publication Publication Date Title
CN110060237B (en) Fault detection method, device, equipment and system
EP3916627A1 (en) Living body detection method based on facial recognition, and electronic device and storage medium
CN102667810B (en) Face recognition in digital images
JP7165742B2 (en) LIFE DETECTION METHOD AND DEVICE, ELECTRONIC DEVICE, AND STORAGE MEDIUM
Mai et al. Rule of thirds detection from photograph
CN108875723B (en) Object detection method, device and system and storage medium
US8805077B2 (en) Subject region detecting apparatus
CN109583449A (en) Character identifying method and Related product
He et al. Saliency detection with flash and no-flash image pairs
CN108351961A (en) Image and characteristic mass merge ocular angiogenesis and face and/or sub- facial information for the image enhancement and feature extraction of ocular angiogenesis and face recognition and for biological recognition system
CN110929805B (en) Training method, target detection method and device for neural network, circuit and medium
JP6351243B2 (en) Image processing apparatus and image processing method
US8923610B2 (en) Image processing apparatus, image processing method, and computer readable medium
JP2014531097A (en) Text detection using multi-layer connected components with histograms
CN111626163B (en) Human face living body detection method and device and computer equipment
CN108416291B (en) Face detection and recognition method, device and system
US9082000B2 (en) Image processing device and image processing method
CN111626295B (en) Training method and device for license plate detection model
CN111598065A (en) Depth image acquisition method, living body identification method, apparatus, circuit, and medium
CN110443184A (en) ID card information extracting method, device and computer storage medium
Zhang et al. Distinguishing photographic images and photorealistic computer graphics using visual vocabulary on local image edges
CN107113421B (en) The detection method and device of a kind of optical system imaging quality
CN112364846B (en) Face living body identification method and device, terminal equipment and storage medium
CN112818774A (en) Living body detection method and device
JP4749884B2 (en) Learning method of face discriminating apparatus, face discriminating method and apparatus, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination