CN109543629A - Blink recognition method, apparatus, device and readable storage medium - Google Patents

Blink recognition method, apparatus, device and readable storage medium — Download PDF

Info

Publication number
CN109543629A
CN109543629A (application CN201811429644.7A)
Authority
CN
China
Prior art keywords
upper eyelid
boundary curve
image
blink
eyes
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811429644.7A
Other languages
Chinese (zh)
Inventor
王义文
王健宗
肖京
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ping An Technology Shenzhen Co Ltd
Original Assignee
Ping An Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ping An Technology Shenzhen Co Ltd filed Critical Ping An Technology Shenzhen Co Ltd
Priority to CN201811429644.7A
Publication of CN109543629A
Legal status: Pending

Classifications

    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data (parent path: G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING)
    • G06V40/171 Human faces — local features and components; facial parts; occluding parts, e.g. glasses; geometrical relationships
    • G06V40/172 Human faces — classification, e.g. identification
    • G06V40/193 Eye characteristics, e.g. of the iris — preprocessing; feature extraction
    • G06V40/197 Eye characteristics, e.g. of the iris — matching; classification
    • G06V40/45 Spoof detection, e.g. liveness detection — detection of the body part being alive

Abstract

Embodiments of the invention disclose a blink recognition method, apparatus, device and readable storage medium. The method comprises: capturing the boundary curve of the upper eyelid in consecutive frames of a video stream; simulating the motion trajectory of the upper eyelid from the boundary curves in those frames; and determining from the trajectory whether a blink has occurred, wherein a blink is determined if the upper eyelid dwells at some position on the trajectory for more than a first preset time, then leaves that position and returns to it within no more than a second preset time. By capturing the boundary curve of the upper eyelid and modelling its motion, the embodiments improve the accuracy of analysing the eyelid's motion and hence the accuracy of blink recognition.

Description

Blink recognition method, apparatus, device and readable storage medium
Technical field
The present invention relates to the technical field of biometric identification, and in particular to a blink recognition method, apparatus, device and readable storage medium.
Background technique
With the development of biometric technology, face recognition has matured, and face recognition systems can perform face detection and tracking well. However, in systems such as access control and login, users may attempt to defraud the system by illegal means such as photographs; liveness detection can prevent such spoofing with photographs or other prostheses.
Common liveness detection methods fall into two categories: detection based on user cooperation, and automatic detection without it. Cooperative detection generally requires the user to perform actions as prompted by the system to prove liveness; the process is relatively complex and degrades the user experience. Automatic detection completes detection and recognition without the user's awareness, mainly by detecting certain involuntary actions of the user or the presence of certain biological characteristics; if such involuntary actions or characteristics are observed, the user is considered a living body. Automatic detection therefore has great advantages in efficiency, accuracy and user experience, and how to perform it is a technical problem studied by those skilled in the art.
Summary of the invention
Embodiments of the invention provide a blink recognition method, apparatus, device and readable storage medium. The boundary curve of the upper eyelid is captured and the eyelid's motion is simulated to determine whether a blink has occurred; modelling the captured boundary curves improves the accuracy of analysing the eyelid's motion. In addition, a preprocessing step that verifies the detected object is an eye further improves the accuracy of blink recognition.
In a first aspect, an embodiment of the invention provides a blink recognition method, comprising:
capturing the boundary curve of the upper eyelid in consecutive frames of a video stream;
simulating the motion trajectory of the upper eyelid from the boundary curves in those frames;
determining from the trajectory whether a blink has occurred, wherein a blink is determined if the upper eyelid dwells at some position on the trajectory for more than a first preset time, then leaves that position and returns to it within no more than a second preset time.
With this embodiment, the boundary curve of the upper eyelid is captured and the eyelid's trajectory is simulated to determine whether a blink has occurred; modelling the captured boundary curves improves the accuracy of the trajectory analysis and hence the accuracy of blink recognition.
With reference to the first aspect, in a first possible implementation, capturing the boundary curve of the upper eyelid in consecutive frames of the video stream comprises:
applying an edge detection operator to each of the consecutive frames to obtain the boundary curve of the upper eyelid in each frame.
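The embodiment does not fix a particular edge detection operator. As an illustrative sketch only (not the patent's implementation), a Sobel-style gradient magnitude over a hypothetical grayscale eye region could be computed as follows; thresholding the result would yield candidate eyelid boundary pixels:

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude with 3x3 Sobel kernels.

    img: 2D list of grayscale values. Returns a 2D list of the same
    size, with zeros left on the one-pixel border.
    """
    h, w = len(img), len(img[0])
    gx_k = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal gradient kernel
    gy_k = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical gradient kernel
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(gx_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(gy_k[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# A horizontal dark-to-bright step (hypothetical values): the response
# is strongest along the boundary between the two bands.
img = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [9, 9, 9, 9],
       [9, 9, 9, 9]]
mag = sobel_magnitude(img)
```

In practice a library operator (e.g. a Canny or Sobel routine from an image-processing package) would be used instead of this hand-rolled loop; the sketch only shows the principle of locating a boundary by gradient strength.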
With reference to the first aspect or its first possible implementation, in a second possible implementation, simulating the motion trajectory of the upper eyelid from the boundary curves comprises:
determining the center point of each captured boundary curve; and
connecting the center point of the curve in any frame with the center points of the curves in the adjacent frames, thereby obtaining the trajectory of the upper eyelid.
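The center-and-connect construction above can be sketched as follows. The patent does not specify how the center of a curve is computed; taking the mean of the sampled points is an assumption made here for illustration, and the curve coordinates are hypothetical:

```python
def curve_center(points):
    """Center of a sampled boundary curve: the mean of its (x, y) samples
    (an illustrative choice; the patent leaves the definition open)."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def eyelid_trajectory(curves):
    """Connect per-frame curve centers in frame order to form the trajectory."""
    return [curve_center(c) for c in curves]

# Three hypothetical frames of an eyelid lowering: the center x stays
# fixed while the center y decreases as the curve flattens.
curves = [
    [(0, 4), (2, 6), (4, 4)],   # frame N
    [(0, 4), (2, 4), (4, 4)],   # frame N+1
    [(0, 4), (2, 2), (4, 4)],   # frame N+2
]
traj = eyelid_trajectory(curves)
```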
With reference to the second possible implementation, in a third possible implementation, before determining the center point of each boundary curve, the method further comprises:
placing the captured boundary curves of all frames in one plane, such that the two endpoints of each frame's curve coincide with those of the other frames.
With reference to the third possible implementation, in a fourth possible implementation, determining from the trajectory whether a blink has occurred comprises:
determining from the trajectory of the curve center point whether a blink has occurred, wherein a blink is determined if the center point dwells at some position for more than the first preset time, then leaves that position and returns to it within no more than the second preset time.
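Under stated assumptions, the dwell-and-return test could look like the sketch below. The frame rate, position tolerance and both preset times are hypothetical parameters, and reducing the trajectory to a one-dimensional center height is an illustrative simplification, not the patent's definition:

```python
def detect_blink(positions, fps, t_dwell, t_return, tol=1.0):
    """Blink test on a 1-D trajectory of per-frame eyelid-center heights.

    A blink is reported when the center rests at some position for more
    than t_dwell seconds, leaves it, and returns within t_return seconds.
    tol is the distance within which two positions count as "the same".
    """
    dt = 1.0 / fps
    i, n = 0, len(positions)
    while i < n:
        rest = positions[i]
        j = i
        while j < n and abs(positions[j] - rest) <= tol:
            j += 1                      # extend the dwell segment
        if (j - i) * dt > t_dwell:
            # the eyelid left the resting position at frame j; scan for a return
            k = j
            while k < n and abs(positions[k] - rest) > tol:
                k += 1
            if k < n and (k - j) * dt <= t_return:
                return True             # left and came back quickly: a blink
        i = j
    return False

# Hypothetical 10 fps trace: open at height 5 for 1.5 s, closes briefly,
# reopens 0.4 s later -> blink. A trace that closes and stays closed is not one.
blink_trace = [5] * 15 + [3, 1, 1, 3] + [5] * 5
closed_trace = [5] * 15 + [1] * 15
```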
With reference to the first aspect or its first, third or fourth possible implementations, in a fifth possible implementation, after capturing the boundary curves of the upper eyelid in the consecutive frames, the method further comprises:
capturing the boundary curve of the lower eyelid in the first of the consecutive frames;
determining the bounding rectangle of the region enclosed by the lower-eyelid boundary curve and the upper-eyelid boundary curve of the first frame;
calculating the length-to-width ratio of the bounding rectangle; and
performing the simulation of the upper eyelid's motion only when the ratio falls within a preset range.
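A minimal sketch of this preprocessing check, assuming the two curves are given as sampled points and that the acceptable length-to-width range is [2.0, 5.0] — a hypothetical choice, since the patent leaves the preset range unspecified:

```python
def is_eye_region(upper_curve, lower_curve, lo=2.0, hi=5.0):
    """Pre-check that the eyelid curves enclose an eye-shaped region.

    The bounding rectangle is the axis-aligned box around all sampled
    points of both curves. An eye is roughly wider than tall, so the
    length-to-width (width/height) ratio should fall in [lo, hi];
    the default range is illustrative only.
    """
    pts = upper_curve + lower_curve
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if height == 0:
        return False
    return lo <= width / height <= hi

# Hypothetical curves: a wide, shallow pair (eye-like) passes;
# a tall, narrow pair is rejected before any trajectory simulation.
eye_upper = [(0, 2), (6, 4), (12, 2)]
eye_lower = [(0, 2), (6, 0), (12, 2)]
```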
This preprocessing step, which verifies that the detected object is an eye, further improves the accuracy of blink recognition.
In a second aspect, an embodiment of the invention provides a blink recognition apparatus, comprising:
a capture module, configured to capture the boundary curve of the upper eyelid in consecutive frames of a video stream;
a simulation module, configured to simulate the motion trajectory of the upper eyelid from the boundary curves in those frames; and
a judgment module, configured to determine from the trajectory whether a blink has occurred, wherein a blink is determined if the upper eyelid dwells at some position on the trajectory for more than a first preset time, then leaves that position and returns to it within no more than a second preset time.
With this embodiment, the boundary curve of the upper eyelid is captured and the eyelid's motion is simulated to determine whether a blink has occurred; modelling the captured boundary curves improves the accuracy of the motion analysis and hence the accuracy of blink recognition.
With reference to the second aspect, in a first possible implementation, the capture module is specifically configured to apply an edge detection operator to each of the consecutive frames to obtain the boundary curve of the upper eyelid in each frame.
With reference to the second aspect or its first possible implementation, in a second possible implementation, the simulation module is specifically configured to determine the center point of each captured boundary curve, and to connect the center point of the curve in any frame with the center points of the curves in the adjacent frames, thereby obtaining the trajectory of the upper eyelid.
With reference to the second possible implementation, in a third possible implementation, the apparatus further comprises a processing module configured to, before the center points are determined, place the captured boundary curves of all frames in one plane such that the two endpoints of each frame's curve coincide with those of the other frames.
With reference to the third possible implementation, in a fourth possible implementation, the judgment module is specifically configured to determine from the trajectory of the curve center point whether a blink has occurred, wherein a blink is determined if the center point dwells at some position for more than the first preset time, then leaves that position and returns to it within no more than the second preset time.
With reference to the second aspect or its first, third or fourth possible implementations, in a fifth possible implementation, the apparatus further comprises a preprocessing module;
the capture module is further configured to capture the boundary curve of the lower eyelid in the first of the consecutive frames;
the preprocessing module is configured to determine the bounding rectangle of the region enclosed by the lower-eyelid boundary curve and the upper-eyelid boundary curve of the first frame, and to calculate the length-to-width ratio of the bounding rectangle;
when the ratio falls within a preset range, the preprocessing module instructs the simulation module to perform the simulation of the upper eyelid's motion.
This preprocessing step, which verifies that the detected object is an eye, further improves the accuracy of blink recognition.
In a third aspect, an embodiment of the invention provides a computer-readable storage medium storing a computer program, the computer program comprising program instructions which, when executed by a processor, cause the processor to perform the method of the first aspect.
In a fourth aspect, an embodiment of the invention provides a device comprising a processor, an input device, an output device, a communication device and a memory, which are interconnected; the memory is configured to store application program code, and the processor is configured to call the application program code to perform the method of the first aspect.
In conclusion using the embodiment of the present invention then the boundary curve by capturing eyes upper eyelid simulates upper eyelid Trail change situation to determine whether blink, while modeling analysis is carried out by boundary curve to the upper eyelid captured and is mentioned The high accuracy of the trail change situation of analysis upper eyelid, in addition, increase determining object whether be eyes pretreatment side Case, to improve the accuracy rate of blink identification.
Brief description of the drawings
The drawings required by the embodiments of the present invention are described below.
Fig. 1 is a schematic diagram of the feature templates in the face detection and extraction method;
Fig. 2 is an example defining the integral image in the face detection and extraction method;
Fig. 3 is a framework diagram of the face detection process;
Fig. 4 is a schematic diagram of the adaptive boosting algorithm in the face detection and extraction method;
Fig. 5 is a flow diagram of cascaded strong classifiers detecting a face;
Fig. 6A is a schematic diagram of the inner facial key points in the facial key point extraction method;
Fig. 6B is a schematic diagram of the facial contour key points in the facial key point extraction method;
Fig. 7 is a schematic diagram of the inputs and outputs of the first-level inner key point prediction;
Fig. 8 is a schematic diagram of the inputs and outputs of the second-level inner key point prediction;
Fig. 9 is a schematic diagram of the inputs and outputs of the third-level inner key point prediction;
Fig. 10 is a schematic diagram of the inputs and outputs of the fourth-level inner key point prediction;
Fig. 11 is a schematic diagram of the inputs and outputs of the second-level facial contour key point prediction;
Fig. 12 is a flow diagram of a blink recognition method according to an embodiment of the present invention;
Fig. 13 is a schematic diagram of the upper-eyelid boundary curve in a blink recognition method according to an embodiment of the present invention;
Fig. 14A is a schematic diagram of the upper-eyelid boundary curves for frames N to N+9;
Fig. 14B is a schematic diagram of the center points of the upper-eyelid boundary curves for frames N to N+9;
Fig. 15A is a schematic diagram of the states of the upper eyelid during a blink;
Fig. 15B is a schematic diagram of the states of the upper-eyelid center point during a blink;
Fig. 16 is a schematic diagram of the bounding rectangle of the region enclosed by the upper- and lower-eyelid boundary curves;
Fig. 17 is a schematic diagram of the sclera (white of the eye), iris and pupil regions;
Fig. 18 is a structural diagram of a blink recognition apparatus according to an embodiment of the present invention;
Fig. 19 is a structural diagram of a blink recognition device according to an embodiment of the present invention.
Detailed description of the embodiments
The present invention provides a blink recognition method and apparatus, which capture the boundary curve of the upper eyelid and simulate the eyelid's motion to determine whether a blink has occurred. Modelling the captured boundary curves improves the accuracy of analysing the eyelid's motion, and a preprocessing step that verifies the detected object is an eye further improves the accuracy of blink recognition.
The terms "comprising" and "having" in the description, claims and drawings of the present invention, and any variants thereof, are intended to cover non-exclusive inclusion. For example, a process, method, system, product or device comprising a series of steps or units is not limited to the listed steps or units, but optionally further comprises steps or units that are not listed, or that are inherent to such a process, method, product or device. Moreover, the terms "first", "second" and "third" are used to distinguish different objects and are not intended to describe a particular order.
To enable those skilled in the art to better understand the solution of the present invention, the technical solutions in the embodiments are described clearly and completely below with reference to the drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art without creative work, based on the embodiments of the present invention, shall fall within the scope of the present invention.
Each part is described in detail below.
This solution proposes a blink recognition method implemented on top of face detection and facial key point extraction. For example, the face detection and extraction method first detects the face in a video image, the facial key point extraction method then locates 68 facial feature points, from which the exact position of the eyes can be determined, and the blink recognition method finally judges whether a blink occurs. To better understand the embodiments, the face detection and extraction method and the facial key point extraction method are first introduced in turn.
The face detection and extraction method is mainly based on Haar (rectangular) features, the integral image, the AdaBoost (adaptive boosting) algorithm and cascaded classifiers.
A Haar rectangular feature is a digital image feature used for object detection. The Haar feature templates are shown in Fig. 1; as the figure illustrates, they include edge features, line features and center-surround features. Each template is composed of two or more adjacent congruent black and white rectangles, and its feature value is the sum of the gray values in the white rectangles minus the sum of the gray values in the black rectangles. Rectangular features are sensitive to simple graphic structures such as line segments and edges. If such a template is placed over a non-face region, the computed feature value differs from that of a face region; these templates thus quantify facial appearance in order to distinguish faces from non-faces.
For an image at a resolution of 24×24 pixels there are roughly 160,000 rectangular features; a dedicated algorithm must select suitable features and combine them into a strong classifier before faces can be detected.
After the rectangular features are defined, their values must be computed. The integral image of a point A(x, y) is the sum of all pixels above and to the left of it (the shaded region in Fig. 2); computing this "integral image" once per pixel allows rectangular features of any size and position to be evaluated in the same constant time, greatly increasing speed. For a point A(x, y), the integral image ii(x, y) is defined by the recurrences

s(x, y) = s(x, y−1) + i(x, y)
ii(x, y) = ii(x−1, y) + s(x, y)

where i(x, y) is the pixel value at (x, y), s(x, y) is the cumulative sum of column x down to row y, and the boundary conditions are s(x, −1) = 0 and ii(−1, y) = 0.
It follows that the difference between two regions' pixel sums — that is, the feature value of a rectangle template — requires only a few additions and subtractions of integral-image values at the regions' corner points. Rectangular feature values can therefore be computed very quickly with the integral image.
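The recurrences and the corner-point evaluation above can be sketched as follows; this is a direct rendering of the formulas, not code from the patent, and the sample image values are hypothetical:

```python
def integral_image(img):
    """Build an (h+1) x (w+1) integral image with a zero top row and
    left column, so ii[y][x] is the sum of img over rows < y, cols < x.

    The inner loop is exactly s(x,y) = s(x,y-1) + i(x,y) and
    ii(x,y) = ii(x-1,y) + s(x,y) from the text, with the padding
    playing the role of the s(x,-1) = ii(-1,y) = 0 boundary conditions.
    """
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    col = [0] * w                        # col[x] holds s(x, y)
    for y in range(h):
        for x in range(w):
            col[x] += img[y][x]                       # s(x,y) = s(x,y-1) + i(x,y)
            ii[y + 1][x + 1] = ii[y + 1][x] + col[x]  # ii(x,y) = ii(x-1,y) + s(x,y)
    return ii

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of the image over x0<=x<x1, y0<=y<y1: four corner lookups."""
    return ii[y1][x1] - ii[y0][x1] - ii[y1][x0] + ii[y0][x0]

def haar_edge_feature(ii, x0, y0, x1, y1):
    """Two-rectangle edge feature: white (left half) minus black (right half)."""
    xm = (x0 + x1) // 2
    return rect_sum(ii, x0, y0, xm, y1) - rect_sum(ii, xm, y0, x1, y1)

img = [[1, 2],
       [3, 4]]
ii = integral_image(img)
```

Note that each rectangle sum costs the same four lookups regardless of the rectangle's size, which is the speed-up the text describes.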
The integral image is then used to compute the rectangular features when selecting weak classifiers in the face detection algorithm. The Haar classifier consists of Haar feature extraction, discrete strong classifiers and a cascade of strong classifiers. The core idea is to extract the Haar features of a face, compute them quickly with the integral image, pick out a small number of key features, and feed them into a cascade of strong classifiers for iterative training. The Haar classifier uses the AdaBoost algorithm, but organizes it into a screening-type cascade: each node is a classifier composed of multiple trees, and each node's correct recognition rate is very high. At any level, once the conclusion "not in this class" is reached, computation terminates; only a window that passes all levels of the classifier is considered a detected face. The advantage is that when faces occupy only a small proportion of the image, the screening cascade greatly reduces computation, because most candidate regions are screened out early and quickly judged not to contain a face.
The face detection process framework is shown in Fig. 3. The classifier is first trained in advance on a large number of samples (positive and negative) to select and extract particular features. When an image is input, a scanning window selects and extracts the image's features, which are input to the classifier and compared with the trained features; the classifier judges whether the image's features match and outputs the classification result, i.e. the regions finally judged to be faces.
AdaBoost is an iterative algorithm used in face detection whose purpose is to learn a series of weak classifiers from the training samples and then combine them into a strong classifier. Its schematic is shown in Fig. 4. Each sample is initialised with an equal weight; after the samples pass through a classifier, new sample weights are computed from the classifier's error rate — the weights of correctly classified samples are reduced and those of misclassified samples are raised, so that hard samples receive more attention in the following classifiers. The reweighted samples are then fed into the next classifier, and the process repeats; after a pre-specified number of rounds T, the T classifiers are combined by an aggregation strategy into the final strong learner.
The specific steps of the AdaBoost algorithm for face detection are as follows:
(1) Given training samples (x1, y1), …, (xi, yi), …, (xn, yn), where (xi, yi) denotes the i-th sample, yi = 0 denotes a negative sample, yi = 1 a positive sample, and n is the total number of training samples;
(2) initialise the training sample weights;
(3) in each iteration, first train a weak classifier and compute its weighted error rate, choosing an appropriate threshold so that the error is minimal; then update the sample weights;
(4) after T rounds, T weak classifiers are obtained; they are combined by a weighted superposition, with weights reflecting each weak classifier's importance, to finally obtain the strong classifier.
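Steps (1)–(4) can be sketched with one-dimensional features and threshold stumps standing in for Haar-feature weak classifiers — a simplification for illustration; a real detector selects among many rectangular features, and the toy data below is hypothetical:

```python
import math

def stump_predict(x, thresh, polarity):
    """Weak classifier: a single threshold test on a scalar feature."""
    return 1 if polarity * x < polarity * thresh else 0

def adaboost(samples, labels, rounds):
    """Minimal discrete AdaBoost following steps (1)-(4):
    uniform initial weights, per-round selection of the stump with
    minimal weighted error, weight update raising misclassified
    samples, and a weighted vote of the stumps as the strong classifier."""
    n = len(samples)
    w = [1.0 / n] * n                       # step (2): uniform weights
    ensemble = []                           # (alpha, thresh, polarity)
    for _ in range(rounds):                 # step (3)
        best = None
        for thresh in sorted(set(samples)):
            for polarity in (1, -1):
                err = sum(wi for xi, yi, wi in zip(samples, labels, w)
                          if stump_predict(xi, thresh, polarity) != yi)
                if best is None or err < best[0]:
                    best = (err, thresh, polarity)
        err, thresh, polarity = best
        err = max(err, 1e-10)               # avoid log of zero
        alpha = 0.5 * math.log((1 - err) / err)
        for i in range(n):                  # reweight: misclassified up, correct down
            ok = stump_predict(samples[i], thresh, polarity) == labels[i]
            w[i] *= math.exp(-alpha if ok else alpha)
        total = sum(w)
        w = [wi / total for wi in w]
        ensemble.append((alpha, thresh, polarity))

    def strong(x):                          # step (4): weighted superposition
        score = sum(a * (1 if stump_predict(x, t, p) == 1 else -1)
                    for a, t, p in ensemble)
        return 1 if score > 0 else 0
    return strong

clf = adaboost([1, 2, 3, 8, 9, 10], [1, 1, 1, 0, 0, 0], rounds=3)
```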
After strong classifiers are obtained, the final classifier is built by cascading several of them, to improve both the speed and the precision of face detection. In a cascaded classification system, each input picture passes through the strong classifiers in sequence: the early strong classifiers are relatively simple and contain relatively few weak classifiers, while the later ones grow progressively more complex. Only a picture that passes an earlier strong classifier's detection is sent to the next; the early classifiers filter out most unqualified pictures, and only the regions that pass the detection of every strong classifier are valid face regions. The flow of cascaded strong classifiers detecting a face is shown in Fig. 5.
As Fig. 5 shows, the image under detection is divided into many candidate sub-windows, which are input into the cascaded strong classifiers and classified stage by stage; a sub-window judged non-face at any earlier stage is filtered out immediately, and only those passing all cascaded strong classifiers are finally judged to be faces.
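The early-rejection behaviour of the cascade can be sketched as follows; the stage classifiers and thresholds here are toy stand-ins for trained strong classifiers:

```python
def cascade_detect(window, stages):
    """Evaluate a candidate sub-window through cascaded strong classifiers.

    stages: list of (classifier, threshold) pairs, ordered simple to
    complex. The window is rejected as soon as any stage's score falls
    below its threshold, so later (costlier) stages never run on it;
    only windows passing every stage are reported as faces.
    """
    for classify, threshold in stages:
        if classify(window) < threshold:
            return False          # early rejection: "not a face"
    return True

# Toy stages scoring a scalar "window": stage 1 is a cheap coarse
# filter, stage 2 a stricter one (both hypothetical).
stages = [(lambda w: w, 1), (lambda w: w * 2, 5)]
```

Because most non-face windows fail the cheap first stage, the average cost per window stays low even though the full cascade is expensive — the property the text attributes to the screening-type cascade.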
The above detects and extracts faces using the method based on Haar features, the integral image and the AdaBoost algorithm. Next, facial feature points are located on the extracted faces using a key point extraction method based on a coarse-to-fine convolutional neural network (coarse-to-fine CNN), which locates the key regions of the face, including the eyebrows, eyes, nose, mouth and face contour. The approach adopts a coarse-to-fine localisation strategy, using deep convolutional neural network (DCNN) models to locate 68 facial feature points. It first predicts the minimum bounding boxes of the inner points and the contour points separately; the predictions of these two groups of feature points are distinct, and their respective networks can be trained and run in parallel.
Inner points is mainly the feature point prediction of eyebrow in face, eyes, nose, mouth, this fractional prediction point Number be 51;Contour points refers to 17 characteristic points of face outer profile, Inner points and Contour The position of points is as shown in Figure 6.
The inner points are predicted with a four-level DCNN model. Level 1 predicts the minimum bounding box mentioned above; level 2 performs the initial, coarse positioning of the 51 points; level 3 performs fine positioning; and level 4 performs still finer positioning.
First level (level 1): this level predicts the minimum bounding box. The face region picture obtained by detection and extraction is input, and this level's CNN predicts the minimum bounding box of the 51 points. The bounding box comprises the coordinates of the upper-left and lower-right corners of a rectangle, i.e. the network outputs a 4-dimensional vector, as shown in Figure 7.
Second level (level 2): at this level, the input of the network is the face picture inside the minimum bounding box of the 51 points predicted by the first level, and the output of the network is the predicted positions of the 51 feature points (the coarse positions). This level is fairly simple: an ordinary convolutional neural network (CNN) predicts the 51 feature points; a schematic diagram is shown in Figure 8.
Network input: the picture inside the minimum bounding box obtained by level 1 is cropped out and used as the input for predicting the 51 facial feature points.
Network output: because 51 feature points are predicted, the output of the CNN is 102 neurons (an x and a y coordinate per point).
Third level (level 3): the previous level predicted the 51 feature points, but the positions of these points are not accurate enough and need further fine positioning. The precision of the 51 points predicted by the second level is limited because the input is a large picture, and a global, unified prediction is easily disturbed by redundant information. For example, to locate the feature points of the mouth, the input picture should contain only the mouth region; the precision can then be relatively high. Using a whole face picture as input introduces too much redundant information, which easily interferes with locating the mouth region. Therefore, the network of this level crops out the pictures of the eyebrows, eyes, nose and mouth using the 51 points predicted by level 2, and then locates the eyebrows, eyes, nose and mouth separately; a schematic diagram is shown in Figure 9.
Network input: using the 51 points from level 2, the regions of the eyebrows, eyes, nose and mouth can be roughly detected; these roughly detected regions are then cropped out separately, and the cropped eyebrow, eye, nose and mouth regions are trained and predicted separately. Because each organ is trained separately, this level naturally needs four CNN models, each model predicting its own feature points.
Network output: the feature points of the eyebrows, eyes, nose and mouth.
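The per-organ cropping of level 3 can be sketched as follows: the coarse 51 points are grouped by organ and a padded bounding box is cut for each, so that each refinement CNN sees only its own region. The index ranges and the padding are assumptions for illustration; the patent does not specify the layout of its 51-point scheme.

```python
ORGAN_SLICES = {            # assumed index ranges within the 51 points
    "left_brow": slice(0, 5),
    "right_brow": slice(5, 10),
    "nose": slice(10, 19),
    "eyes": slice(19, 31),
    "mouth": slice(31, 51),
}

def organ_boxes(points, pad=4):
    """Padded axis-aligned bounding box per organ, from coarse points."""
    boxes = {}
    for name, sl in ORGAN_SLICES.items():
        xs = [p[0] for p in points[sl]]
        ys = [p[1] for p in points[sl]]
        boxes[name] = (min(xs) - pad, min(ys) - pad,
                       max(xs) + pad, max(ys) + pad)
    return boxes

pts = [(i, 2 * i) for i in range(51)]   # synthetic coarse points
print(organ_boxes(pts)["mouth"])  # → (27, 58, 54, 104)
```

Each box would then be cropped from the face picture and passed to that organ's dedicated refinement CNN.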
Fourth level (level 4): this level calculates the rotation angles of the eyebrows, eyes, nose and mouth, rotates them all upright, inputs the resulting pictures into the fourth-level network, and then predicts again; a schematic diagram is shown in Figure 10.
The contour point prediction is fairly simple: it uses a DCNN model of only two levels, which correspond to the first two levels of the inner-point network described above.
First level: as described above, this predicts the minimum bounding box of the contour points.
Second level: a CNN directly predicts the 17 feature points; a schematic diagram is shown in Figure 11.
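The data flow of the coarse-to-fine pipeline described above can be sketched with stand-in predictors. The stubs below are not trained networks; they only respect the structure (four inner-point levels, two contour levels) and the fixed point counts (51 inner + 17 contour = 68) stated in the text.

```python
def predict_inner_points(face_img, levels):
    """Run the four inner-point levels in order: bounding box, coarse
    51-point prediction, per-organ refinement, rotation-corrected refinement."""
    bbox = levels["bbox"](face_img)            # level 1: 4-dim vector
    pts = levels["coarse"](face_img, bbox)     # level 2: 51 (x, y) pairs
    pts = levels["per_organ"](face_img, pts)   # level 3: crop & refine
    return levels["rotated"](face_img, pts)    # level 4: rotate & refine

def run_pipeline(face_img, inner_levels, contour_levels):
    inner = predict_inner_points(face_img, inner_levels)
    cbox = contour_levels["bbox"](face_img)    # contour level 1
    contour = contour_levels["points"](face_img, cbox)  # contour level 2
    return inner + contour                     # 51 + 17 = 68 feature points

# Dummy stand-in predictors that only respect the point counts.
inner_levels = {
    "bbox": lambda img: (0, 0, 10, 10),
    "coarse": lambda img, b: [(0.0, 0.0)] * 51,
    "per_organ": lambda img, p: p,
    "rotated": lambda img, p: p,
}
contour_levels = {
    "bbox": lambda img: (0, 0, 12, 12),
    "points": lambda img, b: [(0.0, 0.0)] * 17,
}
print(len(run_pipeline(None, inner_levels, contour_levels)))  # → 68
```

Because the inner-point and contour-point branches share no state, the two dictionaries of predictors could be trained and evaluated in parallel, as the text notes.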
After the positions of the eyes are found by the above face detection and extraction method and face key point extraction method, blink recognition is then carried out in a further step. Referring to Figure 12, the specific steps of the blink recognition method are as follows.
Step S101: capture the edge curve of the upper eyelid in consecutive frames of images in a video stream.
In a specific embodiment, after the positions of the eyes have been found using the face detection and extraction method and the face key point extraction method, the server captures the edge curve of the upper eyelid in consecutive frames of images in the video stream.
In one embodiment, capturing the edge curve of the upper eyelid in the consecutive frames of images in the video stream comprises:
performing edge detection on each frame of the consecutive frames of images using an edge detection operator to obtain the edge curve of the upper eyelid in the frames of images.
Specifically, capturing the edge curve of the upper eyelid in the consecutive frames of images can use an edge detection method, performing edge detection on each frame to obtain the edge curve of the upper eyelid of the eye. The detected edge curve of the upper eyelid can be seen in Figure 13, where the white curve above the eye is the edge curve of the upper eyelid. The edge detection method is realized with an edge detection operator; common edge detection operators include the Roberts cross operator, the Prewitt operator, the Sobel operator, the Kirsch operator, the compass operator, the Canny operator, the Laplacian operator, etc. The edge is detected and extracted by the edge detection operator to obtain the edge curve.
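As an illustration of one of the operators listed above, here is a plain NumPy Sobel convolution; a real system would more likely call a library routine such as OpenCV's Canny detector and then trace the eyelid contour, so this stand-alone version only shows the principle on a synthetic image.

```python
import numpy as np

SOBEL_Y = np.array([[-1, -2, -1],
                    [ 0,  0,  0],
                    [ 1,  2,  1]], dtype=float)

def sobel_horizontal_edges(img):
    """Convolve with the vertical-gradient Sobel kernel; a large
    |response| marks a horizontal edge such as an eyelid boundary."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i+3, j:j+3] * SOBEL_Y)
    return np.abs(out)

# Synthetic image: dark above row 4, bright from row 4 down —
# a single horizontal intensity edge.
img = np.zeros((8, 8))
img[4:, :] = 1.0
resp = sobel_horizontal_edges(img)
print(int(resp.argmax(axis=0)[0]))  # → 2 (the step lies between input rows 3 and 4)
```

The per-column position of the strongest response, taken across the eye region, yields the pixel chain that the text calls the edge curve of the upper eyelid.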
In another embodiment, capturing the edge curve of the upper eyelid in the consecutive frames of images can be done by training a classifier on a training set of upper eyelid edge curves so that it learns the features of the upper eyelid edge curve, then using an edge detection algorithm to extract candidate edge curves and inputting the extracted curves into the classifier; the classifier outputs the curves that match its learned features, and the output curve is the edge curve of the upper eyelid.
Step S102: the device simulates the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images.
In a specific embodiment, simulating the variation track of the upper eyelid means simulating the motion of the upper eyelid during a blink according to the captured edge curves of the upper eyelid. When blinking, the upper eyelid of the eye rapidly moves downward to form a closed-eye state, and then rapidly moves upward again to open the eye. As an example, Figure 14A shows the upper eyelid edge curves from frame N to frame (N+9). It is assumed here that the upper eyelid moves at a uniform speed during a blink; in addition, because the time intervals between frames are identical, the upper eyelid edge curves of frame N and frame (N+9) coincide in Figure 14A, as do the edge curves of frame (N+1) and frame (N+8); the positions of the edge curves of the other frames are as shown in Figure 14A. It can further be seen from Figure 14A that, as the frame number advances, the edge curve of the upper eyelid first moves down and then moves up again.
In one embodiment, simulating the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images comprises:
determining the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images; and
connecting the center point position of the edge curve of any one frame of image with the center point position of the edge curve of the image adjacent to that frame to obtain a target track, wherein the target track embodies the variation track of the upper eyelid.
Specifically, the embodiment of the present invention models the edge curve of the upper eyelid of the eye as the target object. First, the center point position of each captured upper eyelid edge curve is determined; these center point positions are then connected in frame order, simulating the variation track along which the upper eyelid moves up and down when the eye blinks.
Preferably, before determining the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images, the method further comprises:
placing the captured upper eyelid edge curves of the frames of images in one plane so that the two endpoints of the upper eyelid edge curve of each frame of image coincide respectively.
Specifically, the above modeling operation is carried out with the edge curves of all the upper eyelids placed in one plane, with the two endpoints of all the upper eyelid edge curves respectively superposed.
It can see with reference to Figure 14 B, determine the central point of every frame upper eyelid boundary curve, by every frame upper eyelid boundary curve Central point connected by the sequencing of frame number, the process of simulation eye upper eyelid blink, i.e. eyes upper eyelid first moves down It moves up again, obtains motion profile as shown in Figure 14B, due to all corresponding weight of two endpoints of every frame upper eyelid boundary curve It stacks, two endpoints are also motionless when mobile, therefore what the motion profile of central point showed is straight line.
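The center-point modeling can be sketched as follows: each captured edge curve is a list of (x, y) points, its center is taken, and the centers are connected in frame order to form the variation track. The curve data below is synthetic, with the endpoints held fixed as the text requires (y grows downward, so the dip simulates closing).

```python
def curve_center(curve):
    """Center point of an edge curve: here simply the mean of its points."""
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def variation_track(curves):
    """Connect the per-frame centers in frame order."""
    return [curve_center(c) for c in curves]

# Synthetic blink over ten frames: endpoints fixed at x = 0 and x = 4,
# the middle of the curve dips and comes back.
def eyelid_curve(drop):
    return [(0, 0), (1, 1 + drop), (2, 2 + drop), (3, 1 + drop), (4, 0)]

drops = [0, 1, 2, 3, 4, 4, 3, 2, 1, 0]
track = variation_track(eyelid_curve(d) for d in drops)
xs = [round(x, 2) for x, _ in track]
print(len(set(xs)) == 1)  # → True: all centers share one x, so the track is a vertical line
```

Because the endpoints never move, the center's x-coordinate is constant and only its y-coordinate changes, which is exactly why the track in Figure 14B shows as a straight line.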
Step S103: the device determines whether a blink occurs according to the variation track of the upper eyelid of the eye.
Wherein, if the upper eyelid stays at any position for more than a first preset time, and returns to that position after leaving it for no more than a second preset time, it is determined that a blink occurs.
Specifically, a normal person blinks more than ten times per minute, usually once every 2 to 6 seconds, and each blink lasts 0.2 to 0.4 seconds. The first preset time is a configured period that roughly characterizes the interval from the end of one blink to the start of the next; for example, it can be set to a value between 1.6 and 5.8 seconds. The second preset time is a configured duration that roughly characterizes how long a blink lasts; for example, it can be set to a value between 0.2 and 0.4 seconds. In addition, whichever position the upper eyelid stays at for more than the first preset time is the "any position" referred to above.
Specifically, the simulation of the motion track of the center point of the upper eyelid edge curve has been obtained by the above modeling. From this motion track, the motion of the upper eyelid edge curve can be learned, and from that motion the variation track of the upper eyelid can be learned; whether the eye blinks is then determined according to the variation track of the upper eyelid. If the track of the upper eyelid does not change for a certain period of time, then starts moving in some direction and leaves its original position, and then moves back in the opposite direction to the original position, a blink can be determined. This is illustrated below with reference to Figure 15A, taking the lower eyelid of the eye as the reference, because the lower eyelid can be considered stationary during a blink. The numbers 1 to 12 indicate, in numerical order, the successive positions of the upper eyelid from the start of a blink to its end. Positions 1 and 12 are the state of the upper eyelid when the eye is normally open; when the eye does not blink, the upper eyelid keeps this state unchanged. When a blink starts, the upper eyelid begins to change through positions 2 to 11 in order. It can be seen that during the blink the upper eyelid gradually moves toward the lower eyelid and then moves away from the lower eyelid again, finally returning to the normally open state. As long as such a process occurs, the eye is judged to have blinked once.
In one embodiment, determining whether a blink occurs according to the variation track of the upper eyelid, wherein if the upper eyelid stays at any position for more than the first preset time and returns to that position after leaving it for no more than the second preset time it is determined that a blink occurs, comprises:
determining whether a blink occurs according to the variation track of the center point of the upper eyelid, wherein if the center point of the upper eyelid stays at any position for more than the first preset time, and returns to that position after leaving it for no more than the second preset time, it is determined that a blink occurs.
This can be illustrated with reference to Figure 15B, again taking the lower eyelid as the reference, because the lower eyelid can be considered stationary during a blink. The numbers 1 to 12 indicate, in numerical order, the successive positions of the center point of the upper eyelid relative to the lower eyelid from the start of a blink to its end. Positions 1 and 12 are the position of the center point when the eye is normally open; when the eye does not blink, the center point keeps this position unchanged. When a blink starts, the position of the center point begins to change through positions 2 to 11 in order: it gradually moves toward the lower eyelid and then moves away from the lower eyelid again, finally returning to the normally open state. As long as such a process occurs, the eye is judged to have blinked once.
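The timing rule can be sketched as follows: given the center point's vertical position sampled over time, a blink is detected when the eyelid has rested at one position longer than the first preset time, leaves it, and returns within the second preset time. The threshold defaults and the tolerance for "same position" are illustrative choices, not values fixed by the patent.

```python
def detect_blink(samples, t_rest=1.6, t_blink=0.4, tol=0.5):
    """samples: list of (timestamp_seconds, y_position) in time order."""
    rest_y = rest_start = left_at = None
    for t, y in samples:
        if rest_y is None:                 # first sample defines the rest position
            rest_y, rest_start = y, t
        elif abs(y - rest_y) <= tol:       # at (or back at) the rest position
            if left_at is not None:
                if t - left_at <= t_blink and left_at - rest_start > t_rest:
                    return True            # fast departure-and-return: a blink
                rest_start = t             # returned too slowly: new rest period
                left_at = None
        elif left_at is None:              # just left the rest position
            left_at = t
    return False

# Rest at y = 0 for 2.0 s, then a 0.3 s dip-and-return (samples every 0.1 s).
calm = [(round(0.1 * i, 1), 0.0) for i in range(21)]
blink = [(2.1, 5.0), (2.2, 6.0), (2.3, 5.0), (2.4, 0.0)]
print(detect_blink(calm + blink))  # → True
print(detect_blink(calm))          # → False
```

A production detector would also need to tolerate tracking noise and report multiple blinks per stream; this sketch only encodes the two-timer rule stated above.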
In addition, in order to avoid interference from other factors, after the edge curve of the upper eyelid in the consecutive frames of images in the video stream is captured in step S101, related features (such as features of the white of the eye and the pupil, features between the upper eyelid and the lower eyelid, iris features, etc.) can further be used to determine whether the position to be detected is an eye, so as to improve the accuracy of living body detection. This is implemented as follows.
In one embodiment, after capturing the edge curve of the upper eyelid in the consecutive frames of images in the video stream, the method further comprises:
capturing the edge curve of the lower eyelid of the first frame of image in the consecutive frames of images;
determining the circumscribed rectangle of the region formed by the edge curve of the lower eyelid of the first frame of image and the edge curve of the upper eyelid of the first frame of image;
calculating the aspect ratio of the circumscribed rectangle; and
when the aspect ratio is within a preset range, executing the operation of simulating the variation of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images.
Specifically, the edge curve of the lower eyelid of a certain frame of image in the consecutive frames is captured; the circumscribed rectangle of the region formed by the edge curve of the lower eyelid of this frame and the edge curve of the upper eyelid of this frame is determined; and the aspect ratio of this circumscribed rectangle is calculated. When the calculated aspect ratio matches the aspect ratio of the circumscribed rectangle of the region formed by the upper and lower eyelid edge curves of an eye, it is determined that the captured edge curve is the edge curve of an eyelid; a schematic diagram is shown in Figure 16. Usually, the length of the circumscribed rectangle of the region formed by the upper and lower eyelid edge curves of an adult eye is about 20 to 30 mm, and the width (at the widest part) is 10 to 15 mm, so the aspect ratio of this circumscribed rectangle is generally between 1.33 and 3. That is to say, when the aspect ratio of the circumscribed rectangle of the region formed by the upper and lower eyelid edge curves is within the range of 1.33 to 3, it can be judged to be an eye, so that it can be determined that the captured edge curve is the edge curve of an eye rather than some other interfering curve.
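The aspect-ratio pre-check can be sketched as follows: the circumscribed (axis-aligned bounding) rectangle of the combined eyelid curves is computed and its length/width ratio tested against the 1.33–3 range given above. The curve data is synthetic and units are nominal millimetres.

```python
def bounding_rect_aspect(points):
    """Length / width of the axis-aligned bounding rectangle of the points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return (max(xs) - min(xs)) / (max(ys) - min(ys))

def looks_like_eye(upper_curve, lower_curve, lo=1.33, hi=3.0):
    """Pre-check from the text: an eye region's rectangle is wider than tall."""
    return lo <= bounding_rect_aspect(upper_curve + lower_curve) <= hi

# A 26 mm x 12 mm eye-shaped region passes; a tall 10 x 20 region fails.
upper = [(0, 6), (13, 12), (26, 6)]
lower = [(0, 6), (13, 0), (26, 6)]
print(looks_like_eye(upper, lower))                            # → True  (26 / 12 ≈ 2.17)
print(looks_like_eye([(0, 0), (10, 20)], [(0, 0), (10, 20)]))  # → False (ratio 0.5)
```

Note that the check assumes roughly upright eyes; a strongly rotated face would need the rectangle (or the image) to be de-rotated first, as in level 4 of the key point network.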
Alternatively, in one embodiment (schematic diagram in Figure 17): first, a first region with a first color and a second region with a second color are captured in a certain frame of image among the consecutive frames, wherein the first color is the pre-trained color of the white of the eye, and the second color is the pre-trained color of the iris and pupil; secondly, whether the first region and the second region adjoin is detected, and if they adjoin, the combined region formed by the first region and the second region is determined; then, whether the boundary line of the combined region partly coincides with the edge curve of the upper eyelid is judged, and if the boundary line of the combined region partly coincides with the edge curve of the upper eyelid in the first frame of image, it is determined that the captured edge curve is the edge curve of the upper eyelid of an eye.
Still alternatively, whether an iris region exists within a range on the opposite side of the captured upper eyelid edge curve in a certain frame of image among the consecutive frames is detected; if it exists, it is determined that the captured edge curve is the edge curve of the upper eyelid of an eye.
In the embodiment of the present invention, the edge curve of the upper eyelid is captured and the variation track of the upper eyelid is then simulated to judge whether a blink occurs; at the same time, modeling and analyzing the captured upper eyelid edge curves improves the accuracy of analyzing the variation track of the upper eyelid. In addition, a pre-processing scheme for determining whether the object is an eye is added, thereby improving the accuracy of blink recognition.
To better implement the above scheme of the present invention, an embodiment of the present invention correspondingly provides a blink recognition apparatus, described in detail below with reference to the accompanying drawings.
Figure 18 shows a structural schematic diagram of a blink recognition apparatus 1800, which may be the above device. The apparatus 1800 comprises a capture module 1801, a simulation module 1802 and a judgment module 1803, wherein:
the capture module 1801 is configured to capture the edge curve of the upper eyelid in consecutive frames of images in a video stream;
the simulation module 1802 is configured to simulate the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images; and
the judgment module 1803 is configured to determine whether a blink occurs according to the variation track of the upper eyelid, wherein if the upper eyelid stays at any position in the variation track for more than a first preset time, and returns to that position after leaving it for no more than a second preset time, it is determined that a blink occurs.
In one embodiment, the capture module 1801 captures the edge curve of the upper eyelid in the consecutive frames of images in the video stream specifically by:
performing edge detection on each frame of the consecutive frames of images using an edge detection operator to obtain the edge curve of the upper eyelid in the frames of images.
In one embodiment, the simulation module 1802 simulates the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images specifically by:
determining the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images; and
connecting the center point position of the edge curve of any one frame of image with the center point position of the edge curve of the image adjacent to that frame to obtain the variation track of the upper eyelid of the eye.
In one embodiment, the apparatus 1800 further comprises a processing module configured to, before the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images is determined, place the captured upper eyelid edge curves of the frames of images in one plane so that the two endpoints of the upper eyelid edge curve of each frame of image coincide respectively.
In one embodiment, the judgment module 1803 determines whether a blink occurs according to the variation track of the upper eyelid specifically by:
determining whether a blink occurs according to the variation track of the center point of the upper eyelid, wherein if the center point of the upper eyelid stays at any position for more than the first preset time, and returns to that position after leaving it for no more than the second preset time, it is determined that a blink occurs.
In one embodiment, the apparatus 1800 further comprises a pre-processing module;
the capture module 1801 is also configured to capture the edge curve of the lower eyelid of the first frame of image in the consecutive frames of images;
the pre-processing module is configured to determine the circumscribed rectangle of the region formed by the edge curve of the lower eyelid of the first frame of image and the edge curve of the upper eyelid of the first frame of image;
the pre-processing module is configured to calculate the aspect ratio of the circumscribed rectangle; and
when the aspect ratio is within a preset range, the pre-processing module instructs the simulation module 1802 to execute the operation of simulating the variation of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images.
For the specific implementation and beneficial effects of the modules in the apparatus 1800 shown in Figure 18, reference can be made to the corresponding description of the method embodiment shown in Figure 12; details are not repeated here.
Referring to Figure 19, Figure 19 shows a blink recognition device 1900 provided by an embodiment of the present invention. The device 1900 comprises a processor 1901, a memory 1902 and a communication interface 1903, which are interconnected through a bus 1904.
The memory 1902 includes, but is not limited to, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or a compact disc read-only memory (CD-ROM); the memory 1902 is used for storing related instructions and data. The communication interface 1903 is used for sending and receiving data.
The processor 1901 may be one or more central processing units (CPUs); in the case where the processor 1901 is one CPU, the CPU may be a single-core CPU or a multi-core CPU.
The processor 1901 in the device 1900 is configured to read the program code stored in the memory 1902 and perform the following operations:
the processor 1901 captures the edge curve of the upper eyelid in consecutive frames of images in a video stream;
the processor 1901 simulates the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images; and
the processor 1901 determines whether a blink occurs according to the variation track of the upper eyelid, wherein if the upper eyelid stays at any position in the variation track for more than a first preset time, and returns to that position after leaving it for no more than a second preset time, it is determined that a blink occurs.
In one embodiment, the processor 1901 captures the edge curve of the upper eyelid in the consecutive frames of images in the video stream specifically by:
performing edge detection on each frame of the consecutive frames of images using an edge detection operator to obtain the edge curve of the upper eyelid in the frames of images.
In one embodiment, the processor 1901 simulates the variation track of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images specifically by:
determining the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images; and
connecting the center point position of the edge curve of any one frame of image with the center point position of the edge curve of the image adjacent to that frame to obtain the variation track of the upper eyelid of the eye.
In one embodiment, before determining the center point position of each edge curve among the edge curves of the upper eyelid in the frames of images, the processor 1901 places the captured upper eyelid edge curves of the frames of images in one plane so that the two endpoints of the upper eyelid edge curve of each frame of image coincide respectively.
In one embodiment, the processor 1901 determines whether a blink occurs according to the variation track of the upper eyelid specifically by:
determining whether a blink occurs according to the variation track of the center point of the upper eyelid, wherein if the center point of the upper eyelid stays at any position for more than the first preset time, and returns to that position after leaving it for no more than the second preset time, it is determined that a blink occurs.
In one embodiment, after the processor 1901 captures the edge curve of the upper eyelid in the consecutive frames of images in the video stream:
the processor 1901 also captures the edge curve of the lower eyelid of the first frame of image in the consecutive frames of images;
determines the circumscribed rectangle of the region formed by the edge curve of the lower eyelid of the first frame of image and the edge curve of the upper eyelid of the first frame of image;
calculates the aspect ratio of the circumscribed rectangle; and
when the aspect ratio is within a preset range, executes the operation of simulating the variation of the upper eyelid of the eye according to the edge curves of the upper eyelid in the frames of images.
It should be noted that for the implementation of each operation, reference can also be made to the corresponding description of the method embodiment shown in Figure 12.
In the blink recognition device 1900 described in Figure 19, the edge curve of the upper eyelid is captured and the track variation of the upper eyelid is then simulated to determine whether a blink occurs; at the same time, modeling and analyzing the captured upper eyelid edge curves improves the accuracy of analyzing the track variation of the upper eyelid. In addition, a pre-processing scheme for determining whether the object is an eye is added, thereby improving the accuracy of blink recognition.
An embodiment of the present invention also provides a device comprising a processor (such as a CPU), an input device (such as a camera), an output device (such as a display screen or a microphone), a communication device (such as a transceiver) and a memory, wherein the processor, the input device, the output device, the communication device and the memory are interconnected, the memory is used for storing application program code, and when the processor is configured to call the application program code, the method flow shown in Figure 12 is implemented.
An embodiment of the present invention also provides a computer-readable storage medium storing a computer program, the computer program comprising program instructions; when the program instructions are executed by a processor, the method flow shown in Figure 12 is implemented.
In conclusion using the embodiment of the present invention then the boundary curve by capturing eyes upper eyelid simulates upper eyelid Situation of change to determine whether blink, while modeling analysis is carried out by boundary curve to the upper eyelid captured and is improved The accuracy for analyzing the situation of change of upper eyelid, in addition, increasing whether determining object is the pretreating scheme of eyes, to mention The high accuracy rate of blink identification.
Those of ordinary skill in the art will appreciate that all or part of the processes in the above method embodiments can be completed by a computer program instructing the relevant hardware; the program can be stored in a computer-readable storage medium and, when executed, may include the processes of the above method embodiments. The aforementioned storage medium includes media that can store program code, such as a ROM, a random access memory (RAM), a magnetic disk or an optical disc.
In the several embodiments provided by the present invention, it should be understood that the disclosed apparatus and method can be realized in other ways. For example, the apparatus embodiments described above are merely exemplary; the division of the units is only a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention, rather than to limit it. Although the present invention has been described in detail with reference to the aforementioned embodiments, those skilled in the art should understand that it is still possible to modify the technical solutions described in the foregoing embodiments, or to make equivalent replacements of some or all of the technical features; and these modifications or replacements do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the various embodiments of the present invention.

Claims (10)

1. A blink recognition method, characterized by comprising:
capturing an edge curve of an upper eyelid in consecutive frames of images in a video stream;
simulating a variation track of the upper eyelid of an eye according to the edge curves of the upper eyelid in the frames of images; and
determining whether a blink occurs according to the variation track of the upper eyelid, wherein if the upper eyelid stays at any position in the variation track for more than a first preset time, and returns to the position after leaving it for no more than a second preset time, it is determined that a blink occurs.
2. The method according to claim 1, wherein capturing the boundary curve of the upper eyelid in consecutive frames of images in a video stream comprises:
performing edge detection on each frame of the consecutive frames of images using an edge detection operator, to obtain the boundary curve of the upper eyelid in the plurality of frames of images.
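The claim names only "an edge detection operator" without fixing one. As one possible instantiation, the sketch below applies a Sobel operator (a common edge detection operator, chosen here as an assumption) to a grayscale eye region and takes the topmost edge pixel in each column as a crude upper-eyelid boundary curve.

```python
import numpy as np

def sobel_edges(img, thresh=100.0):
    """Apply the 3x3 Sobel operator and threshold the gradient magnitude.
    The threshold value is an illustrative assumption."""
    img = img.astype(float)
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]])
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    for i in range(1, h - 1):        # skip the one-pixel border
        for j in range(1, w - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = np.sum(kx * patch)
            gy[i, j] = np.sum(ky * patch)
    return np.hypot(gx, gy) > thresh

def upper_eyelid_curve(eye_img, thresh=100.0):
    """Take the topmost edge pixel in each column as a simple
    approximation of the upper-eyelid boundary curve."""
    edges = sobel_edges(eye_img, thresh)
    curve = []
    for x in range(edges.shape[1]):
        ys = np.flatnonzero(edges[:, x])
        if ys.size:
            curve.append((x, int(ys[0])))  # topmost edge in this column
    return curve
```

In practice a library operator (e.g. a Canny detector) and eyelid-specific post-processing would replace the naive topmost-pixel rule; this sketch only illustrates the operator-then-trace idea.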
3. The method according to claim 1 or 2, wherein simulating the change trajectory of the upper eyelid of the eyes according to the boundary curves of the upper eyelid in the plurality of frames of images comprises:
determining the center position of each boundary curve among the boundary curves of the upper eyelid in the plurality of frames of images; and
connecting the center position of the boundary curve of any frame image in the plurality of frames of images with the center position of the boundary curve of the image adjacent to said frame image, to obtain the change trajectory of the upper eyelid of the eyes.
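The center-and-connect step can be sketched directly: take the mean of each curve's sample points as its center position, and read the trajectory as the ordered list of centers (each consecutive pair forms one connecting segment). Representing a curve as a list of `(x, y)` points is an assumption of this sketch.

```python
def curve_center(curve):
    """Center position of a boundary curve, taken here as the mean of
    its sample points (one reasonable reading of 'center position')."""
    xs = [p[0] for p in curve]
    ys = [p[1] for p in curve]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def eyelid_trajectory(curves):
    """Connect the centers of consecutive frames' boundary curves into a
    trajectory, returned as the ordered list of center points; each
    adjacent pair of points is one segment of the trajectory."""
    return [curve_center(c) for c in curves]
```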
4. The method according to claim 3, wherein before determining the center position of each boundary curve among the boundary curves of the upper eyelid in the plurality of frames of images, the method further comprises:
placing the captured boundary curves of the upper eyelid of the plurality of frames of images in one plane, so that the two endpoints of the boundary curve of the upper eyelid of each frame image in the plurality of frames of images respectively coincide.
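The claim requires only that the curves be placed in one plane with their endpoints coinciding; it does not fix the transform. One way to achieve this, sketched below as an assumption, is a similarity transform (translation, rotation, and uniform scale) that maps each curve's two endpoints onto a reference pair, conveniently computed with complex arithmetic.

```python
def superpose(curve, ref_p0, ref_p1):
    """Map a curve's two endpoints onto ref_p0 and ref_p1 with a
    similarity transform (translation + rotation + uniform scale),
    so curves from different frames overlay with coincident endpoints.
    The curve is a list of (x, y) points; endpoints are first and last."""
    p0 = complex(*curve[0])
    p1 = complex(*curve[-1])
    q0 = complex(*ref_p0)
    q1 = complex(*ref_p1)
    a = (q1 - q0) / (p1 - p0)  # combined rotation and scale factor
    b = q0 - a * p0            # translation
    out = []
    for x, y in curve:
        z = a * complex(x, y) + b
        out.append((z.real, z.imag))
    return out
```

Since the eye corners change little between frames, in practice `a` stays close to 1 and the transform is nearly a pure translation.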
5. The method according to claim 4, wherein determining whether a blink occurs according to the change trajectory of the upper eyelid of the eyes, wherein if the dwell time of the upper eyelid at any position in the change trajectory exceeds the first preset time, and the upper eyelid leaves said position and returns to it within the second preset time, it is determined that a blink occurs, comprises:
determining whether a blink occurs according to the change trajectory of the center point of the upper eyelid, wherein if the dwell time of the center point of the upper eyelid at any position exceeds the first preset time, and the center point leaves said position and returns to it within the second preset time, it is determined that a blink occurs.
6. The method according to any one of claims 1, 2, 4 and 5, wherein after capturing the boundary curve of the upper eyelid in consecutive frames of images in a video stream, the method further comprises:
capturing the boundary curve of the lower eyelid of the first frame image in the consecutive frames of images;
determining the circumscribed rectangle of the region formed by the boundary curve of the lower eyelid of the first frame image and the boundary curve of the upper eyelid of the first frame image;
calculating the aspect ratio of the circumscribed rectangle; and
when the aspect ratio is within a preset range, performing the operation of simulating the change of the upper eyelid of the eyes according to the boundary curves of the upper eyelid in the plurality of frames of images.
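The aspect-ratio pretreatment can be sketched as a bounding-box check over both eyelid curves. The accepted range `[lo, hi]` below is an illustrative assumption; the patent only says the ratio must fall within "a preset range" (eyes are much wider than they are tall, so a ratio well above 1 is plausible).

```python
def is_eye_region(upper_curve, lower_curve, lo=2.0, hi=6.0):
    """Decide whether the region enclosed by the two eyelid boundary
    curves looks like an eye, via the aspect ratio (width / height) of
    its axis-aligned circumscribed rectangle. The [lo, hi] range is an
    illustrative assumption, not a value from the patent."""
    pts = list(upper_curve) + list(lower_curve)
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    width = max(xs) - min(xs)
    height = max(ys) - min(ys)
    if height == 0:
        return False  # degenerate region cannot be an open eye
    return lo <= width / height <= hi
```

A wide, shallow region (e.g. 20 px wide and 6 px tall, ratio about 3.3) passes, while a square region does not, filtering out non-eye detections before the trajectory simulation runs.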
7. A blink recognition apparatus, characterized by comprising:
a capture module, configured to capture the boundary curve of the upper eyelid in consecutive frames of images in a video stream;
a simulation module, configured to simulate the change trajectory of the upper eyelid of the eyes according to the boundary curves of the upper eyelid in the plurality of frames of images; and
a judgment module, configured to determine whether a blink occurs according to the change trajectory of the upper eyelid of the eyes, wherein if the dwell time of the upper eyelid at any position in the change trajectory exceeds a first preset time, and the upper eyelid leaves said position and returns to it within a second preset time, it is determined that a blink occurs.
8. The apparatus according to claim 7, wherein the capture module is configured to capture the boundary curve of the upper eyelid in consecutive frames of images in a video stream, specifically:
the capture module is configured to perform edge detection on each frame of the consecutive frames of images using an edge detection operator, to obtain the boundary curve of the upper eyelid in the plurality of frames of images.
9. A device, characterized by comprising a processor and a readable storage medium, wherein the readable storage medium is configured to store program instructions, and the processor is configured to invoke the program instructions to perform the method according to any one of claims 1-6.
10. A readable storage medium, characterized in that the readable storage medium is configured to store program instructions which, when run on a processor, implement the method according to any one of claims 1-6.
CN201811429644.7A 2018-11-26 2018-11-26 A blink recognition method, apparatus, device and readable storage medium Pending CN109543629A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811429644.7A CN109543629A (en) 2018-11-26 2018-11-26 A blink recognition method, apparatus, device and readable storage medium


Publications (1)

Publication Number Publication Date
CN109543629A 2019-03-29

Family

ID=65850591

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811429644.7A Pending CN109543629A (en) 2018-11-26 2018-11-26 A blink recognition method, apparatus, device and readable storage medium

Country Status (1)

Country Link
CN (1) CN109543629A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11147428A (en) * 1997-11-18 1999-06-02 Mitsubishi Motors Corp Detection method for awake state
JP2009279099A (en) * 2008-05-20 2009-12-03 Asahi Kasei Corp Blinking kind identifying device, blinking kind identifying method, and blinking kind identifying program
JP2011229741A (en) * 2010-04-28 2011-11-17 Toyota Motor Corp Instrument for estimating sleepiness and method for estimating sleepiness
CN103092160A (en) * 2012-12-27 2013-05-08 深圳市元征软件开发有限公司 Vehicle-mounted monitoring system with eye identification, vehicle-mounted monitoring method and vehicle-mounted terminal
CN105224285A (en) * 2014-05-27 2016-01-06 北京三星通信技术研究有限公司 Device and method for detecting eye open/closed state
CN106446822A (en) * 2016-09-20 2017-02-22 西安科技大学 Blink detection method based on circle fitting
CN106687037A (en) * 2014-06-20 2017-05-17 弗劳恩霍夫应用研究促进协会 Device, method, and computer program for detecting momentary sleep
CN107809952A (en) * 2015-06-22 2018-03-16 罗伯特·博世有限公司 Method and device for distinguishing blink events from instrument gazes using the eye opening width


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Cheng Bo; Zhang Guangyuan; Feng Ruijia; Li Jiawen; Zhang Xibo: "Real-time Driver Fatigue Monitoring Based on Eye State Recognition", Automotive Engineering, no. 11 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110472491A (en) * 2019-07-05 2019-11-19 深圳壹账通智能科技有限公司 Abnormal face detecting method, abnormality recognition method, device, equipment and medium
CN110399812A (en) * 2019-07-08 2019-11-01 中国平安财产保险股份有限公司 Face characteristic intelligent extract method, device and computer readable storage medium
CN110399812B (en) * 2019-07-08 2023-05-30 中国平安财产保险股份有限公司 Intelligent face feature extraction method and device and computer readable storage medium
EP3804608A1 (en) * 2019-10-07 2021-04-14 Optos PLC System, method, and computer-readable medium for rejecting full and partial blinks for retinal tracking
CN112700400A (en) * 2019-10-07 2021-04-23 奥普托斯股份有限公司 System and method for rejection of full and partial blinks in retinal tracking
US11568540B2 (en) 2019-10-07 2023-01-31 Optos Plc System, method, and computer-readable medium for rejecting full and partial blinks for retinal tracking
CN114120386A (en) * 2020-08-31 2022-03-01 腾讯科技(深圳)有限公司 Face recognition method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN105518708B Method, device and computer program product for verifying a living human face
CN109543629A A blink recognition method, apparatus, device and readable storage medium
CN110223322B Image recognition method and device, computer equipment and storage medium
CN102831439B Gesture tracking method and system
CN109961034A Video object detection method based on convolutional gated recurrent neural units
CN107895160A Face detection and tracking device and method
CN109635727A Facial expression recognition method and device
CN110119672A Embedded fatigue state detection system and method
CN106469298A Age recognition method and device based on face images
CN105809144A Gesture recognition system and method using action segmentation
CN108182409A Liveness detection method, device, equipment and storage medium
CN106778496A Liveness detection method and device
CN107330371A Method, device and storage device for acquiring facial expressions of a 3D face model
CN108124486A Cloud-based face liveness detection method, electronic device and program product
CN105426882B Method for quickly locating human eyes in face images
CN109377429A Smart evaluation system for quality-oriented education based on face recognition
CN109325462A Face recognition liveness detection method and device based on the iris
CN106599785A Method and device for building a human body 3D feature identity information database
CN109886153A Real-time face detection method based on deep convolutional neural networks
CN110458140A Site satisfaction evaluation method and apparatus based on expression recognition
KR20200012355A Online lecture monitoring method using constrained local model and Gabor wavelets-based face verification process
CN112001215A Text-independent speaker identification method based on three-dimensional lip movement
CN106874867A Adaptive face detection and tracking fusing skin color and contour screening
CN110222608A Intelligent processing method for eyesight testing on a self-service examination machine
CN109298783A Annotation monitoring method and device based on expression recognition, and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination