CN104809458A - Pupil center positioning method and pupil center positioning device - Google Patents

Pupil center positioning method and pupil center positioning device

Info

Publication number
CN104809458A
Authority
CN
China
Prior art keywords
pixel
image
point
ocular
inner eye
Prior art date
Legal status
Granted
Application number
CN201410834735.4A
Other languages
Chinese (zh)
Other versions
CN104809458B (en)
Inventor
盛斌
夏立
殷本俊
方奎
Current Assignee
Huawei Technologies Co Ltd
Shanghai Jiaotong University
Original Assignee
Huawei Technologies Co Ltd
Shanghai Jiaotong University
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd and Shanghai Jiaotong University
Priority to CN201410834735.4A
Publication of CN104809458A
Application granted
Publication of CN104809458B
Legal status: Expired - Fee Related

Landscapes

  • Image Analysis (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The embodiment of the invention discloses a pupil center positioning method and a pupil center positioning device, which are used for positioning the pupil center quickly and accurately. The method disclosed by the embodiment of the invention comprises the steps of: positioning an inner eye corner point of a target image, wherein the inner eye corner point is the eye corner point at the near end of the nose bridge in the horizontal direction; intercepting an eye region image from the target image according to the inner eye corner point; and calculating the gradient of pixel points in the eye region image according to differences in image gray scale of the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixel points.

Description

Pupil center localization method and device
Technical field
The present invention relates to the field of image processing, and in particular to a pupil center localization method and device.
Background technology
Pupil localization is the first and one of the most important steps in many computer vision applications, such as face recognition, facial feature tracking, and facial expression analysis; iris detection and localization likewise cannot do without it. The accuracy of pupil center localization directly and profoundly affects the subsequent processing and analysis. In fatigue-driving applications, automatically recovering the position and state of the human eye is also an important research topic.
Pupil center localization is the first step of gaze tracking. A gaze tracking system provides a powerful analysis tool for studying cognitive processes and information transfer in real time. Gaze tracking has two main classes of applications: diagnostic analysis and human-computer interaction. A diagnostic eye-movement tracking system provides a strong, objective, and quantifiable way to record a reader's gaze. The information provided by such a system has major application value in many fields, for example analyzing what people attend to when watching advertisements, analyzing how operators scan instrument panels, interacting with computers, and studying human attention in general. A most representative example: people with physical disabilities can use an eye-movement tracking system to operate a computer and accomplish many tasks such as typing.
In the prior art, there are various pupil center localization methods, which can be summarized into three main categories.
First, electrodes are placed on the skin close to the eyes. When the eyes move, a potential difference arises on the two sides of the eyeball, and the direction and distance of eye movement are judged by measuring the difference in this potential.
Second, a mechanical device is placed on the eye. The motion of the eyeball is transferred directly to this device, which then measures the trajectory of the motion.
Third, image-based methods. This is the main direction of current eye-tracking development. Image-based methods capture pictures of a person's eyes and head with a camera and use a computer to analyze the motion of the eyes and the head, thereby locking onto the position at which the eyes are gazing.
At present, image-based pupil center localization mainly includes the following methods.
(1) Shape-based methods;
This approach processes only local features of the eye. Such a feature can be an edge, an eye corner, or a fixed point screened out by a specific filter. The limbus, i.e., the boundary between the iris and the sclera, is often used as such a feature. Shape-based methods can be further divided into fixed-shape and deformable-shape methods.
The pupil is often simplified to an ellipse or even a circle (a circle being a special case of an ellipse), so a simple elliptical model can be used. However, this simplified model cannot handle some special cases: for example, when the captured pupil is very small or overlaps with the eyelid, the simplified model fails. Moreover, the pupil and iris are not perfect circles or ellipses, and when a person's eyes turn away, the pupil and iris deform considerably, so the simplified model no longer applies. To address this, deformable-shape models have been proposed, which attempt to describe the pupil and iris more accurately with more parameters. This approach indeed looks more reasonable and is in general more accurate and more general, but it has its own defects: more parameters mean a larger computational load, and a high-contrast input image is required. Also, this method still cannot handle the case in which most of the pupil is occluded.
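For illustration, a minimal sketch of the fixed-shape (elliptical) model follows, assuming OpenCV in Python; the file name and threshold value are illustrative assumptions, not values given in the patent.

```python
import cv2

# Threshold the dark pupil region, take the largest contour, fit an ellipse.
eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)
_, mask = cv2.threshold(eye, 40, 255, cv2.THRESH_BINARY_INV)   # pupil is dark
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
if contours:
    largest = max(contours, key=cv2.contourArea)
    if len(largest) >= 5:                          # fitEllipse needs 5+ points
        (cx, cy), (w, h), angle = cv2.fitEllipse(largest)
        print("pupil center estimate:", (cx, cy))
```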
(2) Feature-based methods;
Feature-based methods make full use of the characteristics of the human eye and identify a series of distinct eye features. Commonly used features include the limbus, the pupil, and corneal reflections. The pupil is black, the surrounding iris is grayish, and the outermost sclera is generally white; locating the pupil center through these gray-scale differences is one such method. For example, the gray-scale gradient of the whole eye region can be computed, and the center of the gradient field can be taken as the pupil center. Feature-based methods can handle pupil occlusion (e.g., during blinks), and experiments show that they are robust to changes in illumination. However, the accuracy of the pupil center located by feature-based methods depends on the resolution of the input image, and high resolution usually means a larger computational load; this is an inherent trade-off that must be balanced.
(3) Hybrid methods;
Hybrid methods, as the name suggests, combine the above approaches so that their strengths compensate for each other's weaknesses. For example, combining shape with features yields the so-called parts-based methods, which attempt to use shape to build a unified model for matching specific images.
(4) Infrared-light methods;
Every stage of this approach, including detection, tracking, and localization, relies on an infrared light source. Methods relying solely on visible light are called passive methods, while the others are called active methods. When visible light is weak, the pupil and iris are hard to tell apart, but under an infrared light source they can be distinguished easily. Furthermore, the pupil absorbs infrared light while the iris reflects it, forming a bright spot; using the relative position of this spot and the eyeball, the pupil center can be located almost exactly.
However, infrared-based methods have drawbacks. First, if the light source is not properly controlled, it may harm the user. Second, adding a light source is inconvenient for the user and also increases hardware cost. Finally, although an infrared source greatly improves localization accuracy indoors, it performs poorly outdoors, especially where sunlight is strong, and almost fails there. How to reduce or even eliminate the dependence on infrared light sources is an important topic that future eye-tracking system development must face.
The Hough transform is a feature extraction technique, generally used to find shapes of particular types, such as lines, circles, or ellipses. In the prior art, the Hough transform has been used to locate the pupil center.
Taking circles as an example: given a radius value, the Hough transform can find the "best candidates" satisfying the condition. Each candidate circle is assigned a vote count representing the number of points in the image that belong to that circle. Because the exact value of the radius is unknown, the algorithm iterates around an initial estimate. Implementing the method requires the following steps:
First, obtain the eye region of interest. To do this, take the difference of two images, one with the eyes open and one blinking, to obtain a gray-scale map of the eye region; then binarize this image to obtain the eye region of interest.
Second, use the Sobel operator to extract the contour of the eye region, especially the boundary region between the iris and the sclera. Because the contrast between the iris and the sclera is very high, this edge is easy to detect and extract with difference operators such as Sobel or Canny.
Third, process the edge map with the Hough transform to obtain a candidate set of Hough circles matching this boundary. Selecting a suitable Hough circle from the candidate set requires some improvements to the Hough transform: circles of similar quality are grouped by radius and center, each group is fused into a new Hough circle, and the vote counts of all circles in the group are assigned to the fused circle. The circle with the most votes can be taken as the boundary between the iris and the sclera, and the pupil center is then the center of that circle.
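For illustration, the three steps above might look as follows in a minimal OpenCV sketch; HoughCircles applies the Canny edge detector internally, and the file name and parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

eye = cv2.imread("eye_region.png", cv2.IMREAD_GRAYSCALE)
eye = cv2.medianBlur(eye, 5)                       # suppress noise before voting
circles = cv2.HoughCircles(eye, cv2.HOUGH_GRADIENT, dp=1, minDist=30,
                           param1=100,             # upper Canny edge threshold
                           param2=20,              # accumulator (vote) threshold
                           minRadius=10, maxRadius=40)
if circles is not None:
    x, y, r = np.round(circles[0, 0]).astype(int)  # first returned candidate
    print("pupil center estimate:", (x, y), "radius:", r)
```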
Although simplifying the boundary between the iris and the sclera to a circle is convenient for analysis and can reach high accuracy when the user faces the camera, this method depends on the boundary always appearing as a full circle. Once the boundary is occluded, for example when the eyes narrow slightly, the Hough-circle method fails. In addition, when the eyes look sideways, the iris boundary is no longer a full circle but an ellipse compressed along one axis, and Hough-circle matching again produces errors.
Summary of the invention
The embodiments of the present invention provide a pupil center localization method and device for locating the pupil center quickly and accurately.
The pupil center localization method provided by the embodiment of the present invention comprises:
locating an inner eye corner point of a target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
intercepting an eye region image from the target image according to the inner eye corner point; and
calculating the gradient of pixel points in the eye region image according to the differences in image gray scale within the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixel points.
With reference to the first aspect, in a first possible implementation, the intercepting of the eye region image from the target image according to the inner eye corner point comprises:
determining an outer eye corner point according to an eye-corner horizontal distance and the inner eye corner point, the eye-corner horizontal distance being a preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
setting an interception frame bounded by the inner eye corner point and the outer eye corner point according to a preset horizontal-to-vertical ratio of the human eye; and
intercepting the eye region image from the target image using the interception frame.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the determining of the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point comprises:
determining a region image of the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point;
binarizing the region image of the outer eye corner point; and
traversing the pixels in the binarized region image to determine, as the outer eye corner point, the first distinct pixel in the horizontal direction toward the nose bridge.
With reference to the first aspect, in a third possible implementation, the calculating of the gradient of pixel points in the eye region image according to the differences in image gray scale within the eye region image, and the determining of the pupil center in the eye region image according to the gradients of the pixel points, comprise:
1) selecting a first pixel, the first pixel being a pixel in the eye region image;
2) selecting a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determining a displacement vector from the first pixel to the second pixel;
4) determining a gradient vector of the second pixel, the gradient vector representing the direction of gray-scale change at the second pixel;
5) multiplying the displacement vector by the gradient vector to obtain a vector product;
6) if there is a second pixel in the eye region image that has not yet been selected, performing steps 2) to 5) again; if no unselected second pixel remains in the eye region image, summing the absolute values or square values of the vector products corresponding to each second pixel to obtain a center-point weight;
7) if there is a first pixel in the eye region image that has not yet been selected, performing steps 2) to 6) again; if no unselected first pixel remains in the eye region image, determining the first pixel with the largest center-point weight as the pupil center in the eye region image.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation,
the determining of the displacement vector from the first pixel to the second pixel comprises:
determining the displacement vector from the first pixel to the second pixel and normalizing the displacement vector; and
the determining of the gradient vector of the second pixel comprises:
determining the gradient vector of the second pixel and normalizing the gradient vector.
With reference to the first aspect, in a fifth possible implementation, the locating of the inner eye corner point of the target image comprises:
locating the inner eye corner point of the target image using the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
With reference to the first aspect, in a sixth possible implementation, the locating of the inner eye corner point of the target image comprises:
calculating the eigenvalues of pixels in the target image according to a corner detection method; and
determining the pixel with the largest eigenvalue as the inner eye corner point of the target image.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation, after the eigenvalues of pixels in the target image are calculated according to the corner detection method, the method comprises:
obtaining the m pixels with the largest eigenvalues, m being an integer greater than 5 and less than 20;
summing the lengths of the m*(m-1)/2 line segments formed by every two of the m pixels; and
if the result of the summation is not within a preset range, not performing pupil center localization on the target image, the preset range being the threshold range within which the target image is an eye-open image.
The pupil center localization device provided by the embodiment of the present invention comprises:
an eye corner locating unit, configured to locate the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
a region determination unit, configured to intercept the eye region image from the target image according to the inner eye corner point; and
a center locating unit, configured to calculate the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image, and determine the pupil center in the eye region image according to the gradients of the pixels.
With reference to the first aspect, in a first possible implementation, the region determination unit is specifically configured to:
determine the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point, the eye-corner horizontal distance being a preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
set an interception frame bounded by the inner eye corner point and the outer eye corner point according to a preset horizontal-to-vertical ratio of the human eye; and
intercept the eye region image from the target image using the interception frame.
With reference to the first possible implementation of the first aspect, in a second possible implementation, the region determination unit is specifically further configured to:
determine a region image of the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point;
binarize the region image of the outer eye corner point; and
traverse the pixels in the binarized region image to determine, as the outer eye corner point, the first distinct pixel in the horizontal direction toward the nose bridge.
With reference to the first aspect, in a third possible implementation, the center locating unit is specifically configured to:
1) select a first pixel, the first pixel being a pixel in the eye region image;
2) select a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determine the displacement vector from the first pixel to the second pixel;
4) determine the gradient vector of the second pixel, the gradient vector representing the direction of gray-scale change at the second pixel;
5) multiply the displacement vector by the gradient vector to obtain a vector product;
6) if there is a second pixel in the eye region image that has not yet been selected, perform steps 2) to 5) again; if no unselected second pixel remains in the eye region image, sum the absolute values or square values of the vector products corresponding to each second pixel to obtain a center-point weight;
7) if there is a first pixel in the eye region image that has not yet been selected, perform steps 2) to 6) again; if no unselected first pixel remains in the eye region image, determine the first pixel with the largest center-point weight as the pupil center in the eye region image.
With reference to the third possible implementation of the first aspect, in a fourth possible implementation, the center locating unit is specifically further configured to:
determine the displacement vector from the first pixel to the second pixel and normalize the displacement vector; and
determine the gradient vector of the second pixel and normalize the gradient vector.
With reference to the first aspect, in a fifth possible implementation, the eye corner locating unit is specifically configured to:
locate the inner eye corner point of the target image using the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
With reference to the first aspect, in a sixth possible implementation, the eye corner locating unit is specifically further configured to:
calculate the eigenvalues of pixels in the target image according to a corner detection method; and
determine the pixel with the largest eigenvalue as the inner eye corner point of the target image.
With reference to the sixth possible implementation of the first aspect, in a seventh possible implementation, the device further comprises:
an eye-open determination unit, configured to obtain the m pixels with the largest eigenvalues, m being an integer greater than 5 and less than 20; sum the lengths of the m*(m-1)/2 line segments formed by every two of the m pixels; and if the result of the summation is not within a preset range, not perform pupil center localization on the target image, the preset range being the threshold range within which the target image is an eye-open image.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantages:
In the embodiments of the present invention, the inner eye corner point of the target image is located and used as the reference point for where the eyes project onto the viewing plane, and the eye region image is then intercepted from the target image according to this reference point, so that an accurate eye region image can still be obtained when the head shakes. Moreover, because the gray scale within the pupil changes in layers from the inside outward, the embodiments of the present invention calculate the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image and determine the pupil center in the eye region image according to these gradients. This enables accurate pupil center localization, and localization remains possible even when the eyes narrow slightly (i.e., when the pupil in the eye region image is not a complete circle).
Accompanying drawing explanation
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings required for describing the embodiments are briefly introduced below. Apparently, the accompanying drawings in the following description show merely some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from these accompanying drawings without creative effort.
Fig. 1 is a schematic flowchart of the pupil center localization method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the pupil in an embodiment of the present invention;
Fig. 3 is another schematic flowchart of the pupil center localization method in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the pupil center calculation in an embodiment of the present invention;
Fig. 5 is another schematic flowchart of the pupil center localization method in an embodiment of the present invention;
Fig. 6 is a schematic diagram of feature point acquisition in an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of the pupil center localization device in an embodiment of the present invention;
Fig. 8 is a schematic diagram of a computer implementing the pupil center localization method in an embodiment of the present invention.
Embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings in the embodiments of the present invention. Apparently, the described embodiments are merely some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be noted that the pupil center localization device in the embodiments of the present invention has all the functions described in the embodiments of the pupil center localization method. It may be an independent physical device, a device composed of several physical devices, or a program loaded on a computer; this is not specifically limited herein.
Referring to Fig. 1, an embodiment of the pupil center localization method in the embodiment of the present invention comprises the following steps.
101. Locate the inner eye corner point of the target image.
The pupil center localization device locates the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction.
The target image is an image containing eye features; it may specifically be a face image or an upper-body image. The specific type of image depends on the picture captured by the imaging device in the practical application and is not limited herein.
To a computer device, the inner eye corner point is an obvious feature point in a face image and can be identified from the face image by a corner detection method.
Specifically, in the target image captured by the imaging device, the inner eye corner may cover multiple pixels, and the pupil center localization device may select one of these pixels as the inner eye corner point.
Specifically, the inner eye corner point of the target image may be located with a corner detection method, including: the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
Each of these algorithms has its own strengths and weaknesses; in the embodiments of the present invention, the primary concern is whether the feature points an algorithm screens out concentrate near the eye region. In practical applications, if the pupil center localization application also has computing-speed requirements, an algorithm with a higher computing speed is preferred.
102. Intercept the eye region image from the target image according to the inner eye corner point.
The pupil center localization device intercepts the eye region image from the target image according to the inner eye corner point.
In the embodiments of the present invention, the pupil center to be located moves constantly relative to the person's head (that is, even if the head stays still, the eyeball rotates to view content at different positions on the screen). In practical applications, even when a person keeps viewing the screen, the head often shakes. While the imaging device acquires the target image and locates the eye image within it, this head movement makes it difficult to obtain an accurate eye region image (i.e., the obtained eye region image contains pixels that do not belong to eye features), which introduces a large error into the subsequent pupil center localization. Therefore, given that the pupil center moves relative to the head and the head moves relative to the viewing screen, locating the eye region image requires finding a facial feature that is relatively fixed within the face and that moves relative to the pupil center, so that there is a reliable reference when the eye region is projected onto the screen being observed; in the subsequent acquisition of the eye region image, the image can then be intercepted according to this reference point and thus obtained accurately. In the embodiments of the present invention, the inner eye corner point serves as this reference point.
Specifically, because the eyes have a relatively fixed shape, the proportions of the eye shape can be obtained from a large amount of data collection, and the pupil center localization device can intercept the eye region image from the target image according to these proportions and the inner eye corner point.
103. Determine the pupil center in the eye region image.
The pupil center localization device calculates the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image, and determines the pupil center in the eye region image according to the gradients of the pixels.
Fig. 2 is a typical front view of a human eye. The cornea is the transparent coat wrapping the eyeball. The iris is the muscle tissue controlling the pupil size; like the aperture of a camera, it controls how much light enters the eye. The iris is colored and varies from person to person, so it is often used as a powerful biometric. The sclera is the outer boundary of the eyeball and usually appears white. As shown in Fig. 2, the gray scale within the pupil changes in layers from the inside outward (Fig. 2 is only schematic; in practice the pupil has richer layered variation). Therefore, the pupil center can be found according to the direction of gray-scale change of the pupil in the eye region image (from the inside outward, or from the outside inward).
In the embodiments of the present invention, the inner eye corner point of the target image is located and used as the reference point for where the eyes project onto the viewing plane, and the eye region image is then intercepted from the target image according to this reference point, so that an accurate eye region image can still be obtained when the head shakes. Moreover, because the gray scale within the pupil changes in layers from the inside outward, the embodiments of the present invention calculate the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image and determine the pupil center in the eye region image according to these gradients. This enables accurate pupil center localization, and localization remains possible even when the eyes narrow slightly (i.e., when the pupil in the eye region image is not a complete circle).
The following embodiment describes in detail the method for acquiring the eye region image and the method for locating the pupil center from the eye region image in the embodiment of the present invention. Referring to Fig. 3, another embodiment of the pupil center localization method in the embodiment of the present invention comprises the following steps.
301. Locate the inner eye corner point of the target image.
The pupil center localization device locates the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction.
The target image is an image containing eye features; it may specifically be a face image or an upper-body image. The specific type of image depends on the picture captured by the imaging device in the practical application and is not limited herein.
302. Determine the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point.
The pupil center localization device determines the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point. The eye-corner horizontal distance is a preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point is the eye corner point at the distal end of the nose bridge in the horizontal direction.
The eye-corner horizontal distance is the expected value of the horizontal distance between the inner and outer eye corner points, obtained from a large amount of data collection.
Specifically, the method for determining the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point may be as follows (a sketch follows these steps):
First, determine the region image of the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point. For example, the approximate position of the outer eye corner can be estimated from the eye-corner horizontal distance and the inner eye corner point, and a circular or rectangular region centered on this position is then constructed; this circular or rectangular region is the region image of the outer eye corner point.
Second, binarize the region image of the outer eye corner point.
Third, traverse the pixels in the binarized region image to determine, as the outer eye corner point, the first distinct pixel in the horizontal direction toward the nose bridge. For example, if the image is binarized to black and white and the eye corner part binarizes to black, this distinct pixel is that black point.
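A minimal Python sketch of the three steps above follows, assuming OpenCV, a grayscale face image, and a left eye whose outer corner lies to the left of the inner corner; the corner distance, region size, and threshold are all illustrative assumptions.

```python
import cv2
import numpy as np

def find_outer_corner(gray, inner_corner, corner_dist=60, half=15, thresh=60):
    ix, iy = inner_corner
    ex = ix - corner_dist                         # estimated outer corner position
    region = gray[iy - half:iy + half, ex - half:ex + half]
    _, binary = cv2.threshold(region, thresh, 255, cv2.THRESH_BINARY)
    # Traverse columns horizontally toward the nose bridge; the first column
    # containing a black (eye) pixel yields the outer eye corner point.
    for col in range(binary.shape[1]):
        rows = np.where(binary[:, col] == 0)[0]
        if rows.size:
            return (ex - half + col, iy - half + rows[0])
    return None
```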
303. Set an interception frame bounded by the inner eye corner point and the outer eye corner point according to a preset horizontal-to-vertical ratio of the human eye.
The pupil center localization device sets the interception frame bounded by the inner eye corner point and the outer eye corner point according to the preset horizontal-to-vertical ratio of the human eye.
Specifically, the interception frame may be a rectangular frame or an eye-shaped frame (as shown in Fig. 2).
304. Intercept the eye region image from the target image using the interception frame.
The pupil center localization device intercepts the eye region image from the target image using the interception frame.
In the embodiments of the present invention, an eye region image that just contains the whole eye is intercepted with the inner and outer eye corner points as boundaries. "Just contains" means there is neither a redundant region nor any omitted important eye region. This interception step also greatly improves the processing speed. More importantly, no matter how the user's head moves, as long as the inner and outer eye corner points are located accurately, the image intercepted according to the eye corners hardly changes, which lets the method of the embodiments of the present invention remain robust when the user's head shakes.
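Continuing the sketch above, steps 303 and 304 might look as follows; the 3:1 horizontal-to-vertical ratio of the frame is an illustrative assumption, not a value given in the patent.

```python
def crop_eye_region(gray, inner_corner, outer_corner, ratio=3.0):
    (ix, iy), (ex, ey) = inner_corner, outer_corner
    left, right = min(ix, ex), max(ix, ex)       # frame bounded by the corners
    width = right - left
    height = int(width / ratio)                  # preset horizontal:vertical ratio
    top = (iy + ey) // 2 - height // 2           # center the frame vertically
    return gray[top:top + height, left:right]    # the intercepted eye region
```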
305. Determine the pupil center in the eye region image.
The pupil center localization device calculates the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image, and determines the pupil center in the eye region image according to the gradients of the pixels.
For example, the specific process of determining the pupil center in the eye region image may be:
1) select a first pixel, the first pixel being a pixel in the eye region image;
2) select a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determine the displacement vector from the first pixel to the second pixel; further, to give every pixel equal weight (i.e., to consider only the directional influence of each pixel), the displacement vector may also be normalized;
4) determine the gradient vector of the second pixel, the gradient vector representing the direction of gray-scale change at the second pixel; further, to improve robustness under different illumination conditions and contrasts, the gradient vector may also be normalized;
5) multiply the displacement vector by the gradient vector to obtain a vector product;
6) if there is a second pixel in the eye region image that has not yet been selected, perform steps 2) to 5) again; if no unselected second pixel remains in the eye region image, sum the absolute values or square values of the vector products corresponding to each second pixel to obtain a center-point weight;
7) if there is a first pixel in the eye region image that has not yet been selected, perform steps 2) to 6) again; if no unselected first pixel remains in the eye region image, determine the first pixel with the largest center-point weight as the pupil center in the eye region image.
Specifically, suppose the pixel at the pupil center of the eye region image is the first pixel C, a pixel in the eye region image other than the first pixel is the second pixel x_i, the gradient vector of the second pixel is denoted g_i, and the displacement vector from the first pixel C to the second pixel x_i is d_i. As shown in Fig. 4, when g_i and d_i point in the same direction, the value of the vector product of g_i and d_i is maximal.
In practical applications, because the gray scale within the pupil changes in layers from the inside outward, only when the first pixel C lies at the pupil center can it be guaranteed, to the greatest extent, that g_i and d_i point in the same direction for an arbitrary second pixel x_i (i.e., the direction toward the pixel coincides with the direction of gray-scale change). Therefore, given arbitrary positions x_i, i ∈ {1, ..., N} in the target image, where N is the number of all pixels, the optimal center of this ring-like structure can be expressed as:
C^* = \max_C \left\{ \sum_{i=1}^{N} \left( d_i^T g_i \right)^2 \right\}   Formula (1)
where the expression for d_i is:
d_i = \frac{x_i - C}{\| x_i - C \|_2}, \quad \forall i: \| g_i \|_2 = 1   Formula (2)
where d_i^T denotes the transpose of d_i, ∀i means "for any i", and \| g_i \|_2 denotes the 2-norm of g_i.
In some cases, Formula (1) above may fail to attain the true maximum, or may attain only a wrong local maximum. Therefore, some prior knowledge can be used to improve robustness. Because the gray value of the pupil is smaller (darker) than that of the iris and the skin, each possible pupil center is given a weight W_C, so that pixels with small gray values (darker points) have a higher probability of becoming the pupil center than pixels with large gray values (brighter points). This yields:
C^* = \max_C \left\{ \sum_{i=1}^{N} W_C \left( d_i^T g_i \right)^2 \right\}   Formula (3)
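For illustration, a brute-force numpy sketch of Formulas (1) to (3) follows; the gradient-magnitude mask used to keep only significant gradient pixels is an assumption added for speed, not part of the patent.

```python
import numpy as np

def locate_pupil_center(eye_gray):
    eye = eye_gray.astype(np.float64)
    gy, gx = np.gradient(eye)                   # gradient vectors g_i
    mag = np.hypot(gx, gy)
    mask = mag > np.mean(mag)                   # keep significant gradients only
    gx, gy = gx[mask] / mag[mask], gy[mask] / mag[mask]   # normalize each g_i
    ys, xs = np.nonzero(mask)
    weights = 255.0 - eye                       # W_C: darker pixels weigh more
    best, best_score = None, -1.0
    h, w = eye.shape
    for cy in range(h):                         # every candidate center C
        for cx in range(w):
            dx, dy = xs - cx, ys - cy           # displacement vectors d_i
            norm = np.hypot(dx, dy)
            norm[norm == 0] = 1.0               # avoid dividing by zero at C
            dot = (dx * gx + dy * gy) / norm    # normalized d_i^T g_i
            score = weights[cy, cx] * np.sum(dot ** 2)
            if score > best_score:
                best_score, best = score, (cx, cy)
    return best                                 # (x, y) of the pupil center
```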
In practical applications, if the face in the target image has its eyes closed, pupil center localization based on that target image is difficult. The following embodiment describes in detail the method for excluding eye-closed images in the embodiment of the present invention. Referring to Fig. 5, another embodiment of the pupil center localization method in the embodiment of the present invention comprises the following steps.
501. Determine that the current target image is not an eye-closed image.
The pupil center localization device obtains the current target image and performs feature point detection on the target image.
For example, in the embodiment of the present invention, feature points are detected with the GFTT algorithm, specifically as follows: given an arbitrary displacement vector (u, v), where u and v denote the shifts in the x and y directions respectively, the gray-scale difference of a pixel in all directions can be expressed as:
E(u, v) = \sum_{x, y} w(x, y) \left[ I(x + u, y + v) - I(x, y) \right]^2   Formula (4)
where w(x, y) is a window function, which may be a rectangular window or a Gaussian function with given weights, and I(x, y) denotes the gray value at position (x, y) in the given image. Detecting a feature point (i.e., a corner in the GFTT algorithm) is equivalent to finding a point (x, y) at which E(u, v) attains its maximum.
Expanding the second term on the right-hand side by Taylor series gives:
E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}   Formula (5)
M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}   Formula (6)
where I_x and I_y are the derivatives of the image in the x and y directions respectively. To determine whether a given window contains a feature point, a discriminant is constructed:
R = \min(\lambda_1, \lambda_2)   Formula (7)
where λ_1 and λ_2 are the eigenvalues of the matrix M. If R is greater than a threshold, the point is regarded as a feature point.
In practical applications, the feature points screened out can be limited by setting thresholds or conditions in the detection function, for example:
First, limit the minimum distance between detected feature points to filter out overly dense groups of feature points; in our tests this value was set to 17 pixels. Second, limit the maximum number of screened feature points to N (determined by the blink detection model), i.e., screen out at most the N feature points most likely to satisfy the conditions. Third, set the quality level to 0.1, i.e., when calculating the eigenvalues of the feature points, the minimum eigenvalue must not be less than one tenth of the maximum eigenvalue. These settings are illustrated in the sketch below.
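A minimal sketch of this GFTT screening follows, assuming OpenCV's goodFeaturesToTrack in Python; the value of N and the image file name are assumptions, not values fixed by the patent.

```python
import cv2

gray = cv2.imread("face.png", cv2.IMREAD_GRAYSCALE)
N = 10                                       # assumed blink-model feature count
corners = cv2.goodFeaturesToTrack(gray,
                                  maxCorners=N,      # at most N feature points
                                  qualityLevel=0.1,  # min eigenvalue >= 0.1 * max
                                  minDistance=17)    # 17-pixel minimum spacing
points = [tuple(p.ravel()) for p in corners] if corners is not None else []
```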
Specifically, if the only feature point to be screened is the inner eye corner point, the maximum number of screened feature points is set to 1; the pupil center localization device then outputs the pixel with the largest eigenvalue among the pixels of the target image, and this pixel is the inner eye corner point.
Specifically, to judge whether the target image is an eye-closed image, the m pixels with the largest eigenvalues are obtained, m being an integer greater than 5 and less than 20, and the lengths of the m*(m-1)/2 line segments formed by every two of the m pixels are summed (i.e., the Wiener value of the m pixels is computed). If the result of the summation is not within a preset range, the target image is determined to be an eye-closed image and pupil center localization is not performed on it; if the result is within the preset range, the target image is determined not to be an eye-closed image. The preset range is the threshold range within which the target image is an eye-open image.
In practical applications, when the eyes are open, the m feature points gather around the eyeball (as shown in Fig. 6) and the Wiener value of the connected graph is small; when the eyes are closed, the m feature points no longer gather around the eyeball but scatter outward, and the Wiener value of the connected graph suddenly becomes large.
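A minimal sketch of this eye-open test follows: it sums the pairwise distances of the m strongest feature points (the Wiener value) and checks the preset range, whose bounds here are illustrative assumptions.

```python
import itertools
import math

def is_eye_open(points, lo=0.0, hi=500.0):
    # Wiener value: sum of the lengths of all m*(m-1)/2 segments between points.
    wiener = sum(math.dist(p, q) for p, q in itertools.combinations(points, 2))
    return lo <= wiener <= hi
```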
502. Locate the inner eye corner point of the target image.
The pupil center localization device locates the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction.
Specifically, with reference to the feature point detection method described in step 501, if the only feature point to be screened is the inner eye corner point, the maximum number of screened feature points is set to 1; the pupil center localization device then outputs the pixel with the largest eigenvalue among the pixels of the target image, and this pixel is the inner eye corner point.
503. Intercept the eye region image from the target image according to the inner eye corner point.
The pupil center localization device intercepts the eye region image from the target image according to the inner eye corner point.
Specifically, because the eyes have a relatively fixed shape, the proportions of the eye shape can be obtained from a large amount of data collection, and the pupil center localization device can intercept the eye region image from the target image according to these proportions and the inner eye corner point.
504. Determine the pupil center in the eye region image.
The pupil center localization device calculates the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image, and determines the pupil center in the eye region image according to the gradients of the pixels.
The pupil center localization device implementing the above pupil center localization method in the embodiment of the present invention is described below with reference to Fig. 7. It should be noted that the methods described in the embodiments of the pupil center localization method above may all be implemented in the pupil center localization device of the present invention. An embodiment of the pupil center localization device in the embodiment of the present invention comprises:
an eye corner locating unit 701, configured to locate the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
a region determination unit 702, configured to intercept the eye region image from the target image according to the inner eye corner point; and
a center locating unit 703, configured to calculate the gradient of pixels in the eye region image according to the gray-scale differences in the eye region image, and determine the pupil center in the eye region image according to the gradients of the pixels.
Further, the region determination unit 702 is specifically configured to:
determine the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point, the eye-corner horizontal distance being a preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
set an interception frame bounded by the inner eye corner point and the outer eye corner point according to a preset horizontal-to-vertical ratio of the human eye; and
intercept the eye region image from the target image using the interception frame.
Further, the region determination unit 702 is specifically further configured to:
determine a region image of the outer eye corner point according to the eye-corner horizontal distance and the inner eye corner point;
binarize the region image of the outer eye corner point; and
traverse the pixels in the binarized region image to determine, as the outer eye corner point, the first distinct pixel in the horizontal direction toward the nose bridge.
Further, the center locating unit 703 is specifically configured to:
1) select a first pixel, the first pixel being a pixel in the eye region image;
2) select a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determine the displacement vector from the first pixel to the second pixel;
4) determine the gradient vector of the second pixel, the gradient vector representing the direction of gray-scale change at the second pixel;
5) multiply the displacement vector by the gradient vector to obtain a vector product;
6) if there is a second pixel in the eye region image that has not yet been selected, perform steps 2) to 5) again; if no unselected second pixel remains in the eye region image, sum the absolute values or square values of the vector products corresponding to each second pixel to obtain a center-point weight;
7) if there is a first pixel in the eye region image that has not yet been selected, perform steps 2) to 6) again; if no unselected first pixel remains in the eye region image, determine the first pixel with the largest center-point weight as the pupil center in the eye region image.
Further, the center locating unit 703 is specifically further configured to:
determine the displacement vector from the first pixel to the second pixel and normalize the displacement vector; and
determine the gradient vector of the second pixel and normalize the gradient vector.
Further, the eye corner locating unit 701 is specifically configured to:
locate the inner eye corner point of the target image using the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
Further, the eye corner locating unit 701 is specifically further configured to:
calculate the eigenvalues of pixels in the target image according to a corner detection method; and
determine the pixel with the largest eigenvalue as the inner eye corner point of the target image.
Further, the device further comprises:
an eye-open determination unit 704, configured to obtain the m pixels with the largest eigenvalues, m being an integer greater than 5 and less than 20; sum the lengths of the m*(m-1)/2 line segments formed by every two of the m pixels; and if the result of the summation is not within a preset range, not perform pupil center localization on the target image, the preset range being the threshold range within which the target image is an eye-open image.
The specific working process of each unit in the embodiment of the present invention is described below.
The eye-open determination unit 704 obtains the current target image and performs feature point detection on the target image. For example, in the embodiment of the present invention, feature points are detected with the GFTT algorithm, specifically as follows: given an arbitrary displacement vector (u, v), where u and v denote the shifts in the x and y directions respectively, the gray-scale difference of a pixel in all directions can be expressed as:
E(u, v) = \sum_{x, y} w(x, y) \left[ I(x + u, y + v) - I(x, y) \right]^2   Formula (4)
where w(x, y) is a window function, which may be a rectangular window or a Gaussian function with given weights, and I(x, y) denotes the gray value at position (x, y) in the given image. Detecting a feature point (i.e., a corner in the GFTT algorithm) is equivalent to finding a point (x, y) at which E(u, v) attains its maximum.
Expanding the second term on the right-hand side by Taylor series gives:
E(u, v) \approx \begin{bmatrix} u & v \end{bmatrix} M \begin{bmatrix} u \\ v \end{bmatrix}   Formula (5)
M = \sum_{x, y} w(x, y) \begin{bmatrix} I_x I_x & I_x I_y \\ I_x I_y & I_y I_y \end{bmatrix}   Formula (6)
where I_x and I_y are the derivatives of the image in the x and y directions respectively. To determine whether a given window contains a feature point, a discriminant is constructed:
R = \min(\lambda_1, \lambda_2)   Formula (7)
where λ_1 and λ_2 are the eigenvalues of the matrix M. If R is greater than a threshold, the point is regarded as a feature point.
In practical applications, the feature points screened out can be limited by setting thresholds or conditions in the detection function, for example:
First, limit the minimum distance between detected feature points to filter out overly dense groups of feature points; in our tests this value was set to 17 pixels. Second, limit the maximum number of screened feature points to N (determined by the blink detection model), i.e., screen out at most the N feature points most likely to satisfy the conditions. Third, set the quality level to 0.1, i.e., when calculating the eigenvalues of the feature points, the minimum eigenvalue must not be less than one tenth of the maximum eigenvalue.
Specifically, if the only feature point to be screened is the inner eye corner point, the maximum number of screened feature points is set to 1; the pupil center localization device then outputs the pixel with the largest eigenvalue among the pixels of the target image, and this pixel is the inner eye corner point.
Specifically, to judge whether the target image is an eye-closed image, the m pixels with the largest eigenvalues are obtained, m being an integer greater than 5 and less than 20, and the lengths of the m*(m-1)/2 line segments formed by every two of the m pixels are summed (i.e., the Wiener value of the m pixels is computed). If the result of the summation is not within a preset range, the target image is determined to be an eye-closed image and pupil center localization is not performed on it; if the result is within the preset range, the target image is determined not to be an eye-closed image. The preset range is the threshold range within which the target image is an eye-open image.
After the target image is determined to be an eye-open image, the eye corner locating unit 701 locates the inner eye corner point of the target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction. If the only feature point to be screened is the inner eye corner point, the maximum number of screened feature points is set to 1; the pupil center localization device then outputs the pixel with the largest eigenvalue among the pixels of the target image, and this pixel is the inner eye corner point.
Area determination unit 702 is according to canthus horizontal range and described inner eye corner point determination external eyes angle point, described canthus horizontal range is default described inner eye corner point and described external eyes angle point distance in the horizontal direction, and described external eyes angle point is in the horizontal direction at the canthus of bridge of the nose far-end point.
Concrete, the method according to canthus horizontal range and described inner eye corner point determination external eyes angle point can be:
First, according to the area image of described canthus horizontal range and described inner eye corner point determination external eyes angle point; Exemplary, can according to the general location at canthus outside described canthus horizontal range and described inner eye corner point estimation, the border circular areas constructed centered by this position again or rectangular area, this border circular areas or rectangular area are then the area image of described external eyes angle point.
Second, binarization is performed on the region image of the outer eye corner point.
Third, the pixels of the binarized region image are traversed to find the first distinct pixel in the horizontal direction toward the nose bridge, which is determined to be the outer eye corner point. For example, if the image is binarized into black and white and the eye corner part is binarized to black, the distinct pixel is that black point.
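The three steps above can be sketched as follows, assuming an upright face, an 8-bit region image already cropped around the estimated outer-corner position, and a binarization that leaves eye pixels black on a white background; the use of Otsu's threshold and all names are our assumptions, not fixed by the text.

```python
import cv2

def find_outer_corner(region_gray, scan_right_to_left):
    # Binarize the outer-corner region image (Otsu picks the threshold).
    _, binary = cv2.threshold(region_gray, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    h, w = binary.shape
    # Traverse columns horizontally toward the nose bridge, starting from
    # the edge away from the nose (for a right eye the nose lies to the
    # left, so scan right to left). The first black pixel encountered is
    # taken as the outer eye corner point.
    cols = range(w - 1, -1, -1) if scan_right_to_left else range(w)
    for x in cols:
        for y in range(h):
            if binary[y, x] == 0:
                return (x, y)
    return None  # no eye pixel found in the region image
```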
The region determination unit 702 then sets a cropping frame bounded by the inner eye corner point and the outer eye corner point, according to the preset ratio of the human eye in the horizontal and vertical directions; finally, the cropping frame is used to crop the eye region image from the target image.
In the embodiment of the present invention, the eye region image that fully contains the whole eye image is cropped with the inner eye corner point and the outer eye corner point as boundaries. "Fully contains" means that the region neither includes redundant area nor omits any important area of the eye. This cropping step also greatly improves processing speed. More importantly, no matter how the user's head moves, as long as the inner and outer eye corner points can be accurately located, the image cropped according to the eye corners hardly changes, so the method of the embodiment of the present invention maintains good robustness even when the user's head shakes.
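A sketch of this cropping step, under our own naming: the two corner points bound the eye horizontally, and the vertical extent follows a preset width-to-height ratio. The 3:1 ratio used below is an assumed placeholder; the text presets a ratio but does not state its value.

```python
def crop_eye_region(image, inner_corner, outer_corner, ratio=3.0):
    # The corners bound the eye horizontally; height follows the preset
    # horizontal:vertical ratio of the human eye (ratio is assumed here).
    (x1, y1), (x2, y2) = inner_corner, outer_corner
    left, right = sorted((x1, x2))
    width = right - left
    height = int(round(width / ratio))
    cy = (y1 + y2) // 2              # center the frame on the corner line
    top = max(cy - height // 2, 0)
    bottom = min(top + height, image.shape[0])
    return image[top:bottom, left:right]
```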
After the eye region image is determined, the center determination unit 703 calculates the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and determines the pupil center in the eye region image according to the gradients of the pixels.
For example, the detailed process of determining the pupil center in the eye region image can be as follows:
1) Select a first pixel, the first pixel being a pixel in the eye region image;
2) Select a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) Determine the displacement vector from the first pixel to the second pixel; further, to give every pixel equal weight (that is, to consider only the directional influence of the pixel), the displacement vector can also be normalized;
4) Determine the gradient vector of the second pixel, the gradient vector representing the direction of grayscale change at the second pixel; further, to improve robustness under different illumination and contrast conditions, the gradient vector can also be normalized;
5) Multiply the displacement vector by the gradient vector to obtain their dot product;
6) If there are second pixels in the eye region image that have not yet been selected, perform steps 2) to 5) again; if no unselected second pixel remains, sum the absolute values or the squared values of the dot products corresponding to all the second pixels to obtain the center point weight of the first pixel;
7) If there are first pixels in the eye region image that have not yet been selected, perform steps 2) to 6) again; if no unselected first pixel remains, determine the first pixel with the largest center point weight as the pupil center in the eye region image.
Specifically, suppose the pixel hypothesized to be the pupil center in the eye region image is the first pixel C, a pixel in the eye region image other than the first pixel is the second pixel x_i, the gradient vector of the second pixel is denoted g_i, and the displacement vector from the first pixel C to the second pixel x_i is d_i. As shown in Figure 4, when g_i and d_i point in the same direction, the dot product of g_i and d_i takes its maximum value.
In practical applications, because the grayscale within the pupil changes in layers from the inside outward, only when the first pixel C is located at the pupil center can g_i and d_i be guaranteed, to the greatest extent, to point in the same direction for an arbitrary second pixel x_i (that is, the direction from pixel to pixel coincides with the direction of grayscale change). Therefore, given arbitrary positions x_i, i ∈ {1, ..., N} in the target image, where N is the number of all pixels, the optimal center of the circular pattern can be expressed as:
$$C^{*} = \arg\max_{C} \left\{ \sum_{i=1}^{N} \left( d_{i}^{T} g_{i} \right)^{2} \right\} \qquad \text{Formula (1)}$$

where the expression for $d_{i}$ is:

$$d_{i} = \frac{x_{i} - C}{\left\| x_{i} - C \right\|_{2}}, \qquad \forall i: \left\| g_{i} \right\|_{2} = 1 \qquad \text{Formula (2)}$$

where $d_{i}^{T}$ denotes the transpose of $d_{i}$, $\forall i$ means "for any $i$", and $\left\| g_{i} \right\|_{2}$ denotes the 2-norm of $g_{i}$.
In some cases, the objective in Formula (1) above may fail to attain its maximum at the true center, or may attain only an erroneous local maximum. Therefore, some prior knowledge can be used to improve robustness. Because the pupil has a smaller (darker) gray value than the iris and the skin, each possible pupil center is given a weight W_C, so that a pixel with a small gray value (a darker point) has a higher probability of becoming the pupil center than a pixel with a large gray value (a brighter point). This yields:
$$C^{*} = \arg\max_{C} \left\{ \sum_{i=1}^{N} W_{C} \left( d_{i}^{T} g_{i} \right)^{2} \right\} \qquad \text{Formula (3)}$$
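A compact Python sketch of Formulas (1) to (3) follows. It is written for clarity rather than speed; the Sobel gradients, the gradient-magnitude cutoff, and the Gaussian-smoothed darkness weight are our own choices (the text fixes none of them), and all names are ours.

```python
import cv2
import numpy as np

def locate_pupil_center(eye_gray):
    eye = eye_gray.astype(np.float64)
    h, w = eye.shape
    # Gradient vectors g_i from grayscale differences, normalized to unit
    # length as required by Formula (2) (||g_i||_2 = 1).
    gx = cv2.Sobel(eye, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(eye, cv2.CV_64F, 0, 1, ksize=3)
    mag = np.hypot(gx, gy)
    mask = mag > 1e-6                   # keep pixels with a real gradient
    gxs, gys = gx[mask] / mag[mask], gy[mask] / mag[mask]
    ys, xs = np.nonzero(mask)
    # Darkness prior W_C of Formula (3): smooth the inverted image so that
    # darker candidate centers receive larger weights.
    weights = 255.0 - cv2.GaussianBlur(eye, (5, 5), 0)
    best_score, best_center = -1.0, (0, 0)
    for cy in range(h):                 # every candidate first pixel C
        for cx in range(w):
            dx, dy = xs - cx, ys - cy   # displacement vectors d_i
            norm = np.hypot(dx, dy)
            valid = norm > 0            # skip the candidate pixel itself
            dot = (dx[valid] * gxs[valid] + dy[valid] * gys[valid]) / norm[valid]
            score = weights[cy, cx] * np.sum(dot ** 2)
            if score > best_score:
                best_score, best_center = score, (cx, cy)
    return best_center                  # (x, y) of the estimated pupil center
```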
Figure 8 is a schematic structural diagram of the pupil center positioning device according to an embodiment of the present invention. The pupil center positioning device can comprise an input unit 810, an output unit 820, a processor 830, a memory 840, and a bus system 850.
The input unit 810 is used to realize interaction between the user and the pupil center positioning device and/or to input information into the device. For example, the input unit 810 can receive numeric or character information input by the user, to produce signal inputs related to user settings or function control. In specific embodiments of the present invention, the input unit 810 can be a touch panel, another human-machine interface such as physical input keys or a microphone, or another external information capture device such as a camera. A touch panel, also called a touch screen, can collect the user's touch or proximity operations on it, such as operations performed by the user on or near the touch panel with a finger, a stylus, or any other suitable object or accessory, and drive the corresponding connected apparatus according to a preset program. Optionally, the touch panel can comprise two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch operation, converts the detected touch operation into an electrical signal, and sends the electrical signal to the touch controller; the touch controller receives the electrical signal from the touch detection apparatus, converts it into contact coordinates, and then sends these to the processor 830. The touch controller can also receive and execute commands sent by the processor 830. In addition, the touch panel can be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. In other embodiments of the present invention, the physical input keys used by the input unit 810 can include but are not limited to one or more of a physical keyboard, function keys (such as volume control keys and power keys), a trackball, a mouse, and a joystick. An input unit 810 in the form of a microphone can collect speech input by the user or from the environment and convert it into commands, in the form of electrical signals, executable by the processor 830.
The output unit 820 includes but is not limited to an image output unit and a sound output unit. The image output unit is used to output text, pictures and/or video. The image output unit can comprise a display panel, for example a display panel configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), a field emission display (FED), and so on. Alternatively, the image output unit can comprise a reflective display, for example an electrophoretic display, or a display using the interferometric modulation of light. The image output unit can comprise a single display or multiple displays of different sizes. In specific embodiments of the present invention, the touch panel used by the input unit 810 can also serve simultaneously as the display panel of the output unit 820. For example, after the touch panel detects a touch or proximity gesture operation on it, the operation is passed to the processor 830 to determine the type of the touch event, and the processor 830 then provides the corresponding visual output on the display panel according to the type of the touch event. The input unit 810 and the output unit 820 can realize the input and output functions of the pupil center positioning device as two separate components, but in some embodiments the touch panel and the display panel can be integrated to realize both functions. For example, the image output unit can display various graphical user interfaces (Graphical User Interface, GUI) as virtual control components, including but not limited to windows, scroll bars, icons and clipboards, for the user to operate by touch.
The memory 840 can comprise a read-only memory and a random access memory, and provides instructions and data to the processor 830. A part of the memory 840 can also comprise a non-volatile random access memory (NVRAM).
The memory 840 stores the following elements, executable modules or data structures, or a subset or superset thereof:
Operation instructions: including various operation instructions, used to implement various operations;
Operating system: including various system programs, used to implement various basic services and to process hardware-based tasks.
In the embodiment of the present invention, by calling the operation instructions stored in the memory 840 (these operation instructions can be stored in the operating system), the processor 830 performs the following operations:
locating the inner eye corner point of a target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
cropping the eye region image in the target image according to the inner eye corner point;
calculating the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixels.
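Purely as an illustration, these three operations compose into the following end-to-end sketch, reusing the helper functions sketched earlier in this document (detect_corner_candidates, crop_eye_region, locate_pupil_center); the fixed eye corner horizontal distance, the left-eye assumption, and all glue code are ours, not given by the text.

```python
CORNER_DISTANCE = 60  # preset inner-to-outer corner distance in pixels (assumed)

def process_frame(gray):
    # 1) Locate the inner eye corner point (single strongest corner).
    corners = detect_corner_candidates(gray, 1)
    if len(corners) == 0:
        return None
    inner = tuple(int(v) for v in corners[0])
    # 2) Estimate the outer corner from the preset horizontal distance
    #    (left eye assumed, so the outer corner lies to the left of the
    #    inner corner), then crop the eye region image.
    outer = (inner[0] - CORNER_DISTANCE, inner[1])
    eye = crop_eye_region(gray, inner, outer)
    # 3) Determine the pupil center from the pixel gradients; the result
    #    is in eye-region coordinates.
    return locate_pupil_center(eye)
```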
The processor 830 controls the operation of the pupil center positioning device; the processor 830 can also be called a CPU (Central Processing Unit). The memory 840 can comprise a read-only memory and a random access memory, and provides instructions and data to the processor 830; a part of the memory 840 can also comprise a non-volatile random access memory (NVRAM). In a specific application, the components of the pupil center positioning device are coupled together through the bus system 850, where the bus system 850 can include, in addition to a data bus, a power bus, a control bus, a status signal bus, and so on. For clarity of illustration, however, the various buses are all labeled as the bus system 850 in the figure.
The method disclosed in the above embodiments of the present invention can be applied in the processor 830, or implemented by the processor 830. The processor 830 may be an integrated circuit chip with signal processing capability. In an implementation process, each step of the above method can be completed by an integrated logic circuit of hardware in the processor 830 or by instructions in the form of software. The processor 830 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device (PLD), a discrete gate or transistor logic device, or a discrete hardware component, and can implement or execute the methods, steps and logic diagrams disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor, or the processor can be any conventional processor. The steps of the method disclosed in the embodiments of the present invention can be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 840, and the processor 830 reads the information in the memory 840 and completes the steps of the above method in combination with its hardware.
Further, the processor 830 cropping the eye region image in the target image according to the inner eye corner point comprises:
determining the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point, the eye corner horizontal distance being the preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
setting a cropping frame bounded by the inner eye corner point and the outer eye corner point, according to the preset ratio of the human eye in the horizontal and vertical directions;
using the cropping frame to crop the eye region image from the target image.
The processor 830 determining the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point comprises:
determining the region image of the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point;
performing binarization on the region image of the outer eye corner point;
traversing the pixels of the binarized region image to find the first distinct pixel in the horizontal direction toward the nose bridge, which is determined to be the outer eye corner point.
The processor 830 calculating the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixels, comprises:
1) selecting a first pixel, the first pixel being a pixel in the eye region image;
2) selecting a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determining the displacement vector from the first pixel to the second pixel;
4) determining the gradient vector of the second pixel, the gradient vector representing the direction of grayscale change at the second pixel;
5) multiplying the displacement vector by the gradient vector to obtain their dot product;
6) if there are second pixels in the eye region image that have not yet been selected, performing steps 2) to 5) again; if no unselected second pixel remains, summing the absolute values or the squared values of the dot products corresponding to all the second pixels to obtain the center point weight of the first pixel;
7) if there are first pixels in the eye region image that have not yet been selected, performing steps 2) to 6) again; if no unselected first pixel remains, determining the first pixel with the largest center point weight as the pupil center in the eye region image.
Further, the processor 830 can also be used to:
obtain the m pixels with the largest eigenvalues, where m is an integer greater than 5 and less than 20; sum the lengths of the m*(m-1)/2 line segments formed by any two of the m pixels; and, if the result of the summation is not within a preset range, not perform pupil center positioning on the target image, the preset range being the threshold for the target image to be an eye-open image.
In the several embodiments provided in this application, it should be understood that the disclosed apparatus and method can be implemented in other ways. For example, the apparatus embodiments described above are merely schematic; the division into units is merely a division by logical function, and other divisions are possible in actual implementation: multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. Furthermore, the mutual couplings, direct couplings or communication connections shown or discussed can be realized through some interfaces, and the indirect couplings or communication connections between apparatuses or units can be electrical, mechanical or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they can be located in one place or distributed over multiple network elements. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each unit can exist physically on its own, or two or more units can be integrated into one unit. The integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it can be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which can be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media that can store program code, such as a USB flash drive, a portable hard drive, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a magnetic disk, or an optical disc.
The above are merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person skilled in the art can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and all of these should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention should be determined by the protection scope of the claims.

Claims (16)

1. A pupil center positioning method, characterized by comprising:
locating the inner eye corner point of a target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
cropping the eye region image in the target image according to the inner eye corner point;
calculating the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixels.
2. The method according to claim 1, characterized in that cropping the eye region image in the target image according to the inner eye corner point comprises:
determining the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point, the eye corner horizontal distance being the preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
setting a cropping frame bounded by the inner eye corner point and the outer eye corner point, according to the preset ratio of the human eye in the horizontal and vertical directions;
using the cropping frame to crop the eye region image from the target image.
3. The method according to claim 2, characterized in that determining the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point comprises:
determining the region image of the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point;
performing binarization on the region image of the outer eye corner point;
traversing the pixels of the binarized region image to find the first distinct pixel in the horizontal direction toward the nose bridge, which is determined to be the outer eye corner point.
4. The method according to claim 1, characterized in that calculating the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and determining the pupil center in the eye region image according to the gradients of the pixels, comprises:
1) selecting a first pixel, the first pixel being a pixel in the eye region image;
2) selecting a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determining the displacement vector from the first pixel to the second pixel;
4) determining the gradient vector of the second pixel, the gradient vector representing the direction of grayscale change at the second pixel;
5) multiplying the displacement vector by the gradient vector to obtain their dot product;
6) if there are second pixels in the eye region image that have not yet been selected, performing steps 2) to 5) again; if no unselected second pixel remains, summing the absolute values or the squared values of the dot products corresponding to all the second pixels to obtain the center point weight of the first pixel;
7) if there are first pixels in the eye region image that have not yet been selected, performing steps 2) to 6) again; if no unselected first pixel remains, determining the first pixel with the largest center point weight as the pupil center in the eye region image.
5. The method according to claim 4, characterized in that:
determining the displacement vector from the first pixel to the second pixel comprises:
determining the displacement vector from the first pixel to the second pixel, and normalizing the displacement vector;
determining the gradient vector of the second pixel comprises:
determining the gradient vector of the second pixel, and normalizing the gradient vector.
6. The method according to claim 1, characterized in that locating the inner eye corner point of the target image comprises:
locating the inner eye corner point of the target image using the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
7. The method according to claim 1, characterized in that locating the inner eye corner point of the target image comprises:
calculating the eigenvalues of the pixels in the target image according to a corner detection method;
determining the pixel with the largest eigenvalue as the inner eye corner point of the target image.
8. The method according to claim 7, characterized in that, after calculating the eigenvalues of the pixels in the target image according to the corner detection method, the method comprises:
obtaining the m pixels with the largest eigenvalues, where m is an integer greater than 5 and less than 20;
summing the lengths of the m*(m-1)/2 line segments formed by any two of the m pixels;
if the result of the summation is not within a preset range, not performing pupil center positioning on the target image, the preset range being the threshold for the target image to be an eye-open image.
9. A pupil center positioning device, characterized by comprising:
an eye corner positioning unit, used to locate the inner eye corner point of a target image, the inner eye corner point being the eye corner point at the proximal end of the nose bridge in the horizontal direction;
a region determination unit, used to crop the eye region image in the target image according to the inner eye corner point;
a center determination unit, used to calculate the gradients of the pixels in the eye region image according to the grayscale differences in the eye region image, and to determine the pupil center in the eye region image according to the gradients of the pixels.
10. The device according to claim 9, characterized in that the region determination unit is specifically used to:
determine the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point, the eye corner horizontal distance being the preset horizontal distance between the inner eye corner point and the outer eye corner point, and the outer eye corner point being the eye corner point at the distal end of the nose bridge in the horizontal direction;
set a cropping frame bounded by the inner eye corner point and the outer eye corner point, according to the preset ratio of the human eye in the horizontal and vertical directions;
use the cropping frame to crop the eye region image from the target image.
11. The device according to claim 10, characterized in that the region determination unit is specifically further used to:
determine the region image of the outer eye corner point according to the eye corner horizontal distance and the inner eye corner point;
perform binarization on the region image of the outer eye corner point;
traverse the pixels of the binarized region image to find the first distinct pixel in the horizontal direction toward the nose bridge, which is determined to be the outer eye corner point.
12. The device according to claim 9, characterized in that the center determination unit is specifically used to:
1) select a first pixel, the first pixel being a pixel in the eye region image;
2) select a second pixel, the second pixel being a pixel in the eye region image other than the first pixel;
3) determine the displacement vector from the first pixel to the second pixel;
4) determine the gradient vector of the second pixel, the gradient vector representing the direction of grayscale change at the second pixel;
5) multiply the displacement vector by the gradient vector to obtain their dot product;
6) if there are second pixels in the eye region image that have not yet been selected, perform steps 2) to 5) again; if no unselected second pixel remains, sum the absolute values or the squared values of the dot products corresponding to all the second pixels to obtain the center point weight of the first pixel;
7) if there are first pixels in the eye region image that have not yet been selected, perform steps 2) to 6) again; if no unselected first pixel remains, determine the first pixel with the largest center point weight as the pupil center in the eye region image.
13. The device according to claim 12, characterized in that the center determination unit is specifically further used to:
determine the displacement vector from the first pixel to the second pixel, and normalize the displacement vector;
determine the gradient vector of the second pixel, and normalize the gradient vector.
14. The device according to claim 9, characterized in that the eye corner positioning unit is specifically used to:
locate the inner eye corner point of the target image using the Harris corner detection method, an improved Harris corner detection method, the scale-invariant feature transform (SIFT) algorithm, the speeded-up robust features (SURF) algorithm, the FAST algorithm, the BRIEF algorithm, or the good-features-to-track (GFTT) algorithm.
15. The device according to claim 9, characterized in that the eye corner positioning unit is specifically further used to:
calculate the eigenvalues of the pixels in the target image according to a corner detection method;
determine the pixel with the largest eigenvalue as the inner eye corner point of the target image.
16. The device according to claim 15, characterized in that the device further comprises:
an eye-open judgment unit, used to obtain the m pixels with the largest eigenvalues, where m is an integer greater than 5 and less than 20; sum the lengths of the m*(m-1)/2 line segments formed by any two of the m pixels; and, if the result of the summation is not within a preset range, not perform pupil center positioning on the target image, the preset range being the threshold for the target image to be an eye-open image.
CN201410834735.4A 2014-12-29 2014-12-29 A kind of pupil center's localization method and device Expired - Fee Related CN104809458B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410834735.4A CN104809458B (en) 2014-12-29 2014-12-29 A kind of pupil center's localization method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410834735.4A CN104809458B (en) 2014-12-29 2014-12-29 A kind of pupil center's localization method and device

Publications (2)

Publication Number Publication Date
CN104809458A true CN104809458A (en) 2015-07-29
CN104809458B CN104809458B (en) 2018-09-28

Family

ID=53694269

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410834735.4A Expired - Fee Related CN104809458B (en) 2014-12-29 2014-12-29 A kind of pupil center's localization method and device

Country Status (1)

Country Link
CN (1) CN104809458B (en)

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105590092A (en) * 2015-11-11 2016-05-18 中国银联股份有限公司 Method and device for identifying pupil in image
CN106022315A (en) * 2016-06-17 2016-10-12 北京极创未来科技有限公司 Pupil center positioning method for iris recognition
CN106778541A (en) * 2016-11-28 2017-05-31 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN107249126A (en) * 2017-07-28 2017-10-13 华中科技大学 A kind of gazing direction of human eyes tracking suitable for free view-point 3 D video
CN107301391A (en) * 2017-06-16 2017-10-27 广州市百果园信息技术有限公司 Area determination method and device, storage medium
CN107784263A (en) * 2017-04-28 2018-03-09 新疆大学 Based on the method for improving the Plane Rotation Face datection for accelerating robust features
CN108288248A (en) * 2018-01-02 2018-07-17 腾讯数码(天津)有限公司 A kind of eyes image fusion method and its equipment, storage medium, terminal
CN109640787A (en) * 2017-04-24 2019-04-16 上海趋视信息科技有限公司 Measure the System and method for of interpupillary distance
CN110942043A (en) * 2019-12-02 2020-03-31 深圳市迅雷网络技术有限公司 Pupil image processing method and related device
WO2020155792A1 (en) * 2019-01-31 2020-08-06 Boe Technology Group Co., Ltd. Pupil positioning method and apparatus, vr/ar apparatus and computer readable medium
CN111708939A (en) * 2020-05-29 2020-09-25 平安科技(深圳)有限公司 Push method and device based on emotion recognition, computer equipment and storage medium
CN112258569A (en) * 2020-09-21 2021-01-22 苏州唐古光电科技有限公司 Pupil center positioning method, device, equipment and computer storage medium
WO2021175180A1 (en) * 2020-03-02 2021-09-10 广州虎牙科技有限公司 Line of sight determination method and apparatus, and electronic device and computer-readable storage medium

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
CN101339606A (en) * 2008-08-14 2009-01-07 北京中星微电子有限公司 Human face critical organ contour characteristic points positioning and tracking method and device
CN101576951A (en) * 2009-05-20 2009-11-11 电子科技大学 Iris external boundary positioning method based on shades of gray and classifier
CN103049740A (en) * 2012-12-13 2013-04-17 杜鹢 Method and device for detecting fatigue state based on video image
CN103345619A (en) * 2013-06-26 2013-10-09 上海永畅信息科技有限公司 Self-adaption correcting method of human eye natural contact in video chat
CN104050448A (en) * 2014-06-11 2014-09-17 青岛海信信芯科技有限公司 Human eye positioning method and device and human eye region positioning method and device

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1686051A (en) * 2005-05-08 2005-10-26 上海交通大学 Canthus and pupil location method based on VPP and improved SUSAN
CN101339606A (en) * 2008-08-14 2009-01-07 北京中星微电子有限公司 Human face critical organ contour characteristic points positioning and tracking method and device
CN101576951A (en) * 2009-05-20 2009-11-11 电子科技大学 Iris external boundary positioning method based on shades of gray and classifier
CN103049740A (en) * 2012-12-13 2013-04-17 杜鹢 Method and device for detecting fatigue state based on video image
CN103345619A (en) * 2013-06-26 2013-10-09 上海永畅信息科技有限公司 Self-adaption correcting method of human eye natural contact in video chat
CN104050448A (en) * 2014-06-11 2014-09-17 青岛海信信芯科技有限公司 Human eye positioning method and device and human eye region positioning method and device

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017080410A1 (en) * 2015-11-11 2017-05-18 中国银联股份有限公司 Method and apparatus for identifying pupil in image
CN105590092A (en) * 2015-11-11 2016-05-18 中国银联股份有限公司 Method and device for identifying pupil in image
CN106022315B (en) * 2016-06-17 2019-07-12 北京极创未来科技有限公司 A kind of pupil center's localization method for iris recognition
CN106022315A (en) * 2016-06-17 2016-10-12 北京极创未来科技有限公司 Pupil center positioning method for iris recognition
CN106778541A (en) * 2016-11-28 2017-05-31 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN106778541B (en) * 2016-11-28 2019-08-13 华中科技大学 A kind of identification in the multilayer beam China and foreign countries beam hole of view-based access control model and localization method
CN109640787A (en) * 2017-04-24 2019-04-16 上海趋视信息科技有限公司 Measure the System and method for of interpupillary distance
CN107784263B (en) * 2017-04-28 2021-03-30 新疆大学 Planar rotation face detection method based on improved accelerated robust features
CN107784263A (en) * 2017-04-28 2018-03-09 新疆大学 Based on the method for improving the Plane Rotation Face datection for accelerating robust features
CN107301391A (en) * 2017-06-16 2017-10-27 广州市百果园信息技术有限公司 Area determination method and device, storage medium
CN107301391B (en) * 2017-06-16 2020-02-07 广州市百果园信息技术有限公司 Area determination method and device and storage medium
CN107249126A (en) * 2017-07-28 2017-10-13 华中科技大学 A kind of gazing direction of human eyes tracking suitable for free view-point 3 D video
CN108288248A (en) * 2018-01-02 2018-07-17 腾讯数码(天津)有限公司 A kind of eyes image fusion method and its equipment, storage medium, terminal
WO2020155792A1 (en) * 2019-01-31 2020-08-06 Boe Technology Group Co., Ltd. Pupil positioning method and apparatus, vr/ar apparatus and computer readable medium
US11315281B2 (en) 2019-01-31 2022-04-26 Beijing Boe Optoelectronics Technology Co., Ltd. Pupil positioning method and apparatus, VR/AR apparatus and computer readable medium
CN110942043A (en) * 2019-12-02 2020-03-31 深圳市迅雷网络技术有限公司 Pupil image processing method and related device
CN110942043B (en) * 2019-12-02 2023-11-14 深圳市迅雷网络技术有限公司 Pupil image processing method and related device
WO2021175180A1 (en) * 2020-03-02 2021-09-10 广州虎牙科技有限公司 Line of sight determination method and apparatus, and electronic device and computer-readable storage medium
CN111708939A (en) * 2020-05-29 2020-09-25 平安科技(深圳)有限公司 Push method and device based on emotion recognition, computer equipment and storage medium
CN111708939B (en) * 2020-05-29 2024-04-16 平安科技(深圳)有限公司 Emotion recognition-based pushing method and device, computer equipment and storage medium
CN112258569A (en) * 2020-09-21 2021-01-22 苏州唐古光电科技有限公司 Pupil center positioning method, device, equipment and computer storage medium
CN112258569B (en) * 2020-09-21 2024-04-09 无锡唐古半导体有限公司 Pupil center positioning method, pupil center positioning device, pupil center positioning equipment and computer storage medium

Also Published As

Publication number Publication date
CN104809458B (en) 2018-09-28

Similar Documents

Publication Publication Date Title
CN104809458A (en) Pupil center positioning method and pupil center positioning device
CN102402680B (en) Hand and indication point positioning method and gesture confirming method in man-machine interactive system
WO2019237942A1 (en) Line-of-sight tracking method and apparatus based on structured light, device, and storage medium
Fuhl et al. Pupilnet: Convolutional neural networks for robust pupil detection
CN110221699B (en) Eye movement behavior identification method of front-facing camera video source
CN103632136B (en) Human-eye positioning method and device
EP2975997B1 (en) System and method for on-axis eye gaze tracking
US10572072B2 (en) Depth-based touch detection
Chen et al. Efficient and robust pupil size and blink estimation from near-field video sequences for human–machine interaction
Nai et al. Fast hand posture classification using depth features extracted from random line segments
JP2022525829A (en) Systems and methods for control schemes based on neuromuscular data
CN105247539A (en) Method for gaze tracking
CN103324284A (en) Mouse control method based on face and eye detection
CN104766059A (en) Rapid and accurate human eye positioning method and sight estimation method based on human eye positioning
CN103598870A (en) Optometry method based on depth-image gesture recognition
Penkov et al. Physical symbol grounding and instance learning through demonstration and eye tracking
CN106814853A (en) A kind of eye control tracking based on machine learning
Wang et al. Your eyes reveal your secrets: An eye movement based password inference on smartphone
Naveed et al. Eye tracking system with blink detection
Xiao et al. Accurate iris center localization method using facial landmark, snakuscule, circle fitting and binary connected component
CN109634407B (en) Control method based on multi-mode man-machine sensing information synchronous acquisition and fusion
Yang et al. Student eye gaze tracking during MOOC teaching
CN104407746A (en) Infrared photoelectric technology based multi-point touch system
Cao et al. Gaze tracking on any surface with your phone
Méndez-Ortega et al. Supervision and control of students during online assessments applying computer vision techniques: a systematic literature review

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180928

Termination date: 20181229