CN103942542A - Human eye tracking method and device - Google Patents

Human eye tracking method and device

Info

Publication number
CN103942542A
Authority
CN
China
Prior art keywords
template
frame image
eyes
potential target
current frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201410160852.7A
Other languages
Chinese (zh)
Inventor
沈威
张涛
张春光
李春
俞能海
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Zhuo Meihua Looks Photoelectric Co Ltd
Original Assignee
Chongqing Zhuo Meihua Looks Photoelectric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Zhuo Meihua Looks Photoelectric Co Ltd
Priority to CN201410160852.7A
Publication of CN103942542A
Legal status: Pending


Abstract

The invention relates to the field of information technology, and in particular to a human eye tracking method and device. The method comprises: predicting, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using a constructed Kalman filter, to obtain predicted eye positions; when the next frame image becomes the current frame image, searching the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches a target template, wherein the target template is derived from an initial template constructed from a received initial frame image; and taking the potential target as the tracking target of the eyes, thereby tracking the eyes. The method and device are not easily affected by illumination conditions, are robust, and meet the practical demands of human eye tracking.

Description

Human eye tracking method and device
Technical field
The present invention relates to the field of information technology, and in particular to a human eye tracking method and device.
Background technology
Human eye tracking is the process of determining the movement trajectory and size variation of the human eyes in a video or image sequence. It is an important part of face tracking, is significant in fields such as image analysis and recognition, image monitoring, and image retrieval, and has become the focus of considerable attention, with many effective algorithms appearing in succession.
For example, the AdaBoost iterative algorithm has been used for eye tracking, but it cannot accurately capture the face when the screen rotation angle or the face rotation angle is large, and it can only locate the face, not the eyes. Other existing eye tracking methods are sensitive to illumination and the like and have poor robustness; for example, tracking algorithms that locate the eyes through the bright-pupil effect require stable infrared illumination.
It can thus be seen that existing human eye tracking methods are easily affected by illumination conditions, have poor robustness, and do not meet the practical demands of eye tracking.
Summary of the invention
The object of the present invention is to provide a human eye tracking method and device to solve the above problems.
An embodiment of the present invention provides a human eye tracking method, comprising:
predicting, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using a constructed Kalman filter, to obtain predicted eye positions;
when the next frame image becomes the current frame image, searching the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches a target template, wherein the target template is derived from an initial template constructed from a received initial frame image;
taking the potential target as the tracking target of the eyes, thereby tracking the eyes.
Preferably, the method further comprises constructing the initial template of the eyes from the received initial frame image, comprising:
performing face detection in the initial frame image to obtain a face detection image;
converting the face detection image to grayscale to obtain a face grayscale map;
performing vertical gray projection on the face grayscale map to obtain a vertical gray projection map;
determining the left and right boundaries of the face from the left and right boundaries of the convex peak of the vertical gray projection curve in the vertical gray projection map;
cropping the vertical gray projection map according to the left and right boundaries of the face to obtain a new face grayscale map;
performing horizontal gray projection on the new face grayscale map to obtain a horizontal gray projection map;
determining, from the horizontal gray projection curve in the horizontal gray projection map, the upper and lower boundaries formed by the top of the head and the middle of the nose, and determining the eyebrow-and-eye region from these boundaries;
using the Sobel operator to obtain the boundary values of the eyebrow-and-eye region, performing edge grouping, locating the positions of the eyes, and obtaining the initial template;
wherein the initial template serves as the target template for predicting the positions of the eyes in the second frame image.
Preferably, the method further comprises: when the tracking target is obtained, updating the current target template with the tracking target, and using the new target template thus obtained to predict the positions of the eyes in the next frame image.
Preferably, the method further comprises constructing the Kalman filter, comprising: determining a state model and a measurement model for the received image sequence. Determining the state model of the image sequence comprises modeling the state as $x_{t+1} = \theta x_t + w_t$;
wherein $(c_t, r_t)$ is the centroid position of the eyes at time $t$, $(u_t, v_t)$ is the velocity of the eyes in the $c$ and $r$ directions at time $t$, the state vector of the eyes at time $t$ is $x_t = (c_t, r_t, u_t, v_t)^T$, and $w_t$ is the system noise. Assuming the displacement of the eyes between two consecutive frame images tends to zero and the motion is uniform, with inter-frame interval $\Delta t$, the state transition matrix is
$$\theta = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Determining the measurement model of the image sequence comprises modeling the measurement as $z_t = H x_t + s_t$, wherein the observation $z_t$ represents the position of the eyes at time $t$, $s_t$ is the measurement noise, and
$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$
Preferably, searching the current frame image using the predicted eye positions to obtain the potential target matching the target template in luminance distribution comprises: adopting the Mean Shift algorithm and performing iterative computation with the predicted eye positions as the initial value of the Mean Shift algorithm; and searching the current frame image, according to the result of the iteration, for the potential target with maximum luminance similarity to the target template, wherein the similarity between the target template and the potential target is measured by the Bhattacharyya distance.
Preferably, measuring the similarity between the target template and the potential target with the Bhattacharyya distance comprises:
computing the discrete estimate of the Bhattacharyya coefficient between the target template and a potential target centered at $y$ as
$$\rho(y) = \rho[\hat p(y), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(y)\, \hat q_u};$$
wherein $p$ denotes the luminance histogram of the potential target, $q$ denotes the luminance histogram of the target template, the index $u$ denotes a color bin of the target template, $\hat q_u$ is the probability of color $u$ in the target template, $\hat p_u(y)$ is the feature probability distribution of the potential target centered at $y$, and $m$ is the number of quantization levels of $p$ and $q$;
determining the luminance difference distance between the target template and the potential target from the discrete estimate of the Bhattacharyya coefficient, wherein the luminance difference distance is
$$d(y) = \sqrt{1 - \rho[\hat p(y), \hat q]}.$$
Preferably, searching the current frame image for the potential target with maximum luminance similarity to the target template comprises: setting the position predicted by the Kalman filter in the current frame image as $\hat y_0$; performing a target search in the current frame image starting from $\hat y_0$, which includes computing the color probability of the potential target at a given position in the current frame image and, according to the color probability, minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient; and obtaining, from this minimization and maximization, the potential target with maximum luminance similarity to the target template.
Preferably, minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient according to the color probability comprises:
a. initializing the position of the potential target in the current frame image as $\hat y_0$, computing the color probability $\hat p_u(\hat y_0)$ of the potential target, and obtaining the Bhattacharyya coefficient
$$\rho[\hat p(\hat y_0), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_0)\, \hat q_u};$$
b. determining the new potential target position in the current frame image
$$\hat y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)},$$
wherein the weights $\{w_i\}_{i=1\ldots n_h}$ are determined by
$$w_i = \sum_{u=1}^{m} \delta[b(x_i) - u]\, \sqrt{\frac{\hat q_u}{\hat p_u(\hat y_0)}},$$
$\{x_i\}_{i=1\ldots n_h}$ denotes the pixel positions of the potential tracking target centered at $y$, $h$ is the kernel bandwidth, $n_h$ is the total number of pixels of the potential tracking target, and $g(x) = -k'(x)$; and obtaining, from $\hat p_u(\hat y_1)$, the new Bhattacharyya coefficient
$$\rho[\hat p(\hat y_1), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_1)\, \hat q_u};$$
c. while $\rho[\hat p(\hat y_1), \hat q] < \rho[\hat p(\hat y_0), \hat q]$, setting $\hat y_1 \leftarrow \tfrac{1}{2}(\hat y_0 + \hat y_1)$;
d. if $\|\hat y_1 - \hat y_0\| < \varepsilon$, terminating; otherwise setting $\hat y_0 \leftarrow \hat y_1$ and returning to step a;
wherein $\varepsilon$ tends to zero. Step b uses the Mean Shift vector to increase the Bhattacharyya coefficient; when the Bhattacharyya coefficient no longer increases, or increases by less than 0.1%, step c is executed and the potential target position is updated.
An embodiment of the present invention also provides a human eye tracking device, comprising: a prediction module, configured to predict, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using the constructed Kalman filter, obtaining predicted eye positions; a search module, configured to, when the next frame image becomes the current frame image, search the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches the target template, wherein the target template is derived from the initial template constructed from the received initial frame image; and a tracking module, configured to take the potential target as the tracking target of the eyes, thereby tracking the eyes.
With the human eye tracking method and device provided by the embodiments of the present invention, the positions of the eyes in the next frame image are predicted from the current frame image. When that next frame image becomes the current frame image, the new current frame image is searched according to the predicted eye positions to obtain the potential target matching the target template in luminance, and eye tracking is achieved from the potential target found. The method and device make full use of the luminance distribution feature to match the target template against potential targets during tracking; the computation is simple and the tracking time is short. The method performs well under real illumination conditions and can maintain tracking even when the eyes are partially occluded or closed. The human eye tracking method of the embodiments of the present invention is therefore not easily affected by illumination conditions, is more robust, and better meets the practical demands of eye tracking.
Brief description of the drawings
Fig. 1 shows a flow chart of the human eye tracking method in an embodiment of the present invention;
Fig. 2 shows a flow chart of the method of constructing the initial template in an embodiment of the present invention;
Fig. 3 shows a structural schematic diagram of the human eye tracking device in an embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below through specific embodiments and with reference to the accompanying drawings.
An embodiment of the present invention provides a human eye tracking method; as shown in Fig. 1, the main processing steps comprise:
Step S11: according to the positions of the eyes in the current frame image, predict the positions of the eyes in the next frame image using the constructed Kalman filter, obtaining predicted eye positions;
Step S12: when the next frame image becomes the current frame image, search the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches the target template, wherein the target template is derived from the initial template constructed from the received initial frame image;
Step S13: take the potential target as the tracking target of the eyes, thereby tracking the eyes.
The human eye tracking method of the embodiment of the present invention predicts, from the current frame image, the positions of the eyes in the next frame image. When that next frame image becomes the current frame image, the new current frame image is searched according to the predicted eye positions, the potential target matching the target template in luminance is obtained, and the found target is taken as the tracking target of the eyes. Applying this method iteratively over the received image sequence thus achieves human eye tracking.
The embodiment of the present invention makes full use of the luminance distribution feature to match the target template against potential targets during eye tracking. The computation is simple and the tracking time is short, the method performs well under real illumination conditions, and tracking can be maintained even when the eyes are partially occluded or closed. The human eye tracking method of the embodiment of the present invention is therefore not easily affected by illumination conditions, is more robust, and better meets the practical demands of eye tracking.
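To make the overall flow concrete, the following is a minimal sketch of the predict-search-update loop of steps S11 to S13; the helper names (kalman, search_target, update_template) and the frame source are hypothetical stand-ins for the components detailed below, not names used by the patent.

```python
import numpy as np

def track_eyes(frames, initial_template, kalman, search_target, update_template):
    """Predict-search-update loop over an image sequence (steps S11-S13)."""
    template = initial_template            # initial template built from the first frame
    for frame in frames:                   # each later frame in turn becomes current
        predicted_pos = kalman.predict()                             # S11: predict
        target_pos = search_target(frame, predicted_pos, template)   # S12: Mean Shift search
        kalman.correct(target_pos)         # feed the measurement back to the filter
        template = update_template(template, frame, target_pos)      # adaptive update
        yield target_pos                   # S13: the found potential target is the track
```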
When searching the current frame image in the embodiment of the present invention, the potential target matching the target template in luminance distribution is obtained. The target template here is derived from the initial template of the eyes, which is constructed when the initial frame image of the image sequence is received.
The method of constructing the initial template of the eyes from the received initial frame image is shown in Fig. 2; the main processing steps comprise:
Step S21: perform face detection in the initial frame image to obtain a face detection image;
Step S22: convert the face detection image to grayscale to obtain a face grayscale map;
Step S23: perform vertical gray projection on the face grayscale map to obtain a vertical gray projection map;
Step S24: determine the left and right boundaries of the face from the left and right boundaries of the convex peak of the vertical gray projection curve in the vertical gray projection map;
Step S25: crop the vertical gray projection map according to the left and right boundaries of the face to obtain a new face grayscale map;
Step S26: perform horizontal gray projection on the new face grayscale map to obtain a horizontal gray projection map;
Step S27: determine, from the horizontal gray projection curve in the horizontal gray projection map, the upper and lower boundaries formed by the top of the head and the middle of the nose, and determine the eyebrow-and-eye region from these boundaries;
Step S28: use the Sobel operator to obtain the boundary values of the eyebrow-and-eye region, perform edge grouping, locate the positions of the eyes, and obtain the initial template.
The initial frame image in step S21 may refer to the actual first frame image of the received image sequence, to the first frame image of the image sequence after image stabilization, or to the first frame of a run of consecutive frames specified by the user in the received image sequence.
In this step, a face detection and localization method based on a skin color model may be adopted to perform face detection and obtain the face detection image.
After the face detection image is obtained, the height and width of the face blob are determined in it, as described in steps S22 to S27.
After the vertical gray projection map is obtained in step S23, and before the left and right boundaries of the face are determined from it in step S24, the vertical gray projection curve is smoothed and the discrete noise points in the vertical gray projection map are removed.
Similarly, after the horizontal gray projection map is obtained in step S26, and before the upper and lower boundaries are determined from it in step S27, the horizontal gray projection curve is smoothed and the discrete noise points in the horizontal gray projection map are removed.
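As a minimal sketch of the projection steps S23 to S27, assuming an 8-bit grayscale face image as input; the smoothing window and the relative threshold used to delimit the convex peak are illustrative choices the patent does not specify.

```python
import numpy as np

def face_bounds_by_projection(face_gray, smooth=9, rel_thresh=0.5):
    """Vertical gray projection (S23-S25), then horizontal projection (S26)."""
    kernel = np.ones(smooth) / smooth
    v_proj = np.convolve(face_gray.sum(axis=0), kernel, mode="same")  # per-column sums, smoothed
    support = np.where(v_proj >= rel_thresh * v_proj.max())[0]        # convex peak support
    left, right = support[0], support[-1]                             # face left/right bounds (S24)
    face = face_gray[:, left:right + 1]                               # crop to the face (S25)
    h_proj = np.convolve(face.sum(axis=1), kernel, mode="same")       # horizontal projection (S26)
    return (left, right), h_proj   # h_proj is analyzed in S27 for the eyebrow-eye region
```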
The target template used in eye tracking is derived from the above initial template: the initial template is the target template for predicting the eye positions in the second frame image, and a search and comparison is carried out in the second frame image according to the initial template to obtain the tracking target of the eyes.
The tracking method of the embodiment of the present invention performs eye tracking frame by frame, with a frame image as the unit. The constructed initial template serves as the target template for predicting the eye positions in the second frame image, and could also serve as the target template for predicting the eye positions in the third frame image. If, however, the initial template were always used as the target template, the drift error produced during tracking would propagate, the eye tracking error would grow, and a good tracking effect would be hard to achieve.
To avoid the propagation of the drift error produced during tracking, this method adopts the idea of template updating and continuously and adaptively adjusts the target template.
Specifically, the target template is adjusted and updated as follows: when the tracking target is obtained, the current target template is updated with the tracking target, and the new target template thus obtained is used to predict the positions of the eyes in the next frame image.
With this template update method, the initial template serves as the target template for predicting the eye positions in the second frame image. The second frame image is searched according to the predicted positions, and the potential target matching the target template in luminance distribution is the tracking target of the eyes. Since this tracking target reflects the current positions of the eyes, the target template is updated with it, yielding a new target template. The new target template is used to predict the eye positions in the third frame image; when the third frame image becomes the current frame image, the predicted positions are used to search it, obtaining the tracking target corresponding to the third frame image, which in turn updates the target template. The same procedure is applied frame by frame to the subsequently received image sequence, tracking the eyes and updating the target template for each frame.
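The patent specifies only that the current target template is refreshed with the newly found tracking target; the sketch below assumes a simple running-average blend of histograms, with the blend factor alpha an illustrative choice rather than a value from the patent.

```python
import numpy as np

def update_template(q_template, p_target, alpha=0.1):
    """Blend the tracked target's histogram into the template histogram."""
    q_new = (1.0 - alpha) * q_template + alpha * p_target  # adaptive template update
    return q_new / q_new.sum()                             # keep the histogram normalized
```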
In the current frame image, the positions of the eyes in the next frame image are predicted with the Kalman filter.
Constructing the Kalman filter in the embodiment of the present invention mainly comprises determining a state model and a measurement model for the received image sequence.
Determining the state model of the image sequence comprises modeling the state as $x_{t+1} = \theta x_t + w_t$, wherein $(c_t, r_t)$ is the centroid position of the eyes at time $t$, $(u_t, v_t)$ is the velocity of the eyes in the $c$ and $r$ directions at time $t$, the state vector of the eyes at time $t$ is $x_t = (c_t, r_t, u_t, v_t)^T$, and $w_t$ is the system noise. Assuming the displacement of the eyes between two consecutive frame images tends to zero and the motion is uniform, with inter-frame interval $\Delta t$, the state transition matrix is
$$\theta = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}.$$
Determining the measurement model of the image sequence comprises modeling the measurement as $z_t = H x_t + s_t$, where the observation $z_t$ represents the position of the eyes at time $t$, $s_t$ is the measurement noise, and
$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$
Predicting the positions of the eyes in the next frame image with the constructed Kalman filter comprises predicting the state vector $x_{t+1}$ and the covariance matrix $\Sigma_{t+1}$ of the eyes in the next frame image.
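A minimal sketch of this constant-velocity Kalman filter using OpenCV; the process- and measurement-noise magnitudes are illustrative assumptions, since the patent does not specify $w_t$ and $s_t$ numerically.

```python
import numpy as np
import cv2

def build_eye_kalman(dt=1.0):
    """Kalman filter for the state x = (c, r, u, v) with position-only measurements."""
    kf = cv2.KalmanFilter(4, 2)                        # 4 state dims, 2 measured
    kf.transitionMatrix = np.array([[1, 0, dt, 0],     # theta: uniform-motion model
                                    [0, 1, 0, dt],
                                    [0, 0, 1,  0],
                                    [0, 0, 0,  1]], np.float32)
    kf.measurementMatrix = np.array([[1, 0, 0, 0],     # H: observe (c, r) only
                                     [0, 1, 0, 0]], np.float32)
    kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2      # w_t (assumed scale)
    kf.measurementNoiseCov = np.eye(2, dtype=np.float32) * 1e-1  # s_t (assumed scale)
    kf.errorCovPost = np.eye(4, dtype=np.float32)      # initial covariance
    return kf

kf = build_eye_kalman()
prediction = kf.predict()                              # predicted (c, r, u, v) for the next frame
kf.correct(np.array([[120.0], [80.0]], np.float32))    # correct with the tracked eye centroid
```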
When the next frame image becomes the current frame image, the obtained predicted eye positions are used to search the current frame image for the potential target matching the target template in luminance distribution. A concrete implementation adopts the Mean Shift algorithm: iterative computation is performed with the predicted eye positions as the initial value of the Mean Shift algorithm, and, according to the result of the iteration, the current frame image is searched for the potential target with maximum luminance similarity to the target template, where the similarity between the target template and the potential target is measured by the Bhattacharyya distance.
The Mean Shift algorithm adopted when searching the current frame image for the potential target is a kernel-based tracking algorithm capable of tracking non-rigid targets in real time. Through its iterative computation, the potential target most similar to the target template in luminance distribution can be found in the current frame image, with the Bhattacharyya distance measuring the degree of similarity between the two luminance distributions.
The Bhattacharyya distance is obtained from the estimates of the luminance distribution $p$ of the potential target and the luminance distribution $q$ of the target template, where both $p$ and $q$ are represented as luminance histograms.
Measuring the similarity between the target template and the potential target with the Bhattacharyya distance comprises: computing the discrete estimate of the Bhattacharyya coefficient between the target template and a potential target centered at $y$ as
$$\rho(y) = \rho[\hat p(y), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(y)\, \hat q_u},$$
where $p$ denotes the luminance histogram of the potential target, $q$ denotes the luminance histogram of the target template, the index $u$ denotes a color bin of the target template, $\hat q_u$ is the probability of color $u$ in the target template, $\hat p_u(y)$ is the feature probability distribution of the potential target centered at $y$, and $m$ is the number of quantization levels of $p$ and $q$. The luminance difference distance between the target template and the potential target is then determined from the discrete estimate of the Bhattacharyya coefficient as
$$d(y) = \sqrt{1 - \rho[\hat p(y), \hat q]}.$$
The luminance difference distance is nearly optimal and is insensitive to changes of target scale, so it is effective for comparing stochastic distribution densities; using $d$ gives better results than, for example, the Fisher linear discriminant. Target tracking therefore amounts to searching the current frame for the potential target closest to the target template in luminance distribution.
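A minimal sketch of the Bhattacharyya coefficient and the luminance difference distance for two normalized m-bin histograms:

```python
import numpy as np

def bhattacharyya(p, q):
    """Discrete Bhattacharyya coefficient rho(p, q) of two normalized histograms."""
    return float(np.sum(np.sqrt(p * q)))

def luminance_distance(p, q):
    """Luminance difference distance d = sqrt(1 - rho)."""
    return float(np.sqrt(max(0.0, 1.0 - bhattacharyya(p, q))))

q = np.array([0.20, 0.50, 0.30])   # target template histogram (m = 3 bins)
p = np.array([0.25, 0.45, 0.30])   # candidate target histogram
print(luminance_distance(p, q))    # small d means a close luminance match
```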
Searching the current frame image for the potential target with maximum luminance similarity to the target template comprises: setting the position predicted by the Kalman filter in the current frame image as $\hat y_0$; performing a target search in the current frame image starting from $\hat y_0$, which includes computing the color probability of the potential target at a given position in the current frame image and, according to the color probability, minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient; and obtaining, from this minimization and maximization, the potential target with maximum luminance similarity to the target template.
In the target template, $\{x_i^*\}_{i=1\ldots n}$ denotes the positions of the pixels of the target template, with the target center at the origin. The function $b: R^2 \to \{1, \ldots, m\}$ gives the color index $b(x)$ of pixel $x$ in the target template's histogram.
$k: [0, \infty) \to R$ is a convex, monotonically decreasing profile function that assigns weights to the pixels in the target template.
Because the tracking target is affected by the background and by partial occlusion, peripheral pixels are less reliable; pixels farther from the center are therefore assigned smaller weights, which improves the robustness of the estimate.
Let $x_i^*$ be the pixel coordinates after normalization by $h_x$ and $h_y$. The probability of color $u$ in the target template is then
$$\hat q_u = C \sum_{i=1}^{n} k(\|x_i^*\|^2)\, \delta[b(x_i^*) - u],$$
where $\delta$ is the Kronecker delta function and $C$ is the normalization constant
$$C = \frac{1}{\sum_{i=1}^{n} k(\|x_i^*\|^2)},$$
so that $\sum_{u=1}^{m} \hat q_u = 1$.
In the current frame image, $\{x_i\}_{i=1\ldots n_h}$ denotes the pixel positions of the potential tracking target centered at $y$. Weighting the pixels with the same profile $k$, the probability of color $u$ in the potential tracking target is
$$\hat p_u(y) = C_h \sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right) \delta[b(x_i) - u],$$
where the constant $h$ is the kernel bandwidth, $n_h$ is the total number of pixels of the potential tracking target, and
$$C_h = \frac{1}{\sum_{i=1}^{n_h} k\!\left(\left\|\frac{y - x_i}{h}\right\|^2\right)}.$$
$C_h$ is independent of $y$; given $y$ and $h$, the value of $C_h$ can be computed.
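A minimal sketch of the kernel-weighted color probability $\hat p_u(y)$ for a grayscale image, using the Epanechnikov profile as the convex, decreasing $k$; the profile choice and the bin count are standard assumptions rather than values fixed by the patent.

```python
import numpy as np

def profile_k(r2):
    """Epanechnikov profile: convex and monotonically decreasing in ||x||^2."""
    return np.maximum(0.0, 1.0 - r2)

def color_probability(gray, y, h, m=16):
    """Kernel-weighted m-bin histogram p_u(y) of the region centered at y."""
    rows, cols = np.mgrid[0:gray.shape[0], 0:gray.shape[1]]
    r2 = ((rows - y[0]) ** 2 + (cols - y[1]) ** 2) / float(h) ** 2
    w = profile_k(r2)                            # far pixels get smaller weights
    bins = (gray.astype(np.int64) * m) // 256    # b(x): color index in 0..m-1
    p = np.bincount(bins.ravel(), weights=w.ravel(), minlength=m)
    return p / p.sum()                           # normalization plays the role of C_h
```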
Minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient according to the color probability comprises:
a. initialize the position of the potential target in the current frame image as $\hat y_0$, compute the color probability $\hat p_u(\hat y_0)$ of the potential target, and obtain the Bhattacharyya coefficient
$$\rho[\hat p(\hat y_0), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_0)\, \hat q_u};$$
b. determine the new potential target position in the current frame image
$$\hat y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)},$$
where the weights $\{w_i\}_{i=1\ldots n_h}$ are determined by
$$w_i = \sum_{u=1}^{m} \delta[b(x_i) - u]\, \sqrt{\frac{\hat q_u}{\hat p_u(\hat y_0)}},$$
$\{x_i\}_{i=1\ldots n_h}$ denotes the pixel positions of the potential tracking target centered at $y$, $h$ is the kernel bandwidth, $n_h$ is the total number of pixels of the potential tracking target, and $g(x) = -k'(x)$; from $\hat p_u(\hat y_1)$, obtain the new Bhattacharyya coefficient
$$\rho[\hat p(\hat y_1), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_1)\, \hat q_u};$$
c. while $\rho[\hat p(\hat y_1), \hat q] < \rho[\hat p(\hat y_0), \hat q]$, set $\hat y_1 \leftarrow \tfrac{1}{2}(\hat y_0 + \hat y_1)$;
d. if $\|\hat y_1 - \hat y_0\| < \varepsilon$, terminate; otherwise set $\hat y_0 \leftarrow \hat y_1$ and return to step a;
where $\varepsilon$ tends to zero. Step b uses the Mean Shift vector to increase the Bhattacharyya coefficient; when the Bhattacharyya coefficient no longer increases, or increases by less than 0.1%, step c is executed and the potential target position is updated.
Practical target tracking experiments show that the Bhattacharyya coefficient at the converged position is generally larger than at the initial position $\hat y_0$; when $\hat y_1$ and $\hat y_0$ denote almost the same pixel in image coordinates, the eye tracking for the frame is finished.
The tracking process is as described above for every frame of the received image sequence. Therefore, given the initial position and the target template, by making full use of the motion prediction and searching the predicted neighborhood in the current frame, the above algorithm minimizes the distance $d$ and achieves close-to-optimal target tracking. During tracking, to prevent the propagation of Mean Shift drift error, the template of the tracking target is continuously and adaptively adjusted following the idea of template updating, preferably by updating the target template with the potential target taken as the tracking target.
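A minimal sketch of the iteration in steps a to d, reusing color_probability from the sketch above; with the Epanechnikov profile, $g = -k'$ is constant on the kernel support, so the shift reduces to a $w_i$-weighted centroid. The window extent, epsilon, and the iteration cap are illustrative.

```python
import numpy as np

def mean_shift_eye(gray, y0, q, h, m=16, eps=0.5, max_iter=20):
    """Mean Shift search (steps a-d): move y toward maximal Bhattacharyya rho."""
    y0 = np.asarray(y0, dtype=np.float64)
    for _ in range(max_iter):
        p0 = color_probability(gray, y0, h, m)                      # step a
        rho0 = np.sum(np.sqrt(p0 * q))
        rows, cols = np.mgrid[max(int(y0[0] - h), 0):min(int(y0[0] + h) + 1, gray.shape[0]),
                              max(int(y0[1] - h), 0):min(int(y0[1] + h) + 1, gray.shape[1])]
        bins = (gray[rows, cols].astype(np.int64) * m) // 256
        w = np.sqrt(q[bins] / np.maximum(p0[bins], 1e-12))          # weights w_i
        y1 = np.array([np.sum(rows * w), np.sum(cols * w)]) / np.sum(w)  # step b
        rho1 = np.sum(np.sqrt(color_probability(gray, y1, h, m) * q))
        while rho1 < rho0 and np.linalg.norm(y1 - y0) >= eps:       # step c
            y1 = 0.5 * (y0 + y1)
            rho1 = np.sum(np.sqrt(color_probability(gray, y1, h, m) * q))
        if np.linalg.norm(y1 - y0) < eps:                           # step d: converged
            return y1
        y0 = y1
    return y0
```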
An embodiment of the present invention also provides a human eye tracking device, which, as shown in Fig. 3, mainly comprises:
a prediction module 21, configured to predict, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using the constructed Kalman filter, obtaining predicted eye positions;
a search module 22, configured to, when the next frame image becomes the current frame image, search the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches the target template, wherein the target template is derived from the initial template constructed from the received initial frame image;
a tracking module 23, configured to take the potential target as the tracking target of the eyes, thereby tracking the eyes.
The above are only preferred embodiments of the present invention and are not intended to limit it; for those skilled in the art, the present invention may be subject to various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (9)

1. A human eye tracking method, characterized by comprising:
predicting, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using a constructed Kalman filter, to obtain predicted eye positions;
when the next frame image becomes the current frame image, searching the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches a target template, wherein the target template is derived from an initial template constructed from a received initial frame image;
taking the potential target as the tracking target of the eyes, thereby tracking the eyes.
2. The method according to claim 1, characterized in that the method further comprises constructing the initial template of the eyes from the received initial frame image, comprising:
performing face detection in the initial frame image to obtain a face detection image;
converting the face detection image to grayscale to obtain a face grayscale map;
performing vertical gray projection on the face grayscale map to obtain a vertical gray projection map;
determining the left and right boundaries of the face from the left and right boundaries of the convex peak of the vertical gray projection curve in the vertical gray projection map;
cropping the vertical gray projection map according to the left and right boundaries of the face to obtain a new face grayscale map;
performing horizontal gray projection on the new face grayscale map to obtain a horizontal gray projection map;
determining, from the horizontal gray projection curve in the horizontal gray projection map, the upper and lower boundaries formed by the top of the head and the middle of the nose, and determining the eyebrow-and-eye region from these boundaries;
using the Sobel operator to obtain the boundary values of the eyebrow-and-eye region, performing edge grouping, locating the positions of the eyes, and obtaining the initial template;
wherein the initial template serves as the target template for predicting the positions of the eyes in the second frame image.
3. The method according to claim 2, characterized in that the method further comprises: when the tracking target is obtained, updating the current target template with the tracking target; and
using the new target template thus obtained to predict the positions of the eyes in the next frame image.
4. The method according to claim 2, characterized in that the method further comprises constructing the Kalman filter, comprising: determining a state model and a measurement model for the received image sequence;
wherein determining the state model of the image sequence comprises modeling the state as $x_{t+1} = \theta x_t + w_t$;
wherein $(c_t, r_t)$ is the centroid position of the eyes at time $t$, $(u_t, v_t)$ is the velocity of the eyes in the $c$ and $r$ directions at time $t$, the state vector of the eyes at time $t$ is $x_t = (c_t, r_t, u_t, v_t)^T$, and $w_t$ is the system noise; assuming the displacement of the eyes between two consecutive frame images tends to zero and the motion is uniform, with inter-frame interval $\Delta t$, the state transition matrix is
$$\theta = \begin{bmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix};$$
and wherein determining the measurement model of the image sequence comprises modeling the measurement as $z_t = H x_t + s_t$, wherein the observation $z_t$ represents the position of the eyes at time $t$, $s_t$ is the measurement noise, and
$$H = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{bmatrix}.$$
5. The method according to claim 3, characterized in that searching the current frame image using the predicted eye positions to obtain the potential target matching the target template in luminance distribution comprises: adopting the Mean Shift algorithm and performing iterative computation with the predicted eye positions as the initial value of the Mean Shift algorithm; and
searching the current frame image, according to the result of the iteration, for the potential target with maximum luminance similarity to the target template, wherein the similarity between the target template and the potential target is measured by the Bhattacharyya distance.
6. The method according to claim 5, characterized in that measuring the similarity between the target template and the potential target with the Bhattacharyya distance comprises:
computing the discrete estimate of the Bhattacharyya coefficient between the target template and a potential target centered at $y$ as
$$\rho(y) = \rho[\hat p(y), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(y)\, \hat q_u};$$
wherein $p$ denotes the luminance histogram of the potential target, $q$ denotes the luminance histogram of the target template, the index $u$ denotes a color bin of the target template, $\hat q_u$ is the probability of color $u$ in the target template, $\hat p_u(y)$ is the feature probability distribution of the potential target centered at $y$, and $m$ is the number of quantization levels of $p$ and $q$; and
determining the luminance difference distance between the target template and the potential target from the discrete estimate of the Bhattacharyya coefficient, wherein the luminance difference distance is
$$d(y) = \sqrt{1 - \rho[\hat p(y), \hat q]}.$$
7. The method according to claim 5, characterized in that searching the current frame image for the potential target with maximum luminance similarity to the target template comprises:
setting the position predicted by the Kalman filter in the current frame image as $\hat y_0$;
performing a target search in the current frame image starting from $\hat y_0$, comprising computing the color probability of the potential target at a given position in the current frame image, and minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient according to the color probability; and
obtaining, from the result of minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient, the potential target with maximum luminance similarity to the target template.
8. The method according to claim 7, characterized in that minimizing the luminance difference distance $d$ and maximizing the Bhattacharyya coefficient according to the color probability comprises:
a. initializing the position of the potential target in the current frame image as $\hat y_0$, computing the color probability $\hat p_u(\hat y_0)$ of the potential target, and obtaining the Bhattacharyya coefficient
$$\rho[\hat p(\hat y_0), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_0)\, \hat q_u};$$
b. determining the new potential target position in the current frame image
$$\hat y_1 = \frac{\sum_{i=1}^{n_h} x_i\, w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)}{\sum_{i=1}^{n_h} w_i\, g\!\left(\left\|\frac{\hat y_0 - x_i}{h}\right\|^2\right)},$$
wherein the weights $\{w_i\}_{i=1\ldots n_h}$ are determined by
$$w_i = \sum_{u=1}^{m} \delta[b(x_i) - u]\, \sqrt{\frac{\hat q_u}{\hat p_u(\hat y_0)}},$$
$\{x_i\}_{i=1\ldots n_h}$ denotes the pixel positions of the potential tracking target centered at $y$, $h$ is the kernel bandwidth, $n_h$ is the total number of pixels of the potential tracking target, and $g(x) = -k'(x)$; and obtaining, from $\hat p_u(\hat y_1)$, the new Bhattacharyya coefficient
$$\rho[\hat p(\hat y_1), \hat q] = \sum_{u=1}^{m} \sqrt{\hat p_u(\hat y_1)\, \hat q_u};$$
c. while $\rho[\hat p(\hat y_1), \hat q] < \rho[\hat p(\hat y_0), \hat q]$, setting $\hat y_1 \leftarrow \tfrac{1}{2}(\hat y_0 + \hat y_1)$;
d. if $\|\hat y_1 - \hat y_0\| < \varepsilon$, terminating; otherwise setting $\hat y_0 \leftarrow \hat y_1$ and returning to step a;
wherein $\varepsilon$ tends to zero; step b uses the Mean Shift vector to increase the Bhattacharyya coefficient, and when the Bhattacharyya coefficient no longer increases, or increases by less than 0.1%, step c is executed and the potential target position is updated.
9. A human eye tracking device, characterized by comprising:
a prediction module, configured to predict, according to the positions of the eyes in the current frame image, the positions of the eyes in the next frame image using a constructed Kalman filter, obtaining predicted eye positions;
a search module, configured to, when the next frame image becomes the current frame image, search the current frame image using the predicted eye positions to obtain a potential target whose luminance distribution matches a target template, wherein the target template is derived from an initial template constructed from a received initial frame image; and
a tracking module, configured to take the potential target as the tracking target of the eyes, thereby tracking the eyes.
CN201410160852.7A 2014-04-18 2014-04-18 Human eye tracking method and device Pending CN103942542A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410160852.7A CN103942542A (en) 2014-04-18 2014-04-18 Human eye tracking method and device


Publications (1)

Publication Number Publication Date
CN103942542A true CN103942542A (en) 2014-07-23

Family

ID=51190206

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410160852.7A Pending CN103942542A (en) 2014-04-18 2014-04-18 Human eye tracking method and device

Country Status (1)

Country Link
CN (1) CN103942542A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7209574B2 (en) * 2003-01-31 2007-04-24 Fujitsu Limited Eye tracking apparatus, eye tracking method, eye state judging apparatus, eye state judging method and computer memory product
CN101833770A (en) * 2010-05-17 2010-09-15 西南交通大学 Driver eye movement characteristics handover detecting and tracing method based on light sensing
CN102314589A (en) * 2010-06-29 2012-01-11 比亚迪股份有限公司 Fast human-eye positioning method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Chen Yanqin, Luo Dayong: "Real-time Human Eye Tracking Based on Kalman Filtering and the Mean Shift Algorithm", Pattern Recognition and Artificial Intelligence *

Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016201683A1 (en) * 2015-06-18 2016-12-22 Wizr Cloud platform with multi camera synchronization
TWI707243B (en) * 2015-11-30 2020-10-11 大陸商中國銀聯股份有限公司 Method, apparatus, and system for detecting living body based on eyeball tracking
CN106874826A (en) * 2015-12-11 2017-06-20 腾讯科技(深圳)有限公司 Face key point-tracking method and device
US10452893B2 (en) 2015-12-11 2019-10-22 Tencent Technology (Shenzhen) Company Limited Method, terminal, and storage medium for tracking facial critical area
WO2017096753A1 (en) * 2015-12-11 2017-06-15 腾讯科技(深圳)有限公司 Facial key point tracking method, terminal, and nonvolatile computer readable storage medium
US11062123B2 (en) 2015-12-11 2021-07-13 Tencent Technology (Shenzhen) Company Limited Method, terminal, and storage medium for tracking facial critical area
CN106127145A (en) * 2016-06-21 2016-11-16 重庆理工大学 Pupil diameter and tracking
CN106127145B (en) * 2016-06-21 2019-05-14 重庆理工大学 Pupil diameter and tracking
CN108196729A (en) * 2018-01-16 2018-06-22 安徽慧视金瞳科技有限公司 A kind of finger tip point rapid detection method based on infrared video
CN108335364A (en) * 2018-01-23 2018-07-27 北京易智能科技有限公司 A kind of three-dimensional scenic display methods based on line holographic projections
CN112749604A (en) * 2019-10-31 2021-05-04 Oppo广东移动通信有限公司 Pupil positioning method and related device and product
CN110910422A (en) * 2019-11-13 2020-03-24 北京环境特性研究所 Target tracking method and device, electronic equipment and readable storage medium
CN113256691A (en) * 2021-05-11 2021-08-13 广州织点智能科技有限公司 Target tracking method, device, equipment and storage medium

Similar Documents

Publication Publication Date Title
CN103942542A (en) Human eye tracking method and device
US10380763B2 (en) Hybrid corner and edge-based tracking
CN102324030B (en) Target tracking method and system based on image block characteristics
US9621779B2 (en) Face recognition device and method that update feature amounts at different frequencies based on estimated distance
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
US20180189577A1 (en) Systems and methods for lane-marker detection
CN104794733A (en) Object tracking method and device
CN104616318A (en) Moving object tracking method in video sequence image
CN102831382A (en) Face tracking apparatus and method
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN104091349A (en) Robust target tracking method based on support vector machine
EP2788838A1 (en) Method and apparatus for identifying a gesture based upon fusion of multiple sensor signals
CN103150740A (en) Method and system for moving target tracking based on video
EP2704056A2 (en) Image processing apparatus, image processing method
CN109448025B (en) Automatic tracking and track modeling method for short-path speed skating athletes in video
CN106952294B (en) A kind of video tracing method based on RGB-D data
CN105869178A (en) Method for unsupervised segmentation of complex targets from dynamic scene based on multi-scale combination feature convex optimization
CN103870796A (en) Eye sight evaluation method and device
CN103391430B (en) DSP (digital signal processor) based relevant tracking method and special device
CN106570892B (en) A kind of moving target active tracking method based on edge enhancing template matching
CN107301657B (en) A kind of video target tracking method considering target movable information
Li et al. Monocular long-term target following on uavs
CN104915969A (en) Template matching tracking method based on particle swarm optimization
CN103677274A (en) Interactive projection method and system based on active vision
CN101814137A (en) Driver fatigue monitor system based on infrared eye state identification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20140723