CN103761519A - Non-contact sight-line tracking method based on self-adaptive calibration - Google Patents

Non-contact sight-line tracking method based on self-adaptive calibration

Info

Publication number
CN103761519A
CN103761519A (application CN201310719654.5A)
Authority
CN
China
Prior art keywords
alpha
point
image
screen
pupil
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201310719654.5A
Other languages
Chinese (zh)
Other versions
CN103761519B (en)
Inventor
王轩
韩楷
张自力
于成龙
李鑫鑫
张加佳
刘猛
赵海楠
李晔
漆舒汉
关键
张江涛
刘博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Graduate School Harbin Institute of Technology
Original Assignee
Shenzhen Graduate School Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Graduate School Harbin Institute of Technology filed Critical Shenzhen Graduate School Harbin Institute of Technology
Priority to CN201310719654.5A
Publication of CN103761519A
Application granted
Publication of CN103761519B
Legal status: Active

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a non-contact sight-line tracking method based on self-adaptive calibration. A light-spot feature extraction method combining the BFS algorithm, geometric image features, and gray-level features matches each light spot accurately to its corresponding light source. A fitting method that detects edge points with a one-dimensional edge detection operator and iterates least-squares ellipse fitting, removing noisy points until the ellipse center is fixed, finally yields an accurate pupil center. In addition, a dynamically self-adaptive calibration method is provided that effectively improves the precision of the existing spatial mapping model.

Description

A non-contact sight-line tracking method based on adaptive calibration
Technical field
The invention belongs to the field of computer vision and relates generally to a non-contact gaze-tracking technique based on adaptive calibration.
Background technology
Humans are a species that relies heavily on vision. Computing a person's point of gaze enables applications in many areas, including psychology, industrial engineering, and commercial advertising. For example, gaze-tracking technology can be used for:
1) Attention analysis: analyzing which web page content most interests a user, discovering an infant's interests during development, and so on.
2) Human-computer interaction: the eyes can replace the hands for various operations, such as assisting the disabled, assisted driving, controlling smart appliances, and turning web or slide pages. As gaze-tracking technology permeates daily life, it will greatly improve people's quality of life.
Summary of the invention
Existing gaze-tracking techniques based on a CCD (Charge-Coupled Device) camera extract the pupil inaccurately, locate the coordinates of the eye's fixation imprecisely, and impose too many restrictions on posture. To address these deficiencies, the present invention uses the gray-level distribution of the image for fast coarse localization of the eye. The invention proposes a light-spot feature extraction method combining the BFS (Breadth-First Search) algorithm, geometric image features, and gray-level features, which matches each spot accurately to its corresponding light source; a fitting method that uses a one-dimensional edge detection operator and iterated least-squares ellipse fitting, removing noise until the ellipse center is fixed, to finally obtain an accurate pupil center; and a dynamic self-adaptive calibration method that effectively improves the precision of the existing spatial mapping model.
The invention provides a non-contact sight-line tracking method based on adaptive calibration, comprising the following steps:
Step 1: convert the captured color image to a grayscale image and denoise it;
Step 2: analyze the lowest gray-level region of the image, binarize the image with this gray level as the threshold, and coarsely locate the pupil center $(x_{pupil}, y_{pupil})$ in the binary image from all points $(x_i, y_i)$ $(i = 1 \ldots N)$ below the threshold, where $N$ is the number of points in the histogram whose gray level is below the threshold, per formulas (1) and (2):
$$x_{pupil} = \frac{1}{N}\sum_{n=1}^{N} x_n \qquad (1)$$
$$y_{pupil} = \frac{1}{N}\sum_{n=1}^{N} y_n \qquad (2)$$
Step 3: using the coarse pupil center, set a rectangular area centered on it as the region of interest (ROI);
Step 4: set an initial threshold with the maximum between-class variance method, then evaluate the segmented ROI image; if the qualifying spots cannot all be found in the image segmented with the current threshold, automatically adjust the threshold, re-segment the image, and search again, until all qualifying spots are found;
Step 5: determine the center coordinates of the five spots with the centroid method;
Step 6: match spots to light sources based on their geometric relationship;
Step 7: using the corneal-reflection spot areas obtained during spot extraction, replace the gray levels of the spot-area point sets with the optimal adaptive threshold from spot extraction, then binarize the image by classical threshold segmentation, segmenting according to that optimal adaptive threshold, to obtain the approximate pupil region;
Step 8: extract the edge point set of the pupil;
Step 9: obtain the pupil center coordinates by ellipse fitting: first fit an ellipse to the candidate points by least squares, then remove candidate edge points lying too far from the ellipse center; repeat the fit in a loop until a stable ellipse center is obtained, and this stable ellipse center is the center coordinate of the pupil;
Step 10: compute the gaze point;
Step 11: calibrate the gaze point.
As a further improvement of the present invention, in step 1, noise is removed with median filtering.
As a further improvement of the present invention, in step 2, histogram analysis is used to find the lowest gray-level region of the image.
As a further improvement of the present invention, in step 3, the rectangular area ranges from a rectangle of 70×70 pixels to a rectangle of 90×90 pixels.
As a further improvement of the present invention, in step 4, spots are searched with the breadth-first method.
As a further improvement of the present invention, in step 8, the edge point set of the pupil is extracted as follows: after the spot-area gray levels are replaced with a gray value below the threshold, compute the centroid of the pupil region, scan for one-dimensional edge points in all directions from this centroid, and find the point sets in the binary image that match the gray sequence [1 1 1 0 0 0]; these are the edge point set of the pupil.
As a further improvement of the present invention, in step 9, a candidate edge point is removed as too far from the ellipse center when its distance exceeds that of all other edge points by 5-10 pixels; the center is considered stable when it changes by less than 3 pixels.
As a further improvement of the present invention, in step 10 the gaze point is computed as follows:
From the four computed screen spots and the pupil center coordinates, spatial coordinates are mapped using the cross-ratio invariance principle of projective geometry, where p denotes the pupil center, g the gaze point of the eye on the screen, and V1, V2, V3, V4 are the virtual points corresponding to the spots produced by the near-infrared light sources IR1, IR2, IR3, IR4 at the four corners of the screen;
$U_{V1}, U_{V2}, U_{V3}, U_{V4}$ are the mapping points of the virtual points V1, V2, V3, V4 on the image plane, so these four points are also virtual, while the actual spots Gt1, Gt2, Gt3, Gt4 on the eye correspond to the image points $U_{R1}, U_{R2}, U_{R3}, U_{R4}$; a fixed proportional relationship, formula (7), holds between the spots captured in the image and the virtual points on the image plane, where $\alpha$ is a theoretical constant approximately equal to 2.0, $U_{RP}$ and $U_{Ri}$ can be computed from the captured image, and $U_{RP}$ is the spot produced by the near-infrared light source near the camera's optical axis; thus, with $\alpha$ fixed at 2.0, $U_{V1}, U_{V2}, U_{V3}, U_{V4}$ can be computed by formula (7), which establishes the relationship between virtual points and spots,
$$U_{Vi} = U_{RP} + \alpha(U_{Ri} - U_{RP}),\quad i = 1, 2, 3, 4 \qquad (7)$$
by cross-ratio invariance, the cross ratio of IR1IR2 equals that of V1V2, and the cross ratio of V1V2 equals that of $U_{V1}U_{V2}$, so the cross ratio of IR1IR2 equals that of $U_{V1}U_{V2}$;
let the four virtual-point coordinates on the image plane be $U_{Vi} = (x_i^V, y_i^V)$ $(i = 1, 2, 3, 4)$ and extend these four points to the intersection points in Fig. 4(a), whose coordinates are $U_{Mi} = (x_i^M, y_i^M)$ $(i = 1, 2, 3, 4)$; the cross ratio of $U_{V1}U_{V2}$ is given by (8) and that of $U_{V2}U_{V3}$ by (9),
$$CR^{x}_{image} = \frac{(x_1^V y_1^M - x_1^M y_1^V)(x_2^M y_2^V - x_2^V y_2^M)}{(x_1^V y_2^M - x_2^M y_1^V)(x_1^M y_2^V - x_2^V y_1^M)} \qquad (8)$$
$$CR^{y}_{image} = \frac{(x_2^V y_3^M - x_3^M y_2^V)(x_4^M y_3^V - x_3^V y_4^M)}{(x_2^V y_4^M - x_4^M y_2^V)(x_3^M y_3^V - x_3^V y_3^M)} \qquad (9)$$
let the gaze point of the eye on the screen be $g(x_g, y_g)$ and the screen width and height be $w$ and $h$; the screen plane contains $M1(x_g, 0)$ and $M2(w/2, 0)$; the cross ratio of IR1IR2 is given by (10) and that of IR1 and IR4 by (11); the gaze point g of the eye is solved from the cross-ratio equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$,
$$CR^{x}_{screen} = \frac{(w - \tfrac{w}{2})\,x_g}{(w - x_g)\,\tfrac{w}{2}} = \frac{x_g}{w - x_g} \qquad (10)$$
$$CR^{y}_{screen} = \frac{(h - \tfrac{h}{2})\,y_g}{(h - y_g)\,\tfrac{h}{2}} = \frac{y_g}{h - y_g} \qquad (11)$$
As a further improvement of the present invention, the condition in step 4 is: the number of pixels in a single connected region whose gray level exceeds the threshold is greater than N, where the value of N depends on the eye-image size and resolution. For example, in an eye image of 768×576 pixels, N is 100.
As a further improvement of the present invention, in step 11 the gaze point is calibrated as follows:
The screen area is divided evenly into n rectangular regions of equal size, and the centers of all the rectangles form the calibration point set; the coordinates of these calibration points are determined by the sizes of the screen and the rectangles, and the value of n depends on the screen size and the desired accuracy,
the calibration point set is C, per (12); when the subject fixates calibration point $C_i$, the actually computed gaze coordinate is $E_i$, per (13); the points in (14) are the spots on the image plane produced by the near-infrared light sources at the four corners of the screen, the point in (15) is the spot on the image plane produced by the near-infrared light source near the camera's optical axis, and (16) is the virtual point corresponding to $U_{RP}$,
$$C_i = (x_i^C, y_i^C),\quad i = 1, 2, \ldots, n \qquad (12)$$
$$E_i = (x_i^E, y_i^E),\quad i = 1, 2, \ldots, n \qquad (13)$$
$$U_{Ri} = (x_i^R, y_i^R),\quad i = 1, 2, \ldots, n \qquad (14)$$
$$U_{RP} = (x_R^P, y_R^P) \qquad (15)$$
$$U_{VP} = (x_V^P, y_V^P) \qquad (16)$$
in extracting the α-parameter matrix, one α parameter is extracted per screen subregion at a time, and the parameters do not interfere with one another; initially, α is assigned 2.0, from which a rough gaze point of the subject can be computed, and an optimal α parameter is derived in reverse from the difference between this rough gaze point and the calibration point; from formula (7), formula (17) follows, and substituting (17) into (8) and (9) leaves α as the only unknown, so the α parameter corresponding to the current region is computed by solving the equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$,
$$U_{Vi} = \bigl(x_R^P + \alpha(x_i^R - x_R^P),\; y_R^P + \alpha(y_i^R - y_R^P)\bigr) \qquad (17)$$
the α parameter corresponding to each calibration point $C_i$ is stored in a matrix denoted αMatrix, and $C_i$ is likewise stored in a matrix denoted CMatrix ($C_{11} = C_1$, $C_{12} = C_2$, …, $C_{st} = C_n$), with $n = s \cdot t$ and $U_R = \{U_{R1}, U_{R2}, U_{R3}, U_{R4}, U_{RP}\}$, where s and t are the numbers of rows and columns of subregions into which the screen is divided and $U_R$ is the spot set obtained in real time from the image plane; formula (18) expresses computing the α parameter for $C_{ij}$ from the current $U_R$ set; with s = 4 and t = 4, the screen is divided into 16 subregions of equal size, and αMatrix (shown in (20)) can be computed from a known CMatrix (shown in (19)) by formula (18),
$$\alpha_{ij} = F(C_{ij}, U_R),\quad i = 1 \ldots s,\ j = 1 \ldots t \qquad (18)$$
$$CMatrix = \begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} \\ C_{21} & C_{22} & C_{23} & C_{24} \\ C_{31} & C_{32} & C_{33} & C_{34} \\ C_{41} & C_{42} & C_{43} & C_{44} \end{pmatrix} \qquad (19)$$
$$\alpha Matrix = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \alpha_{14} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \alpha_{24} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \alpha_{34} \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44} \end{pmatrix} \qquad (20)$$
after all α parameters in the subject's αMatrix have been extracted, the fixation experiment can proceed; each local subregion of the screen (centered at $C_{ij}$) has a corresponding α parameter in αMatrix; initially, the α parameter at the center of αMatrix is set as the current α parameter, and when the subject's current gaze lands in a given subregion, the current α parameter is dynamically switched to that subregion's α parameter in αMatrix, ensuring that the current α parameter is always the optimal α parameter adapted to the current local subregion.
The beneficial effects of the invention are as follows:
The invention proposes a spot feature extraction method combining the BFS algorithm, geometric image features, and gray-level features, which finds the spots accurately and matches them exactly to the corresponding light sources; it fits the pupil with a one-dimensional edge operator and iterated least-squares ellipse fitting, each iteration filtering out edge points far from the currently fitted pupil center until the pupil center is stable; and it proposes a dynamic self-adaptive calibration method that effectively improves the precision of the existing spatial mapping model.
Description of the drawings
Fig. 1 is a structural diagram of the correspondence between the light sources and the spots in the system of the invention;
Fig. 2 shows a pupil ellipse-fitting result of the invention;
Fig. 3 is a diagram of the spatial mapping relationship of the invention;
Fig. 4a shows the virtual points on the image plane, and Fig. 4b shows the points on the screen;
Fig. 5 shows the distribution of the calibration points on the screen (n = 4×4).
Detailed description
The invention is further described below with reference to the accompanying drawings.
Existing gaze-tracking equipment is complex, restricts posture too much, requires head-mounted special equipment, and suffers from low precision and slow processing. To remedy these deficiencies, and building on prior inventions, a new method of realizing gaze tracking is proposed that uses five near-infrared light sources for corneal reflection. The method requires no headgear of any kind, adapts to natural head movement, and computes gaze positions with higher precision. The gray-level distribution of the image is used for fast coarse localization of the eye. A spot feature extraction method combining the BFS algorithm, geometric image features, and gray-level features is proposed that matches each spot exactly to its corresponding light source. A one-dimensional edge detection operator and iterated least-squares ellipse fitting remove noise until the ellipse center is fixed, finally yielding an accurate pupil center. The cross-ratio invariance principle maps the coordinates to compute the gaze point, and a dynamic self-adaptive calibration method effectively improves the precision of the existing spatial mapping model. With these techniques, an eye-controlled mouse system is implemented to demonstrate the effect of gaze tracking.
1 Feature extraction
1.1 Eye region localization
In a gaze-tracking system, accurate eye feature extraction in the early stage guarantees the accuracy of gaze estimation in the later stage, so feature extraction is particularly important. Moreover, to ensure that a gaze-tracking system runs in real time, methods of excessive time complexity cannot be used, so a method that locates the eye region quickly must be designed. Based on the morphological and gray-level features of the eye, the invention adopts a fast localization algorithm.
(1) Median-filter denoising
First, the captured color image is converted to grayscale. In the eye image, eyebrows, eyelashes, and marks on the skin may all interfere with processing, and noise degrades image quality and hinders feature extraction, so the eye image must be denoised immediately after capture. After testing, median filtering was selected to remove the noise.
(2) Eye histogram analysis
Histogram analysis then finds the lowest gray-level region of the image (which is also the first significant peak gray level), and the image is binarized with this gray level as the threshold. All points $(x_i, y_i)$ $(i = 1 \ldots N)$ below the threshold, where $N$ is the number of points in the histogram whose gray level is below the threshold, coarsely locate the pupil center $(x_{pupil}, y_{pupil})$ in the binary image, per formulas (1) and (2).
$$x_{pupil} = \frac{1}{N}\sum_{n=1}^{N} x_n \qquad (1)$$
$$y_{pupil} = \frac{1}{N}\sum_{n=1}^{N} y_n \qquad (2)$$
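The coarse localization above reduces to a thresholded centroid. A minimal Python sketch, assuming the threshold (the first significant histogram crest) has already been selected; the function name and signature are illustrative, not part of the patent:

```python
import numpy as np

def coarse_pupil_center(gray, threshold):
    """Centroid of all below-threshold pixels, per formulas (1) and (2)."""
    # Points (x_i, y_i) whose gray level falls below the threshold
    ys, xs = np.nonzero(gray < threshold)
    if xs.size == 0:
        return None
    # (1): mean of x_n; (2): mean of y_n
    return float(xs.mean()), float(ys.mean())
```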
(3) Eye region localization
Using the coarse pupil center, an 80×80-pixel rectangle centered on it is set as the region of interest (ROI). This narrows the scope of image processing so that only the region of interest is operated on, improving image-processing efficiency.
1.2 Feature spot extraction
Next, the spot center coordinates are computed. The invention proposes a spot feature extraction method combining the BFS algorithm, geometric image features, and gray-level features, which matches each spot exactly to its corresponding light source.
(1) Adaptive threshold segmentation
First, the maximum between-class variance method (Otsu's algorithm) sets an initial threshold, and the segmented ROI image is then evaluated; if the qualifying spots cannot all be found in the image segmented with the current threshold, the threshold is adjusted automatically, the image re-segmented, and the search repeated, until all qualifying spots are found.
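A sketch of this adaptive threshold loop in Python with OpenCV. Otsu's method supplies the initial threshold; the downward step of 5 gray levels and the `min_area` blob condition are illustrative assumptions, and OpenCV's connected-component labeling stands in here for the BFS search described in subsection (2):

```python
import cv2
import numpy as np

def segment_spots(roi, min_area=100, step=5):
    """Binarize an 8-bit grayscale ROI so that five qualifying spots appear."""
    # Initial threshold from Otsu's maximum between-class variance method
    t, binary = cv2.threshold(roi, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    while t > 0:
        n, labels, stats, _ = cv2.connectedComponentsWithStats(binary)
        spots = [i for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area]
        if len(spots) >= 5:
            return t, binary              # all qualifying spots found
        t -= step                         # adjust the threshold and segment again
        _, binary = cv2.threshold(roi, t, 255, cv2.THRESH_BINARY)
    return None
```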
(2) Breadth-first spot search
For the spot search, the breadth-first algorithm is adopted, because searching for a spot is a process of continually expanding the traversal scope to find the optimum, which exactly matches breadth-first traversal; searching the image breadth-first therefore finds the five largest connected spot areas.
(3) Determining the center coordinates of the five spots with the centroid method
For the five light-source corneal-reflection spots found by the breadth-first search of (2), the centroid method yields the center coordinates of the five spot areas, which are the spot centers.
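A sketch combining the breadth-first spot search with the centroid method, assuming a binary ROI image whose nonzero pixels are above the spot threshold; in practice only the five largest regions would be kept:

```python
from collections import deque
import numpy as np

def bfs_spot_centers(binary, min_area=100):
    """BFS over 4-connected bright regions; return each region's centroid."""
    h, w = binary.shape
    visited = np.zeros_like(binary, dtype=bool)
    centers = []
    for sy, sx in zip(*np.nonzero(binary)):
        if visited[sy, sx]:
            continue
        queue, region = deque([(sy, sx)]), []
        visited[sy, sx] = True
        while queue:                                  # breadth-first traversal
            y, x = queue.popleft()
            region.append((y, x))
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if 0 <= ny < h and 0 <= nx < w and binary[ny, nx] and not visited[ny, nx]:
                    visited[ny, nx] = True
                    queue.append((ny, nx))
        if len(region) >= min_area:                   # qualifying spot area
            ys, xs = zip(*region)
            # Centroid of the region = spot center
            centers.append((sum(xs) / len(xs), sum(ys) / len(ys)))
    return centers
```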
(4) Matching spots and light sources by geometric relationship
For the particular problem of the invention, the following feature is observed: in the eye's ROI region, the four spots produced by the near-infrared light sources at the four corners of the screen are each nearest to one of the four corners of the ROI region, while the spot produced by the near-infrared light source near the CCD camera's optical axis is not nearest to any of the four ROI corners. Based on this feature, a method of identifying the five spots by their distances to the ROI region is proposed, which extracts the correct correspondence between the five spots and the light sources quickly and accurately (as shown in Fig. 1).
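A sketch of this geometric matching rule, assuming five spot centers in ROI coordinates; the assignment of IR1-IR4 to particular ROI corners is an assumption for illustration, since the actual correspondence depends on the imaging geometry shown in Fig. 1:

```python
def match_spots_to_sources(centers, roi_w, roi_h):
    """Match each corner source to its nearest spot; the leftover spot
    is attributed to the on-axis source near the camera."""
    corners = [(0, 0), (roi_w, 0), (roi_w, roi_h), (0, roi_h)]  # IR1..IR4 order assumed
    remaining = list(centers)                                   # expects 5 spot centers
    matched = {}
    for i, c in enumerate(corners):
        nearest = min(remaining,
                      key=lambda p: (p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2)
        matched[f"IR{i + 1}"] = nearest
        remaining.remove(nearest)
    matched["IR_axis"] = remaining[0]   # the fifth, on-axis spot (U_RP)
    return matched
```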
1.3 Pupil center localization
After the spot center coordinates are determined, the pupil center must be located accurately. As another key feature of gaze tracking based on the corneal-reflection principle, the quality of pupil detection plays a vital role in the result of the spatial mapping. Combining the characteristics of corneal-reflection gaze tracking and building on prior inventions, the invention adopts a method that detects pupil edge points in the eye ROI image, fits the pupil using the filtered edge points, and finally determines the pupil center from the fitted pupil edge.
In corneal-reflection gaze tracking, because of the corneal-reflection spots on the pupil, directly binarizing the image with a gray threshold leaves holes in the pupil region, and these holes hinder edge detection. In fact, the point sets of the spot areas are already known from the corneal-reflection spot extraction, so before extracting the pupil, the gray values of the spot-area points are all replaced with the optimal adaptive threshold from spot extraction; binarization then produces no holes in the corneal-reflection spot areas. On this basis, the invention proposes accurate pupil localization using a one-dimensional edge detection operator and iterated least-squares pupil fitting.
(1) Image preprocessing
Using the corneal-reflection spot areas obtained during spot extraction, the gray levels of the spot-area point sets are replaced with the optimal adaptive threshold from spot extraction. The image is then binarized by classical threshold segmentation, segmenting according to that optimal adaptive threshold, which yields the approximate pupil region. Some residual noise may remain, but it can be eliminated by median filtering.
(2) Edge point extraction
To fit the pupil with an ellipse, its edge points must be obtained from the image. The edge detection adopted in corneal-reflection gaze tracking is mainly based on a one-dimensional edge detection operator. This one-dimensional edge detection is aimed at ellipse fitting and, to meet the system's real-time requirement, performs well in reducing the amount of computation while still guaranteeing edge-detection accuracy.
After the spot-area gray levels are replaced with a gray value below the threshold, the centroid of the pupil region can be computed, and one-dimensional edge points are scanned in all directions from this centroid. Because the gray level changes abruptly at the pupil edge, after the image is binarized with the pupil gray threshold, the breadth-first search (BFS) algorithm used in spot extraction finds the point sets in the binary image that match the gray sequence [1 1 1 0 0 0]; these are the edge point set of the pupil.
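A sketch of the one-dimensional radial edge scan, assuming a binary image in which pupil pixels are 1 and the pupil-region centroid is known; the number of rays and maximum scan radius are illustrative parameters:

```python
import numpy as np

def pupil_edge_points(binary, cx, cy, n_rays=36, max_r=60):
    """Cast rays from the pupil centroid; record where [1 1 1 0 0 0] occurs."""
    h, w = binary.shape
    edges = []
    for k in range(n_rays):
        theta = 2.0 * np.pi * k / n_rays
        samples = []
        for r in range(max_r):
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if not (0 <= x < w and 0 <= y < h):
                break
            samples.append(int(binary[y, x]))
            # The 1 -> 0 transition inside [1 1 1 0 0 0] marks the pupil edge
            if len(samples) >= 6 and samples[-6:] == [1, 1, 1, 0, 0, 0]:
                re = r - 3   # last pupil sample along this ray
                edges.append((int(round(cx + re * np.cos(theta))),
                              int(round(cy + re * np.sin(theta)))))
                break
    return edges
```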
(3) Pupil ellipse fitting
First, a least-squares ellipse is fitted to the candidate points. Candidate edge points too far from the ellipse center (exceeding all other edge points by 5-10 pixels) are removed. The fit is repeated in a loop until a stable ellipse center is obtained (an absolutely constant center is unlikely; in most cases a change of less than 3 pixels counts as stable). This stable ellipse center is the center coordinate of the pupil. Fig. 2 shows a result obtained with the adopted algorithm.
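A sketch of the iterated least-squares fit, using OpenCV's cv2.fitEllipse as the least-squares ellipse fitter; the 8-pixel outlier margin (within the 5-10 pixel range quoted above) and the 3-pixel stability tolerance follow the text, while the iteration cap is an assumption:

```python
import cv2
import numpy as np

def fit_pupil_center(edge_points, margin=8.0, tol=3.0, max_iter=20):
    """Loop: fit ellipse, drop far-out edge points, stop when center is stable."""
    pts = np.asarray(edge_points, dtype=np.float32)
    prev = None
    while max_iter > 0 and len(pts) >= 5:            # fitEllipse needs >= 5 points
        (cx, cy), axes, angle = cv2.fitEllipse(pts)
        if prev is not None and np.hypot(cx - prev[0], cy - prev[1]) < tol:
            return (cx, cy)                          # stable center = pupil center
        prev = (cx, cy)
        d = np.hypot(pts[:, 0] - cx, pts[:, 1] - cy)
        # Drop a point whose center distance exceeds that of all the
        # remaining points by more than `margin` pixels
        second = np.partition(d, -2)[-2] if len(d) > 1 else d[0]
        pts = pts[d <= second + margin]
        max_iter -= 1
    return prev
```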
2 Spatial mapping
2.1 Gaze point computation
From the four computed screen spots and the pupil center coordinates, the invention maps spatial coordinates using the cross-ratio invariance principle of projective geometry. The computation model is shown in Fig. 3, where p denotes the pupil center and g the gaze point of the eye on the screen. Assume the corneal tangent plane shown in the figure exists on the corneal surface. V1, V2, V3, V4 are the virtual points corresponding to the spots produced by the near-infrared light sources IR1, IR2, IR3, IR4 at the four corners of the screen. These virtual points are introduced to combine the spatial mapping relationship with the cross-ratio principle; they are not spots, merely mapping points of the spots that satisfy certain conditions in the ideal case.
In Fig. 3, $U_{V1}, U_{V2}, U_{V3}, U_{V4}$ are the mapping points of the virtual points V1, V2, V3, V4 on the image plane, so these four points are also virtual, while the actual spots Gt1, Gt2, Gt3, Gt4 on the eye correspond to the image points $U_{R1}, U_{R2}, U_{R3}, U_{R4}$. A fixed proportional relationship, formula (7), holds between the spots captured in the image (real points) and the virtual points on the image plane, where $\alpha$ is a theoretical constant approximately equal to 2.0 and $U_{RP}$ and $U_{Ri}$ can be computed from the captured image ($U_{RP}$ is the spot produced by the near-infrared light source near the camera's optical axis). Thus, with $\alpha$ fixed at 2.0, $U_{V1}, U_{V2}, U_{V3}, U_{V4}$ can be computed by formula (7), which establishes the relationship between virtual points and spots.
$$U_{Vi} = U_{RP} + \alpha(U_{Ri} - U_{RP}),\quad i = 1, 2, 3, 4 \qquad (7)$$
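Formula (7) is a one-line vector computation. A sketch, assuming the four corner-spot image coordinates and the on-axis spot U_RP have already been extracted:

```python
import numpy as np

def virtual_points(u_r, u_rp, alpha=2.0):
    """Formula (7): U_Vi = U_RP + alpha * (U_Ri - U_RP).

    u_r:  (4, 2) array of corner-spot image coordinates U_R1..U_R4
    u_rp: image coordinates of the on-axis spot U_RP
    """
    u_r = np.asarray(u_r, dtype=float)
    u_rp = np.asarray(u_rp, dtype=float)
    return u_rp + alpha * (u_r - u_rp)
```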
With the relationship between the virtual points and the spots known, the spatial mapping of Fig. 3 can be carried out directly through the virtual points (and indirectly through the spots) using the cross-ratio principle. Figs. 4a and 4b show the relationships among the points used when the cross-ratio principle performs the spatial mapping. In Fig. 3, by cross-ratio invariance, the cross ratio of IR1IR2 equals that of V1V2, and the cross ratio of V1V2 equals that of $U_{V1}U_{V2}$, so the cross ratio of IR1IR2 equals that of $U_{V1}U_{V2}$. The cross-ratio principle thus connects the image plane and the screen plane directly through the corneal tangent plane; subsequent computation need not consider the tangent plane and proceeds analytically in the image plane and the screen plane alone.
All the point coordinates involved in the image captured by the camera can be computed, so the cross ratio of IR1IR2 can be computed. Let the four virtual-point coordinates on the image plane be $U_{Vi} = (x_i^V, y_i^V)$ $(i = 1, 2, 3, 4)$ and extend these four points to the intersection points in Fig. 4a, whose coordinates are $U_{Mi} = (x_i^M, y_i^M)$ $(i = 1, 2, 3, 4)$. The cross ratio of $U_{V1}U_{V2}$ is given by (8) and that of $U_{V2}U_{V3}$ by (9).
$$CR^{x}_{image} = \frac{(x_1^V y_1^M - x_1^M y_1^V)(x_2^M y_2^V - x_2^V y_2^M)}{(x_1^V y_2^M - x_2^M y_1^V)(x_1^M y_2^V - x_2^V y_1^M)} \qquad (8)$$
$$CR^{y}_{image} = \frac{(x_2^V y_3^M - x_3^M y_2^V)(x_4^M y_3^V - x_3^V y_4^M)}{(x_2^V y_4^M - x_4^M y_2^V)(x_3^M y_3^V - x_3^V y_3^M)} \qquad (9)$$
Let the gaze point of the eye on the screen be $g(x_g, y_g)$ and the screen width and height be $w$ and $h$. Similarly, as shown in Fig. 4b, the plane of the screen contains $M1(x_g, 0)$ and $M2(w/2, 0)$. The cross ratio of IR1IR2 is given by (10) and that of IR1 and IR4 by (11). The gaze point g of the eye is solved from the cross-ratio equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$.
$$CR^{x}_{screen} = \frac{(w - \tfrac{w}{2})\,x_g}{(w - x_g)\,\tfrac{w}{2}} = \frac{x_g}{w - x_g} \qquad (10)$$
$$CR^{y}_{screen} = \frac{(h - \tfrac{h}{2})\,y_g}{(h - y_g)\,\tfrac{h}{2}} = \frac{y_g}{h - y_g} \qquad (11)$$
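Since (10) and (11) simplify to $x_g/(w - x_g)$ and $y_g/(h - y_g)$, the cross-ratio equations invert in closed form. A sketch of the final solve, assuming the image-plane cross ratios from (8) and (9) have already been computed:

```python
def gaze_from_cross_ratios(cr_x, cr_y, w, h):
    """Solve the cross-ratio equations for the on-screen gaze point g."""
    # (10): cr_x = x_g / (w - x_g)  =>  x_g = w * cr_x / (1 + cr_x)
    # (11): cr_y = y_g / (h - y_g)  =>  y_g = h * cr_y / (1 + cr_y)
    return w * cr_x / (1.0 + cr_x), h * cr_y / (1.0 + cr_y)
```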
2.2 Gaze point calibration
The dynamic self-adaptive calibration method proposed by the invention adapts to different screen areas for each user with an α-parameter matrix; the α parameter in current use is the optimal parameter obtained from the matrix in real time. So that the target fixation points (calibration points) used for verification are distributed relatively evenly over the screen, the screen area is divided evenly into n rectangular regions of equal size. The centers of all the rectangles form the calibration point set, and the coordinates of these calibration points can be determined from the sizes of the screen and the rectangles. The value of n depends on the screen size and the desired accuracy: for a given rectangle size, the larger n is, the higher the final precision; but more calibration points mean a more laborious calibration process, and the more laborious the calibration, the longer it takes. With n = 4×4, the distribution of the calibration points on the screen is as shown in Fig. 5.
The calibration point set is C, per (12). When the subject fixates calibration point $C_i$, the actually computed gaze coordinate is $E_i$, per (13). The points in (14) are the spots on the image plane produced by the near-infrared light sources at the four corners of the screen. The point in (15) is the spot on the image plane produced by the near-infrared light source near the camera's optical axis. (16) is the virtual point corresponding to $U_{RP}$.
$$C_i = (x_i^C, y_i^C),\quad i = 1, 2, \ldots, n \qquad (12)$$
$$E_i = (x_i^E, y_i^E),\quad i = 1, 2, \ldots, n \qquad (13)$$
$$U_{Ri} = (x_i^R, y_i^R),\quad i = 1, 2, \ldots, n \qquad (14)$$
$$U_{RP} = (x_R^P, y_R^P) \qquad (15)$$
$$U_{VP} = (x_V^P, y_V^P) \qquad (16)$$
In extracting the α-parameter matrix, one α parameter is extracted per screen subregion at a time, and the parameters do not interfere with one another. Initially, α is assigned 2.0, from which a rough gaze point of the subject can be computed; an optimal α parameter is then derived in reverse from the difference between this rough gaze point and the calibration point (the central point of the subregion). From formula (7), formula (17) follows, and substituting (17) into (8) and (9) leaves α as the only unknown, so the α parameter corresponding to the current region is computed by solving the equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$.
$$U_{Vi} = \bigl(x_R^P + \alpha(x_i^R - x_R^P),\; y_R^P + \alpha(y_i^R - y_R^P)\bigr) \qquad (17)$$
The α parameter corresponding to each calibration point $C_i$ is stored in a matrix denoted αMatrix, and $C_i$ is likewise stored in a matrix denoted CMatrix ($C_{11} = C_1$, $C_{12} = C_2$, …, $C_{st} = C_n$), with $n = s \cdot t$ and $U_R = \{U_{R1}, U_{R2}, U_{R3}, U_{R4}, U_{RP}\}$. Here s and t are the numbers of rows and columns of subregions into which the screen is divided, and $U_R$ is the spot set obtained in real time from the image plane; formula (18) expresses computing the α parameter for $C_{ij}$ from the current $U_R$ set. With s = 4 and t = 4, the screen is divided into 16 subregions of equal size, and αMatrix (shown in (20)) can be computed from a known CMatrix (shown in (19)) by formula (18).
$$\alpha_{ij} = F(C_{ij}, U_R),\quad i = 1 \ldots s,\ j = 1 \ldots t \qquad (18)$$
$$CMatrix = \begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} \\ C_{21} & C_{22} & C_{23} & C_{24} \\ C_{31} & C_{32} & C_{33} & C_{34} \\ C_{41} & C_{42} & C_{43} & C_{44} \end{pmatrix} \qquad (19)$$
$$\alpha Matrix = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \alpha_{14} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \alpha_{24} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \alpha_{34} \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44} \end{pmatrix} \qquad (20)$$
After all α parameters in the subject's αMatrix have been extracted, the fixation experiment can proceed. Each local subregion of the screen (centered at $C_{ij}$) has a corresponding α parameter in αMatrix. Initially, the α parameter at the center of αMatrix is set as the current α parameter. When the subject's current gaze lands in a given subregion, the current α parameter is dynamically switched to that subregion's α parameter in αMatrix, ensuring that the current α parameter is always the optimal α parameter adapted to the current local subregion. This strategy solves the problem of global offset, and its precision is also higher than that of a single globally optimal parameter.
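A sketch of the dynamic α lookup during tracking, assuming a calibrated s×t αMatrix and a screen divided into equal rectangles; the previous frame's gaze estimate selects the subregion whose α becomes the current α:

```python
import numpy as np

def current_alpha(alpha_matrix, gaze_xy, w, h):
    """Select the subregion containing the gaze point and return its alpha."""
    s, t = alpha_matrix.shape                      # s rows, t columns of subregions
    col = min(int(gaze_xy[0] * t / w), t - 1)      # column index of the subregion
    row = min(int(gaze_xy[1] * s / h), s - 1)      # row index of the subregion
    return alpha_matrix[row, col]                  # optimal alpha for this subregion
```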
The above further describes the invention with reference to specific preferred embodiments, but the specific implementation of the invention shall not be deemed limited to these descriptions. For a person of ordinary skill in the art to which the invention belongs, simple deductions or substitutions made without departing from the concept of the invention shall all be deemed to fall within the protection scope of the invention.

Claims (10)

1. A non-contact sight-line tracking method based on adaptive calibration, characterized in that it comprises the following steps:
Step 1: convert the captured color image to a grayscale image and denoise it;
Step 2: analyze the lowest gray-level region of the image, binarize the image with this gray level as the threshold, and coarsely locate the pupil center $(x_{pupil}, y_{pupil})$ in the binary image from all points $(x_i, y_i)$ $(i = 1 \ldots N)$ below the threshold, where $N$ is the number of points in the histogram whose gray level is below the threshold, per formulas (1) and (2):
$$x_{pupil} = \frac{1}{N}\sum_{n=1}^{N} x_n \qquad (1)$$
$$y_{pupil} = \frac{1}{N}\sum_{n=1}^{N} y_n \qquad (2)$$
Step 3: using the coarse pupil center, set a rectangular area centered on it as the region of interest (ROI);
Step 4: set an initial threshold with the maximum between-class variance method, then evaluate the segmented ROI image; if the qualifying spots cannot all be found in the image segmented with the current threshold, automatically adjust the threshold, re-segment the image, and search again, until all qualifying spots are found;
Step 5: determine the center coordinates of the five spots with the centroid method;
Step 6: match spots to light sources based on their geometric relationship;
Step 7: using the corneal-reflection spot areas obtained during spot extraction, replace the gray levels of the spot-area point sets with the optimal adaptive threshold from spot extraction, then binarize the image by classical threshold segmentation, segmenting according to that optimal adaptive threshold, to obtain the approximate pupil region;
Step 8: extract the edge point set of the pupil;
Step 9: obtain the pupil center coordinates by ellipse fitting: first fit an ellipse to the candidate points by least squares, then remove candidate edge points lying too far from the ellipse center; repeat the fit in a loop until a stable ellipse center is obtained, and this stable ellipse center is the center coordinate of the pupil;
Step 10: compute the gaze point;
Step 11: calibrate the gaze point.
2. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 1, noise is removed with median filtering.
3. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 2, histogram analysis is used to find the lowest gray-level region of the image.
4. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 3, the rectangular area ranges from a rectangle of 70×70 pixels to a rectangle of 90×90 pixels.
5. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 4, spots are searched with the breadth-first method.
6. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 8, the edge point set of the pupil is extracted as follows: after the spot-area gray levels are replaced with a gray value below the threshold, compute the centroid of the pupil region, scan for one-dimensional edge points in all directions from this centroid, and find the point sets in the binary image that match the gray sequence [1 1 1 0 0 0]; these are the edge point set of the pupil.
7. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that: in step 9, a candidate edge point is removed as too far from the ellipse center when its distance exceeds that of all other edge points by 5-10 pixels; the center is considered stable when it changes by less than 3 pixels.
8. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that in step 10 the gaze point is computed as follows:
from the four computed screen spots and the pupil center coordinates, spatial coordinates are mapped using the cross-ratio invariance principle of projective geometry, where p denotes the pupil center, g the gaze point of the eye on the screen, and V1, V2, V3, V4 are the virtual points corresponding to the spots produced by the near-infrared light sources IR1, IR2, IR3, IR4 at the four corners of the screen;
$U_{V1}, U_{V2}, U_{V3}, U_{V4}$ are the mapping points of the virtual points V1, V2, V3, V4 on the image plane, so these four points are also virtual, while the actual spots Gt1, Gt2, Gt3, Gt4 on the eye correspond to the image points $U_{R1}, U_{R2}, U_{R3}, U_{R4}$; a fixed proportional relationship, formula (7), holds between the spots captured in the image and the virtual points on the image plane, where $\alpha$ is a theoretical constant approximately equal to 2.0, $U_{RP}$ and $U_{Ri}$ can be computed from the captured image, and $U_{RP}$ is the spot produced by the near-infrared light source near the camera's optical axis; thus, with $\alpha$ fixed at 2.0, $U_{V1}, U_{V2}, U_{V3}, U_{V4}$ can be computed by formula (7), which establishes the relationship between virtual points and spots,
$$U_{Vi} = U_{RP} + \alpha(U_{Ri} - U_{RP}),\quad i = 1, 2, 3, 4 \qquad (7)$$
by cross-ratio invariance, the cross ratio of IR1IR2 equals that of V1V2, and the cross ratio of V1V2 equals that of $U_{V1}U_{V2}$, so the cross ratio of IR1IR2 equals that of $U_{V1}U_{V2}$;
let the four virtual-point coordinates on the image plane be $U_{Vi} = (x_i^V, y_i^V)$ $(i = 1, 2, 3, 4)$ and extend these four points to the intersection points in Fig. 4(a), whose coordinates are $U_{Mi} = (x_i^M, y_i^M)$ $(i = 1, 2, 3, 4)$; the cross ratio of $U_{V1}U_{V2}$ is given by (8) and that of $U_{V2}U_{V3}$ by (9),
$$CR^{x}_{image} = \frac{(x_1^V y_1^M - x_1^M y_1^V)(x_2^M y_2^V - x_2^V y_2^M)}{(x_1^V y_2^M - x_2^M y_1^V)(x_1^M y_2^V - x_2^V y_1^M)} \qquad (8)$$
$$CR^{y}_{image} = \frac{(x_2^V y_3^M - x_3^M y_2^V)(x_4^M y_3^V - x_3^V y_4^M)}{(x_2^V y_4^M - x_4^M y_2^V)(x_3^M y_3^V - x_3^V y_3^M)} \qquad (9)$$
let the gaze point of the eye on the screen be $g(x_g, y_g)$ and the screen width and height be $w$ and $h$; the screen plane contains $M1(x_g, 0)$ and $M2(w/2, 0)$; the cross ratio of IR1IR2 is given by (10) and that of IR1 and IR4 by (11); the gaze point g of the eye is solved from the cross-ratio equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$,
$$CR^{x}_{screen} = \frac{(w - \tfrac{w}{2})\,x_g}{(w - x_g)\,\tfrac{w}{2}} = \frac{x_g}{w - x_g} \qquad (10)$$
$$CR^{y}_{screen} = \frac{(h - \tfrac{h}{2})\,y_g}{(h - y_g)\,\tfrac{h}{2}} = \frac{y_g}{h - y_g} \qquad (11)$$
9. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that the condition in step 4 is: the number of pixels in a single connected region whose gray level exceeds the threshold is greater than N, where the value of N depends on the eye-image size and resolution.
10. The non-contact sight-line tracking method based on adaptive calibration according to claim 1, characterized in that in step 11 the gaze point is calibrated as follows:
the screen area is divided evenly into n rectangular regions of equal size, and the centers of all the rectangles form the calibration point set; the coordinates of these calibration points are determined by the sizes of the screen and the rectangles, and the value of n depends on the screen size and the desired accuracy,
the calibration point set is C, per (12); when the subject fixates calibration point $C_i$, the actually computed gaze coordinate is $E_i$, per (13); the points in (14) are the spots on the image plane produced by the near-infrared light sources at the four corners of the screen, the point in (15) is the spot on the image plane produced by the near-infrared light source near the camera's optical axis, and (16) is the virtual point corresponding to $U_{RP}$,
$$C_i = (x_i^C, y_i^C),\quad i = 1, 2, \ldots, n \qquad (12)$$
$$E_i = (x_i^E, y_i^E),\quad i = 1, 2, \ldots, n \qquad (13)$$
$$U_{Ri} = (x_i^R, y_i^R),\quad i = 1, 2, \ldots, n \qquad (14)$$
$$U_{RP} = (x_R^P, y_R^P) \qquad (15)$$
$$U_{VP} = (x_V^P, y_V^P) \qquad (16)$$
in extracting the α-parameter matrix, one α parameter is extracted per screen subregion at a time, and the parameters do not interfere with one another; initially, α is assigned 2.0, from which a rough gaze point of the subject can be computed, and an optimal α parameter is derived in reverse from the difference between this rough gaze point and the calibration point; from formula (7), formula (17) follows, and substituting (17) into (8) and (9) leaves α as the only unknown, so the α parameter corresponding to the current region is computed by solving the equations $CR^{x}_{image} = CR^{x}_{screen}$ and $CR^{y}_{image} = CR^{y}_{screen}$,
$$U_{Vi} = \bigl(x_R^P + \alpha(x_i^R - x_R^P),\; y_R^P + \alpha(y_i^R - y_R^P)\bigr) \qquad (17)$$
the α parameter corresponding to each calibration point $C_i$ is stored in a matrix denoted αMatrix, and $C_i$ is likewise stored in a matrix denoted CMatrix ($C_{11} = C_1$, $C_{12} = C_2$, …, $C_{st} = C_n$), with $n = s \cdot t$ and $U_R = \{U_{R1}, U_{R2}, U_{R3}, U_{R4}, U_{RP}\}$, where s and t are the numbers of rows and columns of subregions into which the screen is divided and $U_R$ is the spot set obtained in real time from the image plane; formula (18) expresses computing the α parameter for $C_{ij}$ from the current $U_R$ set; with s = 4 and t = 4, the screen is divided into 16 subregions of equal size, and αMatrix (shown in (20)) can be computed from a known CMatrix (shown in (19)) by formula (18),
$$\alpha_{ij} = F(C_{ij}, U_R),\quad i = 1 \ldots s,\ j = 1 \ldots t \qquad (18)$$
$$CMatrix = \begin{pmatrix} C_{11} & C_{12} & C_{13} & C_{14} \\ C_{21} & C_{22} & C_{23} & C_{24} \\ C_{31} & C_{32} & C_{33} & C_{34} \\ C_{41} & C_{42} & C_{43} & C_{44} \end{pmatrix} \qquad (19)$$
$$\alpha Matrix = \begin{pmatrix} \alpha_{11} & \alpha_{12} & \alpha_{13} & \alpha_{14} \\ \alpha_{21} & \alpha_{22} & \alpha_{23} & \alpha_{24} \\ \alpha_{31} & \alpha_{32} & \alpha_{33} & \alpha_{34} \\ \alpha_{41} & \alpha_{42} & \alpha_{43} & \alpha_{44} \end{pmatrix} \qquad (20)$$
after all α parameters in the subject's αMatrix have been extracted, the fixation experiment can proceed; each local subregion of the screen (centered at $C_{ij}$) has a corresponding α parameter in αMatrix; initially, the α parameter at the center of αMatrix is set as the current α parameter, and when the subject's current gaze lands in a given subregion, the current α parameter is dynamically switched to that subregion's α parameter in αMatrix, ensuring that the current α parameter is always the optimal α parameter adapted to the current local subregion.
CN201310719654.5A 2013-12-20 2013-12-20 Non-contact sight-line tracking method based on self-adaptive calibration Active CN103761519B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310719654.5A CN103761519B (en) 2013-12-20 2013-12-20 Non-contact sight-line tracking method based on self-adaptive calibration

Publications (2)

Publication Number Publication Date
CN103761519A true CN103761519A (en) 2014-04-30
CN103761519B CN103761519B (en) 2017-05-17

Family

ID=50528755

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310719654.5A Active CN103761519B (en) 2013-12-20 2013-12-20 Non-contact sight-line tracking method based on self-adaptive calibration

Country Status (1)

Country Link
CN (1) CN103761519B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130188130A1 (en) * 2012-01-25 2013-07-25 Canon Kabushiki Kaisha Ophthalmologic apparatus, control method therefore, and recording medium storing method
CN102749991A (en) * 2012-04-12 2012-10-24 广东百泰科技有限公司 Non-contact free space eye-gaze tracking method suitable for man-machine interaction

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAI HAN et al.: "AN EFFICIENT VISUAL TRACKING METHOD BASED ON SINGLE CCD CAMERA", Proceedings of the 2012 International Conference on Machine Learning and Cybernetics, Xi'an *
张闯 et al.: "一种新的基于瞳孔-角膜反射技术的视线追踪方法" (A new gaze tracking method based on the pupil-cornea reflection technique), 《计算机学报》 (Chinese Journal of Computers) *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104506806A (en) * 2014-12-24 2015-04-08 天津市亚安科技股份有限公司 Device and system for adjusting optical axes of auxiliary light source equipment and video acquisition equipment to be coaxial
CN104506806B (en) * 2014-12-24 2017-09-01 天津市亚安科技有限公司 Adjust secondary light source equipment and the apparatus and system of video capture device light shaft coaxle
CN107533634A (en) * 2015-03-23 2018-01-02 控制辐射系统有限公司 Eyes tracking system
CN104751467B (en) * 2015-04-01 2017-11-24 电子科技大学 It is a kind of that the point estimation method and its system are stared based on dynamic double ratio
CN104751467A (en) * 2015-04-01 2015-07-01 电子科技大学 Gaze point estimation method based on dynamic cross ratio and system thereof
CN104966359A (en) * 2015-07-20 2015-10-07 京东方科技集团股份有限公司 Anti-theft alarm system and method
CN105955465A (en) * 2016-04-25 2016-09-21 华南师范大学 Desktop portable sight line tracking method and apparatus
CN108108013A (en) * 2016-11-25 2018-06-01 深圳纬目信息技术有限公司 A kind of Eye-controlling focus method
CN106778641A (en) * 2016-12-23 2017-05-31 北京七鑫易维信息技术有限公司 Gaze estimation method and device
CN106778641B (en) * 2016-12-23 2020-07-03 北京七鑫易维信息技术有限公司 Sight estimation method and device
CN107223082A (en) * 2017-04-21 2017-09-29 深圳前海达闼云端智能科技有限公司 A kind of robot control method, robot device and robot device
US11325255B2 (en) 2017-04-21 2022-05-10 Cloudminds Robotics Co., Ltd. Method for controlling robot and robot device
CN107223082B (en) * 2017-04-21 2020-05-12 深圳前海达闼云端智能科技有限公司 Robot control method, robot device and robot equipment
CN107105333A (en) * 2017-04-26 2017-08-29 电子科技大学 A kind of VR net casts exchange method and device based on Eye Tracking Technique
CN107357429A (en) * 2017-07-10 2017-11-17 京东方科技集团股份有限公司 For determining the method, equipment and computer-readable recording medium of sight
US11294455B2 (en) 2017-07-10 2022-04-05 Beijing Boe Optoelectronics Technology Co., Ltd. Method and device for determining gaze placement, computer readable storage medium
WO2019010959A1 (en) * 2017-07-10 2019-01-17 京东方科技集团股份有限公司 Method and device for determining sight line, and computer readable storage medium
US11003243B2 (en) 2017-11-01 2021-05-11 Beijing 7Invensun Technology Co., Ltd. Calibration method and device, storage medium and processor
CN108038884A (en) * 2017-11-01 2018-05-15 北京七鑫易维信息技术有限公司 calibration method, device, storage medium and processor
WO2019154012A1 (en) * 2018-02-12 2019-08-15 北京七鑫易维信息技术有限公司 Method and apparatus for matching light sources with light spots
CN108427926A (en) * 2018-03-16 2018-08-21 西安电子科技大学 A kind of pupil positioning method in gaze tracking system
CN109194952B (en) * 2018-10-31 2020-09-22 清华大学 Head-mounted eye movement tracking device and eye movement tracking method thereof
CN109194952A (en) * 2018-10-31 2019-01-11 清华大学 Wear-type eye movement tracing equipment and its eye movement method for tracing
CN109766818A (en) * 2019-01-04 2019-05-17 京东方科技集团股份有限公司 Pupil center's localization method and system, computer equipment and readable storage medium storing program for executing
US11315281B2 (en) 2019-01-31 2022-04-26 Beijing Boe Optoelectronics Technology Co., Ltd. Pupil positioning method and apparatus, VR/AR apparatus and computer readable medium
CN109857254A (en) * 2019-01-31 2019-06-07 京东方科技集团股份有限公司 Pupil positioning method and device, VR/AR equipment and computer-readable medium
CN109947253A (en) * 2019-03-25 2019-06-28 京东方科技集团股份有限公司 The method for establishing model of eyeball tracking, eyeball tracking method, equipment, medium
US11594075B2 (en) 2019-03-29 2023-02-28 Tobii Ab Holographic eye imaging device
CN112114659A (en) * 2019-06-19 2020-12-22 托比股份公司 Method and system for determining a fine point of regard for a user
CN110806885B (en) * 2019-09-29 2021-05-25 深圳市火乐科技发展有限公司 MCU (microprogrammed control Unit) firmware updating method, intelligent projector and related product
CN110806885A (en) * 2019-09-29 2020-02-18 深圳市火乐科技发展有限公司 MCU (microprogrammed control Unit) firmware updating method, intelligent projector and related product
CN111027502A (en) * 2019-12-17 2020-04-17 Oppo广东移动通信有限公司 Eye image positioning method and device, electronic equipment and computer storage medium
CN111061373A (en) * 2019-12-18 2020-04-24 京东方科技集团股份有限公司 Eyeball tracking calibration method and device and wearable equipment
CN111061373B (en) * 2019-12-18 2024-04-16 京东方科技集团股份有限公司 Eyeball tracking calibration method and device and wearable equipment
CN114020155A (en) * 2021-11-05 2022-02-08 沈阳飞机设计研究所扬州协同创新研究院有限公司 High-precision sight line positioning method based on eye tracker
CN114264997A (en) * 2021-12-14 2022-04-01 武汉联影生命科学仪器有限公司 Gradient sensitivity calibration method and device and magnetic resonance equipment
CN114264997B (en) * 2021-12-14 2024-03-22 武汉联影生命科学仪器有限公司 Gradient sensitivity calibration method and device and magnetic resonance equipment

Also Published As

Publication number Publication date
CN103761519B (en) 2017-05-17

Similar Documents

Publication Publication Date Title
CN103761519A (en) Non-contact sight-line tracking method based on self-adaptive calibration
CN105138965B (en) A kind of near-to-eye sight tracing and its system
CN103390152B (en) Sight tracking system suitable for human-computer interaction and based on system on programmable chip (SOPC)
CN105955465A (en) Desktop portable sight line tracking method and apparatus
CN101561710B (en) Man-machine interaction method based on estimation of human face posture
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN103530618A (en) Non-contact sight tracking method based on corneal reflex
CN109643366A (en) For monitoring the method and system of the situation of vehicle driver
CN104978012B (en) One kind points to exchange method, apparatus and system
CN102830793A (en) Sight tracking method and sight tracking device
CN106066696A (en) The sight tracing compensated based on projection mapping correction and point of fixation under natural light
CN105739702A (en) Multi-posture fingertip tracking method for natural man-machine interaction
CN105224285A (en) Eyes open and-shut mode pick-up unit and method
CN111596767B (en) Gesture capturing method and device based on virtual reality
CN103366157A (en) Method for judging line-of-sight distance of human eye
CN103076876A (en) Character input device and method based on eye-gaze tracking and speech recognition
CN106503644A (en) Glasses attribute detection method based on edge projection and color characteristic
Chansri et al. Reliability and accuracy of Thai sign language recognition with Kinect sensor
CN104794441A (en) Human face feature extracting method based on active shape model and POEM (patterns of oriented edge magnituedes) texture model in complicated background
CN103778406A (en) Object detection method and device
Bei et al. Sitting posture detection using adaptively fused 3D features
CN111339982A (en) Multi-stage pupil center positioning technology implementation method based on features
CN106599873A (en) Figure identity identification method based on three-dimensional attitude information
CN109409215A (en) Front vehicles based on depth convolutional neural networks partly block the detection method of human body
WO2020228224A1 (en) Face part distance measurement method and apparatus, and vehicle-mounted terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant