CN102749991B - Contactless free-space gaze tracking method for human-computer interaction - Google Patents

Contactless free-space gaze tracking method for human-computer interaction

Info

Publication number
CN102749991B
CN102749991B (application CN201210107182.3A)
Authority
CN
China
Prior art keywords
eye
image
human eye
pupil
algorithm
Prior art date
Legal status
Active
Application number
CN201210107182.3A
Other languages
Chinese (zh)
Other versions
CN102749991A (en)
Inventor
黄若浩
Current Assignee
GUANGDONG BETTER TECHNOLOGY Co Ltd
Original Assignee
GUANGDONG BETTER TECHNOLOGY Co Ltd
Priority date
Filing date
Publication date
Application filed by GUANGDONG BETTER TECHNOLOGY Co Ltd
Priority to CN201210107182.3A
Publication of CN102749991A
Application granted
Publication of CN102749991B

Landscapes

  • Eye Examination Apparatus (AREA)

Abstract

The invention provides a contactless, free-space gaze tracking method suitable for human-computer interaction, comprising the following steps: real-time localization and tracking of the face and eyes; extraction of eye-movement biological feature information; establishment of an eye movement model based on ocular biological features; and construction of a mapping model between the eye movement model and the object of gaze. The invention draws on several intersecting fields, including image processing, computer vision, and pattern recognition, and has wide application in next-generation human-computer interaction, assistance for the disabled, aerospace, sports, vehicle and aircraft operation, virtual reality, and gaming. Beyond improving the independence and quality of life of disabled persons and contributing to a harmonious society, it has practical significance for advancing human-computer interaction in China and the capability for independent innovation in high-technology fields such as unmanned systems.

Description

Contactless free-space gaze tracking method for human-computer interaction
Technical field
The present invention relates to the technical field of eye tracking applied to human-computer interaction, and in particular to a contactless, free-space gaze tracking method for human-computer interaction developed on the basis of computer image processing and pattern recognition.
Background art
Traditional human-computer interaction is "computer-centered": the user must obey the conventions of the machine before it can be used at all, and specialized training is sometimes required. As technology develops and spreads, the information society increasingly demands "human-centered" interaction, in which the computer serves everyone and interacting with it feels as natural as interacting with another person. Moreover, computers are now embedded throughout household appliances, living spaces, and everyday devices in three-dimensional space, taking ever more diverse forms rather than the bulky, fixed machines of the past. Human-computer interaction must therefore allow the user to operate a computer conveniently anywhere in three-dimensional space, instead of sitting in front of it and working through a keyboard and mouse.
Indeed, over the history of human-computer interaction, from the earliest punched cards, through the keyboard and mouse as the dominant mode, to the current rise of human senses and actions (voice, handwriting, gesture, gaze, expression, and so on) as input modalities, interaction has evolved from humans adapting to the computer toward the computer continually adapting to humans. Enabling the computer to hear, see, speak, and feel is widely regarded as the main direction of future human-computer interaction.
For the computer to realize these capabilities, simple hand operation of a keyboard and mouse is clearly insufficient, so the other human sense organs are gradually being brought into the interaction. Gaze tracking is foremost among these techniques: its purpose is to infer, from where the user is looking, what interests or attracts the user, to identify the object being fixated, and to reveal the relations among objects. Early gaze tracking was applied mainly in psychological research and assistance for the disabled; it was later applied to usability engineering such as image compression and human-computer interaction.
Gaze tracking has broad application prospects. For example, it can enable paralyzed or quadriplegic users, and users unable to speak, to interact normally. Gaze can also control external devices and support multitasking: in a military setting, when a pilot has acquired a target but manual operations cannot keep up, the pilot can aim with the eyes and trigger the fire-control system by gaze, greatly improving operational efficiency. Research on gaze tracking spans multiple intersecting fields, and its results find wide application in aerospace, sports, and many other areas.
At present, however, the equipment needed to realize gaze tracking is bulky and heavy; it restricts the user's freedom of movement, interferes considerably with the user, and is inconvenient to use, and commercial products are generally expensive. All of this makes gaze tracking devices difficult to popularize. Reducing the hardware cost of gaze tracking systems and developing non-intrusive gaze tracking technology is therefore a clear development trend.
Summary of the invention
The object of the present invention is to overcome the deficiencies of the prior art by providing a contactless, free-space gaze tracking method suitable for human-computer interaction.
The present invention is achieved through the following technical solutions:
A contactless, free-space gaze tracking method suitable for human-computer interaction comprises the following steps.
1) Real-time face and eye localization and tracking
Images of the face and eyes are captured in real time by a conventional video tracking camera, and the face image is analyzed to locate the pupil. Specifically: a Viola face classifier is built to detect the face; within the face region, a Viola eye classifier is built to locate the eye region; then the pupil center is located by the image gray-projection method, progressively narrowing the image processing from the face region to the eye region and from the eye region to the pupil region.
The Viola face classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for face detection is established; Haar-like rectangular features based on the integral image are extracted from a prepared face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding the face classifier; skin-color matching is used as pre-processing.
The Viola eye classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for eye detection is established; Haar-like rectangular features based on the integral image are extracted from the face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding an eye classifier that can detect and locate the eyes accurately.
The pupil center is located by the image gray-projection method as follows:
The eye-region image is converted to a grayscale image of size m*n, and the horizontal and vertical gray projections are computed by the formulas:
Ph_y(y) = Σ_{x=0}^{n-1} I(x, y),
Ph_x(x) = Σ_{y=0}^{m-1} I(x, y).
Since the pupil is the darkest region, the vertical and horizontal gray projections each have a minimum at the pupil center, so the pupil center Q can be found as:
(x₀, y₀), where Ph_y(y₀) = Min{Ph_y(y)} and Ph_x(x₀) = Min{Ph_x(x)}.
2) Extraction of eye-movement biological feature information
The position of the eyes in the image is detected and acquired, and eye sub-images are extracted; as above, a Viola face classifier detects the face and a Viola eye classifier locates the eye region within it. Pupil movement information is then extracted by methods based on the corneal reflection principle and image processing: the pupil center is located by the gray-projection method, and an embedded hidden Markov model (EHMM) based on 2D-DCT features is built from the collected eye images and trained to discriminate eye states.
Eye-state recognition with the EHMM based on 2D-DCT features proceeds as follows:
The eye image is sampled and a 2D-DCT is applied to each sampling window; the low-frequency coefficients after the 2D-DCT form the observation vector sequence, and the EHMM parameters are initialized from the number of states and a uniform segmentation of the image. Pupil-based eye-movement information is then extracted: the eye image is re-segmented by a doubly nested Viterbi algorithm, the model parameters are re-estimated by the Baum-Welch algorithm, and the EHMM is trained, yielding an EHMM-based eye-state classifier. For recognition, an observation vector sequence is constructed from the eye image to be identified, the likelihood of that sequence is computed under each trained model, and the model with the maximum likelihood gives the class of the eye image.
3) Establishing the eye movement model based on ocular biological feature information
A model measuring the eyeball's center of rotation is established, together with a three-dimensional eye movement model based on two-dimensional pupil movement information and the rotation of the eyeball as an irregular spheroid. The two-dimensional vector from the Purkinje spot to the pupil center is defined as the pupil-corneal reflection vector, denoted P-CR. Eyeball information is acquired in real time by imaging the eye and, combined with P-CR, a three-dimensional direction vector from the eye to the object of gaze is generated.
4) Building the mapping model between the eye movement model and the object of gaze
Eye tracking is used to study the saccade-selection and fixation process of a person observing various real scenes and on-screen information, thereby obtaining the person's visual perception and its association mechanism. The transformation between the field-of-view coordinate system and the pupil coordinate system is established, the coordinates of the true fixation point in the field-of-view coordinate system are obtained, and the gaze point is computed; finally the fixation point is mapped onto the object the user is actually looking at. This completes the matching between the field of view (the actual gaze point) and the eye image, bringing the video tracking camera's field of view into correspondence with the eyes' field of view.
The eye movement model is built as follows:
A three-dimensional vector (representing the actual gaze direction) is formed by combining the two-dimensional pupil information with the eyeball shape information, and the eye movement model is established from it. Concretely: the eyeball radius is computed from the eye image information and the lens center is located, from which the three-dimensional direction vector from the eye to the gazed object is computed; then the center of the corneal spherical surface (O_cornea) is located by the Purkinje-spot method with image processing and refinement, and, combined with the two-dimensional plane information of the pupil center, a three-dimensional eye movement model is generated.
In step 4, the Purkinje spot method is used: an infrared LED is placed at each of the four corners of the screen as a light source, producing corneal reflections near the pupil, and each frame is acquired by a camera fitted with an optical filter. Four distinct bright spots appear around the pupil center in the captured eye image. Using geometric constraints in image processing, the edge map of the original image is first obtained with the Canny edge operator; a Hough transform then projects the eye image from the image plane into parameter space to find the centers of the spots, accurately locating the pupil center relative to the four bright spots. With the reflection spots as reference points, the coordinates of the pupil center are computed against them, from which the two-dimensional in-plane motion direction of the eyeball is determined.
Step 4 also includes a calibration method for the fixation point, as follows:
1) Building the mapping relation equation
Let vector y be the gaze point in the field-of-view reference frame, and let vector x be the projection of the pupil center in the (eye) reference frame. Let the function F(*) represent the transformation from x to y, and let P be the comprehensive parameter vector determined statistically during calibration, i.e. the initially unknown parameter vector of F(*). Then:
y=F(x,P);
The concrete form of F(x, P) is determined and the estimate P′ of the comprehensive parameter vector P is obtained, giving the estimated gaze point position y′:
y′ = F(x, P′).
2) Determining the comprehensive parameter vector P
To determine the estimate P′ of the comprehensive parameter vector P, a calibration algorithm based on least-squares curve fitting is adopted: a merit function is designed to measure, over a set of measurement data, the agreement between the measurements and the chosen parametric model; the model parameters are adjusted to minimize the merit function, yielding the best-fit parameters P.
Suppose P has M dimensions and there are N test points; the model with M adjustable parameters P_i (i = 1, 2, ..., M) is fitted to the N test data (x_i, y_i), i = 1, ..., N. Define a vector b with N components: b_i = y_i / R_i, i = 1, ..., N.
Here R_i is the measurement error of the i-th data point, with default value 1. For the vector P and the N data points: y_i(x_i) = Σ_k p_k X_k(x_i), i = 1, ..., N.
The X_k(x_i) are a set of basis functions. The matrix A = (a_ij)_{N×M} is then defined, whose elements are computed from the values of the M basis functions at the N coordinates x_i and the N measurement errors, that is: a_ij = X_j(x_i) / R_i.
The merit function is defined as χ² = |A·P − b|², and the parameter vector P is sought that minimizes χ². For an over-determined system, the optimal approximate solution in the least-squares sense can be obtained by SVD decomposition. When fitting the eye-movement measurement data by SVD least squares, the form of the fitting function can be specified as needed. Once the parameter vector P is found, the mapping model y = F(x, P) from the eye-movement information model to the object of gaze is obtained, and gaze tracking can be applied in actual human-computer interaction.
Compared with the prior art, the present invention has the following beneficial effects:
The invention provides a more creative method for determining the two-dimensional in-plane motion direction of the eyeball, so that eye-movement biological feature information can be extracted accurately, guaranteeing the data accuracy of gaze tracking based on that information. At the same time, representing the actual gaze direction by a three-dimensional vector formed from the combination of two-dimensional pupil information and eyeball shape information compensates for the error introduced by the eyeball being an irregular spheroid, a substantial breakthrough in the accuracy of gaze tracking. In addition, a dynamic calibration algorithm is applied, overcoming the requirement of static calibration that the subject's head remain absolutely still during eye-movement measurement, and the problem that the user's free head rotation disturbs gaze tracking. Taking the working mechanism of gaze tracking into account, the matching between the field of view (the actual gaze point) and the eye movement model is completed, effectively resolving the nonlinearity of the mapping between the field-of-view and pupil coordinate systems; a dynamic mapping model from the eye-movement information model to the object of gaze is constructed, remedying the low freedom of movement and weak practicality of current gaze tracking technology.
The present invention draws on several intersecting fields, including image processing, computer vision, and pattern recognition; its results find wide application in next-generation human-computer interaction, assistance for the disabled, aerospace, sports, vehicle and aircraft operation, virtual reality, and gaming, and have practical significance for improving the independence and quality of life of disabled persons, for building a harmonious society, and for advancing human-computer interaction in China and the capability for independent innovation in high-technology fields such as unmanned systems.
Brief description of the drawings
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings:
Figure 1 is an overall flow diagram of a specific embodiment of the present invention;
Figure 2 is a module diagram of the method of a specific embodiment of the present invention;
Figure 3 is a schematic diagram of the eyeball model involved in a specific embodiment of the present invention.
Detailed description of the embodiments
The present invention is described in further detail below with reference to the embodiments and the accompanying drawings, but embodiments of the present invention are not limited thereto.
As shown in Figures 1 to 3, a preferred embodiment of the present invention provides a contactless, free-space gaze tracking method suitable for human-computer interaction, comprising the following steps.
1) Real-time face and eye localization and tracking
Images of the face and eyes are captured in real time by a conventional video tracking camera, and the face image is analyzed to locate the pupil. The camera can collect face images in real time, and although the head moves continuously, the face and eyes can be localized and tracked in the images by existing image analysis and processing software.
Specifically, the pupil is located by image processing and pattern recognition. Because the eye region is small, locating the pupil directly over the entire image would cause problems such as a high false-detection rate and a large computational load. A Viola face classifier is therefore built to detect the face; within the face region, a Viola eye classifier is built to locate the eye region; then, building on the accurate localization of the eye region, the pupil center is located by the image gray-projection method, progressively narrowing the image processing from the face region to the eye region and from the eye region to the pupil region.
The Viola face classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for face detection is established; Haar-like rectangular features based on the integral image are extracted from a prepared face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding the face classifier; skin-color matching is used as pre-processing.
Ongoing research confirms that when the Viola algorithm is combined with skin-color matching as pre-processing, a more accurate face detection model is obtained under complex backgrounds and lighting conditions.
The Viola eye classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for eye detection is established; Haar-like rectangular features based on the integral image are extracted from the face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding an eye classifier that can detect and locate the eyes accurately.
The pupil center is located by the image gray-projection method as follows:
The eye-region image is converted to a grayscale image of size m*n, and the horizontal and vertical gray projections are computed by the formulas:
Ph_y(y) = Σ_{x=0}^{n-1} I(x, y),
Ph_x(x) = Σ_{y=0}^{m-1} I(x, y).
Since the pupil is the darkest region, the vertical and horizontal gray projections each have a minimum at the pupil center, so the pupil center Q can be found as:
(x₀, y₀), where Ph_y(y₀) = Min{Ph_y(y)} and Ph_x(x₀) = Min{Ph_x(x)}.
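A minimal numpy sketch of the gray-projection localization just described, assuming `eye_gray` is the m*n grayscale eye-region image (for example, one rectangle returned by the eye detector); it simply takes the minima of the row-wise and column-wise intensity sums, since the pupil is the darkest region.

```python
import numpy as np

def pupil_center_by_gray_projection(eye_gray):
    """Locate the pupil center as the minima of the horizontal and
    vertical gray projections Ph_x(x) and Ph_y(y)."""
    img = eye_gray.astype(np.float64)   # m x n grayscale image I(x, y)
    ph_y = img.sum(axis=1)              # Ph_y(y): sum over x for each row y
    ph_x = img.sum(axis=0)              # Ph_x(x): sum over y for each column x
    y0 = int(np.argmin(ph_y))           # darkest row  -> pupil row
    x0 = int(np.argmin(ph_x))           # darkest column -> pupil column
    return x0, y0                       # pupil center Q = (x0, y0)
```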
2) Extraction of eye-movement biological feature information
The position of the eyes in the image is detected and acquired, and eye sub-images are extracted; as above, a Viola face classifier detects the face and a Viola eye classifier locates the eye region within it.
The eye-movement features analyzed in the images include fixation, saccades, and smooth pursuit, all of which manifest mainly as movement of the pupil center. Combined with the Purkinje-spot method, pupil movement information is extracted by methods based on the corneal reflection principle and image processing: the pupil center is located by the gray-projection method, and an EHMM based on 2D-DCT features is built from the collected eye images and trained to discriminate eye states.
Eye-state recognition with the EHMM based on 2D-DCT features proceeds as follows:
The eye image is sampled and a 2D-DCT is applied to each sampling window; the low-frequency coefficients after the 2D-DCT form the observation vector sequence, and the EHMM parameters are initialized from the number of states and a uniform segmentation of the image. Pupil-based eye-movement information is then extracted: the eye image is re-segmented by a doubly nested Viterbi algorithm, the model parameters are re-estimated by the Baum-Welch algorithm, and the EHMM is trained, yielding an EHMM-based eye-state classifier. For recognition, an observation vector sequence is constructed from the eye image to be identified, the likelihood of that sequence is computed under each trained model, and the model with the maximum likelihood gives the class of the eye image.
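Full EHMM training (the doubly nested Viterbi segmentation plus Baum-Welch re-estimation) is too long to show here, but the construction of the observation vector sequence it starts from can be sketched compactly. The window size, step, and number of retained low-frequency coefficients below are illustrative assumptions, not values from the patent; scipy's `dctn` supplies the 2D-DCT.

```python
import numpy as np
from scipy.fft import dctn

def dct_observation_sequence(eye_gray, win=8, step=4, low=3):
    """Slide a window over the eye image, apply a 2D-DCT to each window,
    and keep a small low-frequency block as the observation vector."""
    h, w = eye_gray.shape
    observations = []
    for y in range(0, h - win + 1, step):
        row = []
        for x in range(0, w - win + 1, step):
            block = eye_gray[y:y + win, x:x + win].astype(np.float64)
            coeffs = dctn(block, norm="ortho")   # 2D-DCT of the window
            # The top-left corner of the DCT holds the low-frequency energy;
            # a low x low block of it becomes one observation vector.
            row.append(coeffs[:low, :low].ravel())
        observations.append(np.array(row))
    return observations   # one entry (a row of observation vectors) per window row
```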
3) Establishing the eye movement model based on ocular biological feature information
The gaze direction of the eye is the direction of the line connecting the eyeball center and the pupil center, and a change in gaze direction corresponds to the eyeball rotating through some angle about its center.
A model measuring the eyeball's center of rotation is established, together with a three-dimensional eye movement model based on two-dimensional pupil movement information and the rotation of the eyeball as an irregular spheroid. The two-dimensional vector from the Purkinje spot to the pupil center is defined as the pupil-corneal reflection vector, denoted P-CR. Eyeball information is acquired in real time by imaging the eye and, combined with P-CR, a three-dimensional direction vector from the eye to the object of gaze is generated.
4) Building the mapping model between the eye movement model and the object of gaze
Eye tracking is used to study the saccade-selection and fixation process of a person observing various real scenes and on-screen information, thereby obtaining the person's visual perception and its association mechanism. The transformation between the field-of-view coordinate system and the pupil coordinate system is established, the coordinates of the true fixation point in the field-of-view coordinate system are obtained, and the gaze point is computed; finally the fixation point is mapped onto the object the user is actually looking at. This completes the matching between the field of view (the actual gaze point) and the eye image, bringing the video tracking camera's field of view into correspondence with the eyes' field of view.
The eye movement model is built as follows:
The human eyeball is essentially a spheroid, and the eye's natural functions are realized through its different forms of motion. As a means of human-computer interaction, what matters is the target after the eye movement rather than the motion process itself. The eye movement model to be established is therefore the three-dimensional vector (representing the actual gaze direction) formed by combining the two-dimensional pupil information with the eyeball shape information. Concretely:
The eyeball radius is computed from the eye image information and the lens center is located, from which the three-dimensional direction vector from the eye to the gazed object can be computed. The eyeball, however, is not a perfect spheroid, so a conventional algorithm would suffer certain defects. Physiological studies show that during eye movement the center of rotation is not a fixed point but moves along a curve, known as the translational movement of the eyeball; studies also show that when the eyeball rotates within ±38° of the primary position, the extreme excursion of the rotational centroid is less than 2 mm, and within ±3° it is less than 0.2 mm, so for gaze tracking the eyeball's center of rotation can be taken as fixed. Physically, the eyeball is formed by two intersecting spheroids: the anterior one, about 1/6 of the volume, has a radius of curvature of about 8 mm, and the posterior one a radius of about 12 mm. The center of the small sphere is the center of the corneal spherical surface, and the line of sight runs from O_cornea in the figure to the actual fixation point (the visual axis), so this axis determines the gaze direction.
Building on the above, the physical structure of the eyeball is analyzed further and, using its characteristics, the center of the corneal spherical surface (O_cornea) is located by the Purkinje-spot method with image processing and refinement; combined with the two-dimensional plane information of the pupil center, a three-dimensional eye movement model is finally generated.
Gaze tracking based on the pupil-corneal reflection vector method and image processing has the advantage of being non-invasive and has progressed rapidly in recent years: the positional relationship between the glint that a near-infrared light source produces on the cornea and the pupil center determines the direction of pupil movement. Accordingly, in step 4 the Purkinje spot method is used: an infrared LED is placed at each of the four corners of the screen as a light source, producing corneal reflections near the pupil, and each frame is acquired by a camera fitted with an optical filter. Four distinct bright spots appear around the pupil center in the captured eye image. Using geometric constraints in image processing, the edge map of the original image is first obtained with the Canny edge operator; a Hough transform then projects the eye image from the image plane into parameter space to find the centers of the spots, accurately locating the pupil center relative to the four bright spots. With the reflection spots as reference points, the coordinates of the pupil center are computed against them, from which the two-dimensional in-plane motion direction of the eyeball is determined; a sketch of this glint detection follows.
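A minimal OpenCV sketch of this glint localization under stated assumptions: `eye_gray` is the infrared eye image, `pupil` is the pupil center from the gray-projection step, the Hough radius range and thresholds are illustrative values rather than parameters from the patent, and the reference point is taken here as the mean of the detected glints.

```python
import numpy as np
import cv2

def glints_and_pcr(eye_gray, pupil):
    """Locate the corneal glints with a Canny + Hough pipeline and form
    the pupil-corneal reflection (P-CR) vector against their mean."""
    # The method runs Canny first and then a Hough transform; OpenCV's
    # HOUGH_GRADIENT variant performs the Canny edge stage internally
    # (param1 is its upper threshold), so the raw image is passed in.
    circles = cv2.HoughCircles(eye_gray, cv2.HOUGH_GRADIENT, dp=1,
                               minDist=8, param1=100, param2=10,
                               minRadius=1, maxRadius=6)
    if circles is None:
        return None
    glints = circles[0][:4, :2]           # (x, y) of up to four bright spots
    reference = glints.mean(axis=0)       # reflection spots as reference point
    pcr = np.asarray(pupil, np.float64) - reference
    return glints, pcr                    # P-CR drives the 2D motion estimate
```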
Step 4 also includes the calibration of the fixation point. Since gaze tracking in human-computer interaction requires completing the mapping from the gaze direction to a point vector on the computer screen, fixation-point calibration is a key point in putting gaze tracking to practical use: it is the precondition for normal system operation and the key to a practical system. Specifically:
1) Building the mapping relation equation
Because data extraction during actual eye movement suffers distortion, the mapping relation F(*) between the eye camera's image coordinate system and the screen coordinate system contains essential nonlinearities, so F(*) cannot be described by a simple linear relation. To determine F(*), let vector y be the gaze point in the field-of-view reference frame and vector x the projection of the pupil center in the (eye) reference frame; let the function F(*) represent the transformation from x to y, and let P be the comprehensive parameter vector determined statistically during calibration, i.e. the initially unknown parameter vector of F(*). Then:
y=F(x,P);
The concrete form of F(x, P) is determined and the estimate P′ of the comprehensive parameter vector P is obtained, giving the estimated gaze point position y′:
y′ = F(x, P′).
Further, the form of the function that makes |y − y′| → 0 is determined jointly by the two transformations described in step 1). There is essential nonlinearity between the eye's rotation angle and the characteristic point after eye imaging, the pupil center position; the difficulty of guaranteeing that the head stays absolutely still during testing distorts the data; the sphericity of the eye and factors such as the position and intensity of the light source make sufficient and even illumination of the eye hard to guarantee; and lapses of attention during testing also distort the data. All of this means F(*) cannot be described by a simple linear relation. The task of the mapping relation equation of the present invention is therefore to take the system's working mechanism into account, complete the matching between the field of view (the actual gaze point) and the eye image, resolve as far as possible the nonlinearities arising in measurement, and "restore" the system's measured values into the field-of-view reference frame within a given accuracy range.
2) Determining the comprehensive parameter vector P
The key link in calibration is determining the estimate P′ of the comprehensive parameter vector P. Concretely, a calibration algorithm based on least-squares curve fitting is adopted: a merit function is designed to measure, over a set of measurement data, the agreement between the measurements and the chosen parametric model; the model parameters are adjusted to minimize the merit function, yielding the best-fit parameters P.
Suppose P has M dimensions and there are N test points; the model with M adjustable parameters P_i (i = 1, 2, ..., M) is fitted to the N test data (x_i, y_i), i = 1, ..., N. Define a vector b with N components: b_i = y_i / R_i, i = 1, ..., N.
Here R_i is the measurement error of the i-th data point, with default value 1. For the vector P and the N data points: y_i(x_i) = Σ_k p_k X_k(x_i), i = 1, ..., N.
The X_k(x_i) are a set of basis functions. The matrix A = (a_ij)_{N×M} is then defined, whose elements are computed from the values of the M basis functions at the N coordinates x_i and the N measurement errors, that is: a_ij = X_j(x_i) / R_i.
The merit function is defined as χ² = |A·P − b|², and the parameter vector P is sought that minimizes χ². For an over-determined system, the optimal approximate solution in the least-squares sense can be obtained by SVD decomposition. When fitting the eye-movement measurement data by SVD least squares, the form of the fitting function can be specified as needed. Once the parameter vector P is found, the mapping model y = F(x, P) from the eye-movement information model to the object of gaze is obtained, and gaze tracking can be applied in actual human-computer interaction.
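A numpy sketch of this SVD least-squares calibration under stated assumptions: the basis functions X_k are chosen here as a second-order polynomial in the pupil coordinates (a common choice for gaze mapping, not one mandated by the patent), the N calibration points come from fixating known screen targets, and every measurement error R_i is left at its default value 1, so a_ij = X_j(x_i).

```python
import numpy as np

def design_matrix(px, py):
    """Basis functions X_k evaluated at each pupil point (illustrative
    second-order polynomial basis; with R_i = 1, a_ij = X_j(x_i))."""
    return np.column_stack([np.ones_like(px), px, py,
                            px * py, px**2, py**2])

def calibrate(pupil_pts, screen_pts):
    """Fit P minimizing chi^2 = |A*P - b|^2; np.linalg.lstsq solves the
    over-determined system by SVD, one parameter column per screen axis."""
    px, py = pupil_pts[:, 0], pupil_pts[:, 1]
    A = design_matrix(px, py)                 # N x M matrix A
    P, *_ = np.linalg.lstsq(A, screen_pts, rcond=None)
    return P                                  # M x 2: parameters for x and y

def map_gaze(P, pupil_pt):
    """Apply y = F(x, P) to map a pupil point to a screen point."""
    px, py = pupil_pt
    return design_matrix(np.array([px]), np.array([py])) @ P

# Example: nine calibration targets (pupil coords paired with known screen
# coords); the values here are placeholders for real measurements.
rng = np.random.default_rng(0)
pupil = rng.random((9, 2))
screen = rng.random((9, 2)) * 1000
P = calibrate(pupil, screen)
print(map_gaze(P, pupil[0]))
```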
Application of the present invention in actual human-computer interaction:
By adopting the method for the invention, come operating computer or other equipment by eyes, current application can be embodied in: the controlling functions 1. realizing eye-controlled mouse, as: control text reading and webpage rolling, play music and other multimedia; 2. according to the needs of embody rule, various eye movement characteristics is corresponded in the concrete operating function of software, as in electric athletic game, just can transfer a certain stunt function etc., for game enthusiasts provides a kind of interactive mode of fashion by eyes; 3. from the physiological medical science feature of people, people is once occur that fatigue is easy to move rule from eye reflect, as frequency of wink, blink, doze off and divert attention, by the physiological medical science feature of above combine with technique fatigue, realize carrying out detection early warning to driver, important or dangerous post, voice reminder staff take care, tired alarm can be sent to the Surveillance center of enterprise by network simultaneously, or by 3G wireless network, alarm is sent in the mobile phone of managerial personnel, facilitate Enterprises Leader grasp important information in time and make a policy.

Claims (8)

1. A contactless free-space gaze tracking method suitable for human-computer interaction, characterized by comprising the following steps:
1) Real-time face and eye localization and tracking:
Images of the face and eyes are captured in real time by a video tracking camera, and the face image is analyzed to locate the pupil; specifically: a Viola face classifier is built to detect the face; within the face region, a Viola eye classifier is built to locate the eye region; then the pupil center is located by the image gray-projection method, progressively narrowing the image processing from the face region to the eye region and from the eye region to the pupil region;
2) Extraction of eye-movement biological feature information:
The position of the eyes in the image is detected and acquired, and eye sub-images are extracted; a Viola face classifier detects the face and a Viola eye classifier locates the eye region within it; pupil movement information is then extracted by methods based on the corneal reflection principle and image processing: the pupil center is located by the gray-projection method, and an EHMM based on 2D-DCT features is built from the collected eye images and trained to discriminate eye states;
3) Establishing the eye movement model based on ocular biological feature information:
A model measuring the eyeball's center of rotation is established, together with a three-dimensional eye movement model based on two-dimensional pupil movement information and the rotation of the eyeball as an irregular spheroid; the two-dimensional vector from the Purkinje spot to the pupil center is defined as the pupil-corneal reflection vector, denoted P-CR; eyeball information is acquired in real time by imaging the eye and, combined with P-CR, a three-dimensional direction vector from the eye to the object of gaze is generated;
4) Building the mapping model between the eye movement model and the object of gaze:
Eye tracking is used to study the saccade-selection and fixation process of a person observing various real scenes and on-screen information, thereby obtaining the person's visual perception and its association mechanism; the transformation between the field-of-view coordinate system and the pupil coordinate system is established, the coordinates of the true fixation point in the field-of-view coordinate system are obtained, and the gaze point is computed; finally the fixation point is mapped onto the object the user is actually looking at, completing the matching between the actual gaze point and the eye image and bringing the video tracking camera's field of view into correspondence with the eyes' field of view.
2. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that the Viola face classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for face detection is established; Haar-like rectangular features based on the integral image are extracted from a prepared face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding the face classifier; skin-color matching is used as pre-processing.
3. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that the Viola eye classifier is built as follows:
A biometric recognition model based on a cascade search algorithm for eye detection is established; Haar-like rectangular features based on the integral image are extracted from the face image database; the AdaBoost training algorithm is used to train a classifier on this database, yielding an eye classifier with which the eyes are detected and located.
4. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that the pupil center is located by the image gray-projection method as follows:
The eye-region image is converted to a grayscale image of size m*n, and the horizontal and vertical gray projections are computed by the formulas:
Ph_y(y) = Σ_{x=0}^{n-1} I(x, y),
Ph_x(x) = Σ_{y=0}^{m-1} I(x, y);
since the pupil is the darkest region, the vertical and horizontal gray projections each have a minimum at the pupil center, so the pupil center Q can be found as:
(x₀, y₀), where Ph_y(y₀) = Min{Ph_y(y)} and Ph_x(x₀) = Min{Ph_x(x)}.
5. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that the EHMM method based on 2D-DCT features in step 2 is specifically:
The eye image is sampled and a 2D-DCT is applied to each sampling window; the low-frequency coefficients after the 2D-DCT form the observation vector sequence, and the EHMM parameters are initialized from the number of states and a uniform segmentation of the image; pupil-based eye-movement information is then extracted: the eye image is re-segmented by a doubly nested Viterbi algorithm, the model parameters are re-estimated by the Baum-Welch algorithm, and the EHMM is trained, yielding an EHMM-based eye-state classifier; for recognition, an observation vector sequence is constructed from the eye image to be identified, the likelihood of that sequence is computed under each trained model, and the model with the maximum likelihood gives the class of the eye image.
6. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that the eye movement model in step 4 is established as follows:
A three-dimensional vector is formed by combining the two-dimensional pupil information with the eyeball shape information, and the eye movement model is established from it; concretely: the eyeball radius is computed from the eye image information and the lens center is located, from which the three-dimensional direction vector from the eye to the gazed object is computed; then the center of the corneal spherical surface (O_cornea) is located by the Purkinje-spot method with image processing and refinement, and, combined with the two-dimensional plane information of the pupil center, a three-dimensional eye movement model is generated.
7. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that in step 4 the Purkinje spot method is used: an infrared LED is placed at each of the four corners of the screen as a light source, producing corneal reflections near the pupil, and each frame is acquired by a camera fitted with an optical filter; four distinct bright spots appear around the pupil center in the captured eye image; using geometric constraints in image processing, the edge map of the original image is first obtained with the Canny edge operator, and a Hough transform then projects the eye image from the image plane into parameter space to find the centers of the spots, accurately locating the pupil center relative to the four bright spots; then, with the reflection spots as reference points, the coordinates of the pupil center are computed against them, from which the two-dimensional in-plane motion direction of the eyeball is determined.
8. The contactless free-space gaze tracking method suitable for human-computer interaction according to claim 1, characterized in that step 4 comprises the computation of the actual gaze point, specifically as follows:
1) Building the mapping relation equation:
Let vector y be the gaze point in the field-of-view reference frame, and let vector x be the projection of the pupil center in the (eye) reference frame; let the function F(*) represent the transformation from x to y, and let P be the comprehensive parameter vector determined statistically during calibration, i.e. the initially unknown parameter vector of F(*); then:
y=F(x,P);
The concrete form of F(x, P) is determined and the estimate P′ of the comprehensive parameter vector P is obtained, giving the estimated gaze point position y′:
y′ = F(x, P′);
2) Determining the comprehensive parameter vector P:
The estimate P′ of the comprehensive parameter vector P is determined; concretely, a calibration algorithm based on least-squares curve fitting is adopted: a merit function is designed to measure, over a set of measurement data, the agreement between the measurements and the chosen parametric model; the model parameters are adjusted to minimize the merit function, yielding the best-fit parameters P;
Suppose P has M dimensions and there are N test points; the model with M adjustable parameters P_i (i = 1, 2, ..., M) is fitted to the N test data (x_i, y_i), i = 1, ..., N; a vector b with N components is defined: b_i = y_i / R_i, i = 1, ..., N;
here R_i is the measurement error of the i-th data point, with default value 1; for the vector P and the N data points: y_i(x_i) = Σ_k p_k X_k(x_i), i = 1, ..., N;
the X_k(x_i) are a set of basis functions; the matrix A = (a_ij)_{N×M} is then defined, whose elements are computed from the values of the M basis functions at the N coordinates x_i and the N measurement errors, that is: a_ij = X_j(x_i) / R_i;
the merit function is defined as χ² = |A·P − b|², and the parameter vector P minimizing χ² is sought; for an over-determined system, the optimal approximate solution in the least-squares sense can be obtained by SVD decomposition, and when fitting the eye-movement measurement data by SVD least squares the form of the fitting function can be specified as needed; once the parameter vector P is found, the mapping model y = F(x, P) from the eye-movement information model to the object of gaze is obtained, and gaze tracking is applied in actual human-computer interaction.
CN201210107182.3A 2012-04-12 2012-04-12 Contactless free-space gaze tracking method for human-computer interaction Active CN102749991B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210107182.3A CN102749991B (en) 2012-04-12 2012-04-12 Contactless free-space gaze tracking method for human-computer interaction

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210107182.3A CN102749991B (en) 2012-04-12 2012-04-12 Contactless free-space gaze tracking method for human-computer interaction

Publications (2)

Publication Number Publication Date
CN102749991A CN102749991A (en) 2012-10-24
CN102749991B true CN102749991B (en) 2016-04-27

Family

ID=47030252

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210107182.3A Active CN102749991B (en) 2012-04-12 2012-04-12 Contactless free-space gaze tracking method for human-computer interaction

Country Status (1)

Country Link
CN (1) CN102749991B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263745A (en) * 2019-06-26 2019-09-20 京东方科技集团股份有限公司 A kind of method and device of pupil of human positioning

Families Citing this family (67)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102930252B (en) * 2012-10-26 2016-05-11 广东百泰科技有限公司 A kind of sight tracing based on the compensation of neutral net head movement
CN102957931A (en) * 2012-11-02 2013-03-06 京东方科技集团股份有限公司 Control method and control device of 3D (three dimensional) display and video glasses
CN102981616B (en) * 2012-11-06 2017-09-22 中兴通讯股份有限公司 The recognition methods of object and system and computer in augmented reality
CN103870796B (en) * 2012-12-13 2017-05-24 汉王科技股份有限公司 Eye sight evaluation method and device
CN103927670B (en) * 2013-01-10 2017-11-28 上海通用汽车有限公司 Quantify the method for the region attention rate of object
CN104076915A (en) * 2013-03-29 2014-10-01 英业达科技有限公司 Exhibition system capable of adjusting three-dimensional models according to sight lines of visitors and method implemented by exhibition system
CN104133548A (en) * 2013-05-03 2014-11-05 中国移动通信集团公司 Method and device for determining viewpoint area and controlling screen luminance
CN109584868B (en) * 2013-05-20 2022-12-13 英特尔公司 Natural human-computer interaction for virtual personal assistant system
JPWO2014192103A1 (en) * 2013-05-29 2017-02-23 三菱電機株式会社 Information display device
CN103440038B (en) * 2013-08-28 2016-06-15 中国人民大学 A kind of information acquisition system based on eye recognition and application thereof
TW201518979A (en) * 2013-11-15 2015-05-16 Utechzone Co Ltd Handheld eye-controlled ocular device, password input device and method, computer-readable recording medium and computer program product
CN103761519B (en) * 2013-12-20 2017-05-17 哈尔滨工业大学深圳研究生院 Non-contact sight-line tracking method based on self-adaptive calibration
CN104978548B (en) * 2014-04-02 2018-09-25 汉王科技股份有限公司 A kind of gaze estimation method and device based on three-dimensional active shape model
WO2015167471A1 (en) * 2014-04-29 2015-11-05 Hewlett-Packard Development Company, L.P. Gaze detector using reference frames in media
TWI577327B (en) * 2014-08-14 2017-04-11 由田新技股份有限公司 Method, apparatus and computer program product for positioning pupil
CN104253944B (en) * 2014-09-11 2018-05-01 陈飞 Voice command based on sight connection assigns apparatus and method
US20170231064A1 (en) * 2014-09-25 2017-08-10 Philips Lighting Holding B.V. Control of lighting
CN105138961A (en) * 2015-07-27 2015-12-09 华南师范大学 Eyeball tracking big data based method and system for automatically identifying attractive person of opposite sex
FR3039643B1 (en) * 2015-07-31 2018-07-13 Thales HUMAN-MACHINE INTERFACE FOR THE FLIGHT MANAGEMENT OF AN AIRCRAFT
CN105184246B (en) 2015-08-28 2020-05-19 北京旷视科技有限公司 Living body detection method and living body detection system
CN105184277B (en) * 2015-09-29 2020-02-21 杨晴虹 Living body face recognition method and device
CN105700677A (en) * 2015-12-29 2016-06-22 努比亚技术有限公司 Mobile terminal and control method thereof
CN105867611A (en) * 2015-12-29 2016-08-17 乐视致新电子科技(天津)有限公司 Space positioning method, device and system in virtual reality system
CN107180441B (en) * 2016-03-10 2019-04-09 腾讯科技(深圳)有限公司 The method and apparatus for generating eye image
CN105892691A (en) * 2016-06-07 2016-08-24 京东方科技集团股份有限公司 Method and device for controlling travel tool and travel tool system
CN106445115A (en) * 2016-08-31 2017-02-22 中国人民解放军海军医学研究所 Eye movement data-based user help information automatic triggering apparatus and method
CN107103293B (en) * 2017-04-13 2019-01-29 西安交通大学 It is a kind of that the point estimation method is watched attentively based on joint entropy
CN108721070A (en) * 2017-04-24 2018-11-02 河北工业大学 A kind of intelligent vision functional training system and its training method based on eyeball tracking
CN107862246B (en) * 2017-10-12 2021-08-06 电子科技大学 Eye gazing direction detection method based on multi-view learning
CN108335364A (en) * 2018-01-23 2018-07-27 北京易智能科技有限公司 A kind of three-dimensional scenic display methods based on line holographic projections
CN108181994A (en) * 2018-01-26 2018-06-19 成都科木信息技术有限公司 For the man-machine interaction method of the AR helmets
CN110096130A (en) * 2018-01-29 2019-08-06 美的集团股份有限公司 Control method and device, water heater and computer readable storage medium
CN108268858B (en) * 2018-02-06 2020-10-16 浙江大学 High-robustness real-time sight line detection method
CN108572733B (en) * 2018-04-04 2019-03-12 西安交通大学 A kind of eye movement behavior visual search target prediction method based on condition random field
CN110363555B (en) * 2018-04-10 2024-04-09 释空(上海)品牌策划有限公司 Recommendation method and device based on vision tracking visual algorithm
CN108888487A (en) * 2018-05-22 2018-11-27 深圳奥比中光科技有限公司 A kind of eyeball training system and method
CN108960106B (en) * 2018-06-25 2019-09-20 西安交通大学 A kind of human eye fixation point estimation method based on quantization Minimum error entropy criterion
CN109177922A (en) * 2018-08-31 2019-01-11 北京七鑫易维信息技术有限公司 Vehicle starting method, device, equipment and storage medium
CN109271030B (en) * 2018-09-25 2020-12-22 华南理工大学 Multidimensional comparison method for three-dimensional space betting viewpoint track
CN109240510B (en) * 2018-10-30 2023-12-26 东北大学 Augmented reality man-machine interaction equipment based on sight tracking and control method
CN109409298A (en) * 2018-10-30 2019-03-01 哈尔滨理工大学 A kind of Eye-controlling focus method based on video processing
TWI704473B (en) 2018-11-16 2020-09-11 財團法人工業技術研究院 Vision vector detecting method and device
CN109830238B (en) 2018-12-24 2021-07-30 北京航空航天大学 Method, device and system for detecting working state of tower controller
CN109864699A (en) * 2019-01-04 2019-06-11 东南大学 Animal nystagmus parameter based on vestibulo-ocular reflex obtains system and method
CN109634431B (en) * 2019-01-22 2024-04-26 像航(上海)科技有限公司 Medium-free floating projection visual tracking interaction system
CN110045834A (en) * 2019-05-21 2019-07-23 广东工业大学 Detection method, device, system, equipment and storage medium for sight locking
CN110286754B (en) * 2019-06-11 2022-06-24 Oppo广东移动通信有限公司 Projection method based on eyeball tracking and related equipment
CN110456904B (en) * 2019-06-18 2024-06-11 中国人民解放军军事科学院国防科技创新研究院 Augmented reality glasses eye movement interaction method and system without calibration
CN110377158B (en) * 2019-07-22 2023-03-31 北京七鑫易维信息技术有限公司 Eyeball tracking calibration method based on variable field range and electronic equipment
CN110362210B (en) * 2019-07-24 2022-10-11 济南大学 Human-computer interaction method and device integrating eye movement tracking and gesture recognition in virtual assembly
CN110516553A (en) * 2019-07-31 2019-11-29 北京航空航天大学 The monitoring method and device of working condition
CN110458104B (en) * 2019-08-12 2021-12-07 广州小鹏汽车科技有限公司 Human eye sight direction determining method and system of human eye sight detection system
CN110703904B (en) * 2019-08-26 2023-05-19 合肥疆程技术有限公司 Visual line tracking-based augmented virtual reality projection method and system
CN112306223B (en) * 2019-08-30 2024-03-26 北京字节跳动网络技术有限公司 Information interaction method, device, equipment and medium
CN110811645B (en) * 2019-10-15 2022-12-20 南方科技大学 Visual fatigue measuring method and system, storage medium and electronic equipment
CN110934599A (en) * 2019-12-20 2020-03-31 东南大学 Method and system for evaluating infant common attention in natural scene
CN113116291A (en) * 2019-12-31 2021-07-16 Oppo广东移动通信有限公司 Calibration and calibration method and device for eyeball tracking, mobile terminal and storage medium
CN111598049B (en) * 2020-05-29 2023-10-10 中国工商银行股份有限公司 Cheating identification method and device, electronic equipment and medium
CN111881719B (en) * 2020-06-09 2024-04-16 青岛奥美克生物信息科技有限公司 Non-contact type biological recognition guiding device, method and biological feature recognition system
CN111985303A (en) * 2020-07-01 2020-11-24 江西拓世智能科技有限公司 Human face recognition and human eye light spot living body detection device and method
CN111985341B (en) * 2020-07-23 2023-04-07 东北师范大学 Method and system for capturing visual attention of image and readable storage medium
CN112298059A (en) * 2020-10-26 2021-02-02 武汉华星光电技术有限公司 Vehicle-mounted display screen adjusting device and vehicle
CN112363629B (en) * 2020-12-03 2021-05-28 深圳技术大学 Novel non-contact man-machine interaction method and system
CN112509007B (en) * 2020-12-14 2024-06-04 科大讯飞股份有限公司 Real gaze point positioning method and head-mounted gaze tracking system
CN113850145A (en) * 2021-08-30 2021-12-28 中国科学院上海微系统与信息技术研究所 Hand-eye orientation cooperative target positioning method
CN113870639A (en) * 2021-09-13 2021-12-31 上海市精神卫生中心(上海市心理咨询培训中心) Training evaluation method and system based on virtual reality
CN113821108B (en) * 2021-11-23 2022-02-08 齐鲁工业大学 Robot remote control system and control method based on multi-mode interaction technology

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN101540090A (en) * 2009-04-14 2009-09-23 华南理工大学 Driver fatigue monitoring device based on multivariate information fusion and monitoring method thereof
CN101576771A (en) * 2009-03-24 2009-11-11 山东大学 Scaling method for eye tracker based on nonuniform sample interpolation

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN101576771A (en) * 2009-03-24 2009-11-11 山东大学 Scaling method for eye tracker based on nonuniform sample interpolation
CN101540090A (en) * 2009-04-14 2009-09-23 华南理工大学 Driver fatigue monitoring device based on multivariate information fusion and monitoring method thereof

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263745A (en) * 2019-06-26 2019-09-20 京东方科技集团股份有限公司 A kind of method and device of pupil of human positioning
CN110263745B (en) * 2019-06-26 2021-09-07 京东方科技集团股份有限公司 Method and device for positioning pupils of human eyes

Also Published As

Publication number Publication date
CN102749991A (en) 2012-10-24

Similar Documents

Publication Publication Date Title
CN102749991B (en) Contactless free-space gaze tracking method for human-computer interaction
Kar et al. A review and analysis of eye-gaze estimation systems, algorithms and performance evaluation methods in consumer platforms
Cheng et al. Appearance-based gaze estimation with deep learning: A review and benchmark
Cheng et al. Appearance-based gaze estimation via evaluation-guided asymmetric regression
Wang et al. Real time eye gaze tracking with 3d deformable eye-face model
CN107656613B (en) Human-computer interaction system based on eye movement tracking and working method thereof
González-Ortega et al. A Kinect-based system for cognitive rehabilitation exercises monitoring
CN104504390B (en) A kind of user on the network's state identification method and device based on eye movement data
JP5016175B2 (en) Face image processing system
CN104978548A (en) Visual line estimation method and visual line estimation device based on three-dimensional active shape model
WO2020125499A1 (en) Operation prompting method and glasses
CN106796449A (en) Eye-controlling focus method and device
CN103324284A (en) Mouse control method based on face and eye detection
CN110221699A (en) A kind of eye movement Activity recognition method of front camera video source
CN109145802A (en) More manpower gesture man-machine interaction methods and device based on Kinect
Jingchao et al. Recognition of classroom student state features based on deep learning algorithms and machine learning
Wu et al. Appearance-based gaze block estimation via CNN classification
Sheela et al. Mapping functions in gaze tracking
Lim et al. Development of gaze tracking interface for controlling 3D contents
Nitschke Image-based eye pose and reflection analysis for advanced interaction techniques and scene understanding
Roy et al. Real time hand gesture based user friendly human computer interaction system
HemaMalini et al. Eye and voice controlled wheel chair
Yang et al. vGaze: Implicit saliency-aware calibration for continuous gaze tracking on mobile devices
Xu et al. Ravengaze: A dataset for gaze estimation leveraging psychological experiment through eye tracker
Changyuan et al. The line of sight to estimate method based on stereo vision

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant