CN108921059A - An eye-tracking method based on a Haar classifier - Google Patents

An eye-tracking method based on a Haar classifier

Info

Publication number
CN108921059A
Authority
CN
China
Prior art keywords
classifier
eye
observer
coordinate
haar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810631690.9A
Other languages
Chinese (zh)
Inventor
徐新 (Xu Xin)
滕鑫 (Teng Xin)
穆楠 (Mu Nan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan University of Science and Engineering WUSE
Wuhan University of Science and Technology WHUST
Original Assignee
Wuhan University of Science and Engineering WUSE
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan University of Science and Engineering WUSE
Priority to CN201810631690.9A
Publication of CN108921059A
Withdrawn (current legal status)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/013 Eye tracking input arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an eye-tracking method based on a Haar classifier. Face detection is first performed using Haar-like features, pupil detection is then carried out within the face-detection output region to determine the pupil centre location, and, with the head fixed by a head holder, the matching relationship between the pupil centre and the fixation point is fitted by linear regression, so as to compute the observer's region of visual attention. The method has a low miss rate and high accuracy, the algorithm runs in real time, it requires no expensive equipment such as an eye tracker, and it causes no harm to the human eye. Moreover, the method can be used in all kinds of visual-attention tracking scenarios, such as human-computer interaction, medical diagnosis, psychological research and computer vision, and therefore has a wide range of application.

Description

An eye-tracking method based on a Haar classifier
Technical field
The present invention relates to an eye-tracking method based on a Haar classifier, and belongs to the interdisciplinary field of image processing and neuroscience.
Background art
In traditional Chinese thought, the eyes are said to be the windows of the soul, and cognitive psychology holds that many laws of human mental activity can be expressed through, and read from, the eyes. In daily work and social interaction, human information intake and cognition come largely from visual perception; according to statistics, people obtain 80-90% of the information they need through their eyes. Many researchers therefore attempt to obtain visual-perception information by studying the mechanics of human eye movement and the locations the eyes attend to. To this end, a considerable number of research institutions and researchers at home and abroad have long been engaged in research on the identification and tracking of human eye-movement targets.
Research on the eye-tracking problem was first proposed in the 1930s, but matured only after a long period of development. In the Middle Ages, when physiological psychology emerged, Arab scholars in the Middle East studied mathematics and experimental optics intensively and, by improving observation instruments, proposed theories of vision. In the same period, the Kitab al-Manazir (Book of Optics), a work on physiological optics that first proposed the distinction between central and peripheral vision, swept through the field of physiological psychology, and people came to realise the important role of eye movement. After a period of silence, scientific and experimental work on eye movement gradually became active again.
In the past, the methods used for eye tracking were very cumbersome, for example fixing some object onto the subject's eyeball. Modern eye-tracking techniques share the same basic principle: a beam of light and a camera are aimed at the subject's eyes, the gaze direction is inferred from the light together with back-end analysis, and the camera records the interaction process. Over the centuries, research methods and means for eye-movement techniques have differed from country to country. Early eye-tracking technology was applied mainly in medical diagnosis and psychological research, and later expanded into other fields such as image compression coding and novel human-computer interaction. Recently, attention to eye tracking at home and abroad has become widespread, and a batch of applications based on eye-tracking technology has appeared accordingly, such as studying the usability of web pages and of application software. With eye-tracking technology, human-computer interaction also becomes much more convenient.
At present, research on eye tracking mainly falls into the following categories, each with its own advantages and disadvantages:
1) Knowledge-based methods: robust to rotation and tilt, and able to detect eye blinking and closure; however, they require a large amount of prior knowledge to be constructed, the real-time performance and robustness of the algorithms are low, and most perform eye detection only under restrictive conditions.
2) Statistics-based methods: relatively high precision; however, the algorithmic complexity is high, real-time requirements are hard to meet, and detection depends on the training samples.
3) Template-matching methods: fast; however, they are generally usable only where illumination changes little, and under large illumination changes or complex backgrounds both robustness and accuracy are low.
4) Intrusive methods using special equipment and infrared light sources: high robustness and accuracy; however, eye trackers are expensive, the applicable scenarios are limited, wearing the device interferes with the subject's normal activity, and the human eye may be harmed.
Summary of the invention
To overcome the above deficiencies, the present invention provides an eye-tracking method based on a Haar classifier, which combines face and eye movement through face detection in eye-movement tracking. It both avoids the use of an expensive eye tracker and overcomes the technical problems of high miss rates in face and pupil detection, low precision, and difficulty in achieving real-time performance.
The Haar classifier is a tree-based classifier that builds a boosted, screening-type cascade. In short:
Haar classifier = Haar-like features + integral image + AdaBoost + cascading.
The main points of the Haar classifier algorithm are as follows (a detection sketch follows this list):
(1) Detection uses Haar-like features;
(2) Evaluation of Haar-like features is accelerated with an integral image;
(3) Strong classifiers distinguishing faces from non-faces are trained with the AdaBoost algorithm;
(4) The strong classifiers are chained together in a screening-type cascade to improve accuracy.
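By way of illustration only, the following is a minimal sketch of this coarse-to-fine cascade detection using OpenCV's stock pretrained cascades (haarcascade_frontalface_default.xml and haarcascade_eye.xml ship with OpenCV); it is not the patent's own implementation, and the scaleFactor/minNeighbors values are illustrative defaults.

```python
# A minimal sketch of cascaded Haar detection with OpenCV: detect a face
# first, then search for eyes only inside the detected face region.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
eye_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_eye.xml")

def detect_eyes(frame):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    eyes_found = []
    # A sub-window slides over the image; only windows that pass every
    # stage of the cascade are reported as faces.
    for (x, y, w, h) in face_cascade.detectMultiScale(
            gray, scaleFactor=1.1, minNeighbors=5):
        roi = gray[y:y + h, x:x + w]          # restrict eye search to the face
        for (ex, ey, ew, eh) in eye_cascade.detectMultiScale(roi):
            eyes_found.append((x + ex, y + ey, ew, eh))  # back to frame coords
    return eyes_found
```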
Haar-like features: in face detection, a sub-window slides continuously across the image to be detected; at every position the features of that region are computed and then screened by the trained cascade classifier. Only once a region's features have passed the screening of every strong classifier is the region judged to be a face.
Integral-image acceleration: computing a histogram means traversing all pixels of an image and accumulating the number of occurrences of each intensity value. Sometimes only the histogram of a specific region is needed, and if histograms of many regions must be computed, these calculations become very time-consuming. In such cases an integral image greatly improves the efficiency of computing statistics over image regions, and integral images are very widely used in practice.
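As a concrete illustration (not from the patent): a summed-area table can be built once in linear time, after which the sum over any rectangle costs four lookups. A minimal NumPy sketch:

```python
# Build an integral image (summed-area table) for a grayscale array and
# read off rectangular region sums in O(1).
import numpy as np

def integral_image(img):
    # Pad with a zero row/column so region sums need no boundary checks.
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def region_sum(ii, top, left, bottom, right):
    # Sum of img[top:bottom, left:right] via four corner lookups.
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]
```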
AdaBoost: AdaBoost is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (a strong classifier).
Cascading: the strong classifiers are connected in stages, so that candidate windows are screened by cheap early stages before reaching the more expensive later ones, which improves detection efficiency.
The technical solution adopted by the present invention to overcome its technical problems is as follows:
An eye-tracking method based on a Haar classifier, comprising the following steps:
(1) performing eye detection on a real-time image using a Haar classifier;
(2) obtaining information on the observer's location;
(3) computing the observer's region of visual attention from the eye-detection result and the observer's location information.
Preferably, in step (1), the specific steps for performing eye detection on the real-time image are as follows (a feature-computation sketch follows this list):
(1.1) Compute the rectangular feature at the eye position:

$V = R_1 - R_2 \qquad (1)$

where $V$ is the value of the rectangular feature over the image region, and $R_1$ and $R_2$ are the white-region and black-region features of the rectangular feature, respectively, with $R_i$ computed as:

$R_i = \sum_{(m,n) \in R_i} r(m,n), \quad i \in \{1,2\} \qquad (2)$

where $r(m,n)$ is the value of a pixel in the white or black region, and $m$ and $n$ are the vertical and horizontal coordinates within the rectangular feature region;
(1.2) Compute the value of the rectangular feature using a summed-area table; for any region of the whole image, the summed-area-table computation is:

$S_{ABCD} = E(A) + E(C) - E(B) - E(D) \qquad (3)$

where $E(A)$, $E(B)$, $E(C)$ and $E(D)$ are the summed-area-table values at the top-left, top-right, bottom-left and bottom-right corners of the required region;
(1.3) Generate 10-20 weak classifiers on the basis of the eye rectangular features, then cascade them into a strong classifier using AdaBoost.
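A minimal sketch of computing a two-rectangle eye feature in the spirit of formulas (1)-(3), using the standard four-corner summed-area-table identity; the region split (upper white half, lower black half) is illustrative, not necessarily the patent's exact feature layout:

```python
# V = R1 - R2: white-region sum minus black-region sum, each obtained
# from the summed-area table in four lookups.
import numpy as np

def sat(img):
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
    ii[1:, 1:] = img.cumsum(axis=0).cumsum(axis=1)
    return ii

def rect_sum(ii, top, left, bottom, right):
    return ii[bottom, right] - ii[top, right] - ii[bottom, left] + ii[top, left]

def two_rect_feature(ii, top, left, bottom, right):
    mid = (top + bottom) // 2
    r1 = rect_sum(ii, top, left, mid, right)     # white region R1 (upper half)
    r2 = rect_sum(ii, mid, left, bottom, right)  # black region R2 (lower half)
    return r1 - r2                               # V = R1 - R2, formula (1)
```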
Preferably, in step (1.3), the steps for generating several weak classifiers and cascading them into a strong classifier are as follows:
First, train a basic classifier from the initial training data set;
then adjust the training-sample distribution according to the performance of this basic classifier, increasing the weights of the samples misclassified in the previous round of iteration;
train the next classifier on the adjusted sample weights;
repeat the above steps until the number of classifiers reaches the preset number.
Preferably, in step (1.3), let the training data set be $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \chi \subseteq \mathbb{R}^n$ and $y_i \in \{-1, +1\}$, and let the final strong classifier be $G(x)$.
Initialise the weight distribution over the training data:

$D_1 = (w_{1,1}, \ldots, w_{1,i}, \ldots, w_{1,N}), \quad w_{1,i} = \frac{1}{N}, \quad i = 1, 2, \ldots, N \qquad (4)$

For iterations $m = 1, 2, \ldots, M$:
learn a basic classifier from the training data weighted by the distribution $D_m$:

$G_m(x): \chi \to \{-1, +1\} \qquad (5)$

Compute the classification error rate of $G_m(x)$ on the training data set:

$e_m = \sum_{i=1}^{N} w_{m,i}\, I\big(G_m(x_i) \neq y_i\big) \qquad (6)$

Compute the coefficient of $G_m(x)$:

$\alpha_m = \frac{1}{2} \ln \frac{1 - e_m}{e_m} \qquad (7)$

Update the weight distribution of the training data set:

$D_{m+1} = (w_{m+1,1}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N}) \qquad (8)$

$w_{m+1,i} = \frac{w_{m,i}}{Z_m} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (9)$

where $Z_m$ is the normalisation factor:

$Z_m = \sum_{i=1}^{N} w_{m,i} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (10)$

Construct the linear combination of basic classifiers:

$f(x) = \sum_{m=1}^{M} \alpha_m G_m(x) \qquad (11)$

Obtain the final strong classifier:

$G(x) = \operatorname{sign}\big(f(x)\big) \qquad (12)$
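A minimal sketch of the training loop of formulas (4)-(12), assuming simple threshold stumps over feature columns as the weak learners (the patent's weak learners are Haar-feature classifiers); all names are illustrative:

```python
# AdaBoost with one-feature threshold stumps as weak classifiers.
import numpy as np

def train_adaboost(X, y, M=10):
    """X: (N, d) feature matrix; y: labels in {-1, +1}; M: rounds."""
    N = len(y)
    w = np.full(N, 1.0 / N)            # (4) uniform initial weights D_1
    stumps, alphas = [], []
    for _ in range(M):
        best = None
        for j in range(X.shape[1]):    # (5) fit the best weak classifier G_m
            for thr in np.unique(X[:, j]):
                for sign in (1, -1):
                    pred = sign * np.where(X[:, j] > thr, 1, -1)
                    err = np.sum(w[pred != y])       # (6) weighted error e_m
                    if best is None or err < best[0]:
                        best = (err, j, thr, sign)
        e_m, j, thr, sign = best
        e_m = np.clip(e_m, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - e_m) / e_m)        # (7) coefficient alpha_m
        pred = sign * np.where(X[:, j] > thr, 1, -1)
        w = w * np.exp(-alpha * y * pred)            # (9) reweight samples
        w /= w.sum()                                 # (10) normalise by Z_m
        stumps.append((j, thr, sign))
        alphas.append(alpha)
    return stumps, alphas

def predict(stumps, alphas, X):
    f = sum(a * s * np.where(X[:, j] > t, 1, -1)
            for (j, t, s), a in zip(stumps, alphas))  # (11) f(x)
    return np.sign(f)                                 # (12) G(x) = sign(f(x))
```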
Preferably, in step (2), the spatial information of the observer's location is determined from the positions of the head holder and the camera; the specific steps are as follows (a calibration sketch follows this list):
(2.1) Fix the camera beside the computer screen at a position from which the observer's face can be clearly captured;
(2.2) Fix the observer's head with the head holder, so that the distance between the observer's head and the camera is fixed, and record this distance;
(2.3) Randomly generate several points on the computer screen;
(2.4) With the head kept still, the observer moves only the eyes to look at the randomly generated points on the screen; record the coordinates of each point and the corresponding pupil coordinates.
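A minimal sketch of the calibration loop of steps (2.1)-(2.4); the camera object and detect_pupil() are hypothetical stand-ins for the capture device and the pupil-centre detector of step (1), passed in as parameters so nothing here claims a real API:

```python
# Show random screen points and record (screen point, pupil centre) pairs.
import random

def calibrate(camera, screen_w, screen_h, detect_pupil, n_points=9):
    samples = []  # list of ((screen_x, screen_y), (pupil_x, pupil_y))
    for _ in range(n_points):
        target = (random.randint(0, screen_w - 1),
                  random.randint(0, screen_h - 1))
        # ... display target on screen and wait for the observer to fixate ...
        frame = camera.read()            # grab a frame from the fixed camera
        pupil = detect_pupil(frame)      # pupil centre in image coordinates
        samples.append((target, pupil))
    return samples
```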
Preferably, in step (3), the specific steps for computing the observer's region of interest are as follows:
The matching formula maps the pupil coordinate linearly onto the screen; in the horizontal direction:

$S_x = S_w \cdot \frac{P_x - S_l}{S_r - S_l} \qquad (13)$

where $S(x, y)$ is the coordinate of a point on the screen, $P(x, y)$ is the pupil coordinate, $S_w$ is the width of the computer screen, and $S_l$ and $S_r$ are the leftmost and rightmost pupil coordinates when the observer views the screen;
the screen coordinate corresponding to the pupil, i.e. the value of $S(x, y)$, is computed from formula (13) and the coordinate data recorded in step (2.4).
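A minimal sketch of the horizontal pupil-to-screen matching, assuming the linear interpolation implied by the definitions of S_w, S_l and S_r (the calibrated pupil extremes); a sketch under those assumptions, not the patent's exact formula:

```python
# Map a pupil x-coordinate linearly onto the screen width [0, screen_w].
def pupil_to_screen_x(p_x, s_l, s_r, screen_w):
    # S_x = S_w * (P_x - S_l) / (S_r - S_l)
    return screen_w * (p_x - s_l) / (s_r - s_l)
```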
The beneficial effects of the invention are as follows:
The present invention first performs face detection using Haar-like features, performs pupil detection within the face-detection output region to determine the pupil centre location, and, with the head fixed by the head holder, fits the matching relationship between the pupil centre and the fixation point by linear regression, thereby computing the observer's region of visual attention. The method has a low miss rate and high accuracy, the algorithm runs in real time, it requires no expensive equipment such as an eye tracker, and it causes no harm to the human eye. Moreover, the method can be used in all kinds of visual-attention tracking scenarios, such as human-computer interaction, medical diagnosis, psychological research and computer vision, and has a wide range of application.
Brief description of the drawings
Fig. 1 is a schematic flow diagram of the basic procedure of the embodiment of the present invention.
Fig. 2 is a schematic diagram of the eye-tracking effect achieved using the method of the embodiment of the present invention.
In the figures: 1, camera; 2, computer screen; 3, head holder; 4, points on the computer screen.
Specific embodiments
To give those skilled in the art a better understanding, the present invention is further described below with reference to the drawings and specific embodiments; the following is merely exemplary and does not limit the scope of protection of the present invention.
As shown in Fig. 1, an eye-tracking method based on a Haar classifier according to the present invention comprises the following steps:
(1) performing eye detection on a real-time image using a Haar classifier;
(2) obtaining information on the observer's location;
(3) computing the observer's region of visual attention from the eye-detection result and the observer's location information.
In step (1), the specific steps for performing eye detection on the real-time image are as follows:
(1.1) Compute the rectangular feature at the eye position:

$V = R_1 - R_2 \qquad (1)$

where $V$ is the value of the rectangular feature over the image region, and $R_1$ and $R_2$ are the white-region and black-region features of the rectangular feature, respectively, with $R_i$ computed as:

$R_i = \sum_{(m,n) \in R_i} r(m,n), \quad i \in \{1,2\} \qquad (2)$

where $r(m,n)$ is the value of a pixel in the white or black region, and $m$ and $n$ are the vertical and horizontal coordinates within the rectangular feature region;
(1.2) Compute the value of the rectangular feature using a summed-area table; for any region of the whole image, the summed-area-table computation is:

$S_{ABCD} = E(A) + E(C) - E(B) - E(D) \qquad (3)$

where $E(A)$, $E(B)$, $E(C)$ and $E(D)$ are the summed-area-table values at the top-left, top-right, bottom-left and bottom-right corners of the required region;
(1.3) Generate 10-20 weak classifiers on the basis of the eye rectangular features, then cascade them into a strong classifier using AdaBoost. The specific steps are as follows:
First, train a basic classifier from the initial training data set; then adjust the training-sample distribution according to the performance of this basic classifier, increasing the weights of the samples misclassified in the previous round of iteration; train the next classifier on the adjusted sample weights; repeat the above steps until the number of classifiers reaches the preset number. Let the training data set be $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \chi \subseteq \mathbb{R}^n$ and $y_i \in \{-1, +1\}$, and let the final strong classifier be $G(x)$.
Initialise the weight distribution over the training data:

$D_1 = (w_{1,1}, \ldots, w_{1,i}, \ldots, w_{1,N}), \quad w_{1,i} = \frac{1}{N}, \quad i = 1, 2, \ldots, N \qquad (4)$

For iterations $m = 1, 2, \ldots, M$:
learn a basic classifier from the training data weighted by the distribution $D_m$:

$G_m(x): \chi \to \{-1, +1\} \qquad (5)$

Compute the classification error rate of $G_m(x)$ on the training data set:

$e_m = \sum_{i=1}^{N} w_{m,i}\, I\big(G_m(x_i) \neq y_i\big) \qquad (6)$

Compute the coefficient of $G_m(x)$:

$\alpha_m = \frac{1}{2} \ln \frac{1 - e_m}{e_m} \qquad (7)$

Update the weight distribution of the training data set:

$D_{m+1} = (w_{m+1,1}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N}) \qquad (8)$

$w_{m+1,i} = \frac{w_{m,i}}{Z_m} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (9)$

where $Z_m$ is the normalisation factor:

$Z_m = \sum_{i=1}^{N} w_{m,i} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (10)$

Construct the linear combination of basic classifiers:

$f(x) = \sum_{m=1}^{M} \alpha_m G_m(x) \qquad (11)$

Obtain the final strong classifier:

$G(x) = \operatorname{sign}\big(f(x)\big) \qquad (12)$
In step (2), the spatial information of the observer's location is determined from the positions of the head holder and the camera, as shown in Fig. 2; the specific steps are as follows:
(2.1) Fix the camera 1 beside the computer screen 2 at a position from which the observer's face can be clearly captured; in this embodiment the camera 1 is fixed on top of the computer screen 2;
(2.2) Fix the observer's head with the head holder 3, so that the distance between the observer's head and the camera 1 is fixed, and record this distance;
(2.3) Randomly generate several points 4 on the computer screen 2;
(2.4) With the head kept still, the observer moves only the eyes to look at the randomly generated points on the screen; record the coordinates of each point and the corresponding pupil coordinates.
In step (3), the specific steps for computing the observer's region of interest are as follows (a regression sketch follows this paragraph):
The matching formula maps the pupil coordinate linearly onto the screen; in the horizontal direction:

$S_x = S_w \cdot \frac{P_x - S_l}{S_r - S_l} \qquad (13)$

where $S(x, y)$ is the coordinate of a point on the screen, $P(x, y)$ is the pupil coordinate, $S_w$ is the width of the computer screen, and $S_l$ and $S_r$ are the leftmost and rightmost pupil coordinates when the observer views the screen;
the screen coordinate corresponding to the pupil, i.e. the value of $S(x, y)$, is computed from formula (13) and the coordinate data recorded in step (2.4). Specifically, the screen coordinates $S(x, y)$ and pupil coordinates $P(x, y)$ recorded in step (2.4) are first used to solve for $S_l$ and $S_r$; once these two unknowns are obtained, formula (13) contains only the two unknowns $P(x, y)$ and $S(x, y)$, so whenever a new $P(x, y)$ is observed, the value of $S(x, y)$, i.e. the observer's region of interest, can be computed.
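A minimal sketch of the linear-regression fit mentioned in the abstract, assuming an affine map from pupil coordinates to screen coordinates estimated by ordinary least squares from the step (2.4) calibration pairs; the function names are illustrative:

```python
# Fit pupil -> screen mapping by ordinary least squares (numpy.linalg.lstsq).
import numpy as np

def fit_gaze_map(pupil_xy, screen_xy):
    """pupil_xy, screen_xy: (N, 2) arrays of calibration pairs."""
    A = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])  # affine design matrix
    coef, *_ = np.linalg.lstsq(A, screen_xy, rcond=None)    # (3, 2) coefficients
    return coef

def gaze_point(coef, pupil):
    # Predicted screen coordinate S(x, y) for a new pupil centre P(x, y).
    return np.array([pupil[0], pupil[1], 1.0]) @ coef
```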
The foregoing describes only the basic principle and preferred embodiments of the present invention; those skilled in the art can make many changes and improvements in light of the above description, and such changes and improvements shall fall within the scope of protection of the present invention.

Claims (6)

1. An eye-tracking method based on a Haar classifier, characterised by comprising the following steps:
(1) performing eye detection on a real-time image using a Haar classifier;
(2) obtaining information on the observer's location;
(3) computing the observer's region of visual attention from the eye-detection result and the observer's location information.
2. The eye-tracking method based on a Haar classifier according to claim 1, characterised in that, in step (1), the specific steps for performing eye detection on the real-time image are as follows:
(1.1) Compute the rectangular feature at the eye position:

$V = R_1 - R_2 \qquad (1)$

where $V$ is the value of the rectangular feature over the image region, and $R_1$ and $R_2$ are the white-region and black-region features of the rectangular feature, respectively, with $R_i$ computed as:

$R_i = \sum_{(m,n) \in R_i} r(m,n), \quad i \in \{1,2\} \qquad (2)$

where $r(m,n)$ is the value of a pixel in the white or black region, and $m$ and $n$ are the vertical and horizontal coordinates within the rectangular feature region;
(1.2) Compute the value of the rectangular feature using a summed-area table; for any region of the whole image, the summed-area-table computation is:

$S_{ABCD} = E(A) + E(C) - E(B) - E(D) \qquad (3)$

where $E(A)$, $E(B)$, $E(C)$ and $E(D)$ are the summed-area-table values at the top-left, top-right, bottom-left and bottom-right corners of the required region;
(1.3) Generate 10-20 weak classifiers on the basis of the eye rectangular features, then cascade them into a strong classifier using AdaBoost.
3. The eye-tracking method based on a Haar classifier according to claim 2, characterised in that, in step (1.3), the steps for generating several weak classifiers and cascading them into a strong classifier are as follows:
First, train a basic classifier from the initial training data set;
then adjust the training-sample distribution according to the performance of this basic classifier, increasing the weights of the samples misclassified in the previous round of iteration;
train the next classifier on the adjusted sample weights;
repeat the above steps until the number of classifiers reaches the preset number.
4. The eye-tracking method based on a Haar classifier according to claim 3, characterised in that, in step (1.3), let the training data set be $T = \{(x_1, y_1), (x_2, y_2), \ldots, (x_N, y_N)\}$, where $x_i \in \chi \subseteq \mathbb{R}^n$ and $y_i \in \{-1, +1\}$, and let the final strong classifier be $G(x)$;
initialise the weight distribution over the training data:

$D_1 = (w_{1,1}, \ldots, w_{1,i}, \ldots, w_{1,N}), \quad w_{1,i} = \frac{1}{N}, \quad i = 1, 2, \ldots, N \qquad (4)$

For iterations $m = 1, 2, \ldots, M$:
learn a basic classifier from the training data weighted by the distribution $D_m$:

$G_m(x): \chi \to \{-1, +1\} \qquad (5)$

Compute the classification error rate of $G_m(x)$ on the training data set:

$e_m = \sum_{i=1}^{N} w_{m,i}\, I\big(G_m(x_i) \neq y_i\big) \qquad (6)$

Compute the coefficient of $G_m(x)$:

$\alpha_m = \frac{1}{2} \ln \frac{1 - e_m}{e_m} \qquad (7)$

Update the weight distribution of the training data set:

$D_{m+1} = (w_{m+1,1}, \ldots, w_{m+1,i}, \ldots, w_{m+1,N}) \qquad (8)$

$w_{m+1,i} = \frac{w_{m,i}}{Z_m} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (9)$

where $Z_m$ is the normalisation factor:

$Z_m = \sum_{i=1}^{N} w_{m,i} \exp\big(-\alpha_m y_i G_m(x_i)\big) \qquad (10)$

Construct the linear combination of basic classifiers:

$f(x) = \sum_{m=1}^{M} \alpha_m G_m(x) \qquad (11)$

Obtain the final strong classifier:

$G(x) = \operatorname{sign}\big(f(x)\big) \qquad (12)$
5. The eye-tracking method based on a Haar classifier according to claim 1, characterised in that, in step (2), the spatial information of the observer's location is determined from the positions of the head holder and the camera; the specific steps are as follows:
(2.1) Fix the camera beside the computer screen at a position from which the observer's face can be clearly captured;
(2.2) Fix the observer's head with the head holder, so that the distance between the observer's head and the camera is fixed, and record this distance;
(2.3) Randomly generate several points on the computer screen;
(2.4) With the head kept still, the observer moves only the eyes to look at the randomly generated points on the screen; record the coordinates of each point and the corresponding pupil coordinates.
6. The eye-tracking method based on a Haar classifier according to claim 5, characterised in that, in step (3), the specific steps for computing the observer's region of interest are as follows:
the matching formula maps the pupil coordinate linearly onto the screen; in the horizontal direction:

$S_x = S_w \cdot \frac{P_x - S_l}{S_r - S_l} \qquad (13)$

where $S(x, y)$ is the coordinate of a point on the screen, $P(x, y)$ is the pupil coordinate, $S_w$ is the width of the computer screen, and $S_l$ and $S_r$ are the leftmost and rightmost pupil coordinates when the observer views the screen;
the screen coordinate corresponding to the pupil, i.e. the value of $S(x, y)$, is computed from formula (13) and the coordinate data recorded in step (2.4).
CN201810631690.9A 2018-06-19 2018-06-19 An eye-tracking method based on a Haar classifier Withdrawn CN108921059A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810631690.9A CN108921059A (en) 2018-06-19 2018-06-19 An eye-tracking method based on a Haar classifier

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810631690.9A CN108921059A (en) 2018-06-19 2018-06-19 An eye-tracking method based on a Haar classifier

Publications (1)

Publication Number Publication Date
CN108921059A true CN108921059A (en) 2018-11-30

Family

ID=64421921

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810631690.9A Withdrawn CN108921059A (en) 2018-06-19 2018-06-19 An eye-tracking method based on a Haar classifier

Country Status (1)

Country Link
CN (1) CN108921059A (en)

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697430A (en) * 2018-12-28 2019-04-30 成都思晗科技股份有限公司 The detection method that working region safety cap based on image recognition is worn
CN110147163A (en) * 2019-05-20 2019-08-20 浙江工业大学 The eye-tracking method and system of the multi-model fusion driving of facing mobile apparatus
CN111012315A (en) * 2020-01-02 2020-04-17 辽宁中晨优智医疗技术有限公司 Brain health diagnosis equipment based on human cognitive function
CN111145361A (en) * 2019-12-26 2020-05-12 和信光场(深圳)科技有限公司 Naked eye 3D display vision improving method
CN111714080A (en) * 2020-06-30 2020-09-29 重庆大学 Disease classification system based on eye movement information
WO2021147757A1 (en) * 2020-01-20 2021-07-29 北京芯海视界三维科技有限公司 Method, device, and product for performing information statistics

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093250A (en) * 2013-02-22 2013-05-08 福建师范大学 Adaboost face detection method based on new Haar- like feature

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103093250A (en) * 2013-02-22 2013-05-08 福建师范大学 Adaboost face detection method based on new Haar- like feature

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
LQF1403: "浅谈AdaBoost算法" ("A brief discussion of the AdaBoost algorithm"), CSDN *
PAUL VIOLA ET AL.: "Rapid Object Detection using a Boosted Cascade of Simple Features", IEEE *
YUNYANG LI ET AL.: "Eye-Gaze Tracking System By Haar Cascade Classifier", IEEE *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109697430A (en) * 2018-12-28 2019-04-30 成都思晗科技股份有限公司 The detection method that working region safety cap based on image recognition is worn
CN110147163A (en) * 2019-05-20 2019-08-20 浙江工业大学 The eye-tracking method and system of the multi-model fusion driving of facing mobile apparatus
CN110147163B (en) * 2019-05-20 2022-06-21 浙江工业大学 Eye movement tracking method and system driven by multi-model fusion for mobile equipment
CN111145361A (en) * 2019-12-26 2020-05-12 和信光场(深圳)科技有限公司 Naked eye 3D display vision improving method
CN111012315A (en) * 2020-01-02 2020-04-17 辽宁中晨优智医疗技术有限公司 Brain health diagnosis equipment based on human cognitive function
WO2021147757A1 (en) * 2020-01-20 2021-07-29 北京芯海视界三维科技有限公司 Method, device, and product for performing information statistics
CN111714080A (en) * 2020-06-30 2020-09-29 重庆大学 Disease classification system based on eye movement information

Similar Documents

Publication Publication Date Title
CN108921059A (en) An eye-tracking method based on a Haar classifier
Chen et al. Strabismus recognition using eye-tracking data and convolutional neural networks
Wang et al. A natural visible and infrared facial expression database for expression recognition and emotion inference
CN105955465A (en) Desktop portable sight line tracking method and apparatus
CN109247923A (en) Contactless pulse real-time estimation method and equipment based on video
CN104123543B (en) A kind of eye movement recognition methods based on recognition of face
Liu et al. Multi-channel remote photoplethysmography correspondence feature for 3d mask face presentation attack detection
Hassan et al. Video-based heartbeat rate measuring method using ballistocardiography
KR20120060978A (en) Method and Apparatus for 3D Human-Computer Interaction based on Eye Tracking
Hirsch et al. Hands-free gesture control with a capacitive textile neckband
AU2014234955B2 (en) Automatic detection of task transition
Kumar et al. A novel approach to video-based pupil tracking
Tang et al. Mmpd: multi-domain mobile video physiology dataset
Wang et al. Stable EEG Biometrics Using Convolutional Neural Networks and Functional Connectivity.
Liu et al. Learning temporal similarity of remote photoplethysmography for fast 3d mask face presentation attack detection
Suh et al. Contactless physiological signals extraction based on skin color magnification
Jayawardena et al. Automated filtering of eye gaze metrics from dynamic areas of interest
Buvaneswari et al. A review of EEG based human facial expression recognition systems in cognitive sciences
CN110298815A (en) A kind of method of skin pore detection and evaluation
CN112790750A (en) Fear and tension emotion recognition method based on video eye movement and heart rate analysis
Wang et al. Heart rate estimation from facial videos with motion interference using T-SNE-based signal separation
Jiang et al. Emotion analysis: Bimodal fusion of facial expressions and eeg
Huang et al. Research on learning state based on students’ attitude and emotion in class learning
Vranceanu et al. A computer vision approach for the eye accesing cue model used in neuro-linguistic programming
Anwar et al. Development of real-time eye tracking algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20181130