CN104463216A - Eye movement pattern data automatic acquisition method based on computer vision - Google Patents


Info

Publication number
CN104463216A
CN104463216A (application CN201410775791.5A; granted publication CN104463216B)
Authority
CN
China
Prior art keywords
grating
eye movement
experimenter
eye
mode data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410775791.5A
Other languages
Chinese (zh)
Other versions
CN104463216B (en)
Inventor
杨必琨
王泽亮
崔锦实
李晓清
査红彬
王莉
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Peking University
Original Assignee
Peking University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Peking University filed Critical Peking University
Priority to CN201410775791.5A priority Critical patent/CN104463216B/en
Publication of CN104463216A publication Critical patent/CN104463216A/en
Application granted granted Critical
Publication of CN104463216B publication Critical patent/CN104463216B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 — Pattern recognition
    • G06F 18/20 — Analysing
    • G06F 18/24 — Classification techniques
    • G06F 18/241 — Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2411 — Classification techniques relating to the classification model based on the proximity to a decision surface, e.g. support vector machines
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06V 40/18 — Eye characteristics, e.g. of the iris

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Eye Examination Apparatus (AREA)

Abstract

The invention discloses an automatic acquisition method for eye-movement pattern data based on computer vision. The method divides the acquisition process into a learning stage and a test stage: a gaze model of the subject is obtained in the learning stage, and the subject's eye-movement features are obtained in the test stage. Specifically, the subject watches a computer screen; an operator makes an initial judgment about the subject, determines the program running parameters, and enters them into the computer. In the learning stage a gaze model for one eye of the subject is obtained and a classifier h is built. In the test stage, the eye-movement features produced while the subject views the grating stimulus at each specific frequency are collected as the subject's test sample set. The classifier h performs SVM classification on the test sample set to obtain predicted values, and the subject's eye-movement pattern data are obtained by comparing the predictions with the calibrated ground truth. The method reduces the cost of obtaining eye-movement pattern data and improves acquisition efficiency and accuracy.

Description

Automatic acquisition method for eye-movement pattern data based on computer vision
Technical field
The invention belongs to the field of computer vision and relates to a method for automatically locating a person's face and eyes and acquiring eye-movement feature data, in particular to an automatic acquisition method for eye-movement pattern data based on computer vision.
Background art
Eye-movement (eye motion) patterns provide a wealth of information for visual processing. In practice, eye-movement pattern data are needed when testing the vision of people who lack full verbal and physical expression, whose intelligence and comprehension are still developing, and who cannot stay attentive for long, such as infants and young children. At present such data are usually obtained by the preferential looking method: an examiner shows the subject a grating card, observes through a peephole in the card the direction in which the subject's eyes or head turn, and judges whether it matches the grating pattern on the card, thereby obtaining the subject's eye-movement pattern data. This procedure is entirely manual, with no automatic judgment by computer, and is therefore extremely time-consuming and laborious. Other methods, such as OKN and VEP, are difficult and cumbersome to operate and have never been widely adopted. Existing acquisition methods therefore cannot be automated, are inefficient in data acquisition, and are not highly accurate, making it difficult to obtain eye-movement pattern data quickly and reliably.
Summary of the invention
To overcome the above deficiencies of the prior art, the invention provides an automatic acquisition method for eye-movement pattern data based on computer vision. The method uses computer-vision techniques to build, by machine learning, a gaze model of the subject, and then uses the model to predict the subject's eye-movement features in the test stage, thereby obtaining the subject's eye-movement pattern data automatically. The method reduces the cost of obtaining such data, improves acquisition efficiency, and guarantees the accuracy of the acquired data.
The technical scheme provided by the invention is as follows:
An automatic acquisition method for eye-movement pattern data based on computer vision, wherein acquisition is divided into a learning stage and a test stage; the gaze model of the subject obtained in the learning stage is used to classify the subject's eye-movement features obtained in the test stage, yielding the subject's eye-movement pattern data. The method comprises the following steps:
1) Set up the acquisition environment so that the subject can only look at the computer screen during the whole acquisition process;
2) The operator makes an initial judgment about the subject, determines the program running parameters, and enters them into the computer. The parameters comprise the grating sequence mode in which grating stimuli are presented, the eye to be tested, and the interval at which images are captured. Several grating sequence modes are available, each comprising grating stimuli at multiple specific frequencies; the eye to be tested is the left or the right eye;
3) In the learning stage, the computer presents pictures according to the program running parameters and records whether each picture appears on the left or the right. By locating the subject's face and eyes, extracting features, and performing SVM (Support Vector Machine) learning, a gaze model for one eye of the subject is obtained and a classifier h is built;
4) In the test stage, the computer presents grating stimuli according to the program running parameters and records the position of the stimulus at each specific frequency as the ground truth. The subject's face and eye positions during the viewing period of each stimulus are located, and feature extraction then yields the subject's eye-movement features for each specific frequency, which form the subject's test sample set;
5) The classifier h built in step 3) performs SVM classification on the test sample set from step 4) to obtain predicted values. Comparing the predictions with the ground truth from step 4) gives the highest grating frequency in the sequence mode that the subject noticed, which is taken as the subject's eye-movement pattern data.
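The patent describes steps 3)-5) only in prose; the following minimal Python sketch (using scikit-learn's `SVC` on synthetic feature vectors — all names and data are illustrative, not from the patent's implementation) shows their shape: train a left/right gaze classifier, predict on the test set, and mark a stimulus as noticed when the prediction matches the calibrated side.

```python
# Sketch of steps 3)-5), assuming gaze features are already extracted as
# fixed-length vectors; the features and labels here are synthetic.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Learning stage: synthetic "look left" / "look right" feature vectors.
X_train = np.vstack([rng.normal(-1, 0.3, (40, 8)), rng.normal(1, 0.3, (40, 8))])
y_train = np.array([0] * 40 + [1] * 40)        # 0 = left, 1 = right
h = SVC(kernel="linear").fit(X_train, y_train)  # classifier h

# Test stage: one feature vector per grating presentation, plus the
# calibrated side on which each grating actually appeared (ground truth).
X_test = np.vstack([rng.normal(-1, 0.3, (5, 8)), rng.normal(1, 0.3, (5, 8))])
truth = np.array([0] * 5 + [1] * 5)
pred = h.predict(X_test)

# A grating counts as "noticed" when the predicted gaze direction
# matches the side on which the grating was actually shown.
noticed = pred == truth
print(noticed.all())
```

The kernel choice and feature dimensionality are assumptions; the patent specifies only that SVM classification is used.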
In the above method, after the subject's eye-movement pattern data are obtained in step 5), the operator may, depending on the data obtained, repeat the process of steps 2)-5). The program running parameters of the repeated run may be the same as those of the previous run; alternatively, a grating sequence mode with higher frequencies than that of the previous run may be selected, or the other eye may be tested.
Further, in the above method:
In step 1) the environment should be quiet and dark, with the subject's eyes level with the center of the screen. In the embodiment of the invention the subject is an infant, held by a parent and seated in front of the computer screen so that the subject's eyes are level with the center of the screen.
In step 2) the program running parameters further comprise the file name of the subject's image data, the sequence numbers of the first and last gratings in the test, the start and end frame numbers of each grating presentation, and the number of dimensions retained after PCA (Principal Component Analysis) reduction.
The grating sequence modes of step 2) cover subjects of different ages and visual conditions; each mode is divided into 10 frequency bands. The specific frequencies increase gradually, and the grating stimulus at each frequency is repeated (a "backtracking" presentation) to ensure the accuracy of the acquisition. In the embodiment of the invention there are three grating sequence modes; in each, the stimulus frequency increases gradually and every specific frequency is repeated. The 10 specific frequencies of mode 1 (cycles/cm) are, in order: 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14, suitable for infants aged 0-2 years, or over 2 years with obvious visual impairment. The 10 frequencies of mode 2 are: 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85, suitable for infants aged 1-3 years, or over 3 years with obvious visual impairment. The 10 frequencies of mode 3 are: 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56, suitable for children over 2 years old. Using different sequence modes simplifies acquisition: the operator can estimate an infant subject's general visual condition from factors such as age and, by choosing a suitable mode, obtain the subject's eye-movement pattern data quickly.
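The three sequence modes above can be encoded as simple frequency tables; this sketch is illustrative (the patent does not prescribe a data structure), with sanity checks matching the stated properties: 10 bands per mode, non-decreasing, each frequency repeated.

```python
# Hypothetical encoding of the three grating sequence modes (cycles/cm);
# each frequency appears twice because every stimulus is repeated
# ("backtracking") to improve reliability.
MODES = {
    1: [0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14],
    2: [0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85],
    3: [2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56],
}

# Sanity checks matching the description above.
for seq in MODES.values():
    assert len(seq) == 10                                  # 10 frequency bands
    assert all(a <= b for a, b in zip(seq, seq[1:]))       # non-decreasing
    assert all(seq[i] == seq[i + 1] for i in range(0, 10, 2))  # repeats
print(sorted(MODES))
```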
In step 3) animated pictures are presented according to the program running parameters, repeatedly in left-then-right order, with an interval between successive presentations and a sound cue accompanying each picture; the side on which each picture appears is recorded. The classifier h of step 3) is obtained by SVM training from the recorded picture positions and the subject's monocular gaze data, and comprises the subject's look-left and look-right gaze models.
In step 4) the grating stimuli are presented according to the program running parameters on randomly chosen sides (left or right), with an interval between successive stimuli; just before each stimulus appears, a sound is played to attract the subject's attention.
In both the learning stage of step 3) and the test stage of step 4), the subject's face and eyes are located with classifiers based on Haar features: face and eye detection is performed on the designated region of the image, and either the detection completes or an error is reported.
Feature extraction in both the learning stage of step 3) and the test stage of step 4) comprises grayscale-histogram computation and PCA reduction. The grayscale histograms are computed by a block-partition algorithm that describes the position of the iris while using its intensity information. The block-partitioned histogram computation comprises the following steps:
A. First, take the detected eye box and divide it into m equal blocks along the horizontal axis;
B. Next, for each of the m blocks, compute its histogram feature vector;
C. Finally, concatenate the histogram feature vectors of all m blocks in order to obtain the final feature vector.
Here m is an integer between 17 and 30; preferably, m is 20.
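Steps A-C above can be sketched as follows; the bin count and normalization are illustrative assumptions (the patent specifies only the m-way horizontal split and per-block histograms).

```python
# Block-partitioned grayscale histogram: split the eye box into m vertical
# blocks, compute one histogram per block, concatenate in order.
import numpy as np

def block_histogram(eye_patch, m=20, bins=16):
    h, w = eye_patch.shape
    feats = []
    for i in range(m):                              # split along the x axis
        block = eye_patch[:, i * w // m:(i + 1) * w // m]
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        feats.append(hist / max(block.size, 1))     # normalize per block
    return np.concatenate(feats)                    # final feature vector

# A toy eye patch whose left half is dark, roughly where an iris might be;
# the concatenated feature then encodes that positional information.
patch = np.zeros((12, 60), dtype=np.uint8)
patch[:, :30] = 40
v = block_histogram(patch)
print(v.shape)
```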
Feature extraction in the learning and test stages uses temporal tracking: the subject's gaze direction is judged and classified from the feature sequence over a period of time. The feature vectors produced this way are very high-dimensional and contain much redundant information, so PCA reduction must be applied before the subject's eye-movement features for each specific frequency are obtained. Feature extraction in steps 3) and 4) therefore also includes applying PCA to the block grayscale-histogram feature vectors, to remove the redundant information they contain.
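The PCA step can be sketched with scikit-learn; the sample count, feature dimension, and target dimension here are illustrative (the patent leaves the reduced dimension as a program running parameter).

```python
# PCA reduction of redundant high-dimensional histogram-sequence features.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
# 50 samples of a 320-dimensional feature that really varies along 3 axes,
# mimicking the redundancy of concatenated block histograms over time.
basis = rng.normal(size=(3, 320))
samples = rng.normal(size=(50, 3)) @ basis + 0.01 * rng.normal(size=(50, 320))

pca = PCA(n_components=3).fit(samples)   # target dimension is a parameter
reduced = pca.transform(samples)
print(reduced.shape)
```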
The method of obtaining the subject's eye-movement features during the viewing period of each specific frequency in the test stage of step 4) specifically comprises the following steps:
A) when the grating presenting each characteristic frequency obtained by camera is stimulated experimenter watch screen shot attentively, detect and obtain face and the ocular position of experimenter;
B) grey level histogram proper vector is extracted according to eye locations;
The grey level histogram proper vector of c) each characteristic frequency obtained being watched attentively to all pictures in stage carries out PCA dimensionality reduction, obtains the eye movement mode that each characteristic frequency watches period experimenter attentively.
The data obtained by the comparison in step 5) comprise, for the tested eye, the grating frequencies the subject noticed and those it did not. If the subject repeatedly failed to notice grating stimuli in a sequence mode, the highest frequency among the stimuli it did notice is taken as the subject's eye-movement pattern data result; if the subject noticed all the stimuli in the mode, the highest frequency in that mode is taken as the result.
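The comparison rule above reduces to a small function; the representation of a trial as (frequency, noticed) pairs is an illustrative assumption.

```python
# Report the highest noticed grating frequency for one sequence mode.
def acuity_result(stimuli):
    """stimuli: list of (frequency, noticed) pairs for one sequence mode."""
    noticed = [f for f, ok in stimuli if ok]
    return max(noticed) if noticed else None

# Example trial: the subject follows the coarse gratings but not the fine ones.
trial = [(0.32, True), (0.32, True), (0.64, True), (0.64, True),
         (1.29, True), (1.29, False), (2.28, False), (2.28, False)]
print(acuity_result(trial))
```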
The principle of the invention is as follows. Acquisition is divided into two stages, a learning stage and a test stage. Images of the subject captured in real time by the computer's camera during the learning stage form the learning sample set, from which the subject's gaze model (eye-movement pattern) is trained by the SVM method and a classifier is built. Images of the subject captured in real time during the test stage form the test sample set: the classifier, applied to the subject's eye-movement features during the viewing period of each specific frequency, predicts the gaze direction (looking left or looking right) for each period, while the program records the side (left or right) on which the grating image actually appeared in each period as the ground truth. Comparing the predictions with the ground truth shows whether the subject correctly noticed the grating pattern on the screen in each period, i.e. during which periods the subject did or did not notice the grating stimulus, and hence which grating frequencies the subject's tested eye did or did not fixate — the subject's eye-movement pattern data.
Compared with the prior art, the beneficial effects of the invention are as follows:
Compared with existing manual methods of obtaining eye-movement pattern data, the automatic acquisition method based on computer vision provided by the invention obtains the data automatically, saving the considerable manual labor the acquisition otherwise requires and improving its efficiency. It solves the inefficiency of manual acquisition, reduces its cost, saves time, and effectively guarantees the accuracy of the acquired data.
Brief description of the drawings
Fig. 1 is a flow diagram of the method for acquiring eye-movement pattern data from infant subjects in the embodiment of the invention.
Fig. 2 shows a sample grating stimulus displayed on the computer screen in the embodiment of the invention.
Fig. 3 is a schematic diagram of the eye-detection frame in the infant eye-movement acquisition method of the embodiment, where (a) is the detected eye region; (b) is the eye region divided into m blocks; and (c) shows the grayscale histograms computed for each of the m blocks.
Embodiment
The invention is further described below by way of an embodiment with reference to the accompanying drawings, without limiting its scope in any way.
In this embodiment the subject is an infant held by a parent, and the operator is the tester carrying out the computer-vision-based acquisition of the infant's eye-movement pattern data provided by the invention. Fig. 1 is a flow diagram of the acquisition method in this embodiment, which comprises the following steps:
1) Environment setup: place the computer on a desk with the center of the screen 100-150 cm high, and place a seat in front of the desk. The parent holds the subject and sits facing the computer screen; the seat height is adjusted so that the subject's eyes are level with the center of the screen at a distance of 55 cm. The surroundings are kept quiet and dark, with no extraneous objects that might attract the subject's attention, so that the subject attends to the computer screen. The parent must give the subject no instructions throughout the process.
2) The operator makes an initial judgment about the subject and loads the program running parameters, comprising: the file name of the subject's image data, the sequence numbers of the first and last gratings in the test, the start and end frame numbers of each grating presentation, the number of dimensions after PCA reduction, the eye to be tested (left or right), and the sequence mode in which the grating patterns are presented. There are three sequence modes, modes 1, 2 and 3; in each, the stimulus frequency increases gradually and every specific frequency is repeated to ensure the accuracy of the acquisition. The modes correspond to infants of different ages and visual conditions, and the operator chooses the mode in advance. Mode 1 has 10 frequency stages (cycles/cm): 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14, suitable for infants aged 0-2 years, or over 2 years with obvious visual impairment. Mode 2 has 10 frequency stages: 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85, suitable for infants aged 1-3 years, or over 3 years with obvious visual impairment. Mode 3 has 10 frequency stages: 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56, suitable for children over 2 years old.
3) In the learning stage the computer program, according to the input parameters, locates the subject's face and eyes, performs feature extraction, obtains the subject's monocular gaze model by SVM (Support Vector Machine) learning, and builds the classifier h. The detailed process is:
A. Present on the screen animated pictures that attract the subject, appearing repeatedly in left-then-right order, and record the side of each picture; leave an interval between successive presentations, and play a sound cue when each picture appears;
B. Obtain the subject's gaze features by feature extraction, which comprises grayscale-histogram computation and PCA reduction;
C. Train the classifier h by SVM learning from the recorded picture sides and the corresponding gaze features of the subject; the classifier h comprises the subject's look-left and look-right gaze models.
4) In the test stage, present grating stimuli according to the input parameters, recording the side of each grating as the ground truth, and obtain the subject's test sample set:
A. Present grating stimuli on the computer screen as shown in Fig. 2, appearing randomly on the left or right, with an interval between successive stimuli;
B. Just before each grating stimulus appears, play a sound to attract the subject's attention;
C. The stimulus frequency increases gradually, and the grating stimulus at each specific frequency is repeated to ensure the accuracy of the acquisition;
D. From the camera images of the subject watching the screen during the presentation of each specific frequency, compute the subject's eye-movement features for that frequency; these form the test sample set. The detailed process mainly comprises face and eye location, feature extraction (grayscale-histogram computation and PCA reduction), and SVM classification.
5) Use the classifier h built in the learning stage to predict on the test sample set containing the subject's eye-movement features at each specific frequency, and compare the predictions with the ground-truth grating sides. This shows which grating frequencies the tested eye did or did not fixate, giving the eye-movement pattern data result for that eye.
In the above method, the subject's face and eyes are located in both the learning and test stages with classifiers based on Haar features, applied to the designated region of the image. Feature extraction in both stages comprises grayscale-histogram computation and PCA reduction. Fig. 3 is a schematic diagram of the eye-detection frame, where (a) is the detected eye region, (b) the eye region divided into m blocks, and (c) the grayscale histograms computed for each of the m blocks. As shown in Fig. 3, the block-partitioned histogram algorithm describes the position of the iris well while using its intensity information; it comprises the following steps:
A. First, take the detected eye box and divide it into m equal blocks along the horizontal axis;
B. Next, for each of the m blocks, compute its histogram feature vector;
C. Finally, concatenate the histogram feature vectors of all m blocks in order to obtain the final feature vector.
Here m is 20.
Feature extraction in the learning and test stages uses temporal tracking, judging and classifying the subject's gaze direction from the feature sequence over a period of time; since the resulting feature vectors are high-dimensional and redundant, PCA reduction is applied before the eye-movement features for each specific frequency are obtained. After completing the above steps, depending on the results, the operator can test further: repeat the same mode, select a mode with higher-frequency grating stimuli, or test the other eye, so as to obtain more complete eye-movement pattern data.
When acquisition is performed according to the method provided by the invention, the data obtained report during which periods the subject did not notice the grating stimulus, or that it noticed all of them. If the subject repeatedly failed to notice grating stimuli, the highest frequency among the stimuli it did notice is taken as its eye-movement pattern data; if it noticed all the stimuli, the highest frequency in the mode is taken as its eye-movement pattern data.

Claims (10)

1. An automatic acquisition method for eye-movement pattern data based on computer vision, wherein the method divides acquisition into a learning stage and a test stage, uses the gaze model of the subject obtained in the learning stage to classify the subject's eye-movement features obtained in the test stage, and thereby obtains the subject's eye-movement pattern data, comprising the steps of:
1) setting up the acquisition environment so that the subject can only look at the computer screen during the whole acquisition process;
2) the operator determining the program running parameters according to the subject's condition and entering them into the computer, the program running parameters comprising the grating sequence mode in which grating stimuli are presented, the eye to be tested, and the interval at which images are captured, wherein several grating sequence modes are available, each comprising grating stimuli at multiple specific frequencies, and the eye to be tested is the left or the right eye;
3) in the learning stage, the computer presenting pictures according to the program running parameters while recording whether each picture appears on the left or the right, and, by locating the subject's face and eyes, extracting features, and performing SVM learning, obtaining a gaze model for one eye of the subject and building a classifier h;
4) in the test stage, the computer presenting grating stimuli according to the program running parameters while recording the position of the stimulus at each specific frequency as the ground truth, locating the subject's face and eye positions during the viewing period of each stimulus, and obtaining by feature extraction the subject's eye-movement features at each specific frequency, which form the subject's test sample set;
5) using the classifier h built in step 3) to perform SVM classification on the test sample set from step 4) to obtain predicted values, and comparing the predictions with the ground truth from step 4) to obtain the highest grating frequency in the sequence mode that the subject noticed, taken as the subject's eye-movement pattern data.
2. The method of claim 1, wherein after the subject's eye-movement pattern data are obtained in step 5), steps 2)-5) are repeated with the same or different program running parameters; the different parameters comprise a grating sequence mode with higher grating frequencies, or a different eye to be tested.
3. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 1, characterized in that the program running parameters of step 2) further comprise the file name of the subject's image data, the serial number of the grating at which testing starts, the serial number of the grating at which testing ends, the starting frame number and the ending frame number within the presentation of each grating, and the dimensionality retained after PCA reduction.
4. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 1, characterized in that the subject is an infant; three grating-sequence modes for presenting grating stimuli are provided in step 2), corresponding respectively to infant subjects of different ages and different vision conditions, and each grating-sequence mode comprises grating stimuli at ten characteristic frequencies.
5. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 4, characterized in that the grating-sequence modes for presenting grating stimuli are mode 1, mode 2 and mode 3; within each mode the characteristic frequencies of the grating stimuli increase progressively, and the grating stimulus at each characteristic frequency is presented repeatedly.
6. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 5, characterized in that the ten characteristic frequencies of mode 1 are, in order, 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14 and 5.14 cycles/cm, suitable for infants aged 0 to 2 years and for infants over 2 years of age with obvious visual impairment; the ten characteristic frequencies of mode 2 are, in order, 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85 and 6.85 cycles/cm, suitable for infants aged 1 to 3 years and for infants over 3 years of age with obvious visual impairment; and the ten characteristic frequencies of mode 3 are, in order, 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56 and 20.56 cycles/cm, suitable for infants over 2 years of age.
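The three frequency tables of claim 6 can be captured directly as data; a small Python sketch (the `MODES` dictionary and `frequencies` helper are illustrative names, not part of the claims):

```python
# The three grating-sequence modes of claim 6; frequencies in cycles/cm.
MODES = {
    1: [0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14],        # 0-2 yrs
    2: [0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85],        # 1-3 yrs
    3: [2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56],  # 2+ yrs
}

def frequencies(mode):
    """Return the ten characteristic frequencies of a mode in presentation order."""
    return MODES[mode]

# Each frequency appears twice in succession and the pairs increase monotonically,
# matching the progressively increasing, repeated presentation of claim 5.
for seq in MODES.values():
    assert seq == sorted(seq)
    assert all(seq[i] == seq[i + 1] for i in range(0, 10, 2))
```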
7. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 1, characterized in that the pictures presented according to the program running parameters in step 3) are moving pictures; the presentation method specifically presents the pictures several times in left-then-right order, with a time interval between two adjacent presentations and an auditory cue while each picture is shown; the subject's monocular gaze model comprises a leftward-gaze model and a rightward-gaze model.
8. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 1, characterized in that presenting grating stimuli according to the program running parameters in step 4) specifically means that the gratings appear randomly on the left or on the right, with a time interval between two successive grating presentations; when a grating stimulus is about to be presented, a sound is played to attract the subject's attention.
9. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 1, characterized in that the feature extraction of both the learning phase of step 3) and the testing phase of step 4) comprises grayscale-histogram computation; the grayscale-histogram computation uses a block-wise grayscale-histogram algorithm, which describes the position of the iris while exploiting the gray-level information; the block-wise grayscale-histogram computation comprises the following steps:
A. first, the eye-detection box is obtained and divided along the abscissa into m blocks of equal width, where m is an integer from 17 to 30;
B. next, for each of the m blocks obtained by the division, the grayscale-histogram feature vector of that block is computed statistically;
C. finally, the histogram feature vectors of all m blocks are concatenated in order to obtain the final feature vector.
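Steps A to C can be sketched in a few lines of NumPy. The choices m = 17 and 16 histogram bins below are illustrative (the claim fixes only that m is an integer in 17 to 30), and the synthetic patch stands in for a real eye-detection box:

```python
import numpy as np

def blockwise_gray_histogram(eye_box, m=17, bins=16):
    """Split the eye-detection box into m equal-width strips along the abscissa,
    histogram the gray levels of each strip, and concatenate the results."""
    height, width = eye_box.shape
    edges = np.linspace(0, width, m + 1).astype(int)   # equal splits of the x axis
    feats = []
    for i in range(m):
        strip = eye_box[:, edges[i]:edges[i + 1]]
        hist, _ = np.histogram(strip, bins=bins, range=(0, 256))
        feats.append(hist / max(strip.size, 1))        # per-strip normalization
    return np.concatenate(feats)                       # final m*bins-dim vector

# Dark iris strips put their mass in low bins, bright sclera strips in high bins,
# so the concatenated histogram encodes where the iris sits inside the box.
patch = np.full((24, 68), 220, dtype=np.uint8)         # bright "sclera"
patch[:, 20:40] = 30                                   # dark "iris" region
vec = blockwise_gray_histogram(patch, m=17, bins=16)
print(vec.shape)  # (272,)
```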
10. The computer-vision-based method for automatically acquiring eye-movement-pattern data according to claim 9, characterized in that the feature extraction further comprises performing PCA dimensionality reduction on the grayscale-histogram feature vector obtained for each block, so as to remove the redundant information contained in the per-block grayscale-histogram feature vectors produced by the block-wise grayscale-histogram computation.
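The PCA reduction of claim 10 projects each histogram feature vector onto its leading principal components, discarding the low-variance directions that carry redundant information. A minimal SVD-based sketch with random placeholder features (the target dimension k = 20 is illustrative; in the method it comes from the program running parameters):

```python
import numpy as np

def pca_reduce(X, k):
    """Project the rows of X onto their top-k principal components,
    discarding low-variance (redundant) directions."""
    Xc = X - X.mean(axis=0)                      # center the features
    # SVD of the centered data: rows of Vt are the principal axes,
    # ordered by decreasing singular value (i.e. decreasing variance).
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T                         # k-dimensional projection

rng = np.random.default_rng(1)
feats = rng.normal(size=(50, 272))               # 50 block-histogram feature vectors
reduced = pca_reduce(feats, k=20)                # target dimension is a run parameter
print(reduced.shape)  # (50, 20)
```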
CN201410775791.5A 2014-12-15 2014-12-15 Eye movement mode data automatic obtaining method based on computer vision Active CN104463216B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410775791.5A CN104463216B (en) 2014-12-15 2014-12-15 Eye movement mode data automatic obtaining method based on computer vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410775791.5A CN104463216B (en) 2014-12-15 2014-12-15 Eye movement mode data automatic obtaining method based on computer vision

Publications (2)

Publication Number Publication Date
CN104463216A true CN104463216A (en) 2015-03-25
CN104463216B CN104463216B (en) 2017-07-28

Family

ID=52909230

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410775791.5A Active CN104463216B (en) 2014-12-15 2014-12-15 Eye movement mode data automatic obtaining method based on computer vision

Country Status (1)

Country Link
CN (1) CN104463216B (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101283905A (*) 2008-05-22 2008-10-15 Chongqing University Statistical analysis process of nystagmus displacement vector
CN103279751A (*) 2013-06-19 2013-09-04 University of Electronic Science and Technology of China Eye movement tracking method on the basis of accurate iris positioning
US20130300891A1 (*) 2009-05-20 2013-11-14 National University Of Ireland Identifying Facial Expressions in Acquired Digital Images
CN104200192A (*) 2013-01-18 2014-12-10 GM Global Technology Operations LLC Driver gaze detection system


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
DINGRUI DUAN ET AL.: "Gaze Estimation in Children's Peer-play Scenarios", 2013 Second IAPR Asian Conference on Pattern Recognition *
FAN YUNWEI: "Comparison of sweep visual evoked potential acuity and international standard visual acuity in children with amblyopia", Ophthalmology (《眼科》) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107133584A (*) 2017-04-27 2017-09-05 Guizhou University Implicit intention recognition and classification method based on eye tracking
CN109190505A (*) 2018-08-11 2019-01-11 Shi Xiuying Image recognition method based on visual understanding
CN109798888A (*) 2019-03-15 2019-05-24 BOE Technology Group Co., Ltd. Attitude determination device and method for a mobile device, and visual odometer
CN110840467A (*) 2019-10-18 2020-02-28 Tianjin University Correlation analysis method for eye movement data and mental system diseases
CN112890815A (*) 2019-12-04 2021-06-04 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Autism auxiliary evaluation system and method based on deep learning
CN111141472A (*) 2019-12-18 2020-05-12 Liang Qihui Anti-seismic support and hanger detection method and system
CN111141472B (*) 2019-12-18 2022-02-22 Jiangsu Wanlu Mechanical and Electrical Technology Co., Ltd. Anti-seismic support and hanger detection method and system
CN111951637A (*) 2020-07-19 2020-11-17 Northwestern Polytechnical University Task-scenario-related unmanned aerial vehicle pilot visual attention distribution pattern extraction method
CN111951637B (*) 2020-07-19 2022-05-03 Northwestern Polytechnical University Task-context-associated unmanned aerial vehicle pilot visual attention distribution pattern extraction method
CN113425247A (*) 2021-06-10 2021-09-24 Beijing University of Posts and Telecommunications Eye movement data visualization method, device and equipment
CN113425247B (*) 2021-06-10 2022-12-23 Beijing University of Posts and Telecommunications Eye movement data visualization method, device and equipment

Also Published As

Publication number Publication date
CN104463216B (en) 2017-07-28

Similar Documents

Publication Publication Date Title
CN104463216A (en) Eye movement pattern data automatic acquisition method based on computer vision
CN107862678B (en) Fundus image non-reference quality evaluation method
US11389058B2 (en) Method for pupil detection for cognitive monitoring, analysis, and biofeedback-based treatment and training
CN105069304A (en) Machine learning-based method for evaluating and predicting ASD
CN108960182A (en) A kind of P300 event related potential classifying identification method based on deep learning
CN111986211A (en) Deep learning-based ophthalmic ultrasonic automatic screening method and system
CN104102899B (en) Retinal vessel recognition methods and device
CN110269587B (en) Infant motion analysis system and infant vision analysis system based on motion
CN106650795B (en) Hotel room type image sorting method
CN108921169B (en) A kind of eye fundus image blood vessel segmentation method
CN109886165A (en) A kind of action video extraction and classification method based on moving object detection
Fuadah et al. Mobile cataract detection using optimal combination of statistical texture analysis
CN109344763A (en) A kind of strabismus detection method based on convolutional neural networks
Suero et al. Locating the Optic Disc in Retinal Images Using Morphological Techniques.
CN115409764A (en) Multi-mode fundus blood vessel segmentation method and device based on domain self-adaptation
RU2556417C2 (en) Detecting body movements using digital colour rear projection
CN104679967B (en) A kind of method for judging psychological test reliability
CN109840905A (en) Power equipment rusty stain detection method and system
CN114100103B (en) Rope skipping counting detection system and method based on key point identification
CN106175657A (en) A kind of vision automatic checkout system
CN107133631A (en) A kind of method and device for recognizing TV station's icon
CN102680488B (en) Device and method for identifying massive agricultural product on line on basis of PCA (Principal Component Analysis)
CN110503636A (en) Parameter regulation means, lesion prediction technique, parameter adjustment controls and electronic equipment
CN114445666A (en) Deep learning-based method and system for classifying left eye, right eye and visual field positions of fundus images
CN106548121A (en) A kind of method of testing and device of vivo identification

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant