CN104463216B - Eye movement mode data automatic obtaining method based on computer vision - Google Patents
- Publication number
- CN104463216B CN104463216B CN201410775791.5A CN201410775791A CN104463216B CN 104463216 B CN104463216 B CN 104463216B CN 201410775791 A CN201410775791 A CN 201410775791A CN 104463216 B CN104463216 B CN 104463216B
- Authority
- CN
- China
- Legal status: Active (an assumption, not a legal conclusion)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/18—Eye characteristics, e.g. of the iris
Abstract
The present invention discloses a method for automatically obtaining eye movement pattern data based on computer vision. The method divides data acquisition into a learning stage and a testing stage: the subject's fixation model is obtained in the learning stage, and the subject's eye movement features are obtained in the testing stage. Specifically, the subject watches a computer screen; the operator makes a preliminary assessment of the subject, determines the program running parameters, and enters them into the computer. In the learning stage the subject's monocular fixation model is obtained and a classifier h is established. In the testing stage the subject's eye movement features under the grating stimulus of each specific frequency are collected as the subject's test sample set. SVM classification is then performed on the test sample set with classifier h to obtain predicted values, which are compared with the recorded actual values to obtain the subject's eye movement pattern data. The method reduces the cost of obtaining eye movement pattern data while improving acquisition efficiency and accuracy.
Description
Technical field
The invention belongs to the field of computer vision and relates to automatic face and eye position recognition and eye movement feature data acquisition, and in particular to a method for automatically obtaining eye movement pattern data based on computer vision.
Background technology
Eye movement (eye motion) patterns can provide a wealth of information for visual processing. In practice, eye movement pattern data must be obtained for people who lack full verbal and gestural expression, have low levels of intellectual development, or cannot keep their attention focused for long, for example when performing vision screening on infants. At present, such data are usually acquired with the preferential looking method: an examiner shows the subject a card printed with a grating, observes the turning direction of the subject's eyes or head through a peephole in the card, and judges whether it matches the position of the grating pattern on the card, thereby obtaining the subject's eye movement pattern data. This process is carried out entirely by hand, cannot be judged automatically by computer, and is therefore extremely time-consuming and laborious. Other methods, such as OKN and VEP, are mostly difficult and cumbersome to operate and have never been widely adopted. Existing eye movement pattern data acquisition methods therefore cannot be automated; data acquisition is inefficient, the accuracy of the obtained data is low, and it is difficult to obtain eye movement pattern data both soundly and quickly.
Summary of the invention
To overcome the above deficiencies of the prior art, the present invention provides a method for automatically obtaining eye movement pattern data based on computer vision. Using computer vision techniques, the method establishes the subject's fixation model through machine learning and then predicts the subject's eye movement features in the testing stage, thereby obtaining the subject's eye movement pattern data automatically. The method reduces the cost of acquiring eye movement pattern data, improves acquisition efficiency, and ensures the accuracy of the obtained data.
The technical scheme provided by the present invention is as follows:
A method for automatically obtaining eye movement pattern data based on computer vision, which divides the data acquisition process into a learning stage and a testing stage, uses the subject's fixation model obtained in the learning stage to predict the subject's eye movement features obtained in the testing stage, and thereby obtains the subject's eye movement pattern data, comprising the following steps:
1) Arrange the acquisition environment so that, throughout the acquisition process, the subject can only look at the computer screen.
2) The operator makes a preliminary assessment of the subject, determines the program running parameters, and enters them into the computer. The parameters include the raster sequence mode in which grating stimuli are presented, the eye (left or right) to be tested, and the time interval for capturing images. Several raster sequence modes are available, each comprising grating stimuli at multiple specific frequencies.
3) In the learning stage, the computer presents pictures according to the running parameters while recording whether each picture is presented on the left or the right. It identifies the subject's face and eye positions in turn, performs feature extraction, and obtains the subject's monocular fixation model through SVM (Support Vector Machine) learning, establishing classifier h.
4) In the testing stage, the computer presents grating stimuli according to the running parameters while recording the position of each specific-frequency grating stimulus as the actual value. Through face and eye position recognition it obtains the subject's face and eye positions during the fixation period of each specific-frequency grating stimulus, and through feature extraction it obtains the subject's eye movement features under each stimulus, forming the subject's test sample set.
5) Use classifier h established in step 3) to perform SVM classification on the test sample set of step 4), obtaining predicted values; compare the predicted values with the actual values of step 4) to obtain the highest grating-stimulus frequency in the raster sequence mode that the subject noticed, which serves as the subject's eye movement pattern data.
In the above method, after the subject's eye movement pattern data are obtained in step 5), the operator may repeat steps 2) to 5) according to the data obtained. The program running parameters of the new run may be the same as those of the previous run; alternatively, a raster sequence mode with higher grating frequencies than the previous run may be selected, i.e. the raster sequence mode presented in the new run uses higher grating frequencies than before, or the eye tested in the new run may differ from the eye tested before.
In the above eye movement pattern data acquisition method, further:
The environment of step 1) should be quiet and dark, with the subject's eyes level with the centre of the screen. In an embodiment of the invention the subject is an infant, held by a parent and seated in front of the computer screen so that the subject's eyes are level with the screen centre.
The running parameters of step 2) further include the filename of the subject's image data, the sequence numbers of the first and last gratings to test, the first and last frame numbers within each grating presentation period, and the target dimension after PCA (Principal Component Analysis) reduction.
The raster sequence modes of step 2) comprise several modes, each corresponding to subjects of a different age and visual condition and divided into 10 frequency steps; the specific frequencies increase step by step, and the grating stimulus at each specific frequency is repeated (backtracking) to ensure the accuracy of the acquisition. In the embodiment of the invention three raster sequence modes are provided; in each, the stimulus frequency increases gradually and each specific frequency is repeated. The 10 frequencies (cycles/cm) of mode 1 are, in order: 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14, suitable for infants aged 0-2, or older than 2 with obvious visual impairment. The 10 frequencies of mode 2 are, in order: 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85, suitable for infants aged 1-3, or older than 3 with obvious visual impairment. The 10 frequencies of mode 3 are, in order: 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56, suitable for infants older than 2. Using different raster sequence modes simplifies data acquisition: the operator can roughly estimate an infant subject's vision from factors such as age, and starting from a suitable raster sequence mode allows the infant's eye movement pattern data to be obtained quickly.
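The three mode tables above are plain data and translate directly into a lookup structure. The selection helper below is only an illustration of the age/visual-condition guidance in the text (the age bands overlap, so the patent leaves the final choice to the operator); `RASTER_MODES` and `mode_for_subject` are hypothetical names.

```python
# Frequencies (cycles/cm) for each raster sequence mode as listed in the
# description; each of the 10 steps appears twice (the "backtracking"
# repetition that guards the accuracy of the acquisition).
RASTER_MODES = {
    1: [0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14],
    2: [0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85],
    3: [2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56],
}

def mode_for_subject(age_years, obvious_impairment):
    """Illustrative mode selection following the guidance in the text;
    in the patent the operator makes the final choice in advance."""
    if obvious_impairment:
        return 1 if age_years <= 2 else 2
    return 3 if age_years >= 2 else 2
```
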
In step 3) moving pictures are presented according to the running parameters, specifically several times in left-then-right order, with a time interval between two adjacent presentations and an audio cue at each presentation; the position of each picture is simultaneously recorded as left or right. Classifier h of step 3) is obtained by SVM training on the recorded picture positions and the subject's monocular fixation data, and comprises the subject's monocular look-left and look-right fixation models.
In step 4) grating stimuli are presented according to the running parameters, appearing on the left or right at random, with a time interval between two successive stimuli; just before each grating stimulus is presented, a sound is played to attract the subject's attention.
In both the learning stage of step 3) and the testing stage of step 4), the subject's face and eye positions are identified with a classifier based on Haar features, which performs face and eye detection on a designated region of the image; if detection fails, the frame is filled in or an error is reported.
Feature extraction in the learning stage of step 3) and the testing stage of step 4) comprises grey-level histogram computation and PCA reduction. The grey-level histogram is computed block by block, so that the intensity information also describes the position of the iris. The block-wise grey-level histogram computation comprises the following steps:
A. First, obtain the eye detection box and divide it into m blocks of equal width along the horizontal axis;
B. Next, for each of the m blocks obtained by the division, compute the histogram feature vector of the block;
C. Finally, concatenate the histogram feature vectors of all m blocks in order to obtain the final vector.
Here m is an integer with a value of 17 to 30; preferably, m is 20.
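Steps A-C above map directly onto a few lines of NumPy. This sketch assumes a greyscale eye region and 16 histogram bins per block (the bin count is not specified in the text); `block_histogram_feature` is a hypothetical name.

```python
import numpy as np

def block_histogram_feature(eye_region, m=20, bins=16):
    """Split the detected eye region into m blocks of equal width along
    the x axis, compute a grey-level histogram per block, and concatenate
    the per-block histograms in order into one feature vector."""
    h, w = eye_region.shape
    feats = []
    for i in range(m):
        block = eye_region[:, i * w // m:(i + 1) * w // m]
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        # Normalise by block size so blocks of unequal width compare fairly.
        feats.append(hist.astype(float) / max(block.size, 1))
    return np.concatenate(feats)
```
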
Feature extraction in the learning and testing stages uses sequential tracking: the subject's gaze direction is judged and classified from a feature sequence over a period of time. The feature vectors obtained in this way are very high-dimensional and contain much redundant information, so PCA reduction must be applied before the subject's eye movement features under each specific-frequency grating stimulus are obtained. Accordingly, feature extraction in steps 3) and 4) also applies PCA reduction to the block-wise grey-level histogram feature vectors, removing the redundancy contained in the per-block grey-level histogram feature vectors.
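The redundancy-removal step can be sketched with scikit-learn's `PCA` (an illustrative choice; the patent only names PCA, not a library). Here the highly correlated columns stand in for the redundant per-block histograms; the target dimension of 10 is one of the runtime parameters mentioned in the text.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)

# Stand-in for the per-frame histogram feature vectors of one fixation
# period: high-dimensional, but built from only 5 underlying signals,
# so most of the 320 columns are redundant.
base = rng.normal(size=(60, 5))
frames = np.hstack([base + 0.01 * rng.normal(size=(60, 5))
                    for _ in range(64)])          # shape (60, 320)

pca = PCA(n_components=10)           # target dimension is a runtime parameter
reduced = pca.fit_transform(frames)  # 320-dim -> 10-dim per frame
```
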
In step 4) of the testing stage, the subject's eye movement features during the fixation period of each specific frequency are obtained as follows:
a) From the screen shots captured by the camera while the grating stimulus of each specific frequency is presented, detect the subject's face and eye regions;
b) Extract the grey-level histogram feature vector from the eye region;
c) Apply PCA reduction to the grey-level histogram feature vectors of all images in the fixation period of each specific frequency, obtaining the subject's eye movement pattern for that fixation period.
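Steps a)-c) compose into a single per-period pipeline. The sketch below keeps the per-frame extractor abstract (any function from frame to feature vector, such as a block-wise grey-level histogram, can be plugged in); `fixation_features` is a hypothetical name.

```python
import numpy as np
from sklearn.decomposition import PCA

def fixation_features(frames, extract, n_components=10):
    """Steps a)-c) for one specific-frequency fixation period: `frames`
    are the captured screen shots, `extract` is a per-frame feature
    extractor (e.g. a block-wise grey-level histogram of the detected eye
    region); the stacked per-frame vectors are then PCA-reduced to give
    the eye movement pattern of the period."""
    X = np.vstack([extract(f) for f in frames])
    return PCA(n_components=n_components).fit_transform(X)
```
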
The data obtained by the comparison in step 5) include the grating-stimulus frequencies that the tested eye did and did not notice. If the subject repeatedly fails to notice grating stimuli in a given raster sequence mode, the highest frequency among the stimuli it did notice in that mode is taken as the subject's eye movement pattern data result; if the subject notices all grating stimuli in the mode, the highest stimulus frequency of the mode is taken as the result.
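The decision rule above can be stated compactly; `eye_movement_result` is a hypothetical helper name, and returning `None` when no stimulus was noticed is an assumption (the text does not cover that case).

```python
def eye_movement_result(frequencies, noticed):
    """Decision rule of step 5): if every grating stimulus in the raster
    sequence mode was noticed, report the mode's highest frequency;
    otherwise report the highest frequency among the noticed stimuli
    (None if no stimulus was noticed at all)."""
    if all(noticed):
        return max(frequencies)
    noticed_freqs = [f for f, n in zip(frequencies, noticed) if n]
    return max(noticed_freqs) if noticed_freqs else None
```
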
The principle of the invention is as follows. Eye movement pattern data acquisition is divided into a learning stage and a testing stage. In the learning stage, images of the subject captured in real time by the computer camera serve as the learning sample set; training an SVM classifier on this set yields the subject's fixation model (eye movement pattern) and establishes the classifier. In the testing stage, images of the subject captured in real time serve as the test sample set: from the eye movement features of the subject during each specific-frequency fixation period, the classifier predicts the subject's gaze in the images of each grating presentation period, giving a predicted value (look left or look right) for each period. The program also records the actual value (left or right) of where the grating image was presented in each specific-frequency fixation period. Comparing the predicted values with the actual values shows whether the subject correctly noticed the grating pattern on the screen, i.e. in which periods the subject failed to notice the grating stimulus, or whether all stimuli were noticed, and hence which grating-frequency patterns the tested eye did or did not fixate — the eye movement pattern data.
Compared with the prior art, the beneficial effects of the invention are as follows: compared with existing manual acquisition of eye movement pattern data, the computer-vision-based method provided by the invention obtains eye movement pattern data automatically, saving the large amount of manual labour spent on acquisition and improving its efficiency. It solves the low efficiency of manual acquisition, reduces its cost, saves acquisition time, and effectively guarantees the accuracy of the obtained data.
Brief description of the drawings
Fig. 1 is a flow chart of the method for acquiring an infant subject's eye movement pattern data in an embodiment of the invention.
Fig. 2 shows a sample grating stimulus displayed on the computer screen in an embodiment of the invention.
Fig. 3 is a schematic diagram of the eye detection framework of the infant eye movement pattern data acquisition method in an embodiment of the invention, where (a) is the detected eye region, (b) is the eye region divided into m blocks, and (c) shows the grey-level histograms computed for each of the m blocks.
Embodiment
The invention is further described below by way of embodiments with reference to the drawings, without limiting its scope in any way.
In this embodiment the subject is an infant held by a parent, and the operator is a tester carrying out the computer-vision-based acquisition of infant eye movement pattern data provided by the invention. Fig. 1 is a flow chart of the acquisition method in this embodiment, which comprises the following steps:
1) Arrange the environment: place the computer on a desk with the screen centre at a height of 100-150 cm, and place a seat in front of the desk. The parent holds the subject facing the computer screen and sits down; adjust the seat height so that the subject's eyes are level with the screen centre at a distance of 55 cm. Keep the surroundings quiet and dark, with no extraneous objects to attract the subject's attention, so that the subject can only notice the computer screen. Ensure that the parent gives the subject no instructions throughout the process.
2) The operator makes a preliminary judgement of the subject and enters the program running parameters, including: the filename of the subject's image data, the sequence numbers of the first and last gratings to test, the first and last frame numbers within each grating presentation period, the dimension after PCA reduction, the eye to test (left or right), and the raster-pattern sequence mode. There are three sequence modes (modes 1, 2 and 3); in each, the stimulus frequency increases gradually and each specific frequency is repeated to ensure the accuracy of the acquisition. The modes correspond to infants of different ages and visual conditions, and the choice of mode is made in advance by the operator. Mode 1 has 10 frequency steps (cycles/cm): 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14, suitable for infants aged 0-2 or older than 2 with obvious visual impairment. Mode 2 has 10 frequency steps: 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85, suitable for infants aged 1-3 or older than 3 with obvious visual impairment. Mode 3 has 10 frequency steps: 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56, suitable for infants older than 2.
3) In the learning stage, the computer program identifies the subject's face and eye positions according to the input parameters, performs feature extraction, obtains the subject's monocular eye movement pattern through SVM (Support Vector Machine) learning, and establishes classifier h. The detailed process is:
A. Present on the screen, several times and in left-then-right order, moving pictures that attract the subject, while recording the side of each picture; leave a time interval between two adjacent pictures and play an audio cue at each presentation;
B. Obtain the subject's eye movement pattern through feature extraction, which comprises grey-level histogram computation and PCA reduction;
C. Train classifier h by SVM learning from the recorded picture sides and the subject's corresponding eye movement patterns; h comprises the subject's look-left and look-right eye movement patterns.
4) In the testing stage, present grating stimuli according to the input parameters while recording the left/right position of each grating as the actual value, and obtain the subject's test sample set:
A. Present grating stimuli on the computer screen as shown in Fig. 2, appearing on the left or right at random, with a time interval between two successive stimuli;
B. Just before each grating stimulus appears, play a sound to attract the subject's attention;
C. The stimulus frequency increases gradually, and the stimulus at each specific frequency is repeated to ensure the accuracy of the eye movement pattern data acquisition;
D. From the camera images of the subject fixating each specific-frequency grating stimulus, compute the subject's eye movement features under each stimulus as the test sample set; the process mainly comprises face and eye position recognition, feature extraction (grey-level histogram computation and PCA reduction) and SVM classification.
5) Use classifier h established in the learning stage to predict on the test sample set containing the subject's eye movement features under each specific-frequency grating stimulus, obtaining predicted values; compare the predicted values with the actual left/right positions of the gratings to determine which grating frequencies the tested eye did and did not fixate, as the subject's monocular eye movement pattern data result.
In the above method, the subject's face and eye positions are identified in both the learning and testing stages with a classifier based on Haar features, which recognises face and eye positions in a designated region of the image. Feature extraction in both stages comprises grey-level histogram computation and PCA reduction. Fig. 3 is a schematic diagram of the eye detection framework of the eye movement pattern data acquisition method, where (a) is the detected eye region, (b) is the eye region divided into m blocks, and (c) shows the grey-level histograms of the m blocks. As shown in Fig. 3, the block-wise grey-level histogram describes the position of the iris well through intensity information, and is computed as follows:
A. First, obtain the eye detection box and divide it into m blocks of equal width along the horizontal axis;
B. Next, for each of the m blocks obtained by the division, compute the histogram feature vector of the block;
C. Finally, concatenate the histogram feature vectors of all m blocks in order to obtain the final vector.
Here m is 20.
Feature extraction in the learning and testing stages uses sequential tracking, judging and classifying the subject's gaze direction from a feature sequence over a period of time; since the feature vectors obtained in this way are very high-dimensional and redundant, PCA reduction is applied before the subject's eye movement features under each specific-frequency grating stimulus are obtained. After the above steps of the computer-vision-based eye movement pattern data acquisition method are completed, depending on the results, the operator may test further: repeat the test, select a mode with higher-frequency grating stimuli, or test the other eye, so as to obtain a more complete eye movement pattern data result.
The data obtained by the method provided by the invention report in which stages the subject failed to notice the grating stimulus, or that all stimuli were noticed. If the subject repeatedly failed to notice grating stimuli, the highest frequency it did notice is taken as the subject's eye movement pattern data; if the subject noticed all grating stimuli, the highest-frequency grating stimulus of the mode is taken as the subject's eye movement pattern data.
Claims (10)
1. A method for automatically obtaining eye movement pattern data based on computer vision, the method dividing the data acquisition process into a learning stage and a testing stage, and using the subject's fixation model obtained in the learning stage to predict the subject's eye movement features obtained in the testing stage, thereby obtaining the subject's eye movement pattern data, comprising the following steps:
1) arranging the acquisition environment so that, throughout the acquisition process, the subject can only look at the computer screen;
2) determining, by an operator and according to the subject's condition, program running parameters, and entering them into the computer; the program running parameters comprising the raster sequence mode in which grating stimuli are presented, the eye to be tested, and the time interval for capturing images; several raster sequence modes being available, each comprising grating stimuli at multiple specific frequencies; the eye to be tested being the left eye or the right eye;
3) in the learning stage, presenting pictures by the computer according to the running parameters while recording whether each picture is presented on the left or the right, identifying the subject's face and eye positions in turn, and performing feature extraction and SVM learning to obtain the subject's monocular fixation model, establishing classifier h;
4) in the testing stage, presenting grating stimuli by the computer according to the running parameters while recording the position of each specific-frequency grating stimulus as the actual value, obtaining through face and eye position recognition the subject's face and eye positions during the fixation period of each specific-frequency grating stimulus, and then obtaining through feature extraction the subject's eye movement features under each specific-frequency grating stimulus, as the subject's test sample set;
5) performing SVM classification on the test sample set of step 4) with classifier h established in step 3) to obtain predicted values, and comparing the predicted values with the actual values of step 4) to obtain the highest grating-stimulus frequency of the raster sequence mode that the subject noticed, as the subject's eye movement pattern data.
2. The method for automatically obtaining eye movement pattern data based on computer vision of claim 1, wherein, after the subject's eye movement pattern data are obtained in step 5), steps 2) to 5) are repeated with the same or different program running parameters; the different program running parameters comprising a raster sequence mode with higher grating frequencies, or a different eye to be tested.
3. The method for automatically obtaining eye movement pattern data based on computer vision of claim 1, wherein the program running parameters of step 2) further comprise the filename of the subject's image data, the sequence numbers of the first and last gratings to test, the first and last frame numbers within each grating presentation period, and the dimension after PCA reduction.
4. The method for automatically obtaining eye movement pattern data based on computer vision of claim 1, wherein the subject is an infant; the raster sequence modes of step 2) are three in number, respectively corresponding to infant subjects of different ages and visual conditions, each raster sequence mode comprising grating stimuli at ten specific frequencies.
5. The method for automatically obtaining eye movement pattern data based on computer vision of claim 4, wherein the raster sequence modes are mode 1, mode 2 and mode 3; the specific frequencies of the grating stimuli they comprise increase gradually, and the grating stimulus at each specific frequency is repeated.
6. The eye movement mode data automatic obtaining method based on computer vision as claimed in claim 5, characterized in that the ten specific frequencies of pattern 1 are, in order: 0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14 and 5.14 cycles/cm, suitable for infants aged 0–2 years or infants over 2 years with obvious vision impairment; the ten specific frequencies of pattern 2 are, in order: 0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85 and 6.85 cycles/cm, suitable for infants aged 1–3 years or infants over 3 years with obvious vision impairment; the ten specific frequencies of pattern 3 are, in order: 2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56 and 20.56 cycles/cm, suitable for infants over 2 years of age.
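The three frequency sequences of claim 6 can be written down directly as data. A short Python sketch that also checks the structural properties stated in claims 5 and 6 (ten entries per pattern, each frequency presented twice, frequencies non-decreasing):

```python
# Spatial-frequency sequences (cycles/cm) for the three grating-sequence
# patterns of claim 6; each frequency appears twice in succession.
PATTERNS = {
    1: [0.32, 0.32, 0.64, 0.64, 1.29, 1.29, 2.28, 2.28, 5.14, 5.14],
    2: [0.43, 0.43, 0.86, 0.86, 1.58, 1.58, 3.43, 3.43, 6.85, 6.85],
    3: [2.28, 2.28, 5.14, 5.14, 10.28, 10.28, 13.71, 13.71, 20.56, 20.56],
}

for seq in PATTERNS.values():
    assert len(seq) == 10
    # each frequency is presented twice in succession ...
    assert all(seq[i] == seq[i + 1] for i in range(0, 10, 2))
    # ... and the sequence never decreases (claim 5: frequencies increase)
    assert all(seq[i] <= seq[i + 1] for i in range(9))
```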
7. The eye movement mode data automatic obtaining method based on computer vision as claimed in claim 1, characterized in that in step 3) the pictures presented according to the program running parameters are motion pictures; the presentation method is specifically: presenting repeatedly in left-then-right order, with a time interval between two adjacent presentations and an auditory cue when each picture is presented; the subject's single-eye gaze models include a gaze-left model and a gaze-right model.
8. The eye movement mode data automatic obtaining method based on computer vision as claimed in claim 1, characterized in that in step 4) the grating stimuli presented according to the program running parameters appear on the left or right at random, with a time interval between two successive presentations; shortly before each grating stimulus is presented, a sound is played to attract the subject's attention.
9. The eye movement mode data automatic obtaining method based on computer vision as claimed in claim 1, characterized in that the feature extraction in the learning stage of step 3) and in the test stage of step 4) includes gray-level histogram computation; the gray-level histogram is computed with a block-partitioned histogram algorithm that uses color information to describe the positional information of the iris; the block-partitioned gray-level histogram computation method comprises the following steps:
a. first, obtain the eye detection box and divide it along the abscissa into m equal blocks, where m is an integer from 17 to 30;
b. second, for each of the m blocks obtained by partitioning, compute the gray-level histogram feature vector of that block by counting;
c. finally, concatenate the histogram feature vectors of all m blocks in order to obtain the final vector.
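Steps a–c of claim 9 map directly onto a few lines of NumPy. The sketch below assumes an already-cropped grayscale eye patch; the bin count and the per-block normalization are assumptions, since the claim fixes only the block count m (17–30):

```python
import numpy as np

def blockwise_histogram(eye_patch, m=20, bins=16):
    """Claim-9 sketch: split the eye detection box into m equal-width
    vertical blocks (step a), count a gray-level histogram per block
    (step b), and concatenate the per-block vectors in order (step c)."""
    h, w = eye_patch.shape
    edges = np.linspace(0, w, m + 1).astype(int)  # equal splits on the abscissa
    feature = []
    for left, right in zip(edges[:-1], edges[1:]):
        block = eye_patch[:, left:right]
        hist, _ = np.histogram(block, bins=bins, range=(0, 256))
        feature.append(hist / max(block.size, 1))  # normalize by pixel count
    return np.concatenate(feature)  # final vector of length m * bins
```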
10. The eye movement mode data automatic obtaining method based on computer vision as claimed in claim 9, characterized in that the feature extraction in the learning stage of step 3) and in the test stage of step 4) further includes performing PCA dimensionality reduction on the obtained gray-level histogram feature vector of each block, so as to remove the redundancy contained in the per-block gray-level histogram feature vectors produced by the block-partitioned gray-level histogram computation method.
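The PCA step of claim 10 can be sketched in plain NumPy via an SVD of the centered feature matrix; in practice a library implementation (e.g. scikit-learn's `PCA`) would normally be used instead:

```python
import numpy as np

def pca_reduce(X, k):
    """Project feature vectors onto the top-k principal components,
    removing the redundancy mentioned in claim 10 (NumPy sketch)."""
    X = np.asarray(X, dtype=float)
    Xc = X - X.mean(axis=0)  # center each feature dimension
    # SVD of the centered data; rows of Vt are the principal directions,
    # already ordered by decreasing singular value
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T     # scores in the k-dimensional reduced space
```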
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410775791.5A CN104463216B (en) | 2014-12-15 | 2014-12-15 | Eye movement mode data automatic obtaining method based on computer vision |
Publications (2)
Publication Number | Publication Date |
---|---|
CN104463216A CN104463216A (en) | 2015-03-25 |
CN104463216B true CN104463216B (en) | 2017-07-28 |
Family
ID=52909230
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410775791.5A Active CN104463216B (en) | 2014-12-15 | 2014-12-15 | Eye movement mode data automatic obtaining method based on computer vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104463216B (en) |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107133584A (en) * | 2017-04-27 | 2017-09-05 | 贵州大学 | Implicit intention recognition and classification method based on eye tracking |
CN109190505A (en) * | 2018-08-11 | 2019-01-11 | 石修英 | Image recognition method based on visual understanding |
CN109798888B (en) * | 2019-03-15 | 2021-09-17 | 京东方科技集团股份有限公司 | Posture determination device and method for mobile equipment, and visual odometer |
CN110840467A (en) * | 2019-10-18 | 2020-02-28 | 天津大学 | Correlation analysis method for eye movement data and psychiatric disorders |
CN111141472B (en) * | 2019-12-18 | 2022-02-22 | 江苏万路机电科技有限公司 | Anti-seismic support and hanger detection method and system |
CN111951637B (en) * | 2020-07-19 | 2022-05-03 | 西北工业大学 | Task-context-associated unmanned aerial vehicle pilot visual attention distribution mode extraction method |
CN113425247B (en) * | 2021-06-10 | 2022-12-23 | 北京邮电大学 | Eye movement data visualization method, device and equipment |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101283905A (en) * | 2008-05-22 | 2008-10-15 | 重庆大学 | Statistical analysis process of nystagmus displacement vector |
CN103279751A (en) * | 2013-06-19 | 2013-09-04 | 电子科技大学 | Eye movement tracking method on the basis of accurate iris positioning |
CN104200192A (en) * | 2013-01-18 | 2014-12-10 | 通用汽车环球科技运作有限责任公司 | Driver gaze detection system |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8488023B2 (en) * | 2009-05-20 | 2013-07-16 | DigitalOptics Corporation Europe Limited | Identifying facial expressions in acquired digital images |
Non-Patent Citations (2)
Title |
---|
Gaze Estimation in Children's Peer-play Scenarios; Dingrui Duan et al.; 2013 Second IAPR Asian Conference on Pattern Recognition; 2013-12-31; pp. 760-764 * |
Comparison of sweep visual evoked potential acuity and international standard visual acuity in amblyopic children; Fan Yunwei; Ophthalmology (《眼科》); 2008-12-31; Vol. 17, No. 4; pp. 269-273 * |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104463216B (en) | Eye movement mode data automatic obtaining method based on computer vision | |
CN102096810B (en) | Method and device for detecting a user's fatigue state in front of a computer | |
CN105069304B (en) | Machine-learning-based device for assessing and predicting ASD | |
CN108876775B (en) | Method for rapidly detecting diabetic retinopathy | |
CN105022929B (en) | Cognition accuracy analysis method for personality trait value testing | |
CN106037627B (en) | Fully automatic infant visual acuity examination method and device | |
CN107007257B (en) | Automatic grading method and apparatus for degree of facial unnaturalness | |
CN110169770A (en) | Fine-grained visualization system and method for emotional EEG | |
CN110428908B (en) | Eyelid motion function evaluation system based on artificial intelligence | |
CN105513077A (en) | System for screening diabetic retinopathy | |
CN104143079A (en) | Method and system for face attribute recognition | |
CN109886165A (en) | Action video extraction and classification method based on moving object detection | |
CN110269587B (en) | Infant motion analysis system and motion-based infant vision analysis system | |
CN109820524A (en) | FPGA-based wearable system for acquiring and classifying autism eye movement characteristics | |
CN109740019A (en) | Method, apparatus and electronic device for labeling short videos | |
CN110516685A (en) | Lens opacity degree detection method based on convolutional neural networks | |
CN102567734A (en) | Ratio-based retinal thin blood vessel segmentation method | |
Fuadah et al. | Mobile cataract detection using optimal combination of statistical texture analysis | |
CN108921169B (en) | Fundus image blood vessel segmentation method | |
CN106175657A (en) | Automatic vision examination system | |
CN109276243A (en) | EEG-based psychological testing method and terminal device | |
CN114100103B (en) | Rope skipping counting detection system and method based on keypoint recognition | |
CN109194952B (en) | Head-mounted eye tracking device and eye tracking method thereof | |
CN109480775A (en) | Artificial-intelligence-based neonatal jaundice identification device, equipment and system | |
CN110314361A (en) | Basketball scoring judgment method and system based on convolutional neural networks |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||