CN104899565B - Eye movement recognition method and device based on texture features - Google Patents


Info

Publication number
CN104899565B
CN104899565B CN201510293913.1A
Authority
CN
China
Prior art keywords
eye movement
identified
feature
sample
original eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201510293913.1A
Other languages
Chinese (zh)
Other versions
CN104899565A (en)
Inventor
张成岗
李春永
岳敬伟
屈武斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Yunyi International Technology Co ltd
Institute of Radiation Medicine of CAMMS
Original Assignee
Beijing Yunyi International Technology Co ltd
Institute of Radiation Medicine of CAMMS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Yunyi International Technology Co ltd, Institute of Radiation Medicine of CAMMS filed Critical Beijing Yunyi International Technology Co ltd
Priority to CN201510293913.1A priority Critical patent/CN104899565B/en
Publication of CN104899565A publication Critical patent/CN104899565A/en
Application granted granted Critical
Publication of CN104899565B publication Critical patent/CN104899565B/en

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06V — IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 — Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 — Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 — Eye characteristics, e.g. of the iris
    • G06V40/19 — Sensors therefor
    • G06V40/193 — Preprocessing; Feature extraction
    • G06V40/197 — Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The present invention provides an eye movement recognition method and device based on texture features. Raw eye movement trajectory images recorded by an eye tracker are obtained, the features of the raw trajectory images are extracted, and the extracted features are input into a classifier to obtain a recognition result. Because recognition is performed on the raw eye movement trajectory images, the texture detail is much richer, which improves recognition accuracy.

Description

Eye movement recognition method and device based on texture features
Technical field
The present invention relates to image processing technology, and in particular to an eye movement recognition method and device based on texture features.
Background technology
Biometric recognition refers to technology in which a computer verifies a person's identity using intrinsic characteristics of the human body. Eye movement recognition technology is widely applied in identity verification, exploiting the mental mechanisms unique to how an individual processes visual information.
In the prior art, eye movement recognition methods identify people from features of fixations and saccades, the so-called "complex eye movement pattern biometrics". These features are extracted from fixation and saccade trajectory plots, which are in turn mapped from fixation and saccade data, so the resulting fixation and saccade trajectories are very sparse. The eye tracker records the raw eye movement data; assume its sampling rate is 300 Hz. A fixation point is a gaze point whose dwell time, the interval to the next gaze point, is greater than or equal to a predetermined threshold, usually 200 ms; a saccade is a rapid movement between two fixation points. Because fixations and saccades are an artificial segmentation of the raw eye movement data, and the resulting fixation and saccade trajectories are very sparse, the accuracy of such recognition is low.
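As an illustration of the fixation/saccade segmentation described above, the following sketch groups raw 300 Hz gaze samples into fixations using the 200 ms dwell threshold mentioned in the text. The dispersion radius, the function name, and the grouping rule are illustrative assumptions, not the patent's own algorithm.

```python
# Hypothetical sketch of fixation detection on raw 300 Hz gaze samples.
# The 200 ms dwell threshold comes from the text above; the dispersion
# radius, names, and grouping rule are illustrative assumptions only.

def detect_fixations(points, sample_rate_hz=300, min_duration_ms=200, radius=20.0):
    """Group consecutive samples that stay within `radius` pixels of the
    running centroid; groups lasting >= `min_duration_ms` are fixations."""
    min_samples = int(min_duration_ms * sample_rate_hz / 1000)  # 60 samples
    fixations, group = [], [points[0]]
    for p in points[1:]:
        cx = sum(q[0] for q in group) / len(group)  # running centroid
        cy = sum(q[1] for q in group) / len(group)
        if (p[0] - cx) ** 2 + (p[1] - cy) ** 2 <= radius ** 2:
            group.append(p)
        else:                                       # a saccade starts here
            if len(group) >= min_samples:
                fixations.append((cx, cy, len(group)))
            group = [p]
    if len(group) >= min_samples:
        cx = sum(q[0] for q in group) / len(group)
        cy = sum(q[1] for q in group) / len(group)
        fixations.append((cx, cy, len(group)))
    return fixations

# 80 samples dwelling near one point, then 70 samples at another:
pts = [(100.0, 100.0)] * 80 + [(400.0, 300.0)] * 70
fixations = detect_fixations(pts)
print(len(fixations))
```

At 300 Hz a 200 ms fixation spans only 60 samples, which is why a plot of fixation points alone is so much sparser than the raw trajectory.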
Summary of the invention
The eye movement recognition method and device based on texture features provided by the present invention perform recognition directly on the raw eye movement trajectory images; the texture detail is richer, which improves recognition accuracy.
The present invention provides an eye movement recognition method based on texture features, including: obtaining N raw eye movement trajectory images recorded by an eye tracker, where N is an integer greater than or equal to 1; extracting features of the N raw eye movement trajectory images; and inputting the features of the N raw eye movement trajectory images into a classifier to obtain a recognition result.
The features of the N raw eye movement trajectory images include the features of M samples to be identified. Extracting the features of the N raw eye movement trajectory images includes: combining the N raw trajectory images into M samples to be identified, where each sample to be identified contains L × L raw trajectory images, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N; and extracting the features of each sample to be identified.
Combining the N raw trajectory images into M samples to be identified includes: determining L × L × M raw trajectory images from the N raw trajectory images, and combining the L × L × M raw trajectory images into M samples to be identified in an L × L arrangement.
Extracting the features of each sample to be identified includes: applying a Gabor transform to each sample to be identified, and extracting the features of each Gabor-transformed sample.
Applying the Gabor transform to each sample to be identified includes: converting each sample to be identified into a corresponding two-dimensional matrix; binarizing each two-dimensional matrix; and performing a two-dimensional convolution of each binarized two-dimensional matrix with Gabor transform functions of different frequencies f and orientations θ, obtaining f × θ result matrices per sample. The Gabor transform function is g(x, y) = exp(−(x′² + γ²y′²) / (2δ²)) · cos(2πf·x′ + φ), where x′ = x·cosθ + y·sinθ, y′ = −x·sinθ + y·cosθ, f is the frequency of the sinusoid, θ is the orientation of the Gabor transform function, φ is the phase offset, δ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio.
Extracting the features of each Gabor-transformed sample includes: converting the f × θ result matrices into f × θ one-dimensional vectors; computing the mean and variance of each one-dimensional vector to obtain f × θ means and f × θ variances; and taking the f × θ means and/or the f × θ variances as the features of one extracted sample to be identified.
Inputting the features of the N raw eye movement trajectory images into the classifier to obtain a recognition result includes: randomly selecting m feature vectors from the M feature vectors of the M samples to be identified as training samples, with the remaining (M − m) feature vectors as test samples; training the classifier with the training samples; and inputting the features of the test samples into the trained classifier to obtain the recognition result, where each sample to be identified corresponds to one feature vector and m is an integer greater than or equal to 1.
The present invention also provides an eye movement recognition device based on texture features, including: an acquisition module for obtaining N raw eye movement trajectory images recorded by an eye tracker, where N is an integer greater than or equal to 1; a feature extraction module for extracting the features of the N raw trajectory images; and a recognition module for inputting the features of the N raw trajectory images into a classifier to obtain a recognition result.
The feature extraction module includes: a picture processing unit for combining the N raw trajectory images into M samples to be identified, where each sample to be identified contains L × L raw trajectory images, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N; and an extraction unit for extracting the features of each sample to be identified.
The picture processing unit is specifically configured to determine L × L × M raw trajectory images from the N raw trajectory images and to combine the L × L × M raw trajectory images into M samples to be identified in an L × L arrangement.
In the eye movement recognition method and device based on texture features provided by this embodiment, the raw eye movement trajectory images recorded by an eye tracker are obtained, their features are extracted and input into a classifier, and a recognition result is obtained. Because recognition is performed on the raw trajectory images, the texture detail is richer, which improves recognition accuracy.
Description of the drawings
To describe the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flow chart of embodiment one of the eye movement recognition method based on texture features of the present invention;
Fig. 2 is a flow chart of embodiment two of the eye movement recognition method based on texture features of the present invention;
Fig. 3A is one sample to be identified formed by combining raw eye movement trajectory images from a number search test in one application example of the present invention;
Fig. 3B is one sample to be identified formed by combining raw eye movement trajectory images from a mental rotation test in another application example of the present invention;
Fig. 4 is a flow chart of embodiment three of the eye movement recognition method based on texture features of the present invention;
Fig. 5 shows the waveforms of the Gabor transform functions;
Fig. 6 shows the differences in recognition accuracy corresponding to feature values of different frequencies;
Fig. 7 is a structural diagram of embodiment one of the eye movement recognition device based on texture features of the present invention;
Fig. 8 is a structural diagram of embodiment two of the eye movement recognition device based on texture features of the present invention.
Specific embodiment
To make the purpose, technical solutions, and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
Fig. 1 is a flow chart of embodiment one of the eye movement recognition method based on texture features of the present invention. As shown in Fig. 1, the method of this embodiment may include:
Step 101: Obtain N raw eye movement trajectory images recorded by an eye tracker, where N is an integer greater than or equal to 1.
The raw eye movement data are obtained first. The sampling rate of the eye tracker used to collect the eye movement data is 300 Hz. If, as in the prior art, only the fixation-point data were plotted, the resulting fixation trajectory would be very sparse and most of the texture characteristics of visual information processing would be lost. Because the subject's gaze trajectory is analyzed here as texture, the richer the texture detail, the better it reflects the subject's search characteristics; plotting the gaze trajectory from the raw 300 Hz data preserves far more texture detail. This embodiment therefore draws the eye movement trajectory images from the raw data. The data collected in each eye-tracking trial are plotted, yielding N raw eye movement trajectory images, where N is an integer greater than or equal to 1.
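The plotting step above can be sketched as rasterising every raw gaze sample into a small grayscale image. This is only a minimal illustration under assumed normalised coordinates and image size; the patent does not specify the rendering code.

```python
import numpy as np

# Illustrative sketch (not the patent's code) of rasterising raw gaze
# samples into a trajectory image: every 300 Hz sample is drawn, so the
# image keeps far more texture detail than a plot of fixations alone.
# Image size and normalised coordinates are assumptions.

def render_trajectory(samples, width=64, height=64):
    """samples: iterable of (x, y) in [0, 1) normalised screen coordinates."""
    img = np.zeros((height, width), dtype=np.uint8)
    for x, y in samples:
        col = min(int(x * width), width - 1)
        row = min(int(y * height), height - 1)
        img[row, col] = 255          # mark every visited pixel
    return img

# A synthetic diagonal scan path of 100 raw samples:
path = [(t / 100.0, t / 100.0) for t in range(100)]
img = render_trajectory(path)
print(img.shape)
```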
In embodiment one of the present invention, because the extracted features are texture features, the raw data collected by the eye tracker are used when drawing the eye movement trajectory images, so that more visual information is reflected. In addition, limited by the amount of experimental data, a 3 × 3 combination is preferably used in the examples of the present invention.
Step 102: Extract the features of the N raw eye movement trajectory images.
Feature extraction uses a computer to extract image information and to decide whether each point of an image belongs to an image feature. Its result divides the points of an image into subsets, which often take the form of isolated points, continuous curves, or continuous regions. Common image features include color features, texture features, shape features, and spatial relationship features.
The present invention is an eye movement recognition method based on texture features. A texture feature is a global feature that describes the surface properties of the scene corresponding to an image or image region. Unlike color features, texture features are not pixel-based; they require statistical computation over a region containing multiple pixels. In pattern matching, such region-based features have a notable advantage: matching does not fail because of small local deviations. As statistical features, texture features often have rotation invariance and fairly strong resistance to noise.
In this step, the features of the N raw eye movement trajectory images obtained in step 101 are extracted for recognition. The features may be statistics such as the mean and variance.
Step 103: Input the features of the N raw eye movement trajectory images into a classifier and obtain a recognition result.
The features extracted in step 102 are input into a trained classifier to obtain the recognition result. If no classifier exists yet for these raw trajectory images, a classifier is trained first using the features extracted in step 102, and the images to be tested are then input for recognition. The classifier may be, for example, a weighted Euclidean distance classifier, a multilayer network, or a support vector machine. This embodiment uses a support vector machine as the classifier: a support vector machine is a generalized linear classifier that separates samples into two classes by constructing a hyperplane, minimizing the empirical error while maximizing the geometric margin, and its recognition accuracy is high.
In the eye movement recognition method based on texture features provided by this embodiment, the raw eye movement trajectory images recorded by the eye tracker are obtained, their features are extracted and input into a classifier, and a recognition result is obtained. Because recognition is performed on the raw trajectory images, the texture detail is richer, which improves recognition accuracy.
Fig. 2 is a flow chart of embodiment two of the eye movement recognition method based on texture features of the present invention. As shown in Fig. 2, the method of this embodiment may include:
Step 201: Obtain N raw eye movement trajectory images recorded by an eye tracker.
This step is the same as step 101 of embodiment one and is not repeated here.
Step 202: Combine the N raw eye movement trajectory images into M samples to be identified, where each sample to be identified contains L × L raw trajectory images, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N.
In this step, the features of the N raw trajectory images include the features of M samples to be identified. Specifically, the N raw trajectory images are combined into M sample pictures to be identified, each containing L × L raw trajectory images, for example 2 × 2, 3 × 3, or 4 × 4 raw trajectory images per sample. When L is 1, the flow reduces to that of embodiment one. With the N raw trajectory images combined into M samples to be identified, M is computed as M = ⌊N / (L × L)⌋, the integer quotient of N divided by L × L.
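The combination step can be sketched as tiling the single-trial images into L × L grids. The helper below is a minimal illustration under the assumption that all trajectory images share one size; names and sizes are not from the patent.

```python
import numpy as np

# Sketch of the combination step under stated assumptions: all trajectory
# images share one size, and only M = N // (L*L) full grids are formed.
# Function and variable names are illustrative, not from the patent.

def combine_into_samples(images, L=3):
    """Tile N single-trial images into M samples of L x L images each."""
    per_sample = L * L
    M = len(images) // per_sample            # leftover images are discarded
    samples = []
    for m in range(M):
        block = images[m * per_sample:(m + 1) * per_sample]
        rows = [np.hstack(block[r * L:(r + 1) * L]) for r in range(L)]
        samples.append(np.vstack(rows))      # L rows of L images each
    return samples

# 40 trials of 32x32 images -> 4 samples of 96x96, as in the example above:
trials = [np.zeros((32, 32), dtype=np.uint8) for _ in range(40)]
samples = combine_into_samples(trials, L=3)
print(len(samples), samples[0].shape)
```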
Step 203: Extract the features of each sample to be identified.
After the samples to be identified are formed, feature extraction is performed on them. The features extracted are the same as in step 102 and are not repeated here.
Step 204: Input the features of the samples to be identified into the classifier and obtain a recognition result.
This is similar to step 103, except that the raw trajectory images are first combined into samples to be identified; after the features of each sample are extracted, they are input into the classifier to obtain the recognition result.
Fig. 3A shows one sample to be identified formed by combining raw eye movement trajectory images from a number search test in one application example of the present invention, and Fig. 3B shows one sample to be identified formed by combining raw trajectory images from a mental rotation test in another application example; the combined samples differ as shown in Fig. 3A and Fig. 3B.
Specifically, combining the N raw trajectory images into M samples to be identified includes: determining L × L × M raw trajectory images from the N raw trajectory images, and combining the L × L × M raw trajectory images into M samples to be identified in an L × L arrangement. The L × L × M images may be determined by random selection or chosen manually. For example, from an original set of 40 raw trajectory images, 36 pictures are randomly selected to form four 3 × 3 samples to be identified, each sample arranged as 3 raw trajectory images across and 3 down. The combined samples for the two application examples are shown in Fig. 3A and Fig. 3B respectively. Number search is a test of number search ability: it presents a randomly generated 7-digit "subscriber number" and 10 groups of "winning numbers", and the user must match the subscriber number against the 10 groups of winning numbers and select which prize, if any, the subscriber number wins. Mental rotation is the ability to imagine oneself or an array rotating in space, and is an important measure of spatial intelligence. The test form we used asks whether two three-dimensional figures can be made to coincide by rotation through some angle; if they can, the subject answers "same", otherwise "different".
A subject's gaze trajectories in individual number search trials may differ greatly and carry no stable characteristics, but combining the gaze trajectories of a subject's multiple number search trials is likely to yield stable features that reflect the subject's fixation characteristics. The eye movement recognition method of this embodiment performs recognition on combinations of multiple trajectory images, which have stable features, so the recognition result is more stable and accurate.
Further, extracting the features of each sample to be identified includes: applying a Gabor transform to each sample to be identified; and extracting the features of each Gabor-transformed sample.
The Gabor transform is a windowed Fourier transform; Gabor functions can extract relevant features at different scales and orientations in the frequency domain. In addition, Gabor functions behave similarly to the human visual system, so they are commonly used in texture recognition with good results. From the result of the Gabor transform, a one-dimensional vector is formed, and its mean and variance are then computed.
Fig. 4 is a flow chart of embodiment three of the eye movement recognition method based on texture features of the present invention. As shown in Fig. 4, applying the Gabor transform to each sample to be identified includes:
Step 301: Convert each sample to be identified into a corresponding two-dimensional matrix.
The sample to be identified is represented as a two-dimensional matrix: the gray value of each pixel in the sample image is represented by one entry of the matrix.
Step 302: Binarize each two-dimensional matrix.
Binarizing a two-dimensional matrix means replacing all of its values with 0 or 1 according to a fixed rule: entries greater than 127 are set to 1 and entries less than 127 are set to 0. Here, to reduce the computation of the Gabor transform, the 0s and 1s are then inverted.
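A minimal sketch of this rule, assuming dark trajectory pixels on a light background so that the inversion leaves mostly zeros (which is what makes the later convolution cheaper); the function name is illustrative.

```python
import numpy as np

# Minimal sketch of the binarisation rule above: values over 127 become 1,
# the rest 0, then 0 and 1 are swapped. Assuming dark trajectory pixels on
# a light background, the swap leaves mostly zeros. Name is illustrative.

def binarize_inverted(gray):
    return 1 - (gray > 127).astype(np.uint8)

img = np.full((4, 4), 255, dtype=np.uint8)   # light background
img[1, 1] = img[2, 2] = 0                    # two dark trajectory pixels
b = binarize_inverted(img)
print(int(b.sum()))
```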
Step 303: Perform a two-dimensional convolution of each binarized two-dimensional matrix with Gabor transform functions of different frequencies f and orientations θ, obtaining f × θ result matrices. The Gabor transform function is g(x, y) = exp(−(x′² + γ²y′²) / (2δ²)) · cos(2πf·x′ + φ), where x′ = x·cosθ + y·sinθ, y′ = −x·sinθ + y·cosθ, f is the frequency of the sinusoid, θ is the orientation of the Gabor transform function, φ is the phase offset, δ is the standard deviation of the Gaussian envelope, and γ is the spatial aspect ratio.
In this embodiment, two combinations of Gabor transform functions are used. One is 5 × 8: five different frequencies and eight different orientations, 40 Gabor transform functions in total, are applied to the two-dimensional matrix data, the eight orientations being kπ/8 for k = 0, 1, …, 7. The other is 15 × 8: fifteen different frequencies and the same eight orientations, 120 Gabor transform functions in total. This embodiment performs the windowed Fourier transform by two-dimensional convolution; the window function is a Gaussian, which extracts relevant features at different scales and orientations in the frequency domain, behaves similarly to the human eye, and gives good results in texture recognition.
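The kernel-plus-convolution step can be sketched in numpy as follows. The Gabor formula matches the one above; the kernel size, frequency, and parameter defaults are illustrative assumptions (the patent's own frequency values are not reproduced here), and the FFT-based routine is just one standard way to compute the 2-D convolution.

```python
import numpy as np

# Numpy sketch of one Gabor filtering step, following the formula above:
# g(x, y) = exp(-(x'^2 + gamma^2 * y'^2) / (2 * delta^2)) * cos(2*pi*f*x' + phi).
# Kernel size and all parameter values are illustrative assumptions.

def gabor_kernel(f, theta, phi=0.0, delta=4.0, gamma=0.5, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * delta ** 2))
    return envelope * np.cos(2 * np.pi * f * xr + phi)

def convolve2d_full(a, k):
    """Full 2-D convolution via zero padding and the FFT."""
    H, W = a.shape[0] + k.shape[0] - 1, a.shape[1] + k.shape[1] - 1
    return np.fft.irfft2(np.fft.rfft2(a, (H, W)) * np.fft.rfft2(k, (H, W)), (H, W))

binary = np.zeros((32, 32))
binary[16, :] = 1.0                               # one horizontal stroke
bank = [gabor_kernel(0.1, k * np.pi / 8) for k in range(8)]  # 8 orientations
responses = [convolve2d_full(binary, g) for g in bank]
print(len(responses), responses[0].shape)
```

With a 5 × 8 or 15 × 8 bank, the same loop simply runs over 40 or 120 kernels, yielding one result matrix per (f, θ) pair.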
Further, extracting the features of each Gabor-transformed sample includes: converting the f × θ result matrices into f × θ one-dimensional vectors; computing the mean and variance of each one-dimensional vector to obtain f × θ means and f × θ variances; and taking the f × θ means and/or the f × θ variances as the features of the extracted sample to be identified.
Specifically, from the result of each Gabor transform a one-dimensional vector is formed, and its mean and variance are computed as features of one sample to be identified. When classifying, the mean and variance may both be used, or either one alone (only the means, or only the variances) as feature values. Each set of frequency f and orientation θ parameters of the Gabor transform function corresponds to one linear transform and yields one result matrix, so the dimension of the feature vector of one sample to be identified is: number of frequencies (f) × number of orientations (θ) × number of feature values (mean, variance). One sample to be identified corresponds to one feature vector.
Fig. 5 shows the waveforms of the Gabor transform functions. As shown in Fig. 5, applying Gabor transforms of 5 frequencies and 8 orientations (5 × 8), each of size 39 × 39, to a sample to be identified yields 40 result matrices; the mean and variance of each result matrix serve as the feature values of that linear transform. One sample to be identified corresponds to one feature vector; in one embodiment of the present invention, two feature values (mean and variance) are extracted and 5 × 8 Gabor transforms are used, so the feature vector has dimension 5 × 8 × 2 = 80.
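The mean/variance feature step reads directly as code. In this sketch the "result matrices" are random stand-ins for real Gabor outputs; only the shape bookkeeping (5 × 8 matrices → an 80-dimensional vector) follows the text.

```python
import numpy as np

# Sketch of the feature step above: each of the f x theta result matrices
# is flattened into a 1-D vector whose mean and variance become two entries
# of the sample's feature vector (5 x 8 x 2 = 80 dimensions here). The
# random "result matrices" only stand in for real Gabor outputs.

def features_from_responses(result_matrices):
    feats = []
    for r in result_matrices:
        v = np.asarray(r).ravel()      # result matrix -> 1-D vector
        feats.extend([v.mean(), v.var()])
    return np.array(feats)

rng = np.random.default_rng(0)
responses = [rng.normal(size=(46, 46)) for _ in range(5 * 8)]
vec = features_from_responses(responses)
print(vec.shape)
```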
In this embodiment, the result matrices obtained by applying Gabor transforms of different frequencies and orientations to one sample to be identified are reduced to means and variances, which form one extracted feature vector. A feature vector composed of many feature values can reflect rich and stable characteristics of the sample to be identified, and thus lays the foundation for the subsequent recognition accuracy.
Further, on the basis of embodiment three, inputting the features of the N raw eye movement trajectory images into the classifier to obtain a recognition result includes: randomly selecting m feature vectors from the M feature vectors of the M samples to be identified as training samples, with the remaining (M − m) feature vectors as test samples; training the classifier with the training samples; and inputting the features of the test samples into the trained classifier to obtain the recognition result, where one sample to be identified corresponds to one feature vector and m is an integer greater than or equal to 1.
Specifically, classifying the extracted features with the classifier includes: randomly selecting a certain proportion of each person's feature vectors of multiple samples to be identified as training samples, with the rest as test samples; training the classifier with the training samples; and recognizing the test samples with the classifier, where one sample to be identified corresponds to one feature vector. In one embodiment of the present invention, each person has 4 feature vectors in total, so one feature vector is randomly selected as the test sample and the remaining 3 serve as training samples.
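A toy sketch of this hold-one-out protocol: per subject, one feature vector is the test sample and the other three train the classifier. In place of the support vector machine the text uses, this sketch scores with the simplest classifier the text mentions, a (uniformly weighted) Euclidean distance to each subject's training centroid; all data here are synthetic.

```python
import numpy as np

# Toy sketch of the split-train-test protocol described above. A nearest-
# centroid (uniformly weighted Euclidean distance) classifier stands in
# for the support vector machine; the data are synthetic.

def nearest_centroid_predict(train_X, train_y, test_X):
    labels = sorted(set(train_y))
    centroids = np.array([train_X[train_y == c].mean(axis=0) for c in labels])
    preds = []
    for x in test_X:
        d = ((centroids - x) ** 2).sum(axis=1)   # squared Euclidean distance
        preds.append(labels[int(d.argmin())])
    return np.array(preds)

rng = np.random.default_rng(1)
X, y = [], []
for subject in range(23):                        # 23 subjects, 4 vectors each
    center = rng.normal(scale=5.0, size=80)
    for _ in range(4):
        X.append(center + rng.normal(scale=0.5, size=80))
        y.append(subject)
X, y = np.array(X), np.array(y)

test_idx = np.arange(0, len(X), 4)               # hold out one vector each
train_idx = np.setdiff1d(np.arange(len(X)), test_idx)
preds = nearest_centroid_predict(X[train_idx], y[train_idx], X[test_idx])
accuracy = float(np.mean(preds == y[test_idx]))
print(accuracy)
```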
The data used in this application example come from 23 subjects in the number search test. Each subject performed 40 number search trials, and every 9 trials were combined into one 3 × 3 sample to be identified, giving 23 × 4 samples to be identified and 23 × 4 feature vectors. In each run, one of each subject's 4 feature vectors is randomly selected as a test sample and the other 3 serve as training samples; the 23 selected test feature vectors are classified by the support vector machine method, with the classes of the 23 subjects' feature vectors labeled 1, 2, 3, …, 23, and a test vector assigned to its own subject's class counts as correctly classified. The classification accuracies of 20 repeated runs are shown in Table 1:
Table 1
0.695652 0.73913 0.782609 0.73913 0.826087
0.73913 0.782609 0.652174 0.652174 0.695652
0.869565 0.695652 0.695652 0.826087 0.826087
0.652174 0.695652 0.869565 0.913043 0.608696
The average classification accuracy in Table 1 is 0.7478.
In another application example, we attempt to recognize the eye movement trajectory images of the mental rotation test. With the same Gabor transform parameters, the classification accuracies are shown in Table 2:
Table 2
0.565217 0.478261 0.565217 0.565217 0.608696
0.695652 0.695652 0.434783 0.521739 0.565217
0.565217 0.521739 0.478261 0.695652 0.73913
0.608696 0.608696 0.521739 0.565217 0.565217
The average recognition accuracy is 0.5783.
In a further set of tests, we tried another set of Gabor transform parameters: Gabor transforms of 15 frequencies and 8 orientations (15 × 8) were applied to the samples to be identified, giving feature vectors of dimension 15 × 8 × 2 = 240. The 20 runs were likewise repeated; the recognition results are shown in Table 3:
Table 3
0.789474 0.894737 0.842105 0.894737 0.894737
0.842105 0.894737 0.842105 0.894737 0.789474
0.789474 0.842105 0.842105 0.894737 0.947368
0.789474 0.947368 0.789474 0.789474 1
The average classification accuracy is 0.8605.
When last time is classified, classification accuracy rate has reached 1, this explanation is with the increase of the characteristic value of extraction, institute Some test samples are all correctly classified.Therefore, the eye movement recognition methods of the present embodiment, to different application example Test sample is respectively provided with higher recognition accuracy, and with the increase of the characteristic value of the sample of extraction, recognition accuracy increases.
Furthermore, feature values of different frequencies contribute differently to the classification result. Fig. 6 plots the differences in recognition accuracy associated with the feature values of each frequency; as shown in Fig. 6, the figure mainly illustrates that the feature values of different frequencies make different contributions to the recognition accuracy. The recognition-accuracy difference on the Y-axis is the accuracy obtained when classifying with the feature values of all frequencies minus the accuracy obtained when the feature values of one frequency are left out. A Y value greater than 0 indicates that classifying with that frequency's feature values performs better than classifying without them. In addition, the number of test samples and the precision of the eye-movement data also influence the classification result.
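The leave-one-frequency-out comparison behind Fig. 6 can be sketched as below. The data are synthetic, and the linear kernel and 4-fold cross-validation are stand-in assumptions; the patent does not specify how the per-frequency accuracies were computed.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_freq, n_orient = 15, 8                 # the 15 x 8 parameter set from Table 3
n_samples, n_classes = 92, 23            # 23 subjects x 4 samples each
y = np.repeat(np.arange(n_classes), 4)

# Synthetic features laid out as (frequency, orientation, mean/variance) per sample.
X = rng.normal(size=(n_samples, n_freq, n_orient, 2)) + y[:, None, None, None] * 0.5

def accuracy(features):
    """Cross-validated classification accuracy on flattened feature vectors."""
    flat = features.reshape(n_samples, -1)
    return cross_val_score(SVC(kernel="linear"), flat, y, cv=4).mean()

baseline = accuracy(X)                   # classify with all frequency feature values
# The Y-axis of Fig. 6: baseline accuracy minus accuracy with one frequency removed.
diffs = [baseline - accuracy(np.delete(X, k, axis=1)) for k in range(n_freq)]
```

A positive entry in `diffs` means the corresponding frequency's feature values help the classification, matching the interpretation of Y > 0 above.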
Fig. 7 is a structural diagram of embodiment one of the texture-feature-based eye-movement recognition device of the present invention. As shown in Fig. 7, the device includes:
an acquisition module 41 for obtaining N original eye-movement figures recorded by an eye tracker, N being an integer greater than or equal to 1;
a feature extraction module 42 for extracting features of the N original eye-movement figures;
an identification module 43 for inputting the features of the N original eye-movement figures into a classifier to obtain a recognition result.
The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 1; its realization principle and technical effect are similar and are not repeated here.
Fig. 8 is a structural diagram of embodiment two of the texture-feature-based eye-movement recognition device of the present invention. As shown in Fig. 8, the feature extraction module 42 includes:
a picture processing unit 421 for combining the N original eye-movement figures into M samples to be identified, where each sample to be identified contains L × L original eye-movement figures, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N;
an extraction unit 422 for extracting the features of each sample to be identified.
The device of this embodiment can be used to execute the technical solution of the method embodiment shown in Fig. 2; its realization principle and technical effect are similar and are not repeated here.
Further, the picture processing unit 421 is specifically configured to determine L × L × M original eye-movement figures from the N original eye-movement figures, and to combine the L × L × M original eye-movement figures into M samples to be identified according to an L × L arrangement. Its realization principle is similar to that of the corresponding method and is not repeated here.
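The combination step performed by the picture processing unit can be sketched as tiling N single-trial eye-movement images into M grid samples. The numpy arrays below are stand-ins for real eye-tracker frames, and equal image sizes are assumed.

```python
import numpy as np

def combine_samples(images, L):
    """Tile N eye-movement images (each H x W) into M samples of L x L images.

    Uses the first L*L*M images, where M = N // (L*L), so that L*L*M <= N.
    """
    N = len(images)
    M = N // (L * L)
    samples = []
    for m in range(M):
        block = images[m * L * L:(m + 1) * L * L]
        # Arrange each group of L*L images into an L x L grid.
        rows = [np.hstack(block[r * L:(r + 1) * L]) for r in range(L)]
        samples.append(np.vstack(rows))   # one (L*H) x (L*W) sample to be identified
    return samples

# Example: 9 trials per sample (3 x 3), as in the numeric-search experiment,
# where 40 trials per subject yield M = 40 // 9 = 4 samples.
imgs = [np.zeros((100, 120)) for _ in range(40)]
samples = combine_samples(imgs, L=3)
```

With 40 trial images of 100 × 120 pixels and L = 3, this produces 4 samples of 300 × 360 pixels each.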
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program can be stored in a computer-readable storage medium; when executed, the program performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as ROM, RAM, magnetic disk, or optical disc.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention rather than to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that they may still modify the technical solutions recorded in the foregoing embodiments, or make equivalent substitutions for some or all of the technical features therein; such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (6)

1. An eye-movement recognition method based on texture features, characterized by comprising:
obtaining N original eye-movement figures recorded by an eye tracker, N being an integer greater than or equal to 1;
extracting features of the N original eye-movement figures;
inputting the features of the N original eye-movement figures into a classifier to obtain a recognition result;
wherein the features of the N original eye-movement figures comprise the features of M samples to be identified;
the extracting features of the N original eye-movement figures comprises:
combining the N original eye-movement figures into M samples to be identified, wherein each sample to be identified contains L × L original eye-movement figures, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N;
extracting the features of each sample to be identified;
the extracting the features of each sample to be identified comprises:
performing a Gabor transform on each sample to be identified;
extracting the features of each Gabor-transformed sample to be identified;
the performing a Gabor transform on each sample to be identified comprises:
converting each sample to be identified into a corresponding two-dimensional matrix;
binarizing each two-dimensional matrix;
performing two-dimensional convolution operations on each binarized two-dimensional matrix with Gabor transform functions of different frequencies f and orientations θ to obtain f × θ result matrices, wherein, in the Gabor transform function, f is the sinusoidal frequency, θ is the orientation of the Gabor transform function, φ is the phase difference, δ is the standard deviation of the Gaussian function, γ is the spatial scale constant, and η is a constant influencing the size of the Gaussian window.
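As an illustration only (not part of the claims), the binarize-and-convolve step can be sketched in Python. The kernel below uses a standard real Gabor parameterization with assumed values for φ, δ, and γ, and folds the window-size constant η into a fixed kernel size; the claim's exact formula is not reproduced here.

```python
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(f, theta, phi=0.0, delta=4.0, gamma=0.5, size=21):
    """A standard real Gabor kernel (assumed form): a Gaussian envelope with
    standard deviation delta and aspect ratio gamma, modulating a sinusoid of
    frequency f and phase phi along the orientation theta."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)    # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * delta**2))
    return envelope * np.cos(2 * np.pi * f * xr + phi)

# Example filter bank: 3 frequencies x 4 orientations (example values).
frequencies = [0.05, 0.1, 0.2]
orientations = [k * np.pi / 4 for k in range(4)]

# A binarized two-dimensional matrix standing in for one sample to be identified.
sample = (np.random.default_rng(2).random((64, 64)) > 0.5).astype(float)

# Convolve with each filter to obtain f x theta result matrices.
results = [convolve2d(sample, gabor_kernel(f, th), mode="same")
           for f in frequencies for th in orientations]
```

With 3 frequencies and 4 orientations the bank produces 12 result matrices, each the same size as the input sample.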
2. The method according to claim 1, characterized in that the combining the N original eye-movement figures into M samples to be identified comprises:
determining L × L × M original eye-movement figures from the N original eye-movement figures;
combining the L × L × M original eye-movement figures into M samples to be identified according to an L × L arrangement.
3. The method according to claim 1, characterized in that the extracting the features of each Gabor-transformed sample to be identified comprises:
converting the f × θ result matrices into f × θ one-dimensional vectors; computing the mean and variance of each of the f × θ one-dimensional vectors to obtain f × θ means and f × θ variances; and taking the f × θ means and/or the f × θ variances as the extracted features of one sample to be identified.
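As a non-authoritative sketch of this feature computation, using synthetic result matrices in place of real Gabor outputs:

```python
import numpy as np

rng = np.random.default_rng(3)
f, theta = 6, 4
# Stand-ins for the f x theta result matrices of one sample to be identified.
result_matrices = [rng.normal(size=(64, 64)) for _ in range(f * theta)]

# Convert each result matrix into a one-dimensional vector, then take its
# mean and variance as the texture features.
means = np.array([m.ravel().mean() for m in result_matrices])
variances = np.array([m.ravel().var() for m in result_matrices])
feature_vector = np.concatenate([means, variances])   # length f * theta * 2
```

This is why the 15 × 8 parameter set in the application example yields feature vectors of dimension 15 × 8 × 2 = 240.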
4. The method according to claim 3, wherein the inputting the features of the N original eye-movement figures into a classifier to obtain a recognition result comprises:
randomly selecting m feature vectors from the M feature vectors of the M samples to be identified of the N original eye-movement figures as training samples, and taking the remaining (M − m) feature vectors as test samples; training the classifier with the training samples; and inputting the features of the test samples into the trained classifier to obtain the recognition result, wherein one sample to be identified corresponds to one feature vector, and m is an integer greater than or equal to 1.
5. An eye-movement recognition device based on texture features, characterized by comprising:
an acquisition module for obtaining N original eye-movement figures recorded by an eye tracker, N being an integer greater than or equal to 1;
a feature extraction module for extracting features of the N original eye-movement figures;
an identification module for inputting the features of the N original eye-movement figures into a classifier to obtain a recognition result;
wherein the feature extraction module comprises:
a picture processing unit for combining the N original eye-movement figures into M samples to be identified, wherein each sample to be identified contains L × L original eye-movement figures, L is an integer greater than or equal to 1, M is an integer greater than or equal to 1, and the product L × L × M is less than or equal to N;
an extraction unit for extracting the features of each sample to be identified;
wherein the extracting the features of each sample to be identified comprises:
performing a Gabor transform on each sample to be identified;
extracting the features of each Gabor-transformed sample to be identified;
and the performing a Gabor transform on each sample to be identified comprises:
converting each sample to be identified into a corresponding two-dimensional matrix;
binarizing each two-dimensional matrix;
performing two-dimensional convolution operations on each binarized two-dimensional matrix with Gabor transform functions of different frequencies f and orientations θ to obtain f × θ result matrices, wherein, in the Gabor transform function, f is the sinusoidal frequency, θ is the orientation of the Gabor transform function, φ is the phase difference, δ is the standard deviation of the Gaussian function, γ is the spatial scale constant, and η is a constant influencing the size of the Gaussian window.
6. The device according to claim 5, characterized in that the picture processing unit is specifically configured to determine L × L × M original eye-movement figures from the N original eye-movement figures, and to combine the L × L × M original eye-movement figures into M samples to be identified according to an L × L arrangement.
CN201510293913.1A 2015-06-01 2015-06-01 Eye movement recognition methods and device based on textural characteristics Expired - Fee Related CN104899565B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510293913.1A CN104899565B (en) 2015-06-01 2015-06-01 Eye movement recognition methods and device based on textural characteristics


Publications (2)

Publication Number Publication Date
CN104899565A CN104899565A (en) 2015-09-09
CN104899565B true CN104899565B (en) 2018-05-18

Family

ID=54032221

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510293913.1A Expired - Fee Related CN104899565B (en) 2015-06-01 2015-06-01 Eye movement recognition methods and device based on textural characteristics

Country Status (1)

Country Link
CN (1) CN104899565B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106598258B (en) * 2016-12-28 2019-04-16 北京七鑫易维信息技术有限公司 Blinkpunkt mapping function determines that method and device, blinkpunkt determine method and device
CN110502100B (en) * 2019-05-29 2020-09-29 中国人民解放军军事科学院军事医学研究院 Virtual reality interaction method and device based on eye movement tracking
CN116185192B (en) * 2023-02-09 2023-10-20 北京航空航天大学 Eye movement identification VR interaction method based on denoising variation encoder

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7620266B2 (en) * 2005-01-20 2009-11-17 International Business Machines Corporation Robust and efficient foreground analysis for real-time video surveillance
CN102521595A (en) * 2011-12-07 2012-06-27 中南大学 Method for extracting image region of interest based on eye movement data and bottom-layer features
CN103500011A (en) * 2013-10-08 2014-01-08 百度在线网络技术(北京)有限公司 Eye movement track law analysis method and device


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
A neuromorphic control module for real-time vergence eye movements on the iCub robot head; Agostino Gibaldi et al.; 2011 11th IEEE-RAS International Conference on Humanoid Robots; 2011-10-28; pp. 543-550 *
Biometric Recognition via Probabilistic Spatial Projection of Eye Movement Trajectories in Dynamic Visual Environments; Ioannis Rigas et al.; IEEE Transactions on Information Forensics and Security; 2014-10-30; vol. 9, no. 10, pp. 1743-1754 *
Research on interpreting mental states from eye-movement trajectories based on SVM; Yan Huixia; China Masters' Theses Full-text Database; 2010-10-15; I140-47 *

Also Published As

Publication number Publication date
CN104899565A (en) 2015-09-09

Similar Documents

Publication Publication Date Title
CN103914676B (en) A kind of method and apparatus used in recognition of face
CN104517104B (en) A kind of face identification method and system based under monitoring scene
CN103116763B (en) A kind of living body faces detection method based on hsv color Spatial Statistical Character
CN106778468B (en) 3D face identification method and equipment
US8666122B2 (en) Assessing biometric sample quality using wavelets and a boosted classifier
CN103902978B (en) Face datection and recognition methods
CN101615292B (en) Accurate positioning method for human eye on the basis of gray gradation information
CN110443128A (en) One kind being based on SURF characteristic point accurately matched finger vein identification method
CN107844736A (en) iris locating method and device
CN101739555A (en) Method and system for detecting false face, and method and system for training false face model
Ogura et al. Automatic particle pickup method using a neural network has high accuracy by applying an initial weight derived from eigenimages: a new reference free method for single-particle analysis
CN109086711A (en) Facial Feature Analysis method, apparatus, computer equipment and storage medium
CN105184266B (en) A kind of finger venous image recognition methods
CN106778489A (en) The method for building up and equipment of face 3D characteristic identity information banks
CN104899565B (en) Eye movement recognition methods and device based on textural characteristics
Vieriu et al. Facial expression recognition under a wide range of head poses
CN107918773A (en) A kind of human face in-vivo detection method, device and electronic equipment
CN110232390A (en) Image characteristic extracting method under a kind of variation illumination
CN106407916A (en) Distributed face recognition method, apparatus and system
CN106778491B (en) The acquisition methods and equipment of face 3D characteristic information
Shirke et al. Biometric personal iris recognition from an image at long distance
Zana et al. Face recognition based on polar frequency features
Liu et al. The scale of edges
CN108010015A (en) One kind refers to vein video quality evaluation method and its system
Cai et al. An adaptive symmetry detection algorithm based on local features

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180518

Termination date: 20190601