CN108985210A - Gaze tracking method and system based on human eye geometric features - Google Patents

Gaze tracking method and system based on human eye geometric features

Info

Publication number
CN108985210A
Authority
CN
China
Prior art keywords
eye
iris
point
human eye
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810735315.9A
Other languages
Chinese (zh)
Inventor
侯振杰
苏海明
夏宇杰
林恩
莫宇剑
巢新
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changzhou University
Original Assignee
Changzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changzhou University filed Critical Changzhou University
Priority to CN201810735315.9A priority Critical patent/CN108985210A/en
Publication of CN108985210A publication Critical patent/CN108985210A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and discloses a gaze tracking method and system based on human eye geometric features. First, a face detection algorithm locates the face position, a facial feature point detection method locates the eye corner positions, and the eye position is computed from the corner points. An iris template is then built and used to detect the position of the eye region, the iris center is located by a fine iris-center localization algorithm, and the detected corner points and iris centers form the eye movement vector. Finally, a neural network model takes the eye movement vector as input and establishes the gaze point mapping. By selecting eye movement features that remain stable under head motion, using a single-camera system architecture without an external light source, and applying a 2D gaze tracking method based on an artificial neural network, the invention achieves good accuracy, reduces the hardware requirements and cost of the system, and enhances its usability.

Description

Gaze tracking method and system based on human eye geometric features
Technical field
The invention belongs to the technical field of image processing, and in particular relates to a gaze tracking method and system based on human eye geometric features.
Background technique
The prior art commonly used in the field is as follows.
Eye tracking is a technique that uses mechanical, electronic, optical and other detection means to obtain the subject's current gaze direction. It is widely applied in the two fields of human-computer interaction and medical diagnosis [1], for example in assistance for the disabled, virtual reality and driver assistance.
At present, most gaze tracking is non-intrusive and image-based. Early on, Findlay et al. [2] used the sclera-iris boundary, extracting the edge from the color difference between the sclera and the iris; this method requires the head to be fixed. The current mainstream approach is the pupil-corneal reflection method: Morimoto et al. illuminate the eye with an external light source to form a reflection spot on the cornea and use the relative displacement of the pupil center with respect to the corneal reflection as the gaze feature for gaze point estimation; this method requires camera calibration and an external light source. Waizenegger et al. [4] compute the eyeball center from the iris center combined with a 3D eyeball model and then determine the gaze direction from the line connecting the eyeball center and the iris center, but the use of fixed anatomical constants introduces error into the gaze estimate. Panev et al. [5] address head rotation during eye tracking by first obtaining the head pose and deflection information, estimating the eyeball center from the eye corner positions and the gaze direction from the iris center, but this method needs a three-dimensional depth sensor and the system is comparatively complex. All of the above methods have shortcomings: the system procedures are complex and sensitive to positional changes during eye tracking.
Therefore, some current researchers solve the eye tracking problem by directly establishing the mapping between eye movement information and the gaze point. Chahir et al. obtain the eye corner points and feature points such as the mouth corners and nostril centers, and learn the mapping between eye movement features and the gaze point by supervised learning; the precision and robustness of this method cannot be guaranteed when the head moves by a large amount. Sesma et al. use the iris centers and the inner and outer eye corners as features and obtain the gaze point through a polynomial mapping equation, but the gaze point estimation error of this method increases rapidly under head motion.
With the development of neural networks and the growth of available computing power, methods based on neural network models have improved greatly. Baluja first took the eye image as the input of a neural network, using all pixels of the whole image as the eye movement feature vector; the large feature dimensionality greatly increases the training and prediction time of the neural network. Limited by the computation available at the time, such networks ran very slowly. Xucong Zhang trained a CNN on 37667 images, with the gaze error between 13.9° and 10.8°. This direct use of the entire image yields a high feature dimensionality and a huge amount of computation.
In conclusion problem of the existing technology is:
(1) traditional iris locating method is easily judged by accident, to the positioning belt at iris center come very big influence and it is time-consuming compared with Long problem.
(2) in the prior art, eye movement characteristics more stable under head movement are not chosen at, using single camera without The system architecture of light source, and without using the 2D Eye-controlling focus method based on artificial neural network, preferable essence cannot be obtained Degree.The hardware requirement that system cannot be reduced increases the cost of hardware, the poor availability of system.
(3) in the prior art, the feature vector dimension that neural network method extracts is high, and information redundance is high, computationally intensive, Useful information can not be extracted well, so that the efficiency of neural network reduces.
The difficulty and significance of solving the above technical problems:
The key to solving the above problems is to find information in the image that can represent the eye movement vector, reduce the aimless use of deep neural network methods, lower the hardware requirements of the system, and speed up its operation.
Summary of the invention
In view of the problems in the prior art, the present invention provides a gaze tracking method and system based on human eye geometric features.
The invention is realized as follows. A gaze tracking method based on human eye geometric features comprises:
locating the face position with a face detection algorithm, locating the eye corner positions with a facial feature point detection method, and computing the eye position from the corner points;
building an iris template, then detecting the position of the eye region with the iris template, and locating the iris center with a fine iris-center localization algorithm;
using the detected corner points and iris centers as the eye movement vector;
using a neural network model, taking the eye movement vector as the input of the neural network, establishing the gaze point mapping, and computing the gaze point region.
Further, the face detection and eye corner localization method comprises:
using the objective f(x0 + Δx) = ||h(d(x0 + Δx)) − φ*||², with d ∈ R^(m×1), where d(x) denotes the pixel at coordinate x in the face image, the face image has m pixels, h(d(x)) denotes the SIFT features extracted from the face image, and φ* denotes the manually labeled feature points; SDM uses multiple iterations, and the final training objective is:
argmin over Rk, bk of Σdi Σxki || Δx*ki − Rk·φki − bk ||²
where k is the iteration number, Δx*ki denotes the error at the k-th iteration, di denotes the i-th picture and x*i its manually labeled points; finally, according to the trained parameters Rk and bk, the 49 facial feature points are obtained, including the eye-region feature points: 4 corner points and 8 upper and lower eyelid points. The eye region image is computed from the obtained corner and eyelid feature points and is used for iris center detection.
Further, the iris center localization method follows a coarse-to-fine principle: an iris template image is first generated from iris images, the iris region is coarsely located by template matching to exclude the influence of the other eye regions, and an integro-differential operator then finely locates the iris center.
Further, the method of establishing the iris template comprises:
obtaining M iris images from the images captured by the camera to form an iris data set U, converting each image into a 50 × 50 vector Γ, and putting these M vectors into a set U, as shown below:
U = {Γ1, Γ2, Γ3, ..., ΓM}
After the iris vector set U is obtained, the average image Ψ is computed as
Ψ = (1/M) Σn=1..M Γn
that is, the values of the corresponding points of the Γ vectors are added and averaged; Ψ is a characteristic pupil image and is the required template.
Further, the template matching algorithm comprises:
overlaying the template T(m, n) on the searched image S(W, H) and translating it; the sub-image of the searched image covered by the template is Sij, where i, j are the coordinates of the lower-left corner of the sub-image in the searched image S, and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m; the similarity of T and Sij is measured by their cross-correlation and normalized to give the correlation coefficient of the template matching:
R(i, j) = Σm Σn Sij(m, n)·T(m, n) / ( sqrt(Σm Σn Sij(m, n)²) · sqrt(Σm Σn T(m, n)²) )
When the template is identical to the sub-image, R(i, j) = 1; after all searches in the searched image S are completed, the maximum value Rmax(im, jm) of R is found, and the corresponding sub-image Simjm is the matched target.
Further, the method of selecting the eye movement vector comprises:
(1) based on the positions of the corner points and the iris centers, adding the distances between the corner points and the iris centers to the eye movement vector;
(2) establishing a coordinate system with the center of the original image as the reference point and the x-axis direction as the reference line; the line through the two eye corners is taken parallel to the x-axis, and the angle is the angle between the pupil center and the corner point;
(3) when the head deflects, taking the inner corner of the right eye as the reference point and using the angle between the corner points and the x-axis.
Another object of the present invention is to provide a computer program implementing the described gaze tracking method based on human eye geometric features.
Another object of the present invention is to provide an information data processing terminal implementing the described gaze tracking method based on human eye geometric features.
Another object of the present invention is to provide a computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the gaze tracking method based on human eye geometric features.
Another object of the present invention is to provide a gaze tracking system based on human eye geometric features, comprising:
a face and eye localization module, which locates the face position with a face detection algorithm, locates the eye corner positions with a facial feature point detection method, and computes the eye position from the corner points;
an iris template establishment module, which builds the iris template, detects the position of the eye region with the iris template, and locates the iris center with a fine iris-center localization algorithm;
an eye movement vector detection module, which uses the detected corner points and iris centers as the eye movement vector;
a gaze point region calculation module, which uses a neural network model, takes the eye movement vector as the input of the neural network, establishes the gaze point mapping, and computes the gaze point region.
In summary, the advantages and positive effects of the present invention are as follows:
The invention first locates the face position with a face detection algorithm, locates the eye corner positions with a facial feature point detection method, and computes the eye position from the corner points. An iris template is built and used to detect the eye region, the iris center is located with a fine iris-center localization algorithm, and the detected corner points and iris centers are used as the eye movement vector. A neural network model takes the eye movement vector as input, establishes the gaze point mapping, and computes the gaze point region. The results show that the gaze tracking method performs well under ordinary experimental lighting.
The data in Table 4 show that the method of the present invention achieves its highest recognition rate when the head is fixed: the average recognition rate reaches 95.74%, and 98.9% in the best region. With head translation and head rotation within 15° the recognition rate drops slightly in some regions, but the overall average remains above 91%. However, when the head deflection angle increases, the recognition rate drops markedly: with a larger rotation angle the camera can no longer capture the complete head contour, which affects the recognition accuracy.
The line chart obtained from the data in Table 4 is shown in Fig. 5. The recognition rate of each position declines to some extent relative to the fixed-head case. In general, regions 2, 3, 6, 7, 10 and 11 have the higher rates; these regions lie in the middle of the test area, the images acquired there almost always contain a complete face, and the features are more distinct. In regions 1, 4, 5, 8, 9 and 12, as the head rotation angle increases a complete face contour can no longer be guaranteed, and the error rate rises. When the head rotation angle reaches 45°, the computed average recognition rate of these positions is only 69.55%. Experiments show that the maximum head rotation angle that maintains accuracy is 37°, at which the average recognition rate is 85%.
Detailed description of the invention
Fig. 1 is a flow chart of the gaze tracking method based on human eye geometric features provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of the gaze tracking system based on human eye geometric features provided by an embodiment of the present invention.
In the figure: 1, face and eye localization module; 2, iris template establishment module; 3, eye movement vector detection module; 4, gaze point region calculation module.
Fig. 3 is a schematic diagram of the eye movement vector provided by an embodiment of the present invention.
Fig. 4 shows the lightness histograms of two other color spaces provided by an embodiment of the present invention;
in the figure: (a) is the lightness histogram of YUV after illumination compensation, and (b) is the lightness histogram of YCbCr after illumination compensation.
Fig. 5 is a chart of the recognition rate of each position provided by an embodiment of the present invention.
Specific embodiment
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is further described below with reference to the embodiments. It should be understood that the specific embodiments described here are only used to explain the present invention and are not intended to limit it.
The present invention selects eye movement features that remain stable under head motion, uses a single-camera system architecture without an external light source, and achieves good accuracy with a 2D gaze tracking method based on an artificial neural network. A neural network maps the eye image features to the gaze point, and the feature vector is improved and optimized; the system needs neither multiple cameras nor an infrared light source and requires no camera calibration, which reduces the hardware requirements, lowers the hardware cost and enhances the usability of the system.
The invention is further described below with reference to a concrete analysis.
As shown in Fig. 1, the gaze tracking method based on human eye geometric features provided by an embodiment of the present invention comprises:
S101: detecting the face from the video frames captured by the camera with a face detection algorithm, and then roughly locating the eye region within the detected face;
S102: accurately locating the iris center within this eye region;
S103: first using the feature parameters obtained while the eyes gaze at different positions of the screen as training samples for an artificial neural network; after training is completed, the eye feature parameters are used as the input feature vector of the trained neural network, which then predicts the gaze direction of the eye. A minimal sketch of how these steps fit together is given below.
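The following sketch only illustrates the data flow of S101-S103: every callable passed in (face detector, landmark detector, iris locator, feature builder) and the trained classifier are placeholders for the components detailed in the remainder of this description, not functions defined by the patent.

import numpy as np

def estimate_gaze_region(frame, detect_face, detect_landmarks,
                         locate_iris_centers, build_eye_vector, classifier):
    # S101: locate the face (e.g. an Adaboost cascade) and the 49 SDM feature points.
    face_box = detect_face(frame)
    landmarks = detect_landmarks(frame, face_box)
    # S102: coarse template matching followed by fine iris-center localization.
    corners, iris_centers = locate_iris_centers(frame, landmarks)
    # S103: 17-dimensional eye movement vector fed to the trained neural network.
    x = np.asarray(build_eye_vector(corners, iris_centers), dtype=float).reshape(1, -1)
    return int(classifier.predict(x)[0])   # predicted screen region, 1..12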
2. Face detection and eye corner localization
In wearable eye tracking under constrained conditions the camera photographs the eye directly and the face position does not need to be tracked, but in unconstrained gaze tracking a change of face position changes the eye position, so face detection is required. The present invention detects facial feature points with the SDM (Supervised Descent Method) face alignment algorithm proposed by Xuehan Xiong et al. Determining the facial feature points first requires finding the face box; the present invention determines the position of the face box with the classical Adaboost cascade and then finds the facial feature points within the face box with the SDM algorithm.
The present invention locates and tracks 49 facial feature points with the SDM algorithm. SDM is in fact an optimization method that minimizes a nonlinear least-squares objective: it learns the descent directions of the least-squares function from training data, avoiding computation of the Hessian and Jacobian matrices.
The SDM training principle is f(x0 + Δx) = ||h(d(x0 + Δx)) − φ*||², where d(x) denotes the pixel at coordinate x in the face image, the face image has m pixels, h(d(x)) denotes the SIFT features extracted from the face image, and φ* denotes the features at the manually labeled points. To avoid falling into a local minimum, SDM uses multiple iterations; the final training objective is:
argmin over Rk, bk of Σdi Σxki || Δx*ki − Rk·φki − bk ||²
where k is the iteration number, Δx*ki denotes the error at the k-th iteration, di denotes the i-th picture and x*i its manually labeled points; finally, the facial feature points are obtained according to the trained parameters Rk and bk, which are applied at run time as sketched below.
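As an illustration of how the trained parameters Rk and bk are applied at run time, the following sketch performs the supervised descent update x(k+1) = x(k) + Rk·φ(x(k)) + bk; the SIFT feature extractor is assumed to be supplied by the caller and is not part of the patent text.

import numpy as np

def sdm_align(image, x0, Rs, bs, extract_features):
    # image            : face image (2D array)
    # x0               : initial landmark vector (e.g. the mean shape placed in the face box)
    # Rs, bs           : per-iteration regression matrices R_k and offsets b_k learned offline
    # extract_features : callable returning the SIFT feature vector phi(x_k) at the current
    #                    landmark estimate (assumed helper, not defined by the patent)
    x = np.asarray(x0, dtype=float).copy()
    for R_k, b_k in zip(Rs, bs):
        phi = extract_features(image, x)   # h(d(x_k)) in the formula above
        x = x + R_k @ phi + b_k            # supervised descent update
    return x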
The present invention obtains 49 facial feature points with the SDM algorithm, including the eye-region feature points: 4 corner points and 8 upper and lower eyelid points. The eye region image is computed from the obtained corner and eyelid feature points and is then used for iris center detection; a sketch of this computation follows.
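A minimal sketch of computing the eye region image from the detected eye feature points is given below; the 12-point layout and the 15% padding margin are assumptions used only for illustration, since the patent only states that the eye region is computed from the 4 corner points and 8 eyelid points.

import numpy as np

def eye_region_from_landmarks(image, eye_points, margin=0.15):
    # eye_points: array of shape (12, 2) with (x, y) coordinates of the 4 corner points
    # and 8 eyelid points; the 15% padding margin is an assumption, not from the patent.
    pts = np.asarray(eye_points, dtype=float)
    x_min, y_min = pts.min(axis=0)
    x_max, y_max = pts.max(axis=0)
    pad_x = margin * (x_max - x_min)
    pad_y = margin * (y_max - y_min)
    h, w = image.shape[:2]
    x0, y0 = max(int(x_min - pad_x), 0), max(int(y_min - pad_y), 0)
    x1, y1 = min(int(x_max + pad_x) + 1, w), min(int(y_max + pad_y) + 1, h)
    return image[y0:y1, x0:x1]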
3. Iris center localization
The outer edge of the iris is normally concentric with the pupil, but in practice, limited by the image resolution, the pupil edge cannot be obtained accurately, so the center of the iris outer circle is used as an approximation of the pupil center. The captured images are affected by factors such as the lighting environment and photo resolution, contain noise, and to some extent cause false detections. The iris localization of the present invention therefore follows a coarse-to-fine principle: the iris region is first coarsely located with a template, and the iris center is then finely located with an integro-differential operator.
Observation of the eye region image shows that, compared with the other eye regions, the iris region has more distinctive features: it has a better circular contour and a smaller average gray value. Based on these features, building an iris template and using template matching is well suited.
4. Establishing the iris template
M iris images are obtained from the images captured by the camera to form an iris data set U; each image is converted into a 50 × 50 vector Γ, and these M vectors are put into a set U, as shown below:
U = {Γ1, Γ2, Γ3, ..., ΓM}
After the iris vector set U is obtained, the average image Ψ is computed as
Ψ = (1/M) Σn=1..M Γn; this Ψ is the required template.
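A sketch of building the template from the cropped iris images is shown below; the only step beyond the text is the resize used to normalize each crop to 50 × 50, which is an implementation assumption.

import numpy as np
from PIL import Image

def build_iris_template(iris_crops, size=50):
    # iris_crops: list of 2D gray-level arrays (the manually cropped iris pictures).
    vectors = []
    for crop in iris_crops:
        img = Image.fromarray(np.asarray(crop, dtype=np.uint8)).resize((size, size))
        vectors.append(np.asarray(img, dtype=float).ravel())   # Gamma_n, a 50 x 50 vector
    U = np.stack(vectors)                                      # the set U = {Gamma_1, ..., Gamma_M}
    psi = U.mean(axis=0)                                       # average image Psi = (1/M) * sum(Gamma_n)
    return psi.reshape(size, size)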
5. Template matching algorithm
The template matching algorithm used by the present invention is the correlation method. Its principle is: the template T(m, n) is overlaid on the searched image S(W, H) and translated; the sub-image of the searched image covered by the template is Sij, where i, j are the coordinates of the lower-left corner of the sub-image in the searched image S, and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m. The similarity of T and Sij is measured by their cross-correlation.
Normalizing gives the correlation coefficient of the template matching:
R(i, j) = Σm Σn Sij(m, n)·T(m, n) / ( sqrt(Σm Σn Sij(m, n)²) · sqrt(Σm Σn T(m, n)²) )
When the template is identical to the sub-image, R(i, j) = 1. After all positions in the searched image S have been searched, the maximum value Rmax(im, jm) of R is found, and the corresponding sub-image Simjm is the matched target.
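The correlation search can be written directly from the formula above; the following unoptimized sketch scans every template position and returns the location of Rmax (row/column indexing from the top-left corner is used instead of the lower-left corner of the text, which only changes the coordinate convention).

import numpy as np

def match_template(search_img, template):
    # Returns the top-left (row, col) of the best match and the correlation map R,
    # with R(i, j) = sum(S_ij * T) / (sqrt(sum(S_ij^2)) * sqrt(sum(T^2))).
    S = np.asarray(search_img, dtype=float)
    T = np.asarray(template, dtype=float)
    m, n = T.shape
    H, W = S.shape
    t_norm = np.sqrt((T * T).sum())
    R = np.zeros((H - m + 1, W - n + 1))
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            sub = S[i:i + m, j:j + n]                  # sub-image S_ij covered by the template
            denom = np.sqrt((sub * sub).sum()) * t_norm
            R[i, j] = (sub * T).sum() / denom if denom > 0 else 0.0
    best = np.unravel_index(np.argmax(R), R.shape)     # position of R_max(i_m, j_m)
    return best, R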
6. Fine localization of the iris center
The integro-differential operator is an iris localization algorithm proposed by Daugman. It computes the gray-level difference along circles on the iris image and takes the maximum over all difference results to obtain the final localization. The operator is:
max over (r, x0, y0) of | Gσ(r) * ∂/∂r ∮(r, x0, y0) I(x, y) / (2πr) ds |
where I(x, y) is the image matrix, (x0, y0) is the circle center, r is the radius, and Gσ(r) is a smoothing function.
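A hedged, discretized sketch of the integro-differential search is given below: for each candidate center it computes the mean gray level on circles of increasing radius, takes the radial difference, and keeps the strongest response. The sampling density, radius range and the simple box smoothing standing in for Gσ are implementation assumptions.

import numpy as np

def circle_mean(img, cx, cy, r, n_samples=64):
    # Mean gray level on the circle of radius r centred at (cx, cy).
    theta = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.clip(np.round(cx + r * np.cos(theta)).astype(int), 0, img.shape[1] - 1)
    ys = np.clip(np.round(cy + r * np.sin(theta)).astype(int), 0, img.shape[0] - 1)
    return img[ys, xs].mean()

def integro_differential_center(img, candidates, r_min=5, r_max=25):
    # candidates: iterable of (cx, cy) centre candidates, e.g. pixels inside the
    # coarse template-matched region (assumption: the caller supplies them).
    img = np.asarray(img, dtype=float)
    best, best_val = None, -1.0
    for cx, cy in candidates:
        means = np.array([circle_mean(img, cx, cy, r) for r in range(r_min, r_max + 1)])
        diffs = np.abs(np.diff(means))                              # |radial derivative of the circular mean|
        diffs = np.convolve(diffs, np.ones(3) / 3.0, mode="same")   # light smoothing in place of G_sigma
        k = int(np.argmax(diffs))
        if diffs[k] > best_val:
            best_val = diffs[k]
            best = (cx, cy, r_min + k)                              # approximate centre and radius
    return best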
7. Selection of the eye movement vector
The selection of the eye movement vector is the key to gaze tracking. The early method of Baluja et al. normalizes the eye picture to 32 × 16, uses the whole image as the input of the neural network, and treats all pixels of the whole image as a 512-dimensional eye movement feature vector; the large dimensionality greatly increases the training and prediction time of the neural network. The feature vector of the neural network needs a certain degree of discriminability, otherwise the prediction output of the neural network will be inaccurate.
The present invention obtains the corner points and iris centers, but the information of these 6 points alone is too little, so the invention extends the information of the 6 points.
(1) Considering that when the face position remains still the quantity that varies is the iris center, and that a change of the iris center changes the distances from the iris center to the two corner points of the eye, the present invention adds the distances between the corner points and the iris centers to the eye movement vector; but this information alone cannot represent the eye movement vector uniquely.
(2) To make the feature differences of the vector larger, angle features are added. Initially the line between the corner points was chosen as the reference and the angle between the corner points and the pupil center was used, but such an eye movement vector only describes the features when the head pose is fixed and cannot describe the case when the head pose changes. The present invention therefore establishes a coordinate system with the center of the original image as the reference point and the x-axis direction as the reference line; the line through the two eye corners is taken parallel to the x-axis, and the angle is the angle between the pupil center and the corner point, as shown in Fig. 3.
(3) The eye movement vector above can to a certain extent represent the eye movement features when the head stays basically still or rotates slightly. To handle head deflection, the present invention uses the inner corner of the right eye as the reference point and takes the angle between the corner points and the x-axis.
The eye movement vectors finally determined in the present invention serve as the neural network feature vector. These features have good discriminability, are suitable as the input of the neural network, and greatly reduce the dimensionality of the feature vector, improving the speed of the system. Here, the positions of the pupil centers and the eye corner points form a feature vector of size 17: 4 values are the distances between the pupil centers and the corner points, 4 values are the cosines of the angles between the corner-point lines and the pupil centers, 1 value is the cosine of the tilt angle of the eyes, and the remaining 8 values are the coordinates of the 4 corner points of the two eyes. A sketch of assembling this vector is given below.
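In the following sketch the composition (4 distances, 4 angle cosines, 1 eye-tilt cosine, 8 corner coordinates) comes from the text, but the exact pairing of corners with pupils and the precise angle definitions are interpretations and are marked as assumptions in the code.

import numpy as np

def eye_movement_vector(corners, pupils):
    # corners: array (4, 2) -> [left-outer, left-inner, right-inner, right-outer] (assumed order)
    # pupils : array (2, 2) -> [left iris centre, right iris centre]
    corners = np.asarray(corners, dtype=float)
    pupils = np.asarray(pupils, dtype=float)
    pupil_of = [0, 0, 1, 1]                          # which pupil belongs to which corner (assumed)

    dists, cosines = [], []
    for c_idx, p_idx in enumerate(pupil_of):
        v = pupils[p_idx] - corners[c_idx]           # corner -> pupil centre
        d = np.linalg.norm(v)
        dists.append(d)
        cosines.append(v[0] / d if d > 0 else 0.0)   # cosine w.r.t. the x-axis reference line (assumed definition)

    tilt = corners[2] - corners[1]                   # inner corner to inner corner (assumed)
    norm = np.linalg.norm(tilt)
    tilt_cos = tilt[0] / norm if norm > 0 else 1.0   # cosine of the eye tilt angle

    return np.concatenate([dists, cosines, [tilt_cos], corners.ravel()])  # length 17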
As shown in Fig. 2, the gaze tracking system based on human eye geometric features provided by an embodiment of the present invention comprises:
a face and eye localization module 1, which locates the face position with a face detection algorithm, locates the eye corner positions with a facial feature point detection method, and computes the eye position from the corner points;
an iris template establishment module 2, which builds the iris template, detects the position of the eye region with the iris template, and locates the iris center with a fine iris-center localization algorithm;
an eye movement vector detection module 3, which uses the detected corner points and iris centers as the eye movement vector;
a gaze point region calculation module 4, which uses a neural network model, takes the eye movement vector as the input of the neural network, establishes the gaze point mapping, and computes the gaze point region.
The invention is further described below with reference to the experimental results.
The experiments were completed on a Dell 14R-5420 (i5-23100M CPU @ 2.50 GHz, 6.00 GB RAM) running Windows 7 and Matlab R2016b.
1) Image preprocessing
Pictures taken under unconstrained conditions are strongly affected by illumination. Under low light the eye picture is dark overall, the eye contour is not obvious, and the histogram shows the gray values concentrated in the lower range. To increase the contrast between the iris region and the other regions, the present invention uses an illumination compensation algorithm based on a reference white; after compensation the gray histogram is stretched, the light-dark contrast increases, and the iris region stands out more clearly.
After illumination compensation, the skin color is filtered in the lightness channel of the HSV color space used by the present invention. Without illumination compensation the lightness histogram of HSV concentrates in the lower range; after compensation the distribution shifts toward the bright region, the light-dark contrast is further enlarged, and the iris edge becomes clearer.
The lightness channel of HSV used by the present invention preserves image detail better than the other color spaces. Fig. 4 shows the lightness histograms of two other color spaces: (a) is the lightness histogram of YUV after illumination compensation and (b) is that of YCbCr after illumination compensation. In the HSV color space more pixels fall in the bright region and the light-dark contrast is better than in the other two color spaces. A sketch of the reference-white compensation is given below.
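The patent only names the reference-white technique; taking the brightest 5% of pixels as the reference white and applying a per-channel linear gain is a common formulation and is an assumption in this minimal sketch.

import numpy as np

def reference_white_compensation(img, top_fraction=0.05):
    # img: uint8 RGB image, shape (H, W, 3). The 5% reference-white fraction is an assumption.
    img = np.asarray(img, dtype=float)
    luminance = img.mean(axis=2)
    thresh = np.quantile(luminance, 1.0 - top_fraction)   # brightest top_fraction of pixels
    mask = luminance >= thresh
    ref_white = img[mask].reshape(-1, 3).mean(axis=0)     # average color of the reference white
    gain = 255.0 / np.maximum(ref_white, 1e-6)             # per-channel gain mapping it to white
    return np.clip(img * gain, 0, 255).astype(np.uint8)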
2) Determining the iris center
Determining the iris center is the key step of gaze tracking.
When determining the iris center, the eye region contains much noise from the experimental environment, the upper and lower eyelids have a certain curvature, and besides the iris edge the eye region contains other edges and noise, all of which affect the iris matching. Applying the integro-differential operator directly to locate the iris center scans the whole image, and when a noise region happens to have a good circular feature the fitted iris center is misjudged.
To reduce the probability of false detection, the present invention uses a template-based integro-differential operator: the template first restricts the iris region to a small range, which amounts to coarse localization of the iris center, and the integro-differential operator then finely locates the iris center. Template matching is used for the coarse localization because the iris region has a stable shape and a gray value lower than the other regions, which suits template matching.
The template matching method first needs a template picture. The present invention uses 80 manually cropped iris pictures to build the iris template; in the building process each iris picture is first normalized to 50 × 50, and the iris template is then established.
Determining the template size is a multi-scale template matching problem. The distance of the eyes from the camera determines the size of the iris in the image, so the template should be scaled according to the size of the eyes. Through experiments, the present invention uses a template of dynamic size: with the distance between the inner and outer eye corners detected by SDM as the reference, the side length of the template is 0.45 times the corner distance, as in the small sketch below.
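The dynamic-size rule can be sketched as follows; the 0.45 factor comes from the text, while the resize of the 50 × 50 template is an implementation assumption.

import numpy as np
from PIL import Image

def scaled_iris_template(template, inner_corner, outer_corner, factor=0.45):
    # Rescale the 50 x 50 iris template so its side length is factor x the corner distance.
    side = int(round(factor * np.linalg.norm(np.asarray(outer_corner, float) -
                                             np.asarray(inner_corner, float))))
    side = max(side, 1)
    img = Image.fromarray(np.asarray(template, dtype=np.uint8))
    return np.asarray(img.resize((side, side)))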
With the template localization of the iris region serving as coarse localization, the iris center is then extracted with the integro-differential operator. After template matching the iris region is limited to a very small area; in the experiments the side length of the template varies in the range of 10-20 pixels.
The template-based method of the present invention is compared with directly using the integro-differential operator; the test pictures were acquired by the camera under lamp light.
40 eye pictures were taken with the camera, 20 under good illumination and 20 under poor illumination. The comparison shows that under good illumination the iris center detection results differ little, but because the present invention uses a template, the influence of the eyelid and sclera regions becomes smaller and a higher precision is obtained. Under poor illumination the traditional algorithm is vulnerable to external noise and its performance drops sharply; in the experiments its false detection rate reached 65%, which cannot satisfy the experimental requirements. The present invention confines the iris to a small region with the template, greatly reducing the influence of factors such as the eyelid and sclera, and the experiments show that the method achieves satisfactory results.
500 pictures with open eyes were chosen from the 1520 pictures of the BioID data set for testing; the detected iris centers were compared with the reference data using the city block distance, as shown in Table 1.
Table 1 Accuracy of iris detection
The average time consumed by the method of the present invention compared with the conventional method is shown in Table 2.
Table 2 Comparison of iris detection time
3) Neural network training
The data in the experiments of the present invention were collected under ordinary lighting with an ordinary 480p network camera, with the face about 45 cm from the computer screen. The computer screen is 14 inches, about 31 cm long and about 17.5 cm wide, and is divided into 12 regions in a 4 × 3 grid.
Table 3 Computer screen regions
1 2 3 4
5 6 7 8
9 10 11 12
The present invention divides the computer screen into 12 regions. When the eye gazes at a certain region, the eye feature vector at that moment is recorded as an input of the neural network. The network structure is a 4-layer network with two hidden layers; each region provides 900 frames of data, 10800 groups of data in total. A minimal training sketch is given below.
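In the following scikit-learn sketch the 17-dimensional input, the 12 output regions, the two hidden layers and the 900 frames per region come from the text; the hidden-layer widths, activation and other hyperparameters are assumptions, and random data stands in for the recorded frames.

import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# X: eye movement vectors, shape (10800, 17); y: gazed screen region labels 1..12.
rng = np.random.default_rng(0)
X = rng.normal(size=(10800, 17))
y = rng.integers(1, 13, size=10800)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Two hidden layers (a 4-layer network counting input and output); the widths are assumptions.
clf = MLPClassifier(hidden_layer_sizes=(32, 16), activation="relu",
                    max_iter=500, random_state=0)
clf.fit(X_train, y_train)
print("region recognition rate: %.3f" % clf.score(X_test, y_test))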
The experiments are divided into four cases: in the first the head pose is fixed or shakes slightly; in the second the head translates; in the third the head rotates within a small range; in the fourth the head rotates through a larger range. The recognition rates under each head pose are shown in Table 4.
Table 4 Recognition rate under each head pose
The data in Table 4 show that the method of the present invention achieves its highest recognition rate when the head is fixed: the average recognition rate reaches 95.74%, and 98.9% in the best region. With head translation and head rotation within 15° the recognition rate drops slightly in some regions, but the overall average remains above 91%. However, when the head deflection angle increases, the recognition rate drops markedly: with a larger rotation angle the camera can no longer capture the complete head contour, which affects the recognition accuracy.
The line chart obtained from the data in Table 4 is shown in Fig. 5. The recognition rate of each position declines to some extent relative to the fixed-head case. In general, regions 2, 3, 6, 7, 10 and 11 have the higher rates; these regions lie in the middle of the test area, the images acquired there almost always contain a complete face, and the features are more distinct. In regions 1, 4, 5, 8, 9 and 12, as the head rotation angle increases a complete face contour can no longer be guaranteed, and the error rate rises. When the head rotation angle reaches 45°, the computed average recognition rate of these positions is only 69.55%. Experiments show that the maximum head rotation angle that maintains accuracy is 37°, at which the average recognition rate is 85%.
The invention is further described below with reference to its effects.
Table 5 Comparison of methods
Compared with traditional SVR and ALR the method proposed by the present invention has clear advantages, and compared with convolutional neural networks it has a smaller computational load and higher precision.
The present invention proposes to locate the iris center quickly by combining template matching with fine iris localization, and uses a neural network to map the gaze point and compute the gaze region. The method uses only an ordinary network camera, needs no camera calibration, and allows the head to move within a small range; experiments show that it achieves high precision.
In the above embodiments, the implementation may be wholly or partly realized by software, hardware, firmware or any combination thereof. When implemented wholly or partly in the form of a computer program product, the computer program product comprises one or more computer instructions. When the computer program instructions are loaded or executed on a computer, the processes or functions according to the embodiments of the present invention are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one web site, computer, server or data center to another by wired (such as coaxial cable, optical fiber or digital subscriber line (DSL)) or wireless (such as infrared, radio or microwave) means. The computer-readable storage medium may be any usable medium that the computer can access, or a data storage device such as a server or data center integrating one or more usable media. The usable medium may be a magnetic medium (such as a floppy disk, hard disk or magnetic tape), an optical medium (such as a DVD), or a semiconductor medium (such as a solid state disk (SSD)).
The above is only a preferred embodiment of the present invention and is not intended to limit it; any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall be included in the protection scope of the present invention.

Claims (10)

1. A gaze tracking method based on human eye geometric features, characterized in that the gaze tracking method based on human eye geometric features comprises:
locating the face position with a face detection algorithm, locating the eye corner positions with a facial feature point detection method, and computing the eye position from the corner points;
building an iris template, then detecting the position of the eye region with the iris template, and locating the iris center with a fine iris-center localization algorithm;
using the detected corner points and iris centers as the eye movement vector;
using a neural network model, taking the eye movement vector as the input of the neural network, establishing the gaze point mapping, and computing the gaze point region.
2. The gaze tracking method based on human eye geometric features according to claim 1, characterized in that
the face detection and eye corner localization method comprises:
using the objective f(x0 + Δx) = ||h(d(x0 + Δx)) − φ*||², where d(x) denotes the pixel at coordinate x in the face image, the face image has m pixels, h(d(x)) denotes the SIFT features extracted from the face image, and φ* denotes the manually labeled feature points; SDM uses multiple iterations, and the final training objective is:
argmin over Rk, bk of Σdi Σxki || Δx*ki − Rk·φki − bk ||²
where k is the iteration number, Δx*ki denotes the error at the k-th iteration, di denotes the i-th picture and x*i its manually labeled points; finally, according to the trained parameters Rk and bk, the 49 facial feature points are obtained, including the eye-region feature points: 4 corner points and 8 upper and lower eyelid points; the eye region image is computed from the obtained corner and eyelid feature points and is used for iris center detection.
3. The gaze tracking method based on human eye geometric features according to claim 1, characterized in that
the iris center localization method follows a coarse-to-fine principle: an iris template image is first generated from iris images, the iris region is coarsely located by template matching to exclude the influence of the other eye regions, and an integro-differential operator then finely locates the iris center.
4. The gaze tracking method based on human eye geometric features according to claim 1, characterized in that the method of establishing the iris template comprises:
obtaining M iris images from the images captured by the camera to form an iris data set U, converting each image into a 50 × 50 vector Γ, and putting these M vectors into a set U, as shown below:
U = {Γ1, Γ2, Γ3, ..., ΓM}
after the iris vector set U is obtained, computing the average image Ψ as Ψ = (1/M) Σn=1..M Γn,
that is, the values of the corresponding points of the Γ vectors are added and averaged; Ψ is a characteristic pupil image and is the required template.
5. The gaze tracking method based on human eye geometric features according to claim 1, characterized in that the template matching algorithm comprises:
overlaying the template T(m, n) on the searched image S(W, H) and translating it, the sub-image of the searched image covered by the template being Sij, where i, j are the coordinates of the lower-left corner of the sub-image in the searched image S and the search range is 1 ≤ i ≤ W − n, 1 ≤ j ≤ H − m; measuring the similarity of T and Sij by their cross-correlation and normalizing to obtain the correlation coefficient of the template matching:
R(i, j) = Σm Σn Sij(m, n)·T(m, n) / ( sqrt(Σm Σn Sij(m, n)²) · sqrt(Σm Σn T(m, n)²) )
when the template is identical to the sub-image, R(i, j) = 1; after all searches in the searched image S are completed, the maximum value Rmax(im, jm) of R is found, and the corresponding sub-image Simjm is the matched target.
6. The gaze tracking method based on human eye geometric features according to claim 1, characterized in that the method of selecting the eye movement vector comprises:
(1) based on the positions of the corner points and the iris centers, adding the distances between the corner points and the iris centers to the eye movement vector;
(2) establishing a coordinate system with the center of the original image as the reference point and the x-axis direction as the reference line; the line through the two eye corners is taken parallel to the x-axis, and the angle is the angle between the pupil center and the corner point;
(3) when the head deflects, taking the inner corner of the right eye as the reference point and using the angle between the corner points and the x-axis.
7. A computer program implementing the gaze tracking method based on human eye geometric features according to any one of claims 1 to 6.
8. An information data processing terminal implementing the gaze tracking method based on human eye geometric features according to any one of claims 1 to 6.
9. A computer-readable storage medium comprising instructions which, when run on a computer, cause the computer to execute the gaze tracking method based on human eye geometric features according to any one of claims 1 to 6.
10. A gaze tracking system based on human eye geometric features implementing the gaze tracking method based on human eye geometric features according to claim 1, characterized in that the gaze tracking system based on human eye geometric features comprises:
a face and eye localization module, which locates the face position with a face detection algorithm, locates the eye corner positions with a facial feature point detection method, and computes the eye position from the corner points;
an iris template establishment module, which builds the iris template, detects the position of the eye region with the iris template, and locates the iris center with a fine iris-center localization algorithm;
an eye movement vector detection module, which uses the detected corner points and iris centers as the eye movement vector;
a gaze point region calculation module, which uses a neural network model, takes the eye movement vector as the input of the neural network, establishes the gaze point mapping, and computes the gaze point region.
CN201810735315.9A 2018-07-06 2018-07-06 Gaze tracking method and system based on human eye geometric features Pending CN108985210A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810735315.9A CN108985210A (en) 2018-07-06 2018-07-06 Gaze tracking method and system based on human eye geometric features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810735315.9A CN108985210A (en) 2018-07-06 2018-07-06 Gaze tracking method and system based on human eye geometric features

Publications (1)

Publication Number Publication Date
CN108985210A true CN108985210A (en) 2018-12-11

Family

ID=64536303

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810735315.9A Pending CN108985210A (en) 2018-07-06 2018-07-06 Gaze tracking method and system based on human eye geometric features

Country Status (1)

Country Link
CN (1) CN108985210A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598253A (en) * 2018-12-14 2019-04-09 北京工业大学 Mankind's eye movement measuring method based on visible light source and camera
CN109829380A (en) * 2018-12-28 2019-05-31 北京旷视科技有限公司 A kind of detection method, device, system and the storage medium of dog face characteristic point
CN109965843A (en) * 2019-03-14 2019-07-05 华南师范大学 A kind of eye movements system passing picture based on filtering
CN110147163A (en) * 2019-05-20 2019-08-20 浙江工业大学 The eye-tracking method and system of the multi-model fusion driving of facing mobile apparatus
CN110456904A (en) * 2019-06-18 2019-11-15 中国人民解放军军事科学院国防科技创新研究院 A kind of augmented reality glasses eye movement exchange method and system without calibration
CN110555426A (en) * 2019-09-11 2019-12-10 北京儒博科技有限公司 Sight line detection method, device, equipment and storage medium
CN110909611A (en) * 2019-10-29 2020-03-24 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
CN111428634A (en) * 2020-03-23 2020-07-17 中国人民解放军海军特色医学中心 Human eye sight tracking and positioning method adopting six-point method block fuzzy weighting
CN111632367A (en) * 2020-05-18 2020-09-08 歌尔科技有限公司 Hand-trip system based on visual guidance and hand-trip response method
CN112069986A (en) * 2020-09-04 2020-12-11 江苏慧明智能科技有限公司 Machine vision tracking method and device for eye movements of old people
WO2021082636A1 (en) * 2019-10-29 2021-05-06 深圳云天励飞技术股份有限公司 Region of interest detection method and apparatus, readable storage medium and terminal device
CN113158879A (en) * 2021-04-19 2021-07-23 天津大学 Three-dimensional fixation point estimation and three-dimensional eye movement model establishment method based on matching characteristics
CN113505694A (en) * 2021-07-09 2021-10-15 南开大学 Human-computer interaction method and device based on sight tracking and computer equipment
CN113627316A (en) * 2021-08-06 2021-11-09 南通大学 Human face eye position positioning and sight line estimation method
CN113762077A (en) * 2021-07-19 2021-12-07 沈阳工业大学 Multi-biological-characteristic iris template protection method based on double-hierarchical mapping
CN113808160A (en) * 2021-08-05 2021-12-17 虹软科技股份有限公司 Sight direction tracking method and device
CN113822174A (en) * 2021-09-02 2021-12-21 北京的卢深视科技有限公司 Gaze estimation method, electronic device, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056092A (en) * 2016-06-08 2016-10-26 华南理工大学 Gaze estimation method for head-mounted device based on iris and pupil
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A kind of two grades of Eye-controlling focus methods constrained based on facial orientation

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106056092A (en) * 2016-06-08 2016-10-26 华南理工大学 Gaze estimation method for head-mounted device based on iris and pupil
CN107193383A (en) * 2017-06-13 2017-09-22 华南师范大学 A kind of two grades of Eye-controlling focus methods constrained based on facial orientation

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
XUEHAN XIONG et al.: "Supervised descent method and its applications to face alignment", 2013 IEEE Conference on Computer Vision and Pattern Recognition *
任大大: "A fast algorithm for digital subtraction image registration based on a cross template", 道客巴巴, URL: HTTP://WWW.DOC88.COM/P-302517712316.HTML *
兔死机: "Classic face recognition algorithms I: the eigenface method", CSDN, URL: HTTPS://BLOG.CSDN.NET/SMARTEMPIRE/ARTICLE/DETAILS/21406005 *
朱博 et al.: "Gaze point estimation method based on the extreme learning machine (ELM)", Journal of Northeastern University (Natural Science) *

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109598253A (en) * 2018-12-14 2019-04-09 北京工业大学 Mankind's eye movement measuring method based on visible light source and camera
CN109598253B (en) * 2018-12-14 2023-05-05 北京工业大学 Human eye movement measuring and calculating method based on visible light source and camera
CN109829380A (en) * 2018-12-28 2019-05-31 北京旷视科技有限公司 A kind of detection method, device, system and the storage medium of dog face characteristic point
CN109965843B (en) * 2019-03-14 2022-05-24 华南师范大学 Eye movement system based on filtering image transmission
CN109965843A (en) * 2019-03-14 2019-07-05 华南师范大学 A kind of eye movements system passing picture based on filtering
CN110147163A (en) * 2019-05-20 2019-08-20 浙江工业大学 The eye-tracking method and system of the multi-model fusion driving of facing mobile apparatus
CN110147163B (en) * 2019-05-20 2022-06-21 浙江工业大学 Eye movement tracking method and system driven by multi-model fusion for mobile equipment
CN110456904A (en) * 2019-06-18 2019-11-15 中国人民解放军军事科学院国防科技创新研究院 A kind of augmented reality glasses eye movement exchange method and system without calibration
CN110555426A (en) * 2019-09-11 2019-12-10 北京儒博科技有限公司 Sight line detection method, device, equipment and storage medium
CN110909611A (en) * 2019-10-29 2020-03-24 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment
WO2021082636A1 (en) * 2019-10-29 2021-05-06 深圳云天励飞技术股份有限公司 Region of interest detection method and apparatus, readable storage medium and terminal device
WO2021082635A1 (en) * 2019-10-29 2021-05-06 深圳云天励飞技术股份有限公司 Region of interest detection method and apparatus, readable storage medium and terminal device
CN111428634A (en) * 2020-03-23 2020-07-17 中国人民解放军海军特色医学中心 Human eye sight tracking and positioning method adopting six-point method block fuzzy weighting
CN111632367A (en) * 2020-05-18 2020-09-08 歌尔科技有限公司 Hand-trip system based on visual guidance and hand-trip response method
CN112069986A (en) * 2020-09-04 2020-12-11 江苏慧明智能科技有限公司 Machine vision tracking method and device for eye movements of old people
CN113158879A (en) * 2021-04-19 2021-07-23 天津大学 Three-dimensional fixation point estimation and three-dimensional eye movement model establishment method based on matching characteristics
CN113158879B (en) * 2021-04-19 2022-06-10 天津大学 Three-dimensional fixation point estimation and three-dimensional eye movement model establishment method based on matching characteristics
CN113505694A (en) * 2021-07-09 2021-10-15 南开大学 Human-computer interaction method and device based on sight tracking and computer equipment
CN113505694B (en) * 2021-07-09 2024-03-26 南开大学 Man-machine interaction method and device based on sight tracking and computer equipment
CN113762077A (en) * 2021-07-19 2021-12-07 沈阳工业大学 Multi-biological-characteristic iris template protection method based on double-hierarchical mapping
CN113762077B (en) * 2021-07-19 2024-02-02 沈阳工业大学 Multi-biological feature iris template protection method based on double-grading mapping
CN113808160A (en) * 2021-08-05 2021-12-17 虹软科技股份有限公司 Sight direction tracking method and device
CN113808160B (en) * 2021-08-05 2024-01-16 虹软科技股份有限公司 Sight direction tracking method and device
CN113627316A (en) * 2021-08-06 2021-11-09 南通大学 Human face eye position positioning and sight line estimation method
CN113822174A (en) * 2021-09-02 2021-12-21 北京的卢深视科技有限公司 Gaze estimation method, electronic device, and storage medium

Similar Documents

Publication Publication Date Title
CN108985210A (en) Gaze tracking method and system based on human eye geometric features
US11775056B2 (en) System and method using machine learning for iris tracking, measurement, and simulation
US20230255478A1 (en) Automated determination of arteriovenous ratio in images of blood vessels
WO2020000908A1 (en) Method and device for face liveness detection
TWI754806B (en) System and method for locating iris using deep learning
CN106796449A (en) Eye-controlling focus method and device
US11026571B2 (en) Method for processing pupil tracking image
JP6822482B2 (en) Line-of-sight estimation device, line-of-sight estimation method, and program recording medium
EP3154407B1 (en) A gaze estimation method and apparatus
CN111291701B (en) Sight tracking method based on image gradient and ellipse fitting algorithm
CN115482574B (en) Screen gaze point estimation method, device, medium and equipment based on deep learning
Jung et al. An eye detection method robust to eyeglasses for mobile iris recognition
Jafari et al. Eye-gaze estimation under various head positions and iris states
CN110929570B (en) Iris rapid positioning device and positioning method thereof
CN114360039A (en) Intelligent eyelid detection method and system
JP2014064083A (en) Monitoring device and method
Asem et al. Blood vessel segmentation in modern wide-field retinal images in the presence of additive Gaussian noise
Li et al. Automatic detection and boundary estimation of the optic disk in retinal images using a model-based approach
Wang et al. Contact lenses detection based on the gaussian curvature
Savaş Real-time detection and tracking of human eyes in video sequences
JP7103443B2 (en) Information processing equipment, information processing methods, and programs
Rajabhushanam et al. IRIS recognition using hough transform
Valenti et al. Simple and efficient visual gaze estimation
Li et al. Iris center localization using integral projection and gradients
Wang et al. Hybrid iris center localization method using cascaded regression, weighted averaging, and weighted snakuscule

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181211

RJ01 Rejection of invention patent application after publication