CN104036238B - Method for human eye localization based on active light - Google Patents

Method for human eye localization based on active light

Info

Publication number
CN104036238B
CN104036238B (application CN201410231543.4A)
Authority
CN
China
Prior art keywords
human eye
face
human
positioning
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410231543.4A
Other languages
Chinese (zh)
Other versions
CN104036238A (en)
Inventor
王元庆
孙文晋
徐斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201410231543.4A priority Critical patent/CN104036238B/en
Publication of CN104036238A publication Critical patent/CN104036238A/en
Application granted granted Critical
Publication of CN104036238B publication Critical patent/CN104036238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

A human eye localization method based on active light. An active light generating device in a video device projects light onto the face, and an image pickup device captures bright-pupil and dark-pupil frame images. The bright pupil effect induced by the active light projection is used to obtain candidate regions for eye localization through frame differencing and image filtering, after which eye localization is completed by face and eye localization methods. Specifically, the face region is located by a knowledge-based, feature-based, template-matching, or appearance-based face localization method; then, according to the geometric properties of the face, the eye region is located by a knowledge-based, feature-based, template-matching, or appearance-based eye localization method.

Description

Method for human eye localization based on active light
Technical field
The present invention relates to methods for human eye localization, and in particular to a method for human eye localization based on active light for use with a video device.
Background art
Research on human eye localization has a long history; the earliest work can be traced back to the 1940s, but substantial progress has come only in the last twenty years, roughly since IBM's work in 1997. The input images for eye localization generally fall into three cases: frontal, profile, and oblique views. To date, most eye localization research has addressed frontal or near-frontal eye images.
Human eye localization is a highly challenging problem with important theoretical and practical value. Eye localization means checking whether an image contains human eyes and, if so, further determining their position and scale, and then marking the eye regions with polygonal or circular frames. Its potential applications include robot vision, gaze-controlled pointing, fatigue-driving warning, assistance for the disabled, human-computer interaction, and artificial intelligence.
Many methods for eye localization have been proposed at home and abroad. They can be roughly grouped into four categories: knowledge-based, feature-based, template-matching, and appearance-based eye localization methods.
Knowledge-based eye localization methods encode human knowledge about typical eyes into rules and use these rules to locate the eyes. The rules mainly include: contour rules, e.g. the contour of the eye can be approximated as an ellipse; organ-arrangement rules, e.g. in a frontal face the eyes lie in the upper half of the face; symmetry rules, e.g. a person's two eyes are symmetric; and motion rules, e.g. blinking can be used to separate the eyes from the background.
Feature-based eye localization methods look for attributes or structural features of the eye that do not depend on external conditions and use them to locate the eyes. These attributes or structural features are first found by learning from a large number of samples and are then used to locate the eyes.
Template-matching eye localization methods are a classical pattern-recognition approach. A standard eye template is first predefined or parameterized, the correlation between an image region and the standard template is computed, and a threshold decision determines whether the region is an eye. The eye template can be updated dynamically.
Appearance-based eye localization methods generally use statistical analysis and machine learning to find characteristics that distinguish eye images from non-eye images. The learned characteristics are summarized as distribution models or discriminant functions, which are then used to locate the eyes. The theoretical foundation of appearance-based methods is probability theory; knowledge of probability and mathematical statistics is typically required.
AdaBoost is an iterative algorithm. Its core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into a stronger final classifier (strong classifier). The algorithm works by changing the data distribution: the weight of each sample is adjusted according to whether it was classified correctly in the previous round and according to the overall accuracy of the previous classification. The re-weighted data set is passed to the next classifier for training, and the classifiers obtained in each round are finally fused into the final decision classifier.
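As a hedged illustration of this re-weighting scheme (a minimal sketch, not the patent's implementation), the following trains decision stumps as weak classifiers and combines them by weighted vote; labels are assumed to be in {-1, +1} and the number of rounds is illustrative.

```python
# Minimal AdaBoost sketch: re-weight samples each round and combine weak
# classifiers (decision stumps) into a weighted-vote strong classifier.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def adaboost_train(X, y, n_rounds=10):
    """X: (m, d) feature matrix; y: labels in {-1, +1}."""
    m = len(y)
    w = np.full(m, 1.0 / m)                     # uniform initial sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = DecisionTreeClassifier(max_depth=1)
        stump.fit(X, y, sample_weight=w)
        pred = stump.predict(X)
        err = np.clip(np.sum(w[pred != y]), 1e-10, 1 - 1e-10)  # weighted error
        alpha = 0.5 * np.log((1 - err) / err)   # weight of this weak classifier
        w *= np.exp(-alpha * y * pred)          # boost misclassified samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    votes = sum(a * s.predict(X) for s, a in zip(stumps, alphas))
    return np.sign(votes)                       # weighted vote of weak classifiers
```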
Eye localization based on active light also exists in the prior art. Active light refers to a beam emitted by an infrared or near-infrared light source and projected onto the surface of the target to be detected. When a face image is captured under infrared illumination and certain conditions are met, the pupils in the image appear markedly brighter than the surrounding regions; this phenomenon is called the "bright pupil effect". Using the bright pupil effect under active light, candidate regions for eye localization can be obtained by differencing and filtering images, which speeds up eye localization.
Summary of the invention
The object of the present invention is to propose a human eye localization method based on active illumination. Using active light illumination together with face localization and eye localization, the method rapidly and effectively distinguishes eye regions from the other regions of the image and achieves real-time localization of one or more pairs of eyes against a complex background. The tracking, template matching, correlation computation, and filtering algorithms used in the method ensure the accuracy, stability, and real-time performance of the eye localization algorithm.
The technical solution of the invention is as follows: in the human eye localization method based on active light, an active light generating device in the video device projects light onto the face, and an image pickup device captures bright-pupil and dark-pupil frame images; the bright pupil effect induced by the active light projection is used to obtain candidate regions for eye localization through frame differencing and image filtering; eye localization is then completed by subsequent face and eye localization methods.
Based on the candidate regions for eye localization, the face region is located by a knowledge-based, feature-based, template-matching, or appearance-based face localization method; then, according to the geometric properties of the face, the eye region is located by a knowledge-based, feature-based, template-matching, or appearance-based eye localization method.
The eye localization is optimized by tracking the located face or eye positions with a tracking algorithm, or by improving eye localization performance with template matching and correlation computation, or by improving face or eye localization performance with filtering; several of these three classes of methods may be combined with the basic active-light eye localization method.
A filtering algorithm is used in the face localization stage to improve the accuracy of face localization; a tracking algorithm is used in the face or eye localization stage to speed up eye localization in subsequent frames; template matching and correlation computation are used in the eye localization stage to improve the accuracy and stability of eye localization.
The video device is used with a three-dimensional display device and is placed at a suitable position on the display.
A bandpass filter is mounted on the imaging lens of the image pickup device; the center frequency of the bandpass filter is equal or close to the center frequency of the active light source.
Using digital image processing, the images obtained by the image pickup device are analyzed to further determine the eye positions. The process, shown in Fig. 1, mainly comprises the following aspects:
Acquisition of eye candidate regions: the eye candidate regions consist of two parts. One part is obtained by tracking the eye positions located in the previous frame; the other part is obtained by differencing the bright-pupil and dark-pupil images and thresholding the result. The second part is generally filtered with a filtering algorithm to remove spurious candidate regions caused by edges or motion. The two different images, bright pupil and dark pupil, are shown in Fig. 2.
Face localization: face localization methods include knowledge-based, feature-based, template-matching, and appearance-based methods. Taking the feature-based AdaBoost method as an example, training selects a number of well-performing features to form a cascade of strong classifiers. According to the positions of the eye candidate regions and the arrangement of facial organs, possible face regions are examined at different scales, and a threshold comparison determines whether each region is a face region. Fig. 3 shows several Haar-like features that may be used in the AdaBoost algorithm.
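As a hedged sketch of this stage (not the patent's trained cascade), the following uses the pretrained frontal-face Haar cascade shipped with the opencv-python distribution for multi-scale detection and keeps only face boxes that contain an eye candidate point; the cascade file and detection parameters are illustrative assumptions.

```python
# Multi-scale Haar-cascade face detection (an AdaBoost cascade of Haar-like
# features), restricted afterwards to boxes containing an eye candidate.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_faces(gray, candidates):
    """gray: uint8 grayscale frame; candidates: list of (x, y) eye candidates."""
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5,
                                     minSize=(60, 60))
    # Keep only face boxes consistent with at least one eye candidate point.
    kept = []
    for (x, y, w, h) in faces:
        if any(x <= cx <= x + w and y <= cy <= y + h for cx, cy in candidates):
            kept.append((x, y, w, h))
    return kept
```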
Eye localization and optimization: eye localization methods likewise include knowledge-based, feature-based, template-matching, and appearance-based methods. Taking the appearance-based support vector machine (SVM) method as an example, a number of Haar-like features are first chosen as the feature space; grid search and cross-validation with a weighted balanced error rate are used to train the support vectors and their weight coefficients, thereby defining the separating hyperplane. The hyperplane is then used to test candidate eye regions and decide whether each region is an eye region. After the eye region is located, its position can be optimized by template matching and inter-frame correlation computation; the LMS algorithm is a commonly used correlation computation method.
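A minimal sketch of grid-search-plus-cross-validation SVM training with scikit-learn; the parameter grid, RBF kernel, and balanced-accuracy scoring are assumptions standing in for the weighted balanced error rate mentioned above.

```python
# Grid search + cross-validation for an eye / non-eye SVM classifier.
# X: rows of Haar-like feature vectors; y: 1 for eye patches, 0 otherwise.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_eye_svm(X, y):
    grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.001]}
    search = GridSearchCV(SVC(kernel="rbf", class_weight="balanced"),
                          grid, cv=5, scoring="balanced_accuracy")
    search.fit(X, y)                  # cross-validated grid search
    return search.best_estimator_
```

Note that scikit-learn's SVC is itself backed by LIBSVM, the library named later in the embodiment, so this sketch stays close to the described training route.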
Eye position tracking: from the eye positions located in the current frame and several preceding frames, a correlation tracking algorithm predicts the likely eye positions in the next frame. Eye detection is performed directly at these positions without face detection, saving time in the complete eye localization process for the next frame. Commonly used tracking algorithms include the Kalman prediction algorithm and the Mean-Shift prediction algorithm.
The improvements of the invention are as follows. Compared with existing eye localization methods and devices, the invention uses active light illumination to obtain eye candidate regions quickly, accelerating the search involved in eye localization. In image processing, it adopts a "face-then-eye" two-level localization structure, which saves processing time relative to locating the eyes directly in the image. A filtering algorithm improves the accuracy of face localization in the face localization stage, and template matching and correlation computation improve the accuracy and stability of eye localization in the eye localization stage. After the face or eye regions are located, a tracking algorithm predicts the likely eye regions in the next frame, saving time for eye localization in subsequent frames. A bandpass filter is mounted on the imaging lens of the image pickup device, with a center frequency equal or close to that of the active light source, reducing the influence of ambient-light changes on the eye localization result.
The beneficial effects of the invention are: the target is detected under active light irradiation, the bright pupil effect under active light describes the detected target, and the principal characteristics of the target are captured with a minimal amount of data, which accelerates eye localization while also reducing the influence of ambient-light changes on the result. In the image processing stage, hierarchical localization, filtering, tracking, and template matching combined with correlation-based optimization improve the accuracy and stability of eye localization and ensure its real-time performance.
Brief description of the drawings
Fig. 1 is a flowchart of the method of the invention;
Fig. 2 shows a bright pupil image and a dark pupil image of the invention;
Fig. 3 shows examples of Haar-like features that may be used in the image processing of the invention;
Fig. 4 illustrates an example structure of the invention for a three-dimensional display device;
Fig. 5 is a block diagram of a Kalman one-step predictor.
Detailed description of embodiments
The flow of the active-illumination eye localization algorithm in Fig. 1 is as follows. In the human eye localization method based on active light, an active light generating device projects light onto the face, and an image pickup device captures bright-pupil and dark-pupil images; the candidate regions for eye localization are obtained by differencing and filtering the two images; based on the candidate regions, the face region is located by a knowledge-based, feature-based, template-matching, or appearance-based face localization method; according to the geometric properties of the face, the eye region is located by a knowledge-based, feature-based, template-matching, or appearance-based eye localization method. A filtering algorithm improves the accuracy of face localization in the face localization stage; a tracking algorithm accelerates eye localization in subsequent frames in the face or eye localization stage; template matching and correlation computation improve the accuracy and stability of eye localization in the eye localization stage. A structural sketch of this per-frame flow is given below.
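The sketch below is a hedged, structural rendering of the Fig. 1 flow; the helper functions (difference_candidates, detect_faces, detect_eyes, predict_next) are assumed to be implemented by the stages described in this document, and their names and signatures are illustrative, not from the patent.

```python
# Structural sketch of the per-frame flow: candidate regions from the
# bright/dark pupil difference plus positions tracked from the previous frame,
# then face and eye localization restricted to those regions.
def locate_eyes(bright, dark, predicted_eyes, tracker_state):
    candidates = difference_candidates(bright, dark)     # difference + filtering
    candidates += predicted_eyes                         # tracked from last frame
    eyes = []
    for region in candidates:
        for face in detect_faces(bright, [region]):      # AdaBoost cascade stage
            eyes += detect_eyes(bright, face)            # appearance-based SVM stage
    next_prediction = predict_next(tracker_state, eyes)  # Kalman one-step prediction
    return eyes, next_prediction
```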
In the flow of the invention, the processing performed after an image is captured or input is described in detail as follows.
The method is applied to an eye position detection and tracking device for a three-dimensional display device. An active light generating device is placed at the bottom of the screen, and a camera serves as the image input device. The distance from the eyes to the screen is generally between 30 cm and 80 cm; when watching the screen, the eyes are usually no lower than the lower edge of the screen and no more than 30 degrees of subtended angle above the upper edge.
Taking a 17-inch three-dimensional display as an example, the sizes and relative arrangement of the components are shown in Fig. 4. The 17-inch liquid crystal panel is 338 mm long and 268 mm wide; including the frame, the display is about 420 mm long and about 389 mm wide. When watching the screen, the viewer's eyes are customarily located in the upper area in front of the screen; although they may move up, down, left, or right within the region ABCD, they generally stay within a certain range. This determines the illumination range of the active light; typical dimensions are marked in the figure (outside this range the stereoscopic effect cannot be perceived anyway, which is normal).
The distance from the face to the screen is generally between 30 cm and 80 cm, for example at point D. When watching the screen, the eyes are usually no lower than the lower edge of the screen and no more than 30 degrees of subtended angle above the upper edge, as shown in Fig. 4.
The bright pupil image and the dark pupil image of the input are first differenced; points with large difference values may correspond to eye regions. To exclude large difference values caused by motion or edges, the points with large differences must be filtered according to their degree of aggregation and their shape, reducing the number of possible eye regions and saving processing time for the subsequent face and eye localization. The bright pupil image and the dark pupil image are shown in Fig. 2.
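A hedged sketch of this differencing step with OpenCV; the threshold value and the area/compactness criteria below are assumptions, not values taken from the patent.

```python
# Bright/dark pupil differencing: threshold the difference image, then keep
# only small, compact, roughly circular blobs as eye candidate points.
import cv2
import numpy as np

def difference_candidates(bright, dark, thresh=40):
    diff = cv2.subtract(bright, dark)                    # bright minus dark pupil
    _, mask = cv2.threshold(diff, thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        area = cv2.contourArea(c)
        (x, y), r = cv2.minEnclosingCircle(c)
        # Reject elongated edge/motion residue: require a small, compact blob.
        if 4 < area < 400 and area > 0.5 * np.pi * r * r:
            candidates.append((int(x), int(y)))
    return candidates
```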
After the eye candidate regions are obtained, face detection can be performed by a knowledge-based, feature-based, template-matching, or appearance-based method. Taking the feature-based AdaBoost algorithm as an example, machine learning is performed on face and non-face sample libraries to find Haar features that distinguish well between positive and negative samples. Given a training sample set S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, each sample is first assigned an initial weight. All weak classifiers in the weak-classifier space H then classify the samples; the classification results are weighted and summed, and the best-performing weak classifier h_1 is selected. The sample weights are then modified according to the classification results, increasing the weights of misclassified samples, and the steps are repeated to select the next best weak classifier h_2 from the weak-classifier space. After N iterations, N weak classifiers are obtained. Each weak classifier is also assigned a weight: weak classifiers with good classification performance receive large weights, while those with poor performance receive small weights. The output of the final strong classifier is the weighted vote of the N weak classifiers.
According to the positions of the eye candidate regions and the arrangement of facial organs, the trained strong classifier performs multi-scale face detection on the image under examination, thereby locating the face region in the image. Some Haar-like features that may be used in the machine learning are shown in Fig. 3.
After the face region is located, eye detection can be performed by a knowledge-based, feature-based, template-matching, or appearance-based method. Taking the appearance-based support vector machine (SVM) algorithm as an example, machine learning is first performed on eye and non-eye sample libraries, and a number of Haar-like features are derived to form the feature space of the SVM algorithm. The purpose of the selection is to choose a small number of representative features from the Haar feature space so as to simplify the computation. A selection criterion F(i), computed from the weighted probability densities of the i-th feature's values over the positive and the negative sample sets, is used to choose a small number of features with the best classification performance from the Haar features and build the sample vectors; the smaller F(i) is, the stronger the ability of the i-th feature to distinguish positive from negative samples.
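The patent's exact F(i) formula is not reproduced in this text; as a hedged stand-in with the same stated behaviour (a smaller score means better class separation), the sketch below scores each Haar feature by the overlap of its weighted value histograms over positive and negative samples.

```python
# Feature ranking by histogram overlap: a smaller score means the feature's
# value distributions over positive and negative samples overlap less,
# i.e. the feature separates the classes better.
import numpy as np

def overlap_score(values_pos, values_neg, w_pos=None, w_neg=None, bins=32):
    lo = min(values_pos.min(), values_neg.min())
    hi = max(values_pos.max(), values_neg.max())
    p, _ = np.histogram(values_pos, bins=bins, range=(lo, hi), weights=w_pos)
    q, _ = np.histogram(values_neg, bins=bins, range=(lo, hi), weights=w_neg)
    p = p / max(p.sum(), 1e-12)            # normalise to probability masses
    q = q / max(q.sum(), 1e-12)
    return np.minimum(p, q).sum()          # overlap in [0, 1]; smaller is better

def select_features(F_pos, F_neg, k=50):
    """F_pos, F_neg: (n_samples, n_features) Haar feature responses."""
    scores = [overlap_score(F_pos[:, i], F_neg[:, i])
              for i in range(F_pos.shape[1])]
    return np.argsort(scores)[:k]          # indices of the k best-separating features
```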
After a set of Haar features has been obtained by this criterion, their normalized feature values are assembled into a vector, and the training samples are projected into this space to form the training space of the SVM. Training is performed with the LIBSVM library; since this library is widely used, the specific training procedure is not expanded upon here.
After the eye regions are obtained, the located eye regions are optimized to improve the accuracy and stability of eye localization. The specific method is: the eye detection images of the preceding frames are stored as templates, and template matching is applied around the detection position in the current frame; the region with the highest matching score is chosen as the eye region. This reduces the error caused by an overly coarse step in the size of the detection window and improves the precision of eye localization.
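A minimal sketch of this refinement step using OpenCV's normalized cross-correlation template matching; the search margin is an illustrative assumption.

```python
# Refine a detected eye position by matching the previous frame's eye patch
# around the current detection and keeping the best-correlated location.
import cv2

def refine_eye(frame, template, x, y, margin=12):
    h, w = template.shape[:2]
    x0, y0 = max(x - margin, 0), max(y - margin, 0)
    search = frame[y0:y + h + margin, x0:x + w + margin]   # local search window
    res = cv2.matchTemplate(search, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (dx, dy) = cv2.minMaxLoc(res)                 # location of max correlation
    return x0 + dx, y0 + dy                                # refined top-left corner
```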
To handle jumps in the eye coordinates caused by external interference, a filtering method based on the correlation between face displacement and eye displacement is used to correct the jumps. In a video at 25 frames per second, the inter-frame difference is very small; even if the pose and expression of the face change, the relative coordinate change within 1/25 s is not large. The displacement of the face is therefore computed and compared with the displacement of the eyes; if the difference between the eye displacement and the face displacement is too large, a jump is considered to have occurred and is corrected. The face displacement is obtained by a three-step search algorithm based on template matching.
The face offset is obtained by the three-step search algorithm. The precision of template matching is relatively low, but its stability is high: it does not produce abrupt changes and it reflects the movement trajectory of the eyes well. In video frames where no jump occurs, the trajectory obtained by template matching and the trajectory obtained by eye localization largely coincide; when a jump occurs, the two trajectories diverge, so their correlation can be monitored to decide whether a jump has occurred. If a jump is detected, the previous eye detection coordinate plus the estimated offset is used as the detection value.
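A hedged sketch of the jump correction; the displacement-difference threshold and the fallback to the previous eye coordinate shifted by the face offset are an interpretation of the description above, not values from the patent.

```python
# Detect a coordinate jump by comparing the eye displacement with the face
# displacement between consecutive frames; on a jump, fall back to the
# previous eye coordinate shifted by the estimated face offset.
import numpy as np

def correct_jump(eye_prev, eye_now, face_offset, max_deviation=15.0):
    eye_offset = np.asarray(eye_now, float) - np.asarray(eye_prev, float)
    deviation = np.linalg.norm(eye_offset - np.asarray(face_offset, float))
    if deviation > max_deviation:        # eye and face displacements disagree: jump
        return tuple(np.asarray(eye_prev, float) + np.asarray(face_offset, float))
    return eye_now
```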
After the eye regions are located, the positions where the eyes will appear in the next frame are predicted from the eye positions in the current frame and several preceding frames using a tracking algorithm. Taking the Kalman prediction-tracking algorithm as an example, the likely eye regions in the next frame are predicted; in these regions, face detection is skipped and eye detection is performed directly, which accelerates the eye localization process for the next frame.
The specific method is: a constant-acceleration motion model is chosen, the sequence of eye position changes is collected as samples, and suitable parameters of the Kalman equations are selected. The Kalman filtering procedure is as follows:
Under the above assumptions, the recursion of the Kalman prediction is obtained (with A the state-transition matrix, C the observation matrix, Q and R the process- and measurement-noise covariances, and y(k) the measurement):
1. At time t = k-1, compute the one-step state prediction x̂(k | k-1) = A(k-1) x̂(k-1 | k-1);
2. Compute the covariance matrix of the prediction error P(k | k-1) = A(k-1) P(k-1 | k-1) A(k-1)^T + Q(k-1);
3. Compute the gain matrix K(k) = P(k | k-1) C(k)^T [C(k) P(k | k-1) C(k)^T + R(k)]^(-1);
4. Compute the estimate of the current state x̂(k | k) = x̂(k | k-1) + K(k) [y(k) - C(k) x̂(k | k-1)];
5. Compute the estimation error P(k | k) = (I - K(k) C(k)) P(k | k-1);
At the next time step, operations 1-5 are repeated. A block diagram of this process is shown in Fig. 5.
Here, on the basis of the known position sequence, the Kalman algorithm uses the new data and the parameter estimates from the previous time step, together with the state-transition equation of the system itself, to compute the new parameter estimates through a set of recursive formulas. The prediction gives the likely eye regions in the next frame; eye detection is performed directly in these regions, accelerating the localization process for the next frame.
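A minimal sketch of one-step eye-position prediction with OpenCV's Kalman filter under a constant-acceleration model (state = position, velocity, and acceleration in x and y; measurement = detected eye position); the noise covariances and time step are illustrative assumptions.

```python
# One-step Kalman prediction of the eye position under a constant-acceleration
# model; state = (x, y, vx, vy, ax, ay), measurement = (x, y).
import cv2
import numpy as np

def make_eye_kalman(dt=1.0 / 25):
    kf = cv2.KalmanFilter(6, 2)
    a = 0.5 * dt * dt
    kf.transitionMatrix = np.array([[1, 0, dt, 0, a, 0],
                                    [0, 1, 0, dt, 0, a],
                                    [0, 0, 1, 0, dt, 0],
                                    [0, 0, 0, 1, 0, dt],
                                    [0, 0, 0, 0, 1, 0],
                                    [0, 0, 0, 0, 0, 1]], np.float32)
    kf.measurementMatrix = np.eye(2, 6, dtype=np.float32)     # observe (x, y) only
    kf.processNoiseCov = 1e-3 * np.eye(6, dtype=np.float32)
    kf.measurementNoiseCov = 1e-1 * np.eye(2, dtype=np.float32)
    return kf

def track_step(kf, measured_xy):
    # Update with the current detection, then predict the next-frame position.
    kf.correct(np.array(measured_xy, np.float32).reshape(2, 1))
    pred = kf.predict()
    return float(pred[0, 0]), float(pred[1, 0])
```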
In the invention, a bandpass filter is mounted on the imaging lens of the image pickup device; the center frequency of the bandpass filter is equal or close to the center frequency of the active light source.
The embodiments of the present invention do not constitute a limitation of the invention; simple modifications or equivalents based on the principle of the invention do not depart from the scope of protection of the invention.

Claims (1)

1. A human eye localization method based on active light, characterized in that an active light generating device in a video device projects light onto the face and an image pickup device captures bright-pupil and dark-pupil frame images; the bright pupil effect induced by the active light projection is used to obtain candidate regions for eye localization through frame differencing and image filtering; eye localization is then completed by subsequent face and eye localization methods; based on the eye candidate regions, the face region is located by a knowledge-based, feature-based, template-matching, or appearance-based face localization method; according to the geometric properties of the face, the eye region is located by a knowledge-based, feature-based, template-matching, or appearance-based eye localization method;
the eye localization is optimized by tracking the located face or eye positions with a tracking algorithm, or by improving eye localization performance with template matching and correlation computation, or by improving face or eye localization performance with filtering; several of these three classes of methods are selected and combined with the basic active-light eye localization method;
a filtering algorithm is used in the face localization stage to improve the accuracy of face localization; a tracking algorithm is used in the face or eye localization stage to speed up eye localization in subsequent frames; template matching and correlation computation are used in the eye localization stage to improve the accuracy and stability of eye localization;
the video device is used with a three-dimensional display device and is placed at a position on the three-dimensional display device;
a bandpass filter is mounted on the imaging lens of the image pickup device, the center frequency of the bandpass filter being equal or close to the center frequency of the active light source;
using digital image processing, the images obtained by the image pickup device are analyzed to further determine the eye positions, with the following steps:
1) Acquisition of eye candidate regions: the eye candidate regions consist of two parts, one obtained by tracking the eye positions located in the previous frame, the other obtained by differencing the bright-pupil and dark-pupil images and thresholding the result; the second part is filtered with a filtering algorithm to remove spurious candidate regions caused by edges or motion;
2) Face localization and filtering: the face localization method is the feature-based AdaBoost method; training yields a cascade of strong classifiers formed from a number of features; according to the positions of the eye candidate regions and the arrangement of facial organs, possible face regions are examined at different scales, and a threshold comparison determines whether each region is a face region; after the face region is located, its position is optimized with a Kalman filtering algorithm;
3) Eye localization and optimization: the eye localization method is the appearance-based support vector machine (SVM) method; a number of Haar-like features are first chosen as the feature space; grid search and cross-validation with a weighted balanced error rate are used to train the support vectors and their weight coefficients, thereby defining the separating hyperplane; the hyperplane is then used to test the candidate eye regions and decide whether each region is an eye region; after the eye region is located, its position is optimized by template matching and inter-frame correlation computation; the correlation computation method used is the LMS algorithm;
4) Eye position tracking: from the eye positions located in the current frame and several preceding frames, a correlation tracking algorithm predicts the likely eye positions in the next frame; eye detection is performed directly at these positions without face detection, saving time for the complete eye localization process of the next frame; the tracking algorithm is the Kalman prediction algorithm or the Mean-Shift prediction algorithm.
CN201410231543.4A 2014-05-28 2014-05-28 Method for human eye localization based on active light Active CN104036238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410231543.4A CN104036238B (en) 2014-05-28 2014-05-28 Method for human eye localization based on active light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410231543.4A CN104036238B (en) 2014-05-28 2014-05-28 Method for human eye localization based on active light

Publications (2)

Publication Number Publication Date
CN104036238A CN104036238A (en) 2014-09-10
CN104036238B true CN104036238B (en) 2017-07-07

Family

ID=51467004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410231543.4A Active CN104036238B (en) 2014-05-28 2014-05-28 Method for human eye localization based on active light

Country Status (1)

Country Link
CN (1) CN104036238B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127145B (en) * 2016-06-21 2019-05-14 重庆理工大学 Pupil diameter and tracking
DE102016215766A1 (en) * 2016-08-23 2018-03-01 Robert Bosch Gmbh Method and device for operating an interior camera
CN106846399B (en) * 2017-01-16 2021-01-08 浙江大学 Method and device for acquiring visual gravity center of image
CN107085703A (en) * 2017-03-07 2017-08-22 中山大学 Merge face detection and the automobile passenger method of counting of tracking
CN111714080B (en) * 2020-06-30 2021-03-23 重庆大学 Disease classification system based on eye movement information


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005044330A (en) * 2003-07-24 2005-02-17 Univ Of California San Diego Weak hypothesis generation device and method, learning device and method, detection device and method, expression learning device and method, expression recognition device and method, and robot device

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877599A (en) * 2006-06-29 2006-12-13 南京大学 Face setting method based on structured light
CN102830797A (en) * 2012-07-26 2012-12-19 深圳先进技术研究院 Man-machine interaction method and system based on sight judgment
CN103605968A (en) * 2013-11-27 2014-02-26 南京大学 Pupil locating method based on mixed projection
CN103744978A (en) * 2014-01-14 2014-04-23 清华大学 Parameter optimization method for support vector machine based on grid search technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A real-time human eye detection method for stereoscopic display; Zhou Jin et al.; Computer Applications and Software; 2013-04-30; Vol. 30, No. 4; full text *
Design and implementation of a human eye detection system based on DM642; Zheng Wei et al.; Modern Electronics Technique; 2012-02-15; Vol. 35, No. 4; p. 106, paragraphs 4-11; p. 107, paragraphs 2-4 and 11-12 *

Also Published As

Publication number Publication date
CN104036238A (en) 2014-09-10

Similar Documents

Publication Publication Date Title
Li et al. A parallel and robust object tracking approach synthesizing adaptive Bayesian learning and improved incremental subspace learning
CN104036238B (en) Method for human eye localization based on active light
US9547908B1 (en) Feature mask determination for images
CN106096538B (en) Face identification method and device based on sequencing neural network model
JP5227639B2 (en) Object detection method, object detection apparatus, and object detection program
CN108205658A (en) Detection of obstacles early warning system based on the fusion of single binocular vision
CN106682578B (en) Weak light face recognition method based on blink detection
Wang et al. Blink detection using Adaboost and contour circle for fatigue recognition
CN104036237B (en) Detection method for rotated faces based on online prediction
CN104463191A (en) Robot visual processing method based on attention mechanism
JP2014093023A (en) Object detection device, object detection method and program
CN103443804A (en) Method of facial landmark detection
CN113592911B (en) Apparent enhanced depth target tracking method
CN105373767A (en) Eye fatigue detection method for smart phones
García et al. Adaptive multi-cue 3D tracking of arbitrary objects
CN106599785A (en) Method and device for building human body 3D feature identity information database
Huang et al. Soft-margin mixture of regressions
CN103413312A (en) Video target tracking method based on neighborhood components analysis and scale space theory
Kim et al. Real-time facial feature extraction scheme using cascaded networks
CN107967944A (en) Hadoop-based big-data human health measurement method and platform for outdoor environments
Patil et al. Emotion recognition from 3D videos using optical flow method
Kurdthongmee et al. A yolo detector providing fast and accurate pupil center estimation using regions surrounding a pupil
KR101542206B1 (en) Method and system for tracking with extraction object using coarse to fine techniques
CN109949344A (en) Kernel correlation filter tracking method based on color-probability target proposal windows
Yuan et al. Ear detection based on CenterNet

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant