CN104036238A - Human eye location method based on active light

Human eye location method based on active light

Info

Publication number
CN104036238A
CN104036238A (application CN201410231543.4A; granted as CN104036238B)
Authority
CN
China
Prior art keywords
human eye
location
face
human
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410231543.4A
Other languages
Chinese (zh)
Other versions
CN104036238B (en)
Inventor
王元庆
孙文晋
徐斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University filed Critical Nanjing University
Priority to CN201410231543.4A priority Critical patent/CN104036238B/en
Publication of CN104036238A publication Critical patent/CN104036238A/en
Application granted granted Critical
Publication of CN104036238B publication Critical patent/CN104036238B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention provides a human eye location method based on active light. In a video device, an active light generating device projects light onto the human face, and an image pickup device extracts bright-pupil and dark-pupil field images. Using the bright-pupil effect caused by the active light projection, a candidate region for eye location is obtained by differencing the two field images and applying an image filtering method; a face location method and an eye location method are then used to complete the eye location. In particular, the face region is located by a knowledge-based, feature-based, template-matching, or appearance-based face location method, and, according to the geometric characteristics of the face, the eye region is located by a knowledge-based, feature-based, template-matching, or appearance-based eye location method.

Description

Human eye location method based on active light
Technical field
The present invention relates to methods for human eye location, in particular to an active-light-based human eye location method for use in video devices.
Background art
Research on human eye location has a long history; the earliest work dates back to the 1940s, but substantial progress has been made mainly in the last twenty years. The input images for human eye location generally fall into three categories: frontal, profile, and oblique views. To date, most eye location research, such as IBM's work in 1997, has targeted frontal or near-frontal eye images.
Human eye location is a challenging problem with important theoretical and practical value. It refers to checking whether an image contains human eyes and, if so, further determining their position and scale, then marking the eye region with a polygonal or circular frame. Its potential applications include robot vision, gaze-controlled mouse, fatigue-driving early warning, assistance for the disabled, human-computer interaction, and artificial intelligence.
Numerous human eye location methods have been proposed at home and abroad; they can be roughly summarized into four categories: knowledge-based, feature-based, template-matching, and appearance-based methods.
Knowledge-based human eye location encodes human knowledge about typical eyes into rules and uses these rules to locate the eyes. The rules mainly include: a contour rule (the eye contour can be approximated as an ellipse); an organ-arrangement rule (in a frontal face, the eyes lie in the upper part of the face); a symmetry rule (the two eyes are symmetric); and a motion rule (blinking can be used to separate the eyes from the background).
Feature-based human eye location seeks attributes or structural features of the eye that do not depend on external conditions and uses them to locate the eyes. These attributes or structural features are first learned from a large number of samples and are then used to locate the eyes.
Template-matching human eye location is a classic pattern recognition method: a standard eye template is first predefined or parameterized, the correlation between a candidate image region and the standard template is computed, and a threshold decides whether the region is an eye. The eye template can be updated dynamically.
Appearance-based human eye location generally uses statistical analysis and machine learning to find characteristics that distinguish eye images from non-eye images. The learned characteristics are summarized as distribution models or discriminant functions, which are then used to locate the eyes. The theoretical basis of appearance-based methods is probability theory, and knowledge of probability and mathematical statistics is generally required.
AdaBoost is an iterative algorithm. Its core idea is to train different classifiers (weak classifiers) on the same training set and then combine them into a stronger final classifier (strong classifier). The algorithm works by changing the data distribution: the weight of each sample is determined by whether it was classified correctly in the previous round and by the overall accuracy of the previous classification. The re-weighted data set is passed to the next classifier for training, and the classifiers obtained in each round are finally fused into the final decision classifier.
The prior art also includes human eye location based on active light. "Active light" here refers to a beam emitted by an infrared or near-infrared light source and projected onto the surface of the detected target. When a face image is captured under infrared illumination and certain conditions are met, the pupil appears markedly brighter than its surroundings in the image; this phenomenon is called the "bright pupil effect". Using the bright pupil effect under active light, a candidate region for eye location can be obtained through differencing between images and image filtering, thereby speeding up eye location.
Summary of the invention
The object of the present invention is to propose a human eye location method based on active illumination. The method uses active illumination together with face location and eye location to quickly and effectively distinguish the eye region from other image regions, realizing real-time location of one or more eyes against a complex background. The tracking, template matching, correlation computation, and filtering algorithms used in the method ensure the accuracy, stability, and real-time performance of the eye location algorithm.
The technical solution of the present invention is as follows: in a human eye location method based on active light, an active light generating device projects light onto the face, and an image pickup device provided in the video device captures bright-pupil and dark-pupil field images. Using the bright pupil effect caused by the active light projection, the candidate region for eye location is obtained by differencing the two field images and applying image filtering. Face location and eye location methods are then used to complete the eye location.
According to the candidate region for eye location, the face region is located using a knowledge-based, feature-based, template-matching, or appearance-based face location method; according to the geometric properties of the face, the eye region is located using a knowledge-based, feature-based, template-matching, or appearance-based eye location method.
The eye location can be optimized by using a tracking algorithm to track the located face or eye positions; or by using template matching and correlation computation to improve eye location performance; or by using filtering methods to improve the performance of face or eye location. Several of these three kinds of methods may be selected and used in combination with the basic active-light eye location method.
A filtering algorithm is used in the face location stage to improve the accuracy of face location; a tracking algorithm is used in the face or eye location stage to accelerate eye location in subsequent frame images; template matching and correlation computation are used in the eye location stage to improve the accuracy and stability of eye location.
The video device serves a 3D display device and can be placed at a certain position on the 3D display device.
A bandpass filter is mounted on the imaging lens of the image pickup device; the center frequency of the bandpass filter is equal or close to the center frequency of the active light source.
Digital image processing is used to analyze the images obtained by the image pickup device and further determine the eye positions. The process, shown in Figure 1, mainly comprises the following aspects:
Obtaining the eye candidate region: the eye candidate region consists of two parts. One part is obtained by tracking the eye positions located in the previous frame image; the other part is obtained by differencing the bright-pupil and dark-pupil images and applying a threshold. The second part usually requires a filtering algorithm to remove spurious candidate regions caused by edges, motion, and the like. The bright-pupil and dark-pupil images are shown in Figure 2.
Face location: the method for face location have based on knowledge, based on method feature, template matches or based on presentation.Taking the AdaBoost method based on feature as example, training obtains the feature composition cascade of strong classifiers of some better performances.According to arranging of the position of human eye candidate region and human face, according to different scale, possible human face region is detected successively, adopt the method for threshold value comparison to determine whether this region is human face region.Figure 3 shows that a few class Haar features that may use in AdaBoost algorithm.
Eye location and optimization: eye location methods likewise include knowledge-based, feature-based, template-matching, and appearance-based methods. Taking the appearance-based support vector machine (SVM) method as an example, several classes of Haar features are first chosen as the feature space; grid search and cross validation are used to balance the weighted error rate and train the support vectors and corresponding weight coefficients of the SVM, thereby defining the classification hyperplane. The hyperplane is then used to examine candidate eye regions and decide whether a region is an eye region. After the eye region is located, template matching and inter-frame correlation computation can be used to optimize the position of the eye region. A common correlation computation method is, for example, the LMS algorithm.
Eye position tracking: based on the eye positions located in the current frame and several preceding frames, a correlation tracking algorithm predicts the possible eye positions in the next frame. Eye detection is carried out directly at these positions without face detection, saving time in the overall eye location process for the next frame. Common tracking algorithms include, for example, the Kalman prediction algorithm and the Mean-Shift prediction algorithm.
The improvements of the present invention are: compared with existing eye location methods and devices, the invention uses active illumination to quickly obtain the eye candidate region, accelerating the search in eye location; in the image processing, a "face-eye" two-stage location structure is adopted, saving processing time compared with locating the eyes directly in the image; a filtering algorithm improves the accuracy of face location in the face location stage, and template matching and correlation computation improve the accuracy and stability of eye location in the eye location stage; after the face or eye region is located, a tracking algorithm predicts the possible eye regions in the next frame image, saving time for eye location in subsequent frames; and a bandpass filter is mounted on the imaging lens of the image pickup device, with its center frequency equal or close to that of the active light source, reducing the impact of ambient light changes on the eye location result.
The beneficial effects of the invention are: the target is illuminated and detected with active light, and the bright pupil effect under active light describes the features of the detected target, characterizing the target with a minimal amount of data; this accelerates eye location and reduces the impact of ambient light changes on the eye location result. In the image processing stage, hierarchical location, filtering, tracking, template matching, and correlation-computation optimization are combined to improve the accuracy and stability of eye location and to ensure its real-time performance.
Brief description of the drawings
Fig. 1 is the flow chart of the method of the invention;
Fig. 2 shows a bright-pupil image and a dark-pupil image of the invention;
Fig. 3 shows examples of several classes of Haar features that can be used in the image processing of the invention;
Fig. 4 is a schematic of an example structure of the invention applied to a 3D display device;
Fig. 5 is a block diagram of the Kalman one-step predictor.
Embodiment
Fig. 1 shows the flow of the active-light eye location algorithm: in the eye location method based on active light, an active light generating device projects light onto the face, and an image pickup device captures bright-pupil and dark-pupil images; the eye candidate region is obtained by differencing the two field images and applying image filtering; according to the candidate region, the face region is located with a knowledge-based, feature-based, template-matching, or appearance-based face location method; according to the geometric properties of the face, the eye region is located with a knowledge-based, feature-based, template-matching, or appearance-based eye location method. A filtering algorithm improves the accuracy of face location in the face location stage; a tracking algorithm accelerates eye location in subsequent frame images in the face or eye location stage; template matching and correlation computation improve the accuracy and stability of eye location in the eye location stage.
In the flow of the present invention, processing from the captured or input image proceeds as follows:
The method is applied to an eye position detection and tracking device for a 3D display device. An active light generating device is placed at the bottom of the screen, and a camera is used as the image input device. The viewer's eyes are generally 30 to 80 centimeters from the screen; when watching the screen, they are generally not lower than the lower edge of the screen and not above a range of 30 degrees of subtended angle above the upper edge.
Taking a 17-inch autostereoscopic display as an example, the relative arrangement and dimensions of the components are shown in Figure 4. The 17-inch liquid crystal panel is 338 mm long and 268 mm wide; with the surrounding frame it is about 420 mm long and about 389 mm wide. When watching the screen, the viewer's eyes are normally directly in front of the screen and slightly above its center; although they may move up, down, left, and right within the region ABCD, they generally do not go beyond a certain range, and this range determines the illumination range of the active light. Typical dimensions are marked on the figure (outside this range the stereoscopic effect cannot be perceived, which is also normal).
The face is generally 30 to 80 centimeters from the screen (as at point D); when watching the screen, the eyes are generally not lower than the lower edge of the screen and not above a range of 30 degrees of subtended angle above the upper edge, as shown in Figure 4.
The input bright-pupil and dark-pupil images are first differenced; points with large difference values possibly correspond to eye regions. To avoid large difference values caused by motion or edges, filtering based on the aggregation degree and shape of the large-difference points is needed to reduce the number of possible eye regions, saving processing time for the subsequent face and eye location. The bright-pupil and dark-pupil images are shown in Figure 2.
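As an illustration of this differencing step, the following is a minimal Python/OpenCV sketch, not the patented implementation; the difference threshold, blob-size limits, and aspect-ratio test are assumptions chosen only for the example.

```python
# Minimal sketch: candidate eye regions from bright-pupil / dark-pupil differencing,
# thresholding, and blob filtering. Thresholds and blob limits are illustrative.
import cv2
import numpy as np

def eye_candidates(bright, dark, diff_thresh=40, min_area=4, max_area=400):
    """Return bounding boxes of plausible pupil blobs from a bright/dark pupil pair."""
    diff = cv2.subtract(bright, dark)                      # bright pupil minus dark pupil
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):                                  # label 0 is the background
        x, y, w, h, area = stats[i]
        if min_area <= area <= max_area and 0.5 <= w / float(h) <= 2.0:
            boxes.append((x, y, w, h))                     # keep compact, roughly round blobs
    return boxes

# bright = cv2.imread("bright_pupil.png", cv2.IMREAD_GRAYSCALE)
# dark   = cv2.imread("dark_pupil.png", cv2.IMREAD_GRAYSCALE)
# candidates = eye_candidates(bright, dark)
```

The aggregation and shape tests here reject elongated edge responses and scattered motion noise, which is the role the filtering step plays in the method.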
After the eye candidate region is obtained, face detection can be carried out with a knowledge-based, feature-based, template-matching, or appearance-based method. Taking the feature-based AdaBoost algorithm as an example, machine learning on face and non-face sample sets finds a number of Haar features with good discriminating performance between the positive and negative sample sets. Given a training sample set S = {(x_1, y_1), (x_2, y_2), ..., (x_m, y_m)}, the sample weights are initialized, all weak classifiers in the weak classifier space H are applied to the samples, the classification results are multiplied by the weights and summed, and the best weak classifier h_1 is selected. The sample weights are then updated according to the classification results, raising the weights of misclassified samples; the above steps are repeated to select the next best weak classifier h_2, and after N rounds N weak classifiers are obtained. Each weak classifier is also assigned a weight: weak classifiers with good classification performance receive large weights and those with poor performance receive small weights. The final strong classifier output is the result of the N weak classifiers voting according to their respective weights.
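The following is a minimal sketch of this training loop, assumed for illustration rather than the patent's actual detector: discrete AdaBoost with decision stumps over precomputed Haar feature values, where the stump threshold choice and the number of rounds are illustrative.

```python
# Discrete AdaBoost sketch over precomputed Haar feature values.
# X[i, j] is the j-th Haar feature of sample i; y is in {-1, +1}.
import numpy as np

def train_adaboost(X, y, n_rounds=50):
    m, n_feat = X.shape
    w = np.full(m, 1.0 / m)                       # initial sample weights
    stumps = []
    for _ in range(n_rounds):
        best = None
        for j in range(n_feat):                   # search the weak-classifier space H
            thresh = X[:, j].mean()               # crude stump threshold (illustrative)
            for polarity in (1, -1):
                pred = np.where(polarity * (X[:, j] - thresh) > 0, 1, -1)
                err = np.sum(w[pred != y])
                if best is None or err < best[0]:
                    best = (err, j, thresh, polarity)
        err, j, thresh, polarity = best
        err = max(err, 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)     # smaller error -> larger classifier weight
        pred = np.where(polarity * (X[:, j] - thresh) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)            # raise weights of misclassified samples
        w /= w.sum()
        stumps.append((alpha, j, thresh, polarity))
    return stumps

def strong_classify(stumps, x):
    s = sum(alpha * (1 if polarity * (x[j] - thresh) > 0 else -1)
            for alpha, j, thresh, polarity in stumps)
    return 1 if s >= 0 else -1                    # weighted vote of the weak classifiers
```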
According to the eye candidate positions and the arrangement of facial organs, the trained strong classifier is applied to the image under detection at varying scales, thereby locating the face region in the image. Several classes of Haar features used in the machine learning are shown in Figure 3.
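A sketch of this multi-scale search restricted to the neighborhoods of the pupil candidates follows. OpenCV's pretrained frontal-face Haar cascade is used here only as a stand-in for the cascade trained as described above; the padding factor and scale settings are assumptions.

```python
# Scan candidate neighborhoods for a face at multiple scales, using OpenCV's
# pretrained cascade as a placeholder for the trained strong-classifier cascade.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def faces_near_candidates(gray, candidate_boxes, pad=4):
    faces = []
    for (x, y, w, h) in candidate_boxes:
        # Expand the pupil-candidate box to a plausible face-sized search window.
        x0 = max(0, x - pad * w)
        y0 = max(0, y - pad * h)
        x1 = min(gray.shape[1], x + (pad + 1) * w)
        y1 = min(gray.shape[0], y + (pad + 1) * h)
        roi = gray[y0:y1, x0:x1]
        for (fx, fy, fw, fh) in face_cascade.detectMultiScale(roi, scaleFactor=1.1,
                                                              minNeighbors=3):
            faces.append((x0 + fx, y0 + fy, fw, fh))       # map back to image coordinates
    return faces
```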
After the face region is located, eye detection can be carried out with a knowledge-based, feature-based, template-matching, or appearance-based method. Taking the appearance-based support vector machine (SVM) algorithm as an example, machine learning is first applied to eye and non-eye sample sets to derive the feature space of the SVM algorithm, composed of several classes of Haar features. The purpose of feature selection is to pick a small number of representative features from the Haar feature space and thereby simplify computation. A selection criterion F(i) is used to choose a few Haar features with the best classification performance to build the sample vector: for the i-th feature it compares the probability density of the feature taking value x in the positive sample set, together with the corresponding weight, against the corresponding quantities in the negative sample set. The smaller F(i) is, the stronger the ability of the i-th feature to distinguish positive from negative samples.
After obtaining a series of Haar features by the above criterion, the normalized feature values of these features form a vector, and the training samples are projected into this space as the SVM training space. The LIBSVM library is used for training; since this library is widely used, the concrete training process is not detailed here.
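As a minimal sketch of this training step, the example below uses scikit-learn's SVC (which wraps LIBSVM) in place of a direct LIBSVM call; it assumes X holds the selected, normalized Haar feature vectors and y marks eye/non-eye samples, and the parameter grid is illustrative.

```python
# SVM training sketch: grid search with cross validation over (C, gamma),
# using a class-weighted score to balance the error rate on eyes vs non-eyes.
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

def train_eye_svm(X, y):
    param_grid = {"C": [0.1, 1, 10, 100], "gamma": ["scale", 0.01, 0.1, 1]}
    search = GridSearchCV(SVC(kernel="rbf", class_weight="balanced"),
                          param_grid, cv=5, scoring="balanced_accuracy")
    search.fit(X, y)
    return search.best_estimator_   # support vectors and weights define the hyperplane

# model = train_eye_svm(X_train, y_train)
# is_eye = model.predict(candidate_features)
```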
After the eye region is obtained, the located eye region is optimized to enhance the accuracy and stability of eye location. The concrete method is: the eye detection images from the previous several frames are stored as templates; template matching is applied around the detected position in the current frame, and the region with the highest matching score is chosen as the eye region. This reduces the error caused by the coarse granularity of the detection window and improves the precision of eye location.
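A sketch of this refinement step is given below; the search margin and the use of normalized cross-correlation are assumptions made for the example.

```python
# Refine the detected eye position by template matching against the eye patch
# stored from a previous frame.
import cv2

def refine_eye(gray, prev_eye_patch, det_box, margin=8):
    x, y, w, h = det_box
    x0, y0 = max(0, x - margin), max(0, y - margin)
    x1 = min(gray.shape[1], x + w + margin)
    y1 = min(gray.shape[0], y + h + margin)
    search = gray[y0:y1, x0:x1]
    result = cv2.matchTemplate(search, prev_eye_patch, cv2.TM_CCOEFF_NORMED)
    _, _, _, max_loc = cv2.minMaxLoc(result)               # location of the best match
    bx, by = max_loc
    return (x0 + bx, y0 + by, prev_eye_patch.shape[1], prev_eye_patch.shape[0])
```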
For eye-coordinate jumps caused by external interference, a filtering method based on the correlation of face displacement is used to correct the jumps. In a video at 25 frames per second the inter-frame difference is very small; even if the facial pose and expression change, the relative coordinates do not change much within 1/25 s. The face displacement is obtained and compared with the eye displacement; if the difference between the eye displacement and the face displacement is too large, a jump is deemed to have occurred and is corrected. The face displacement is obtained by a three-step search based on template matching.
The face offset is obtained by the three-step search; the precision of template matching is relatively low, but its stability is high, it does not produce sudden changes, and it reflects the motion trajectory of the eyes well. In video frames where no jump in eye location occurs, the trajectory obtained by template matching and the trajectory obtained by eye location overlap closely. When a jump occurs, the two trajectories diverge, and monitoring their correlation indicates whether a jump has happened. If a jump occurs, the eye detection coordinate of the previous frame plus the estimated offset is used as the detection value.
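The following is a sketch of the classical three-step search used here to estimate the face displacement between frames; the block size, initial step, and the use of a sum-of-absolute-differences cost are assumptions.

```python
# Three-step search for block motion estimation of the face region.
import numpy as np

def sad(a, b):
    return np.abs(a.astype(np.int32) - b.astype(np.int32)).sum()

def three_step_search(prev_gray, cur_gray, block_xy, block=32, step=4):
    """Return the (dx, dy) displacement of the block at block_xy between frames."""
    x, y = block_xy
    ref = prev_gray[y:y + block, x:x + block]
    dx = dy = 0
    while step >= 1:
        best = (sad(ref, cur_gray[y + dy:y + dy + block, x + dx:x + dx + block]), dx, dy)
        for ox in (-step, 0, step):
            for oy in (-step, 0, step):
                nx, ny = dx + ox, dy + oy
                cand = cur_gray[y + ny:y + ny + block, x + nx:x + nx + block]
                if cand.shape == ref.shape:
                    cost = sad(ref, cand)
                    if cost < best[0]:
                        best = (cost, nx, ny)
        _, dx, dy = best
        step //= 2                                   # halve the step at each of the three stages
    return dx, dy
```

The jump test then compares this face displacement with the displacement reported by eye detection and, if they disagree too much, substitutes the previous eye coordinate plus the estimated offset.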
After the eye region is located, a tracking algorithm predicts where the eyes will appear in the next frame image, based on the eye positions in the current frame and several preceding frames. Taking the Kalman predictive tracking algorithm as an example, prediction yields the probable eye regions in the next frame image; in these regions face detection is no longer performed and eye detection is carried out directly, accelerating the eye location process for the next frame image.
The concrete method is: a uniform-acceleration motion model is chosen, the sequence of eye position changes is collected as samples, and suitable parameters of the Kalman filter equations are selected. The state model is assumed to be linear, x(k) = A(k) x(k-1) + w(k) and y(k) = C(k) x(k) + v(k), where w(k) and v(k) are the process and measurement noise with variances σ_w^2 and σ_v^2. From this assumption, the recursion of the Kalman prediction is obtained:
1. At time t = k-1, compute x̂(k|k-1) = A(k) x̂(k-1|k-1);
2. Compute the prediction error covariance P(k|k-1) = A(k) P(k-1|k-1) A^T(k) + σ_w^2;
3. Compute the gain matrix K(k) = P(k|k-1) C^T(k) [C(k) P(k|k-1) C^T(k) + σ_v^2]^(-1);
4. Compute the estimate of the current state: x̂(k|k) = x̂(k|k-1) + K(k) (y(k) - C(k) x̂(k|k-1));
5. Compute the estimation error P(k|k) = (I - K(k) C(k)) P(k|k-1);
At the next time step, operations 1-5 are repeated. A block diagram of this process is shown in Figure 5.
In application, the Kalman algorithm starts from the known position sequence and, using the new data and the parameter estimate from the previous time step, computes a new parameter estimate through the state transition equation of the system and the above recursion formulas. The prediction yields the probable eye regions in the next frame image; eye detection is carried out directly in these regions, accelerating the location process for the next frame image.
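The sketch below implements steps 1-5 for one eye coordinate under a uniform-acceleration motion model; the sampling interval and the noise variances are illustrative assumptions, not values from the patent.

```python
# Kalman one-step predictor for a single eye coordinate (position/velocity/acceleration state).
import numpy as np

dt = 1.0 / 25.0                                   # frame interval for 25 fps video
A = np.array([[1, dt, 0.5 * dt * dt],             # uniform-acceleration state transition
              [0, 1,  dt],
              [0, 0,  1]])
C = np.array([[1.0, 0.0, 0.0]])                   # only the position is measured
sigma_w2, sigma_v2 = 1e-2, 4.0                    # process / measurement noise (assumed)

def kalman_step(x_est, P_est, y_meas):
    """One predict/update cycle; returns predicted state, updated state and covariance."""
    x_pred = A @ x_est                                            # step 1
    P_pred = A @ P_est @ A.T + sigma_w2 * np.eye(3)               # step 2
    S = C @ P_pred @ C.T + sigma_v2
    K = P_pred @ C.T @ np.linalg.inv(S)                           # step 3
    x_new = x_pred + K @ (np.atleast_1d(y_meas) - C @ x_pred)     # step 4
    P_new = (np.eye(3) - K @ C) @ P_pred                          # step 5
    return x_pred, x_new, P_new

# x, P = np.zeros(3), np.eye(3)
# for y in eye_x_positions:                       # measured eye x-coordinate per frame
#     x_pred, x, P = kalman_step(x, P, y)         # x_pred[0] is the predicted next position
```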
In the present invention, a bandpass filter is mounted on the imaging lens of the image pickup device, and the center frequency of the bandpass filter is equal or close to the center frequency of the active light source.
The embodiments of the present invention do not limit the invention; simple modifications or equivalent substitutions based on the principle of the invention do not exceed the scope of protection of the invention.

Claims (7)

1. A human eye location method based on active light, characterized in that an active light generating device is used to project light onto the face, and an image pickup device provided in the video device captures bright-pupil and dark-pupil field images; the bright pupil effect caused by the active light projection is used to obtain the candidate region for eye location by differencing the two field images and applying image filtering; face location and eye location methods are then used to complete the eye location.
2. the method for the human eye based on active light according to claim 1 location, is characterized in that according to the candidate region of human eye location, by orienting human face region based on method knowledge, that locate based on face feature, template matches or based on presentation; According to the geometric properties of face, by orienting human eye area based on method knowledge, that locate based on human eye feature, template matches or based on presentation.
3. the method for the human eye based on active light according to claim 1 and 2 location, it is characterized in that the method for optimizing human eye location adopts track algorithm to follow the tracks of the face of orienting or position of human eye, or adopt template matches and the performance of calculating relevant method raising human eye location; Or adopt filtering method to improve the performance of face or human eye location, select the several and basic skills of the human eye location based on active light in above-mentioned three class methods to be used in combination.
4. the method for the human eye based on active light according to claim 1 and 2 location, is characterized in that adopting filtering algorithm to improve the accuracy of face location at face positioning stage; Use track algorithm to accelerate the speed of subsequent frame image human eye location at face or human eye positioning stage; Use the method for template matches, correlation computations to improve accuracy and the stability of human eye location at human eye positioning stage.
5. the method for the human eye based on active light according to claim 1 location, is characterized in that video-unit is for 3 d display device, is placed in a certain position of 3 d display device.
6. the method for the human eye based on active light according to claim 1 location, is characterized in that mounting strap pass filter on the imaging lens of image-pickup device, and the centre frequency of bandpass filter equates with the centre frequency of active radiant or be close.
7. the method for the human eye based on active light according to claim 1 location, is characterized in that adopting the means of Digital Image Processing, and the image that image-pickup device is obtained is analyzed, and further determines position of human eye, and step is:
1) obtaining of human eye candidate region: human eye candidate region is made up of two parts, a part is followed the tracks of and come by the position of human eye navigating in previous frame image; Part II carries out using threshold value extraction to obtain after difference to bright pupil, dark two kinds of images of pupil; Wherein Part II uses filtering algorithm to filter the pseudo-candidate region of causing due to edge or motion etc.;
2) face location and filtering: the AdaBoost method of the method for face location based on feature, training obtains some feature composition cascade of strong classifiers; According to arranging of the position of human eye candidate region and human face, according to different scale, possible human face region is detected successively, adopt the method for threshold value comparison to determine whether this region is human face region; Orient after human face region, adopt Kalman filtering algorithm to optimize the position of human face region;
3) human eye location and optimization: support vector machine (SVM) method of the method for human eye location based on presentation, first choose some class Haar features as feature space, utilize the supported vector of method and the respective weights coefficient of grid search and cross validation weighted balance error rate and support vector machine training, thereby depict classification lineoid; Then utilize this lineoid to detect human eye area to be detected, determine whether this region is human eye area; Orient after human eye area, adopt the method for template matches and interframe correlation computations to optimize the position of human eye area; Conventional Related Computational Methods is lms algorithm;
4) position of human eye is followed the tracks of: the position of human eye arriving according to present frame and front some frame alignment, adopt correlation tracking algorithm, and prediction obtains the possible position of human eye in next frame; Human eye detection is directly carried out in these positions, no longer carries out face detection, thus saving time for the whole human eye position fixing process of next frame; Conventional track algorithm has Kalman prediction algorithm, Mean-Shift prediction algorithm.
CN201410231543.4A 2014-05-28 2014-05-28 The method of the human eye positioning based on active light Active CN104036238B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410231543.4A CN104036238B (en) 2014-05-28 2014-05-28 The method of the human eye positioning based on active light

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410231543.4A CN104036238B (en) 2014-05-28 2014-05-28 The method of the human eye positioning based on active light

Publications (2)

Publication Number Publication Date
CN104036238A true CN104036238A (en) 2014-09-10
CN104036238B CN104036238B (en) 2017-07-07

Family

ID=51467004

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410231543.4A Active CN104036238B (en) 2014-05-28 2014-05-28 The method of the human eye positioning based on active light

Country Status (1)

Country Link
CN (1) CN104036238B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127145A (en) * 2016-06-21 2016-11-16 重庆理工大学 Pupil diameter and tracking
CN106846399A (en) * 2017-01-16 2017-06-13 浙江大学 A kind of method and device of the vision center of gravity for obtaining image
CN107085703A (en) * 2017-03-07 2017-08-22 中山大学 Merge face detection and the automobile passenger method of counting of tracking
CN109565549A (en) * 2016-08-23 2019-04-02 罗伯特·博世有限公司 Method and apparatus for running interior trim video camera
CN111714080A (en) * 2020-06-30 2020-09-29 重庆大学 Disease classification system based on eye movement information

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1877599A (en) * 2006-06-29 2006-12-13 南京大学 Face setting method based on structured light
US20080235165A1 (en) * 2003-07-24 2008-09-25 Movellan Javier R Weak hypothesis generation apparatus and method, learning aparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial enpression recognition apparatus and method, and robot apparatus
CN102830797A (en) * 2012-07-26 2012-12-19 深圳先进技术研究院 Man-machine interaction method and system based on sight judgment
CN103605968A (en) * 2013-11-27 2014-02-26 南京大学 Pupil locating method based on mixed projection
CN103744978A (en) * 2014-01-14 2014-04-23 清华大学 Parameter optimization method for support vector machine based on grid search technology

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080235165A1 (en) * 2003-07-24 2008-09-25 Movellan Javier R Weak hypothesis generation apparatus and method, learning aparatus and method, detection apparatus and method, facial expression learning apparatus and method, facial enpression recognition apparatus and method, and robot apparatus
CN1877599A (en) * 2006-06-29 2006-12-13 南京大学 Face setting method based on structured light
CN102830797A (en) * 2012-07-26 2012-12-19 深圳先进技术研究院 Man-machine interaction method and system based on sight judgment
CN103605968A (en) * 2013-11-27 2014-02-26 南京大学 Pupil locating method based on mixed projection
CN103744978A (en) * 2014-01-14 2014-04-23 清华大学 Parameter optimization method for support vector machine based on grid search technology

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Zhou Jin (周瑾) et al., "A real-time human eye detection method for stereoscopic display", Computer Applications and Software (《计算机应用与软件》) *
Zheng Wei (郑威) et al., "Design and implementation of a human eye detection system based on DM642", Modern Electronics Technique (《现代电子技术》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106127145A (en) * 2016-06-21 2016-11-16 重庆理工大学 Pupil diameter and tracking
CN106127145B (en) * 2016-06-21 2019-05-14 重庆理工大学 Pupil diameter and tracking
CN109565549A (en) * 2016-08-23 2019-04-02 罗伯特·博世有限公司 Method and apparatus for running interior trim video camera
CN106846399A (en) * 2017-01-16 2017-06-13 浙江大学 A kind of method and device of the vision center of gravity for obtaining image
CN107085703A (en) * 2017-03-07 2017-08-22 中山大学 Merge face detection and the automobile passenger method of counting of tracking
CN111714080A (en) * 2020-06-30 2020-09-29 重庆大学 Disease classification system based on eye movement information

Also Published As

Publication number Publication date
CN104036238B (en) 2017-07-07

Similar Documents

Publication Publication Date Title
CN109670396B (en) Fall detection method for indoor old people
CN106127148B (en) A kind of escalator passenger's anomaly detection method based on machine vision
CN108205658A (en) Detection of obstacles early warning system based on the fusion of single binocular vision
WO2021139484A1 (en) Target tracking method and apparatus, electronic device, and storage medium
Li et al. A parallel and robust object tracking approach synthesizing adaptive Bayesian learning and improved incremental subspace learning
CN103514441B (en) Facial feature point locating tracking method based on mobile platform
US8837773B2 (en) Apparatus which detects moving object from image and method thereof
CN104050488B (en) A kind of gesture identification method of the Kalman filter model based on switching
CN104036238A (en) Human eye location method based on active light
CN103870843B (en) Head posture estimation method based on multi-feature-point set active shape model (ASM)
CN107316333B (en) A method of it automatically generates and day overflows portrait
WO2009123354A1 (en) Method, apparatus, and program for detecting object
CN106203375A (en) A kind of based on face in facial image with the pupil positioning method of human eye detection
CN103443804A (en) Method of facial landmark detection
CN108182447A (en) A kind of adaptive particle filter method for tracking target based on deep learning
García et al. Adaptive multi-cue 3D tracking of arbitrary objects
CN104036237A (en) Detection method of rotating human face based on online prediction
CN109086724A (en) A kind of method for detecting human face and storage medium of acceleration
CN103413312B (en) Based on the video target tracking method of neighbourhood's constituent analysis and Scale-space theory
CN101833654A (en) Sparse representation face identification method based on constrained sampling
CN106599785A (en) Method and device for building human body 3D feature identity information database
CN105741326B (en) A kind of method for tracking target of the video sequence based on Cluster-Fusion
CN105930808A (en) Moving object tracking method based on vector boosting template updating
CN105701486B (en) A method of it realizing face information analysis in video camera and extracts
CN103198491A (en) Indoor visual positioning method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant