CN105095879A - Eye state identification method based on feature fusion

Info

Publication number
CN105095879A
CN105095879A
Authority
CN
China
Prior art keywords
eye
vector
feature
image
state identification
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510511165.XA
Other languages
Chinese (zh)
Inventor
秦华标
仝锡民
廖才满
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
South China University of Technology SCUT
Original Assignee
South China University of Technology SCUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by South China University of Technology SCUT filed Critical South China University of Technology SCUT
Priority to CN201510511165.XA
Publication of CN105095879A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Ophthalmology & Optometry (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an eye state identification method based on feature fusion. It belongs to the field of image processing and pattern recognition and is suitable for facial expression and mental state recognition, driver fatigue detection, and gaze tracking. The method first performs face localization and eye localization on an input image to obtain the eye region, and then judges the eye state using an eye state recognition algorithm based on pseudo-Zernike moment and Gabor feature fusion (PZ_GAB). The method is highly robust under head movement and complex illumination variation, effectively counters the resulting loss of eye state identification accuracy, has low computational complexity, and meets real-time requirements.

Description

Eye state identification method based on feature fusion
Technical field
The present invention belongs to the field of image processing and pattern recognition, and relates to an eye state identification method based on feature fusion.
Background Art
Eye state recognition plays an important role in many fields such as human-computer interaction, driver fatigue detection, and gaze tracking, and the accuracy of its discrimination directly affects the performance of these systems. In practical applications, to ensure that eye state discrimination can be used around the clock, the most common approach at present is image acquisition combining an active infrared light source with an optical filter. However, under infrared illumination with a filter, the eye image differs considerably from the eye image under normal illumination; in addition, in practice large head rotations deform the eye image considerably, causing previously extracted features to fail. Therefore, selecting suitable eye features and designing a high-performance eye state classifier are the keys to improving eye state recognition.
According to the features and models they employ, eye state discrimination methods can be divided into methods based on appearance features and methods based on statistical learning:
Methods based on appearance features use certain intrinsic appearance characteristics of the eye for recognition, such as the shape of the iris, the curvature of the eyelid, and the gray-level distribution of the eye. These intrinsic appearance characteristics change under the influence of the external environment, so under uncontrolled real-world conditions such methods often become unreliable.
Methods based on statistical learning usually require a large number of training samples covering different variation patterns, so that the learned model can have sufficiently good generalization ability; when handling uncertain sample data, they therefore show better stability and robustness than appearance-based recognition methods.
In statistical-learning methods, the commonly used features are illumination-invariant features and invariant-moment features. An illumination-invariant feature is one that remains stable under different lighting environments and can overcome the influence of illumination variation during eye state recognition; common illumination-invariant features include Gabor features, HOG features, and LBP features. Invariant moments are image features invariant to translation, rotation, and scale, and can overcome the influence of head rotation and distance during eye state recognition; common moment features include Hu moments, Zernike moments, and wavelet moments.
In practical application scenarios, eye state recognition is easily affected by head movement and complex illumination, and no single feature can meet the robustness requirements of practical applications. It is therefore necessary, on the basis of extracting and analyzing multiple features of the eye image, to propose a feature-fusion eye state recognition algorithm.
Summary of the invention
The object of the invention is to propose an eye state identification method based on feature fusion. The method better solves the problem of identifying the eye state under head movement and complex illumination changes, and can output the eye state accurately in real time, thereby improving the robustness of the corresponding intelligent systems. The present invention is achieved through the following technical solutions.
The eye state identification method based on feature fusion comprises: (1) face region and eye localization; (2) extraction of the pseudo-Zernike moment feature vector and the Gabor feature vector; (3) fusion of the pseudo-Zernike moment feature vector and the Gabor feature vector; (4) training of the eye state recognition model; (5) discrimination of the eye state of new input.
In said method, step (2) comprises:
1.1): Collect an appropriate number of eye images with the image acquisition system, including open-eye and closed-eye images, and normalize them to 64 × 48 pixels as training sample images.
1.2): Extract from each training sample image the invariant-moment feature, namely the pseudo-Zernike moment feature vector, and the illumination-invariant feature, namely the Gabor feature vector; reduce the dimensionality of the Gabor feature vector with PCA (Principal Component Analysis), and apply LDA (Linear Discriminant Analysis) to the dimension-reduced Gabor feature vector to rebuild the within-class and between-class scatter matrices, thereby obtaining the dimension-reduced and rebuilt Gabor feature vector.
1.3): Normalize the pseudo-Zernike moment feature vector and the dimension-reduced, rebuilt Gabor feature vector, pad them to equal dimension, and then fuse them in parallel to obtain the PZ_GAB fused feature vector.
In said method, step (4) comprises:
2.1): Input the PZ_GAB fused feature vectors of all training sample images into an SVM (Support Vector Machine) model based on the radial basis kernel function for training, obtaining the eye state recognition model.
In said method, step (3) comprises: fusing the pseudo-Zernike moment feature vector of the eye sample image obtained in step (2) with the dimension-reduced and rebuilt Gabor feature vector, thereby obtaining the fused eye feature vector, the PZ_GAB feature.
In said method, step (5) comprises:
3.1): Acquire a frame of image with the image acquisition system and perform face and eye localization on it; if localization succeeds, obtain the eye region image and perform step 3.2), otherwise skip this frame and continue with the next frame.
3.2): Normalize the eye region obtained in step 3.1) to 64 × 48 pixels; extract the pseudo-Zernike moment feature vector and the Gabor feature vector; reduce the dimensionality of the Gabor feature vector with PCA, and apply LDA (Linear Discriminant Analysis) to the dimension-reduced Gabor feature vector to rebuild the within-class and between-class scatter matrices. Fuse the pseudo-Zernike moment feature vector with the dimension-reduced, rebuilt Gabor feature vector in parallel to obtain the PZ_GAB fused feature vector.
3.3): Input the fused feature vector obtained in step 3.2) into the eye state recognition model trained in step (4) for eye state recognition, and finally output the eye state of this frame.
Compared with the prior art, the present invention has the following advantages and technical effects:
1. Aiming at the problem that eye state recognition is easily affected by complex illumination changes and head movement, which lowers recognition accuracy, the present invention proposes a new feature fusion algorithm combining illumination-invariant features with invariant-moment features. The present invention can overcome the influence of factors such as eye opening and closing, glasses reflection, and head rotation, robustly locate the eye position, and accurately output the eye state information, thereby improving the robustness of the corresponding intelligent system;
2. After feature dimensionality reduction, the method has low computational complexity and can meet the real-time requirements of the system.
Brief Description of the Drawings
Fig. 1 is the overall flowchart of the eye state identification method based on feature fusion of the present invention.
Fig. 2 is the flowchart of the feature-fusion eye state recognition.
Detailed Description of the Embodiments
The specific embodiment of the present invention is further described below in conjunction with the accompanying drawings.
With reference to Fig. 1, an embodiment of the eye state identification method based on feature fusion of the present invention proceeds as follows:
Step 1: Train the eye cascade detector based on Haar features. Collect an appropriate number of eye images with the image acquisition system, including open and closed eyes; use images containing the eyebrows and eyes as positive samples and non-eye images as negative samples, and train the Haar-feature eye cascade detector with the Adaboost algorithm, as sketched below.
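For illustration only, the following Python/OpenCV sketch shows how such a trained cascade might be applied at detection time; the training itself would be done offline (e.g. with OpenCV's opencv_traincascade tool), and the cascade file name here is a hypothetical placeholder:

```python
import cv2

# Hypothetical path to the Haar eye cascade trained as described in Step 1.
eye_cascade = cv2.CascadeClassifier("haar_eye_cascade.xml")

def detect_eyes(gray_face):
    """Return bounding boxes (x, y, w, h) of eye candidates in a gray image."""
    return eye_cascade.detectMultiScale(
        gray_face,
        scaleFactor=1.1,   # image-pyramid scaling step between detection passes
        minNeighbors=5,    # neighbour count required to keep a detection
        minSize=(24, 18),  # ignore implausibly small candidate regions
    )
```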
Step 2: Train the eye state recognition model. With reference to Fig. 2, the concrete steps are as follows:
2.1): Collect an appropriate number of eye images with the same image acquisition system as in Step 1, including open and closed eyes, and normalize them to 64 × 48 pixels as training sample images.
2.2): Extract the pseudo-Zernike moment feature of the sample eye image; the concrete steps are as follows:
A): Before extracting the pseudo-Zernike moment feature of the eye image, the present invention first binarizes the eye image with a fast and effective adaptive thresholding algorithm: the eye image is traversed to obtain its maximum and minimum pixel values, and a value between them is chosen as the binarization threshold. The threshold is selected as a percentage of the pixel-value range; in the present invention the empirical value of 0.4 of the range is used.
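A minimal sketch of this thresholding rule, assuming that "the empirical value 0.4" means the threshold sits 40% of the way from the minimum to the maximum pixel value:

```python
import numpy as np

def binarize_eye(gray, fraction=0.4):
    """Adaptive binarization: threshold at a fixed fraction of the image's
    own pixel-value range (0.4 is the empirical value from the text)."""
    lo, hi = float(gray.min()), float(gray.max())
    thresh = lo + fraction * (hi - lo)
    return (gray >= thresh).astype(np.uint8) * 255
```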
B): Compute the pseudo-Zernike moment features of the eye image; in the present invention the moments of orders 0 to 9 are taken, yielding a 55-dimensional complex vector.
The pseudo-Zernike moments are orthogonal complex moments, obtained according to formula (1):

A_{nm} = \frac{n+1}{\pi} \int_0^{2\pi} \int_0^1 f(\rho,\theta)\, V_{nm}^{*}(\rho,\theta)\, \rho\, d\rho\, d\theta \qquad (1)

The moment A_{nm} is the projection of the image function f(ρ, θ) onto the orthogonal polynomial V_{nm}(ρ, θ), which is orthogonal within the unit circle and has the expression given in formula (2):

V_{nm}(\rho,\theta) = R_{nm}(\rho)\, e^{jm\theta} \qquad (2)

R_{nm}(\rho) = \sum_{s=0}^{n-|m|} D_{n,|m|,s}\, \rho^{\,n-s} \qquad (3)

D_{n,|m|,s} = (-1)^{s}\, \frac{(2n+1-s)!}{s!\,(n-|m|-s)!\,(n+|m|+1-s)!} \qquad (4)

where n is zero or a positive integer and m is an integer satisfying |m| ≤ n (unlike ordinary Zernike moments, pseudo-Zernike moments impose no parity condition on n − |m|, which is what yields 55 moments for orders 0 to 9); ρ is the length of the vector from the origin to the point (x, y); θ is the angle between the vector ρ and the x axis; and R_{nm}(ρ) is the radial polynomial.
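For illustration only, a compact numerical sketch of formulas (1) to (4) in Python; the mapping of pixel centres into the unit disc is a common convention assumed here, as the text does not fix it:

```python
import numpy as np
from math import factorial

def pseudo_zernike_moments(img, max_order=9):
    """Pseudo-Zernike moments A_nm for n = 0..max_order, m = 0..n,
    following formulas (1)-(4); 55 complex values for max_order = 9."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    scale = max(w, h)
    xn = (2 * x - w + 1) / scale          # map pixel centres into [-1, 1]
    yn = (2 * y - h + 1) / scale
    rho = np.hypot(xn, yn)
    theta = np.arctan2(yn, xn)
    f = img.astype(float) * (rho <= 1.0)  # keep only pixels inside the disc
    dA = (2.0 / scale) ** 2               # area of one pixel after mapping
    moments = []
    for n in range(max_order + 1):
        for m in range(n + 1):
            # radial polynomial R_nm(rho), formulas (3)-(4)
            R = np.zeros_like(rho)
            for s in range(n - m + 1):
                D = ((-1) ** s * factorial(2 * n + 1 - s)
                     / (factorial(s) * factorial(n - m - s)
                        * factorial(n + m + 1 - s)))
                R += D * rho ** (n - s)
            V_conj = R * np.exp(-1j * m * theta)       # V*_nm, formula (2)
            moments.append((n + 1) / np.pi * np.sum(f * V_conj) * dA)
    return np.array(moments)
```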
2.3): Extract the Gabor feature of the sample image. The two-dimensional Gabor wavelet transform is an important tool for signal analysis and processing in the time-frequency domain, and its transform coefficients have good visual characteristics and a biological background. The concrete steps are as follows:
A): The two-dimensional Gabor wavelet kernel is defined by formula (5):

\psi(k, z) = \frac{\lVert k \rVert^{2}}{\sigma^{2}}\, e^{-\frac{\lVert k \rVert^{2} \lVert z \rVert^{2}}{2\sigma^{2}}} \left( e^{ikz} - e^{-\frac{\sigma^{2}}{2}} \right) \qquad (5)

where σ is a constant related to the wavelet frequency bandwidth, z = (x, y) is the spatial position coordinate, and k determines the direction and scale of the Gabor kernel. The algorithm of the present invention samples 8 directions and 5 scales; the k for a given direction and scale is given by formula (6):

k_{u,v} = k_v\, e^{i\varphi_{\mu}} \qquad (6)

where k_v = k_{max} / f^{v} is the sampling scale, v ∈ {0, 1, …, 4} is the scale label, φ_μ = πμ/8 is the sampling direction, and μ ∈ {0, 1, …, 7} is the direction label; k_{max} is the maximum frequency and f is the kernel spacing factor in the frequency domain. In the present invention the parameters k_{max} = π/2 and σ = 2π give good wavelet representation and recognition performance.
B): The Gabor feature of the sample image is obtained by convolving the sample image I with the Gabor kernel, as in formula (7):

J_k(z) = I(z) * \psi(k, z) \qquad (7)

Let the amplitude and phase of J_k(z) be A_k and φ_k respectively; combining the J_k(z) over the different scales and directions composes the Gabor feature vector of the image at position z.
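The following sketch builds the 5 × 8 kernel bank of formulas (5)-(6) and the responses of formula (7); the spacing factor f = √2, the kernel window size, and the use of magnitude responses are assumptions, since the text leaves them open:

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(v, mu, size=31, k_max=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi):
    """2-D Gabor wavelet kernel of formulas (5)-(6) at scale v, direction mu."""
    k = (k_max / f ** v) * np.exp(1j * np.pi * mu / 8)   # k_{u,v}, formula (6)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    k2 = abs(k) ** 2
    z2 = x ** 2 + y ** 2
    envelope = (k2 / sigma ** 2) * np.exp(-k2 * z2 / (2 * sigma ** 2))
    # oscillating carrier minus the DC-compensation term of formula (5)
    carrier = np.exp(1j * (k.real * x + k.imag * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

def gabor_features(img):
    """Magnitude responses J_k over 5 scales x 8 directions, formula (7),
    concatenated into one long real-valued feature vector."""
    responses = [np.abs(fftconvolve(img, gabor_kernel(v, mu), mode="same"))
                 for v in range(5) for mu in range(8)]
    return np.concatenate([r.ravel() for r in responses])
```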
2.4): Use the PCA method to reduce the dimensionality of the obtained Gabor feature vector, obtaining a low-dimensional feature subspace; then, on the low-dimensional features obtained from PCA, use LDA to rebuild the within-class and between-class scatter matrices, thereby obtaining the dimension-reduced and rebuilt Gabor feature vector.
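A sketch of this two-stage reduction with scikit-learn; the placeholder data and the retained-variance threshold are illustrative choices, not values from the text:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# Placeholder training data: rows are Gabor feature vectors, labels 0/1
# (closed/open eye). In the real pipeline X comes from gabor_features().
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 1024))
y = rng.integers(0, 2, size=200)

pca = PCA(n_components=0.95)        # keep 95% of the variance (illustrative)
X_pca = pca.fit_transform(X)

# LDA rebuilds within-class and between-class scatter in the PCA subspace;
# with two classes it projects onto a single discriminant direction.
lda = LinearDiscriminantAnalysis()
X_lda = lda.fit_transform(X_pca, y)
```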
2.5): Normalize the Gabor feature vector processed by PCA and LDA together with the pseudo-Zernike moment feature vector and pad them to equal dimension; then fuse them in parallel to obtain the fused feature vector, as in formula (8):

\gamma = \alpha + i \cdot \beta \qquad (8)

where A and B are respectively the pseudo-Zernike moment feature space and the Gabor feature space over the sample space; for any sample Γ ∈ Ω, the pseudo-Zernike moment feature vector is α ∈ A and the Gabor feature vector is β ∈ B. Parallel fusion combines these two feature vectors into the complex vector γ.
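A sketch of the normalization, zero-padding, and parallel fusion (as also recited in claim 6); treating both inputs as real-valued vectors, e.g. pseudo-Zernike moment magnitudes, is an assumption the text leaves implicit:

```python
import numpy as np

def minmax(v):
    """Min-max normalization so the two feature types are compatible."""
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return (v - v.min()) / span if span > 0 else np.zeros_like(v)

def parallel_fuse(alpha, beta):
    """Formula (8): gamma = alpha + i*beta after normalization and after
    zero-padding the shorter vector up to the longer one's dimension."""
    a, b = minmax(alpha), minmax(beta)
    d = max(a.size, b.size)
    a = np.pad(a, (0, d - a.size))   # append zeros to the lower-dim vector
    b = np.pad(b, (0, d - b.size))
    return a + 1j * b                # complex PZ_GAB fused feature vector
```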
2.6): Input the PZ_GAB feature vectors of all sample images into an SVM model based on the radial basis kernel function for training, obtaining the eye state recognition model.
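A training sketch with scikit-learn; since standard SVM implementations expect real inputs, laying the complex fused vector out as [real part, imaginary part] is one workable convention assumed here, not something the text prescribes:

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder fused features: complex PZ_GAB vectors as produced by
# parallel_fuse() above; labels: 0 = closed eye, 1 = open eye.
rng = np.random.default_rng(0)
gammas = rng.random((100, 60)) + 1j * rng.random((100, 60))
labels = rng.integers(0, 2, size=100)

# Split the complex vectors into real and imaginary halves (assumed layout).
X_train = np.concatenate([gammas.real, gammas.imag], axis=1)
clf = SVC(kernel="rbf", C=1.0, gamma="scale")   # radial basis kernel function
clf.fit(X_train, labels)
```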
Step 3: Acquire a frame of image and locate the face region in it; if localization succeeds, obtain the face image and perform Step 4, otherwise skip this frame and continue with the next frame.
Step 4: Within the obtained face region (the eye region of interest), use the Haar-feature eye cascade detector to locate the eye region; if localization succeeds, the exact eye position can be output, yielding the eye image.
Step 5: With reference to Fig. 2, normalize the eye region obtained in Step 4 to 64 × 48 pixels; extract the pseudo-Zernike moment feature vector and the Gabor feature vector according to steps 2.2)-2.4); reduce the dimensionality of the Gabor feature vector with PCA and rebuild the within-class and between-class scatter matrices with LDA; fuse the pseudo-Zernike moment feature vector with the dimension-reduced, rebuilt Gabor feature vector in parallel to obtain the PZ_GAB fused feature vector; input it into the eye state recognition model trained in Step 2 for eye state recognition, and finally output the eye state of this frame.
Step 6: Repeat Steps 3 to 5, outputting eye state recognition results in real time; a schematic frame loop is sketched below.
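Putting Steps 3 to 6 together, a schematic Python loop might look as follows; extract_pz_gab() stands in for the feature pipeline of steps 2.2)-2.5), detect_eyes() and clf are the detector and model sketched above, and the webcam source is an assumption:

```python
import cv2
import numpy as np

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
cap = cv2.VideoCapture(0)                            # assumed webcam source

while True:
    ok, frame = cap.read()
    if not ok:
        break                                        # end of stream
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, 1.1, 5)        # Step 3
    for (x, y, w, h) in faces:
        for (ex, ey, ew, eh) in detect_eyes(gray[y:y + h, x:x + w]):  # Step 4
            eye = cv2.resize(
                gray[y + ey:y + ey + eh, x + ex:x + ex + ew], (64, 48))
            gamma = extract_pz_gab(eye)              # steps 2.2)-2.5), Step 5
            feat = np.concatenate([gamma.real, gamma.imag])
            state = clf.predict([feat])[0]
            print("eye state:", "open" if state == 1 else "closed")
```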

Claims (6)

1. An eye state identification method based on feature fusion, characterized by comprising the following steps:
(1) face region and eye localization;
(2) extraction of the pseudo-Zernike moment feature vector and the Gabor feature vector;
(3) fusion of the pseudo-Zernike moment feature vector and the Gabor feature vector;
(4) training of the eye state recognition model;
(5) discrimination of the eye state of new input.
2. The eye state identification method based on feature fusion according to claim 1, characterized in that step (2) comprises: normalizing the eye region image obtained in step (1), then performing adaptive-threshold binarization on the normalized eye region sample image to obtain a binarized eye image; computing the pseudo-Zernike moment feature of this binary image to obtain the pseudo-Zernike moment feature vector of the eye image.
3. The eye state identification method based on feature fusion according to claim 2, characterized in that the adaptive binarization step comprises: traversing the normalized eye region sample image to obtain the maximum and minimum pixel values in the image, then taking a median pixel value between the maximum and minimum pixel values as the adaptive threshold.
4. The eye state identification method based on feature fusion according to claim 1, characterized in that step (2) further comprises: convolving the eye region image obtained by the eye localization in step (1) with the Gabor kernel to obtain the Gabor feature of the eye sample image; using the PCA dimension-reduction method on the extracted Gabor feature vector of the eye sample image to obtain the best low-dimensional expressive features, then performing LDA reconstruction on the features in the PCA feature subspace to obtain the dimension-reduced and rebuilt Gabor feature vector.
5. The eye state identification method based on feature fusion according to claim 1, characterized in that step (3) comprises: fusing the pseudo-Zernike moment feature vector of the eye sample image obtained in step (2) with the dimension-reduced and rebuilt Gabor feature vector, thereby obtaining the fused eye feature vector, the PZ_GAB feature.
6. The eye state identification method based on feature fusion according to claim 5, characterized in that the feature fusion step comprises: first applying a min-max normalization scheme to each feature value to compute the normalized pseudo-Zernike moment feature vector and Gabor feature vector, so that the feature values are compatible; then padding the two feature vectors to equal dimension, that is, taking the higher of the two feature dimensions as the standard and appending zeros to the lower-dimensional vector until the dimensions agree; finally performing parallel fusion on the normalized, dimension-aligned pseudo-Zernike moment feature vector and Gabor feature vector, obtaining the fused complex PZ_GAB feature vector.
CN201510511165.XA 2015-08-19 2015-08-19 Eye state identification method based on feature fusion Pending CN105095879A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510511165.XA CN105095879A (en) 2015-08-19 2015-08-19 Eye state identification method based on feature fusion


Publications (1)

Publication Number Publication Date
CN105095879A (en) 2015-11-25

Family

ID=54576267

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510511165.XA Pending CN105095879A (en) 2015-08-19 2015-08-19 Eye state identification method based on feature fusion

Country Status (1)

Country Link
CN (1) CN105095879A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080037835A1 (en) * 2006-06-02 2008-02-14 Korea Institute Of Science And Technology Iris recognition system and method using multifocus image sequence
CN103336973A (en) * 2013-06-19 2013-10-02 华南理工大学 Multi-feature decision fusion eye state recognition method
CN104091147A (en) * 2014-06-11 2014-10-08 华南理工大学 Near infrared eye positioning and eye state identification method
CN104517104A (en) * 2015-01-09 2015-04-15 苏州科达科技股份有限公司 Face recognition method and face recognition system based on monitoring scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
X. Fu et al.: "Content based image retrieval using Gabor-Zernike features", The 18th International Conference on Pattern Recognition *
仝锡民: "Research on eye localization and eye state recognition algorithms when glasses are worn", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106897659A (en) * 2015-12-18 2017-06-27 腾讯科技(深圳)有限公司 The recognition methods of blink motion and device
CN106897659B (en) * 2015-12-18 2019-05-24 腾讯科技(深圳)有限公司 The recognition methods of blink movement and device
CN106897363A (en) * 2017-01-11 2017-06-27 同济大学 The text for moving tracking based on eye recommends method
CN106897363B (en) * 2017-01-11 2020-06-12 同济大学 Text recommendation method based on eye movement tracking
CN108021911A (en) * 2018-01-04 2018-05-11 重庆公共运输职业学院 A kind of driver tired driving monitoring method
CN108460420A (en) * 2018-03-13 2018-08-28 江苏实达迪美数据处理有限公司 A method of classify to certificate image
CN110119695A (en) * 2019-04-25 2019-08-13 江苏大学 A kind of iris activity test method based on Fusion Features and machine learning
CN110866508A (en) * 2019-11-20 2020-03-06 Oppo广东移动通信有限公司 Method, device, terminal and storage medium for recognizing form of target object


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20151125)