CN105631423A - Method for identifying human eye state by use of image information - Google Patents

Method for identifying human eye state by use of image information

Info

Publication number
CN105631423A
Authority
CN
China
Prior art keywords
human eye
eye image
eye
image
local binary
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201511004081.3A
Other languages
Chinese (zh)
Inventor
卢磊
文正
胡燕彬
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Reconova Information Technology Co Ltd
Original Assignee
Xiamen Reconova Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Reconova Information Technology Co Ltd filed Critical Xiamen Reconova Information Technology Co Ltd
Priority to CN201511004081.3A priority Critical patent/CN105631423A/en
Publication of CN105631423A publication Critical patent/CN105631423A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2411 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches, based on the proximity to a decision surface, e.g. support vector machines
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/197 - Matching; Classification

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Ophthalmology & Optometry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method for identifying human-eye state from image information. Eye images are described by the uniform local binary pattern (LBP) of the image, and the vertical-projection features are combined with the LBP features as the descriptor of the eye-image state; a support vector machine with a sigmoid kernel function serves as the eye-image state classifier and computes the final eye-image state, which improves the accuracy of eye-state recognition. Because uniform LBP feature values and vertical-projection features are used to describe the eye images, the method shows good robustness when tested in scenes with image rotation and non-uniform illumination.

Description

Method for identifying human-eye state using image information
Technical field
The present invention relates to a method for identifying human-eye state using image information.
Background technology
Recognition of human-eye state has wide application, for example judging eye state to detect driver fatigue, or using eye state for human-computer interaction. At present there are three main approaches at home and abroad: methods based on gray-scale projection, methods based on the gray-level histogram, and matching methods based on geometric-parameter features.
Because the eyeball, iris, and skin of the human eye differ in gray level, the gray levels of the eye region differ noticeably between an open and a closed eye. Based on this principle, gray-projection methods first convert the eye image to gray scale, then normalize its size and equalize its gray levels, and then compute the horizontal and vertical gray-level projections of the image. The projections are used as training features, finally yielding an eye-state classifier based on gray projection. Since the method derives the eye-state features from the horizontal and vertical projections, it depends strongly on eye location, angle, and illumination. Under unstable lighting the peaks of the projection curves vary greatly, and when the eye is located inaccurately or is tilted, the projection curves shift to varying degrees, seriously degrading the classifier and hence the recognition of eye state.
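The projection computation described above can be sketched in a few lines. This is an illustrative numpy sketch, not the implementation of any cited system; it assumes `eye` is an already-cropped grayscale eye image:

```python
import numpy as np

def gray_projections(eye: np.ndarray):
    """Horizontal and vertical gray-level projections of a grayscale eye image.

    eye: 2-D uint8 array (rows x cols). Returns (horizontal, vertical)
    projection curves, i.e. per-row and per-column mean gray values.
    """
    eye = eye.astype(np.float64)
    horizontal = eye.mean(axis=1)  # one value per row
    vertical = eye.mean(axis=0)    # one value per column
    return horizontal, vertical

# A closed eye (mostly skin, brighter) and an open eye (dark pupil band)
# produce visibly different projection curves, which a classifier can use.
```

As the text notes, these curves shift when the eye crop is offset or tilted, which is exactly why the method is sensitive to localization and illumination.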
Because the proportion of each gray level within the eye region differs between an open and a closed eye, histogram-based methods judge eye state from the share of each gray level. The procedure is as follows: first convert the eye image to gray scale, normalize it, and equalize its gray levels; then compute the gray-level histogram of the image; then merge the 256 gray levels of the histogram into 64 gray levels; and finally train on the 64-level histogram features to obtain a histogram-based eye-state classifier. Because eye state is strongly correlated with image gray level, the histogram becomes very noisy under large illumination changes and the recognition effect is poor.
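The 256-to-64 gray-level merge used by the histogram method can be sketched as follows. This is a minimal numpy sketch; the function name and the equal-width merge of four adjacent gray levels per bin are assumptions, not details stated by the patent:

```python
import numpy as np

def merged_histogram(eye: np.ndarray, bins: int = 64) -> np.ndarray:
    """Gray-level histogram of a grayscale image with its 256 levels merged
    into `bins` levels, as in the histogram-based method described above."""
    hist, _ = np.histogram(eye, bins=256, range=(0, 256))
    # Merge groups of 256 // bins adjacent gray levels into one bin each.
    return hist.reshape(bins, 256 // bins).sum(axis=1)
```

The resulting 64-dimensional vector is what the background method trains its classifier on.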
When the eye opens and closes, the curvature of the eyelid differs, and the pupil cannot be seen when the eye is closed. Eye state can therefore be judged from the curvature of the edge contour of the eye region in the image. Methods based on geometric-parameter features obtain the parameters of the contour curve by curve fitting and judge eye state from the curvature of the fitted curve. The procedure is as follows: first convert the eye image to gray scale; then detect edges with the Canny operator; then apply the Hough transform to the edge image and, combining the corner information from edge detection, compute parameters such as the pupil radius and the major and minor axes of the eyelid ellipse to determine the eyelid curvature. Using an empirical formula, the open or closed state of the eye is determined from whether a pupil is present and from the eyelid curvature, as in Fig. 1. Geometric-parameter methods rely on edge features and, compared with the previous two approaches, are more robust to illumination. However, the curve fitting drifts when the eye image is tilted; moreover, the precision of the eyelid curve fitting depends heavily on how accurately the left and right eye corners are detected, and since corner detection itself still has many technical difficulties, the cumulative error it introduces into the system also lowers the final eye-state recognition rate.
Summary of the invention
It is an object of the invention to provide a method for identifying human-eye state from image information that can recognize eye state in complex scenes, improves the accuracy of eye-state recognition, and can promote application of eye-state recognition technology.
The method of the present invention for identifying human-eye state from image information comprises two steps: training the eye-state recognition classifier, and classifying with the eye-state recognition classifier:
Step 1, training the eye-state recognition classifier:
Step 11, acquiring the eye image
Run human-eye detection on the collected eye sample images to obtain the eye positions, and segment out the eye images;
Step 12, gray-level equalization of the eye image
Traverse the eye image and accumulate its gray-level histogram, then stretch the histogram so that the cumulative histogram increases linearly; record the correspondence between gray values before and after stretching, and convert the gray levels of the original eye image according to this correspondence;
Step 13, computing the vertical-projection feature of the gray image
First normalize the size of the equalized eye image to 32x16 pixels, then traverse the eye image and accumulate the average gray value of each row and each column to form a 32-dimensional vertical-projection feature vector;
Step 14, computing the uniform local binary pattern feature values of the eye image
For each pixel of the equalized eye image, compute its difference from each of the other pixels in its neighborhood, assigning 1 if the difference is greater than 0 and 0 if it is less than 0; with a 3x3 neighborhood this yields 8 binary results, which are packed into a new byte serving as the local binary pattern (LBP) feature value of that pixel. From the 256 possible LBP feature values, the 59 rotation-invariant ones are extracted as the uniform LBP feature values, and each eye image yields a feature vector of 1344 uniform LBP feature values altogether;
Step 15, training the eye-state recognition classifier
Concatenate the 32-dimensional vertical-projection feature vector with the 1344 uniform LBP feature values, so that each eye-image sample yields a 1376-dimensional feature vector; input this vector into the support-vector-machine eye-state classifier and train on the samples to obtain the eye-state classification model;
Step 2, classifying with the eye-state recognition classifier:
Run human-eye detection on the captured image to obtain the eye image, preprocess it through steps 11 to 14 to obtain its uniform-LBP feature vector and its vertical-projection feature vector, input these feature vectors into the eye-state classifier, load the eye-state classification model, classify the eye image, and judge the eye state.
When the described support vector machine trains the eye-state model, the sigmoid function is used as the kernel function. This sigmoid kernel is defined as K(x, y) = tanh(g·(x·y) + c),
where x denotes the input vector, y denotes the output label, g is the initial weight of the feature values, and c is the noise bias.
The present invention describes eye images with the uniform local binary pattern features of the image, combining the vertical-projection features with the LBP features as the descriptor of the eye-image state; it uses a support vector machine with a sigmoid kernel as the eye-image state classifier to compute the final eye-image state, improving the accuracy of eye-state recognition.
Because the invention describes the eye images with uniform LBP feature values and vertical-projection features, it is quite robust when tested in scenes with image rotation and uneven illumination. In a test of the invention on 2056 eye-image samples not included in the training set, judging whether the eye was open or closed, 1987 were identified correctly, a correct recognition rate of 96.6%, higher than the recognition rates of the other methods above.
Brief description of the drawings
Fig. 1 is a schematic diagram of the eyelid and pupil curve model used in the geometric-parameter feature method;
Fig. 2 is the eye-state training flowchart of the present invention;
Fig. 3 is the eye-state classification flowchart of the present invention.
The present invention is further described below in conjunction with the drawings and specific embodiments.
Embodiment
The method of the present invention for identifying human-eye state from image information comprises two steps: training the eye-state recognition classifier, and classifying with the eye-state recognition classifier:
Step 1, training the eye-state recognition classifier, as shown in Figure 2:
Step 11, acquiring the eye image
Run human-eye detection on the collected sample images to obtain the eye positions, and segment out the eye images;
Step 12, gray-level equalization of the eye image
Traverse the eye image and accumulate its gray-level histogram, then stretch the histogram so that the cumulative histogram increases linearly; record the correspondence between gray values before and after stretching, and convert the gray levels of the original eye image according to this correspondence;
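Step 12 amounts to classical histogram equalization: the stretch that makes the cumulative histogram grow linearly is the CDF-based gray-value remapping sketched below. This is an illustrative numpy sketch; the rounding scheme is an assumption:

```python
import numpy as np

def equalize_gray(eye: np.ndarray) -> np.ndarray:
    """Gray-level equalization as in step 12: stretch the histogram so the
    cumulative distribution grows approximately linearly, record the
    old->new gray-value mapping, and apply it to the image."""
    hist, _ = np.histogram(eye, bins=256, range=(0, 256))
    cdf = hist.cumsum()
    # Map each original gray value to a new one proportional to its CDF rank;
    # this is the recorded before/after gray-value correspondence.
    mapping = np.round(cdf / cdf[-1] * 255).astype(np.uint8)
    return mapping[eye]
```

The `mapping` array plays the role of the recorded gray-value correspondence: it is built once from the histogram and then applied to every pixel.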
Step 13, computing the vertical-projection feature of the gray image
First normalize the size of the equalized eye image to 32x16 pixels, then traverse the eye image and accumulate the average gray value of each row and each column to form a 32-dimensional vertical-projection feature vector;
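Step 13 can be sketched as follows. On a 32x16 image, only the 32 per-column means match the stated 32 dimensions of a "vertical projection", so this sketch assumes the normalized image is 16 rows by 32 columns and takes the column means; that orientation is an assumption, not stated explicitly in the patent:

```python
import numpy as np

def vertical_projection_feature(eye: np.ndarray) -> np.ndarray:
    """32-D vertical-projection feature of a size-normalized eye image.

    Assumes `eye` is 16 rows x 32 columns after normalization; the mean
    gray value of each of the 32 columns forms the feature vector.
    """
    assert eye.shape == (16, 32), "normalize to 32x16 pixels first"
    return eye.astype(np.float64).mean(axis=0)
```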
Step 14, computing the uniform local binary pattern feature values of the eye image
The key part of the present invention is the computation of the uniform local binary pattern (LBP) feature values, which proceeds as follows. For each pixel of the equalized eye image, compute its difference from each of the other pixels in its neighborhood, assigning 1 if the difference is greater than 0 and 0 if it is less than 0; with a 3x3 neighborhood this yields 8 binary results, which are packed into a new byte serving as the LBP feature value of that pixel. This value describes the gray-level gradients around the pixel. Compared with traditional feature descriptors, LBP feature values contain the gradient information of the image in all directions and describe image texture well; they are rotation-invariant and comparatively insensitive to illumination. From the 256 possible LBP feature values, the 59 most rotation-invariant ones are extracted to represent the uniform LBP feature values; in scenes where the image rotates, uniform LBP feature values are more robust than plain LBP feature values. The present invention extracts uniform LBP features from the 32x16 eye image with a cell size of 8x8 and a moving step of 4x4, and each eye image yields a feature vector of 1344 uniform LBP feature values altogether;
Step 15, training the eye-state recognition classifier
Concatenate the 32-dimensional vertical-projection feature vector with the 1344 uniform LBP feature values, so that each eye-image sample yields a 1376-dimensional feature vector; input this vector into the support-vector-machine eye-state classifier and train on the samples to obtain the eye-state classification model;
The present invention divides 6000 open-eye images and 6000 closed-eye images into 4 sample groups, each containing 1500 open-eye images and 1500 closed-eye images, and trains the eye-state model with a support vector machine using the sigmoid function as the kernel. This sigmoid kernel is defined as K(x, y) = tanh(g·(x·y) + c),
where x denotes the input vector, y denotes the output label, g is the initial weight of the feature values, and c is the noise bias;
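With g and c as above, the sigmoid kernel can be evaluated directly; the sketch below computes the kernel matrix in numpy. The correspondence of g and c to the gamma/coef0 parameters of common SVM libraries is an observation, not a claim of the patent:

```python
import numpy as np

def sigmoid_kernel(X: np.ndarray, Y: np.ndarray,
                   g: float = 0.5, c: float = 8.0) -> np.ndarray:
    """Sigmoid kernel K(x, y) = tanh(g * <x, y> + c) between rows of X and Y.

    g and c play the roles of the patent's initial feature weight and noise
    bias; g=0.5, c=8 are the values the embodiment reports.
    """
    return np.tanh(g * (X @ Y.T) + c)

# Training would hand the 1376-D feature vectors and open/closed labels to an
# SVM with this kernel, e.g. sklearn.svm.SVC(kernel="sigmoid", gamma=0.5, coef0=8).
```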
Cross-validating over the 4 sample groups, 68 support vectors are found at g=0.5 and c=8, and a recognition accuracy of 98.13% is obtained;
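The grouping used for the cross-validation above can be sketched as follows (illustrative; the random, class-balanced assignment of samples to the four groups is an assumption). Training then uses three groups and validates on the held-out one, rotating over all four:

```python
import numpy as np

def four_fold_groups(n_open: int = 6000, n_closed: int = 6000, seed: int = 0):
    """Split open- and closed-eye sample indices into 4 groups of
    1500 + 1500 each, as in the embodiment.

    Returns a list of 4 (open_idx, closed_idx) index pairs.
    """
    rng = np.random.default_rng(seed)
    open_idx = rng.permutation(n_open).reshape(4, -1)      # 4 x 1500
    closed_idx = rng.permutation(n_closed).reshape(4, -1)  # 4 x 1500
    return [(open_idx[k], closed_idx[k]) for k in range(4)]
```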
Step 2, classifying with the eye-state recognition classifier, as shown in Figure 3:
Run human-eye detection on the captured image to obtain the eye image, preprocess it through steps 11 to 14 to obtain its uniform-LBP feature vector and its vertical-projection feature vector, input these feature vectors into the eye-state classifier, load the eye-state classification model, classify the eye image, and judge the eye state.
At system initialization, the eye-state recognition classifier describes the eye images with uniform LBP feature values and vertical-projection features in order to raise the accuracy of image-based eye-state recognition. As an alternative, HOG features could be used to describe the eye images and would achieve a recognition effect similar to that of the present invention; however, HOG computation involves a large amount of floating-point arithmetic and its computational complexity is relatively high, which hinders its adoption in real-world application scenarios.
The above is only a preferred embodiment of the present invention and does not limit the technical scope of the present invention in any way; therefore any minor modification, equivalent variation, or improvement made to the above embodiment according to the technical essence of the present invention still falls within the scope of the technical solution of the present invention.

Claims (2)

1. A method for identifying human-eye state by image information, characterized in that it comprises two steps: training the eye-state recognition classifier, and classifying with the eye-state recognition classifier:
Step 1, training the eye-state recognition classifier:
Step 11, acquiring the eye image
Run human-eye detection on the collected eye sample images to obtain the eye positions, and segment out the eye images;
Step 12, gray-level equalization of the eye image
Traverse the eye image and accumulate its gray-level histogram, then stretch the histogram so that the cumulative histogram increases linearly; record the correspondence between gray values before and after stretching, and convert the gray levels of the original eye image according to this correspondence;
Step 13, computing the vertical-projection feature of the gray image
First normalize the size of the equalized eye image to 32x16 pixels, then traverse the eye image and accumulate the average gray value of each row and each column to form a 32-dimensional vertical-projection feature vector;
Step 14, computing the uniform local binary pattern feature values of the eye image
For each pixel of the equalized eye image, compute its difference from each of the other pixels in its neighborhood, assigning 1 if the difference is greater than 0 and 0 if it is less than 0; with a 3x3 neighborhood this yields 8 binary results, which are packed into a new byte serving as the local binary pattern (LBP) feature value of that pixel. From the 256 possible LBP feature values, the 59 rotation-invariant ones are extracted as the uniform LBP feature values, and each eye image yields a feature vector of 1344 uniform LBP feature values altogether;
Step 15, training the eye-state recognition classifier
Concatenate the 32-dimensional vertical-projection feature vector with the 1344 uniform LBP feature values, so that each eye-image sample yields a 1376-dimensional feature vector; input this vector into the support-vector-machine eye-state classifier and train on the samples to obtain the eye-state classification model;
Step 2, classifying with the eye-state recognition classifier:
Run human-eye detection on the captured image to obtain the eye image, preprocess it through steps 11 to 14 to obtain its uniform-LBP feature vector and its vertical-projection feature vector, input these feature vectors into the eye-state classifier, load the eye-state classification model, classify the eye image, and judge the eye state.
2. The method for identifying human-eye state by image information according to claim 1, characterized in that: when the support vector machine trains the eye-state model, the sigmoid function is used as the kernel function, and this sigmoid kernel is defined as K(x, y) = tanh(g·(x·y) + c), where x denotes the input vector, y denotes the output label, g is the initial weight of the feature values, and c is the noise bias.
CN201511004081.3A 2015-12-29 2015-12-29 Method for identifying human eye state by use of image information Pending CN105631423A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201511004081.3A CN105631423A (en) 2015-12-29 2015-12-29 Method for identifying human eye state by use of image information


Publications (1)

Publication Number Publication Date
CN105631423A true CN105631423A (en) 2016-06-01

Family

ID=56046336

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201511004081.3A Pending CN105631423A (en) 2015-12-29 2015-12-29 Method for identifying human eye state by use of image information

Country Status (1)

Country Link
CN (1) CN105631423A (en)


Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102346842A (en) * 2010-07-26 2012-02-08 比亚迪股份有限公司 Human eye state detection method and device
US8351662B2 (en) * 2010-09-16 2013-01-08 Seiko Epson Corporation System and method for face verification using video sequence
CN104091147A (en) * 2014-06-11 2014-10-08 华南理工大学 Near infrared eye positioning and eye state identification method
CN104616016A (en) * 2015-01-30 2015-05-13 天津大学 Global feature and local feature combined texture feature description method
CN104732216A (en) * 2015-03-26 2015-06-24 江苏物联网研究发展中心 Expression recognition method based on key points and local characteristics
CN104809445A (en) * 2015-05-07 2015-07-29 吉林大学 Fatigue driving detection method based on eye and mouth states


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165571A (en) * 2018-08-03 2019-01-08 北京字节跳动网络技术有限公司 Method and apparatus for inserting an image
US11205290B2 (en) 2018-08-03 2021-12-21 Beijing Bytedance Network Technology Co., Ltd. Method and device for inserting an image into a determined region of a target eye image
CN111652014A (en) * 2019-03-15 2020-09-11 上海铼锶信息技术有限公司 Gaze recognition method

Similar Documents

Publication Publication Date Title
US9842247B2 (en) Eye location method and device
CN104091147B (en) A kind of near-infrared eyes positioning and eye state identification method
CN106960202B (en) Smiling face identification method based on visible light and infrared image fusion
Li et al. Robust and accurate iris segmentation in very noisy iris images
CN103810491B (en) Head posture estimation interest point detection method fusing depth and gray scale image characteristic points
CN104835175B (en) Object detection method in a kind of nuclear environment of view-based access control model attention mechanism
WO2016145940A1 (en) Face authentication method and device
CN104123543B (en) A kind of eye movement recognition methods based on recognition of face
CN103218605B (en) A kind of fast human-eye positioning method based on integral projection and rim detection
Ilonen et al. Comparison of bubble detectors and size distribution estimators
Gou et al. Learning-by-synthesis for accurate eye detection
CN108182397B (en) Multi-pose multi-scale human face verification method
CN105335725A (en) Gait identification identity authentication method based on feature fusion
CN104715238A (en) Pedestrian detection method based on multi-feature fusion
CN103310194A (en) Method for detecting head and shoulders of pedestrian in video based on overhead pixel gradient direction
CN105046197A (en) Multi-template pedestrian detection method based on cluster
CN104091155A (en) Rapid iris positioning method with illumination robustness
CN104794441B (en) Human face characteristic positioning method based on active shape model and POEM texture models under complex background
CN109409298A (en) A kind of Eye-controlling focus method based on video processing
Banerjee et al. Iris segmentation using geodesic active contours and grabcut
CN106557745A (en) Human eyeball's detection method and system based on maximum between-cluster variance and gamma transformation
Parikh et al. Effective approach for iris localization in nonideal imaging conditions
CN105631423A (en) Method for identifying human eye state by use of image information
Moeslund et al. BLOB analysis
Zhang et al. Pupil localization algorithm combining convex area voting and model constraint

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160601
