CN101908152B - Customization classifier-based eye state identification method - Google Patents


Info

Publication number
CN101908152B
CN101908152B CN2010101979800A CN201010197980A
Authority
CN
China
Prior art keywords
image
eye
user
eye image
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN2010101979800A
Other languages
Chinese (zh)
Other versions
CN101908152A (en)
Inventor
马争
解梅
孙睿
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Houpu Clean Energy Group Co ltd
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN2010101979800A priority Critical patent/CN101908152B/en
Publication of CN101908152A publication Critical patent/CN101908152A/en
Application granted granted Critical
Publication of CN101908152B publication Critical patent/CN101908152B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention belongs to the technical fields of image processing and pattern recognition and is suitable for driver fatigue detection. The method comprises the following steps: establishing a general face image library and a user face image library and computing the eye image of each face image; mixing the two eye-image libraries at different proportions; computing the Haar-like feature vector of each image in each mixed eye-image library and constructing strong classifiers with the AdaBoost method; randomly selecting a number of eye images from the user library, evaluating each constructed strong classifier on them, and selecting the strong classifier with the highest recognition accuracy as the eye-state classifier used while the user drives. Following the idea of customization, a different classifier is obtained for each user by mixing user data with general face-library data, which improves the classifier's recognition accuracy and reduces recognition risk. The invention further provides two separate classifiers for users with and without glasses, making eye-state recognition more flexible.

Description

An eye-state recognition method based on a customized classifier
Technical field
The invention belongs to the fields of image processing and pattern recognition and relates to driver fatigue detection technology.
Background art
At present, traffic accidents cause tens of thousands of vehicle collisions and heavy casualties every year. According to incomplete statistics, more than 600,000 people worldwide die in road traffic accidents each year, of which at least 100,000 deaths are caused by fatigued driving, with direct economic losses reaching 12.5 billion dollars. Fatigued driving, like drunk driving, has become a major hidden danger in road traffic. With the development of computers, researchers in many countries have begun to study fatigue-driving detection in depth from various fields. In 1998, tests by the US Federal Highway Administration confirmed that PERCLOS (the percentage of time within a unit interval that the eyes are closed) is highly correlated with driver fatigue, which opened a new line of thinking for fatigue-driving detection. See D. F. Dinges and R. Grace, "PERCLOS: A valid psychophysiological measure of alertness as assessed by psychomotor vigilance," US Department of Transportation, Federal Highway Administration, Publication Number FHWA-MCRT-98-006.
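The PERCLOS measure mentioned above is simple to compute once per-frame eye states are available. The sketch below is not part of the patent; it only illustrates the definition, and the fatigue threshold in the comment is an assumption for illustration (values cited in the literature vary):

```python
def perclos(eye_states):
    """PERCLOS: fraction of frames in which the eye is closed.

    eye_states: sequence of 0/1 labels over a time window, 1 = closed.
    """
    if not eye_states:
        return 0.0
    return sum(eye_states) / len(eye_states)


# A driver is typically flagged as fatigued when PERCLOS over a sliding
# window exceeds some threshold (the exact value is application-specific).
states = [0, 0, 1, 1, 1, 0, 0, 1, 0, 0]
print(perclos(states))
```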
Fatigue-detection methods based on the PERCLOS feature usually capture and process video of the driver's face, especially the eye region. The overall method comprises three stages: face localization, eye localization, and eye-state recognition. Each stage reduces to a binary classification problem in pattern recognition: face versus non-face, eye versus non-eye, and open versus closed eyes. Several classical approaches exist for such classification problems: (1) SVM, the support vector machine. The SVM is a learning machine based on the structural-risk-minimization principle of statistical learning theory and is widely used throughout pattern recognition. First proposed by Vapnik et al., it is especially suited to high-dimensional, small-sample problems and generalizes well. (2) FLD, the Fisher linear discriminant. FLD seeks the projection direction that best separates two classes of samples. After the optimal direction w* is found, every sample is projected onto it, giving y = w*^T x, and a threshold y_0 is chosen to divide the two classes. (3) The AdaBoost algorithm based on Haar-like rectangle features. AdaBoost is a learning algorithm widely used in recent years, first proposed by Schapire et al.; its main idea is to select a subset of weak classifiers from a large weak-classifier space and combine them into one strong classifier.
Experiments show that AdaBoost with Haar-like rectangle features is robust, accurate, and fast, giving it clear practical value. The usual practice is to extract Haar-like feature vectors from positive and negative samples, build a cascaded classifier model with the AdaBoost method, and train the classifier's parameters. See Paul Viola and Michael J. Jones, "Rapid Object Detection using a Boosted Cascade of Simple Features," IEEE CVPR, 2001, and R. Lienhart, A. Kuranov, and V. Pisarevsky, "Empirical analysis of detection cascades of boosted classifiers for rapid object detection," DAGM 25th Pattern Recognition Symposium, 2003.
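What makes the Haar-like approach fast is that any rectangle sum can be evaluated in constant time from an integral image. The following sketch shows one horizontal two-rectangle feature only; it is illustrative and not the patent's full 14-form feature set of Fig. 2:

```python
import numpy as np


def integral_image(img):
    """Integral image: ii[r, c] = sum of img[0:r+1, 0:c+1]."""
    return img.cumsum(axis=0).cumsum(axis=1)


def rect_sum(ii, r, c, h, w):
    """Sum of img[r:r+h, c:c+w] in O(1) using the integral image ii."""
    total = ii[r + h - 1, c + w - 1]
    if r > 0:
        total -= ii[r - 1, c + w - 1]
    if c > 0:
        total -= ii[r + h - 1, c - 1]
    if r > 0 and c > 0:
        total += ii[r - 1, c - 1]
    return total


def haar_two_rect_horizontal(img, r, c, h, w):
    """Two-rectangle Haar-like feature: left rectangle minus the
    adjacent right rectangle of the same size."""
    ii = integral_image(img.astype(np.int64))
    return rect_sum(ii, r, c, h, w) - rect_sum(ii, r, c + w, h, w)
```

In a real detector the integral image is computed once per frame and reused for every feature evaluation, rather than recomputed per feature as in this sketch.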
In practical application, classifier parameters trained with the Haar-feature AdaBoost algorithm over a general face sample library can be applied to face localization and eye localization. For eye-state recognition, however, such a method only reaches acceptable accuracy for most people; for another part of the population the error rate is comparatively high, and recognition can even fail completely. The reason is that the appearance of open and closed eyes differs greatly between individuals, and habits such as whether glasses are worn are hard to discriminate with a single general classifier.
Summary of the invention
The present invention provides an eye-state recognition method based on a customized classifier. The method generates a different eye-state classifier for each user, improving the accuracy and the range of applicability of eye-state recognition.
To ease the description of the invention, some terms are first defined.
Definition 1: eye state. For fatigue-driving detection, the eye state is divided into two classes, open and closed.
Definition 2: face sample library. In the present invention this is an image library containing frontal faces of different people. Its images should be collected under different lighting conditions and are divided, according to whether glasses are worn, into a with-glasses database and a without-glasses database.
Definition 3: eye center point. For an open-eye image, the eye center point is defined as the pupil center; for a closed-eye image, it is the midpoint of the eye slit.
Definition 4: "three sections, five eyes". This is a traditional rule relating the length of the face to its width. In the present invention, the eye-region width is taken as three-tenths of the face width, and the distance between the two eyes equals exactly the width of one eye.
Definition 5: Haar-like feature vector. Haar-like features were first used to describe faces by Papageorgiou et al., who applied Haar wavelet basis functions to frontal-face and person detection. Finding the standard orthogonal Haar wavelet basis too restrictive in use, they adopted features of 3 forms to obtain better spatial resolution. Viola et al. extended this to 4 forms in 2 classes, and Lienhart finally added several tilted rectangle features, bringing the feature set to 14 forms in 3 classes (as shown in Fig. 2).
Definition 6: AdaBoost. AdaBoost, short for Adaptive Boosting, is an iterative algorithm whose core idea is to train different classifiers (weak classifiers) on the same training set and then combine these weak classifiers into one strong classifier. The algorithm works by changing the data distribution: the weight of each training sample is updated according to whether it was classified correctly in the previous round and the overall accuracy so far. The reweighted training set is passed to the next weak learner, and the classifiers obtained over all rounds are combined into the final decision classifier (strong classifier). AdaBoost can discard uninformative training features and concentrate the classification on the most discriminative ones. Common variants include Discrete AdaBoost, in which the weak-classifier output is restricted to {-1, +1} and the strong classifier is built by weight adjustment; Real AdaBoost, in which the weak-classifier output ranges over the reals; and Gentle AdaBoost, a variant of the former two that avoids the efficiency loss caused by assigning very high weights to atypical positive samples.
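The Discrete AdaBoost variant defined above can be sketched with single-feature threshold stumps as the weak classifiers. This is a toy illustration of the reweighting loop only, not the cascaded detector the patent actually trains:

```python
import numpy as np


def train_adaboost(X, y, n_rounds=10):
    """Minimal Discrete AdaBoost with threshold stumps.

    X: (M, d) feature matrix; y: labels in {-1, +1}.
    Returns a list of (alpha, feature_index, threshold, polarity).
    """
    M, d = X.shape
    w = np.full(M, 1.0 / M)  # uniform initial sample weights
    ensemble = []
    for _ in range(n_rounds):
        best = None
        # exhaustively pick the stump with minimal weighted error
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] <= thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s)
        err, j, thr, s = best
        err = max(err, 1e-10)  # avoid log(0) on separable data
        alpha = 0.5 * np.log((1 - err) / err)
        pred = s * np.where(X[:, j] <= thr, 1, -1)
        # increase weights of misclassified samples, decrease the rest
        w *= np.exp(-alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble


def predict(ensemble, X):
    """Strong classifier: sign of the alpha-weighted vote of all stumps."""
    score = sum(a * s * np.where(X[:, j] <= thr, 1, -1)
                for a, j, thr, s in ensemble)
    return np.sign(score)
```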
The technical scheme of the present invention is as follows:
An eye-state recognition method based on a customized classifier, shown in Fig. 1, comprises the following steps:
Step 1: establish face image library A. Library A comprises two sub-libraries A1 and A2: A1 consists of frontal-face grayscale images of different individuals without glasses, collected indoors and outdoors, and A2 consists of frontal-face grayscale images of different individuals with glasses, collected indoors and outdoors. In every image of library A the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 2: establish user face image library B. Library B comprises two sub-libraries B1 and B2: B1 consists of frontal-face grayscale images of the user without glasses, and B2 of frontal-face grayscale images of the user with glasses. In every image of library B the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 3: compute the eye image of every face image in libraries A and B, obtaining the two sub-libraries A1' and A2' of eye-image library A' corresponding to A1 and A2, and the two sub-libraries B1' and B2' of eye-image library B' corresponding to B1 and B2. The eye images are computed as follows: first compute the pixel distance d between the two eyes of the face image; then, following the "three sections, five eyes" rule, crop a square region of side d/2 centered on each eye center point; finally scale all cropped regions to 24 x 24 pixels and rotate each randomly within -10 to +10 degrees, which yields the eye images.
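Step 3's crop-and-rescale could be sketched as below. This is illustrative only: the random rotation is noted but omitted (in practice it could be done with, e.g., scipy.ndimage.rotate), nearest-neighbour scaling stands in for whatever interpolation an implementation would use, and crops are assumed to lie inside the image:

```python
import numpy as np


def crop_eye(gray, center, d, out=24):
    """Crop a square of side d/2 around an eye centre, rescale to out x out.

    gray: 2-D grayscale array; center: (row, col) eye centre point;
    d: pixel distance between the two eye centres.
    """
    half = d // 4  # side length d/2, so half-size is d/4
    r, c = center
    patch = gray[max(r - half, 0):r + half, max(c - half, 0):c + half]
    # nearest-neighbour rescale to out x out via index sampling
    rows = np.arange(out) * patch.shape[0] // out
    cols = np.arange(out) * patch.shape[1] // out
    return patch[np.ix_(rows, cols)]
```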
Step 4: establish mixed eye-image library C. Library C comprises 2N sub-libraries C1_1, ..., C1_N and C2_1, ..., C2_N. Each sub-library C1_i (1 <= i <= N, N a natural number) is formed by randomly mixing eye images from sub-library A1' of Step 3 with eye images from sub-library B1', each C1_i at a different proportion; likewise, each sub-library C2_i (1 <= i <= N) is formed by randomly mixing eye images from A2' with eye images from B2' at a different proportion. Each sub-library C1_i and C2_i contains no fewer than 2000 eye images.
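Step 4's mixing at different proportions might look like the sketch below. The proportions in the usage line are assumptions for illustration; the patent fixes only that each sub-library holds at least 2000 images, not the exact ratios. Sampling is with replacement so the sketch also works when a library is smaller than its quota:

```python
import random


def mix_library(general_eyes, user_eyes, user_fraction, total=2000, seed=0):
    """Build one mixed sub-library by sampling user and general eye images
    at a given proportion, then shuffling the result."""
    rng = random.Random(seed)
    n_user = round(total * user_fraction)
    mixed = [rng.choice(user_eyes) for _ in range(n_user)] + \
            [rng.choice(general_eyes) for _ in range(total - n_user)]
    rng.shuffle(mixed)
    return mixed


# N = 5 mixed sub-libraries with user proportions 0.1, 0.2, ..., 0.5 (assumed)
libs = [mix_library(['g'], ['u'], i / 10, total=100) for i in range(1, 6)]
```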
Step 5: compute the Haar-like feature vector x of every eye image in the sub-libraries C1_i and C2_i; the said Haar-like features comprise 14 forms in 3 classes. Combine all the feature vectors x of each sub-library into 2N training sequences S1_i and S2_i (1 <= i <= N). Each training sequence can be expressed in the form {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j), ..., (x_M, y_M)}, where x_j is the j-th Haar-like feature vector of the sub-library, y_j in {-1, 1} indicates whether the eye image corresponding to x_j is open or closed, and M is the number of eye images in the sub-library.
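Assembling a training sequence {(x_1, y_1), ..., (x_M, y_M)} from labelled open- and closed-eye feature vectors is mechanical; a minimal helper (illustrative, with feature extraction assumed already done) might be:

```python
import numpy as np


def build_training_sequence(open_feats, closed_feats):
    """Stack feature vectors and attach labels: +1 for open-eye
    vectors, -1 for closed-eye vectors."""
    X = np.vstack([open_feats, closed_feats])
    y = np.concatenate([np.ones(len(open_feats)),
                        -np.ones(len(closed_feats))])
    return X, y
```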
Step 6: train the 2N training sequences S1_i and S2_i of Step 5 with the AdaBoost method to build 2N corresponding strong classifiers T1_i and T2_i.
Step 7: randomly select more than 1000 eye images from the user eye-image sub-library B1' established in Step 3, compute their Haar-like feature vectors x, and judge each with every strong classifier T1_i built in Step 6, obtaining the result 1 (open) or 0 (closed). Likewise, randomly select more than 1000 eye images from sub-library B2', compute their Haar-like feature vectors x, and judge each with every strong classifier T2_i, obtaining the result 1 (open) or 0 (closed).
Step 8: compare the judgments of Step 7 with the actual open or closed states of the selected eye images, and from the comparison compute the recognition accuracy of the two groups of strong classifiers T1_i and T2_i. Choose the most accurate classifier among the T1_i as the classifier for eye-state recognition while the user drives without glasses, and the most accurate classifier among the T2_i as the classifier for eye-state recognition while the user drives with glasses.
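Steps 7 and 8 amount to held-out model selection: score every candidate classifier on user images with known labels and keep the most accurate one. A minimal sketch, where names are illustrative and `predict_fn` stands in for whatever strong-classifier evaluation is used:

```python
def pick_best(classifiers, predict_fn, eval_images, true_labels):
    """Return the classifier with the highest accuracy on held-out
    user eye images with known open/closed labels."""
    def accuracy(clf):
        preds = [predict_fn(clf, img) for img in eval_images]
        return sum(p == t for p, t in zip(preds, true_labels)) / len(true_labels)
    return max(classifiers, key=accuracy)
```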
Step 9: while the user is driving, capture the user's face image in real time, compute the 24 x 24 pixel eye image and its Haar-like feature vector x in real time, and finally select the corresponding strong classifier of Step 8 for eye-state recognition according to whether the user is wearing glasses.
Through the above steps, a customized eye-state classifier can be used for each user, which improves per-individual recognition accuracy.
It should be noted that:
1. When face libraries A and B are established in Steps 1 and 2, the face images should preferably be collected under a variety of lighting conditions. A collection environment can first be built, preferably a darkroom equipped with adjustable light sources, so that the lighting can be varied from dark to bright and several thousand face images of one person can be collected within a few minutes.
2. Step 6 places no particular restriction on the AdaBoost method; any AdaBoost variant may be used, with only slightly different final accuracy.
Following the idea of customization and keeping the features fixed, the present invention proceeds as follows: first establish a face image library and a user face image library separately; then compute the eye image of every image in both libraries; mix the eye images of the face library with those of the user library at different proportions, obtaining the mixed eye-image libraries; compute the Haar-like feature vector of every image in each mixed library and build strong classifiers with the AdaBoost method; then randomly select a number of eye images from the user library, compute their Haar-like feature vectors, judge them with the strong classifiers built by AdaBoost, count each classifier's recognition accuracy, and choose the most accurate one as the eye-state classifier used while the user drives; finally, use this classifier for eye-state recognition during driving.
The innovations of the present invention are:
1. The idea of customization is applied to eye-state recognition, using a different classifier for each user, which improves per-individual recognition accuracy.
2. The classifier's training samples mix user data with face-library data, so the classifier improves per-individual accuracy while remaining general, reducing recognition risk.
3. Recognition accuracy for users wearing glasses is improved, and the user can choose between two different classifiers, with and without glasses, giving the method flexibility.
Description of drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 illustrates the Haar-like features, comprising 14 forms in 3 classes.
Fig. 3 gives, for a 24 x 24 image, the number of Haar-like features of each kind.
Embodiment
An eye-state recognition method based on a customized classifier, shown in Fig. 1, comprises the following steps:
Step 1: establish face image library A. Library A comprises two sub-libraries A1 and A2: A1 consists of frontal-face grayscale images of different individuals without glasses, collected indoors and outdoors, and A2 consists of frontal-face grayscale images of different individuals with glasses, collected indoors and outdoors. In every image of library A the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 2: establish user face image library B. Library B comprises two sub-libraries B1 and B2: B1 consists of frontal-face grayscale images of the user without glasses, and B2 of frontal-face grayscale images of the user with glasses. In every image of library B the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal.
Step 3: compute the eye image of every face image in libraries A and B, obtaining the two sub-libraries A1' and A2' of eye-image library A' corresponding to A1 and A2, and the two sub-libraries B1' and B2' of eye-image library B' corresponding to B1 and B2. The eye images are computed as follows: first compute the pixel distance d between the two eyes of the face image; then, following the "three sections, five eyes" rule, crop a square region of side d/2 centered on each eye center point; finally scale all cropped regions to 24 x 24 pixels and rotate each randomly within -10 to +10 degrees, which yields the eye images.
Step 4: establish mixed eye-image library C. Library C comprises 2N sub-libraries C1_1, ..., C1_N and C2_1, ..., C2_N. Each sub-library C1_i (1 <= i <= N, N a natural number) is formed by randomly mixing eye images from sub-library A1' of Step 3 with eye images from sub-library B1', each C1_i at a different proportion; likewise, each sub-library C2_i (1 <= i <= N) is formed by randomly mixing eye images from A2' with eye images from B2' at a different proportion. Each sub-library C1_i and C2_i contains no fewer than 2000 eye images.
Step 5: compute the Haar-like feature vector x of every eye image in the sub-libraries C1_i and C2_i; the said Haar-like features comprise 14 forms in 3 classes. Combine all the feature vectors x of each sub-library into 2N training sequences S1_i and S2_i (1 <= i <= N). Each training sequence can be expressed in the form {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j), ..., (x_M, y_M)}, where x_j is the j-th Haar-like feature vector of the sub-library, y_j in {-1, 1} indicates whether the eye image corresponding to x_j is open or closed, and M is the number of eye images in the sub-library.
Step 6: train the 2N training sequences S1_i and S2_i of Step 5 with the AdaBoost method to build 2N corresponding strong classifiers T1_i and T2_i.
Step 7: randomly select more than 1000 eye images from the user eye-image sub-library B1' established in Step 3, compute their Haar-like feature vectors x, and judge each with every strong classifier T1_i built in Step 6, obtaining the result 1 (open) or 0 (closed). Likewise, randomly select more than 1000 eye images from sub-library B2', compute their Haar-like feature vectors x, and judge each with every strong classifier T2_i, obtaining the result 1 (open) or 0 (closed).
Step 8: compare the judgments of Step 7 with the actual open or closed states of the selected eye images, and from the comparison compute the recognition accuracy of the two groups of strong classifiers T1_i and T2_i. Choose the most accurate classifier among the T1_i as the classifier for eye-state recognition while the user drives without glasses, and the most accurate classifier among the T2_i as the classifier for eye-state recognition while the user drives with glasses.
Step 9: while the user is driving, capture the user's face image in real time, compute the 24 x 24 pixel eye image and its Haar-like feature vector x in real time, and finally select the corresponding strong classifier of Step 8 for eye-state recognition according to whether the user is wearing glasses.
Compared with training only on general face-library images, the inventive method improves recognition accuracy by about 2% for typical individuals and by 3% to 5% for individuals wearing glasses, with a running time below 0.1 s.
In summary, the method of the present invention uses the idea of customization to combine user data with face-library data and trains the eye-state classifier with the features kept fixed, thereby achieving fast and accurate eye-state recognition.

Claims (1)

1. An eye-state recognition method based on a customized classifier, comprising the following steps:
Step 1: establish face image library A;
library A comprises two sub-libraries A1 and A2, where A1 consists of frontal-face grayscale images of different individuals without glasses, collected indoors and outdoors, and A2 consists of frontal-face grayscale images of different individuals with glasses, collected indoors and outdoors; in every image of library A the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal;
Step 2: establish user face image library B;
library B comprises two sub-libraries B1 and B2, where B1 consists of frontal-face grayscale images of the user without glasses and B2 of frontal-face grayscale images of the user with glasses; in every image of library B the distance between the two eye centers is not less than 48 pixels, and the numbers of open-eye and closed-eye images are roughly equal;
Step 3: compute the eye image of every face image in libraries A and B, obtaining the two sub-libraries A1' and A2' of eye-image library A' corresponding to A1 and A2, and the two sub-libraries B1' and B2' of eye-image library B' corresponding to B1 and B2; the eye images are computed by first computing the pixel distance d between the two eyes of the face image, then, following the "three sections, five eyes" rule, cropping a square region of side d/2 centered on each eye center point, scaling all cropped regions to 24 x 24 pixels, and rotating each randomly within -10 to +10 degrees;
Step 4: establish mixed eye-image library C;
library C comprises 2N sub-libraries C1_1, ..., C1_N and C2_1, ..., C2_N, where each sub-library C1_i is formed by randomly mixing eye images from sub-library A1' of Step 3 with eye images from sub-library B1' at a different proportion, and each sub-library C2_i is formed by randomly mixing eye images from A2' with eye images from B2' at a different proportion; each sub-library C1_i and C2_i contains no fewer than 2000 eye images, where 1 <= i <= N and N is a natural number;
Step 5: compute the Haar-like feature vector x of every eye image in the sub-libraries C1_i and C2_i, the said Haar-like features comprising 14 forms in 3 classes, and combine all the feature vectors x of each sub-library into 2N training sequences S1_i and S2_i; each training sequence can be expressed in the form {(x_1, y_1), (x_2, y_2), ..., (x_j, y_j), ..., (x_M, y_M)}, where 1 <= j <= M, x_j is the j-th Haar-like feature vector of the sub-library, y_j in {-1, 1} indicates whether the eye image corresponding to x_j is open or closed, and M is the number of eye images in the sub-library;
Step 6: train the 2N training sequences S1_i and S2_i of Step 5 with the AdaBoost method to build 2N corresponding strong classifiers T1_i and T2_i;
Step 7: randomly select more than 1000 eye images from the user eye-image sub-library B1' established in Step 3, compute their Haar-like feature vectors x, and judge each with every strong classifier T1_i built in Step 6, obtaining the result 1 (open) or 0 (closed); likewise randomly select more than 1000 eye images from sub-library B2', compute their Haar-like feature vectors x, and judge each with every strong classifier T2_i, obtaining the result 1 (open) or 0 (closed);
Step 8: compare the judgments of Step 7 with the actual open or closed states of the selected eye images, and from the comparison compute the recognition accuracy of the two groups of strong classifiers T1_i and T2_i; choose the most accurate classifier among the T1_i as the classifier for eye-state recognition while the user drives without glasses, and the most accurate classifier among the T2_i as the classifier for eye-state recognition while the user drives with glasses;
Step 9: while the user is driving, capture the user's face image in real time, compute the 24 x 24 pixel eye image and its Haar-like feature vector x in real time, and finally select the corresponding strong classifier of Step 8 for eye-state recognition according to whether the user is wearing glasses.
CN2010101979800A 2010-06-11 2010-06-11 Customization classifier-based eye state identification method Active CN101908152B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2010101979800A CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN2010101979800A CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Publications (2)

Publication Number Publication Date
CN101908152A CN101908152A (en) 2010-12-08
CN101908152B true CN101908152B (en) 2012-04-25

Family

ID=43263607

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2010101979800A Active CN101908152B (en) 2010-06-11 2010-06-11 Customization classifier-based eye state identification method

Country Status (1)

Country Link
CN (1) CN101908152B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102096810B (en) * 2011-01-26 2017-06-30 北京中星微电子有限公司 The detection method and device of a kind of fatigue state of user before computer
CN102085099B (en) * 2011-02-11 2017-02-08 北京中星微电子有限公司 Method and device for detecting fatigue driving
CN102163289B (en) * 2011-04-06 2016-08-24 北京中星微电子有限公司 The minimizing technology of glasses and device, usual method and device in facial image
CN103584852A (en) * 2012-08-15 2014-02-19 深圳中科强华科技有限公司 Personalized electrocardiogram intelligent auxiliary diagnosis device and method
CN103049740B (en) * 2012-12-13 2016-08-03 杜鹢 Fatigue state detection method based on video image and device
CN104102896B (en) * 2013-04-14 2017-10-17 张忠伟 A kind of method for recognizing human eye state that model is cut based on figure
CN103902975A (en) * 2014-03-28 2014-07-02 北京科技大学 Human eye state detection method based on balanced Vector Boosting algorithm
CN105512603A (en) * 2015-01-20 2016-04-20 上海伊霍珀信息科技股份有限公司 Dangerous driving detection method based on principle of vector dot product
CN104504404B (en) * 2015-01-23 2018-01-12 北京工业大学 The user on the network's kind identification method and system of a kind of view-based access control model behavior
CN106485214A (en) * 2016-09-28 2017-03-08 天津工业大学 A kind of eyes based on convolutional neural networks and mouth state identification method
CN108294759A (en) * 2017-01-13 2018-07-20 天津工业大学 A kind of Driver Fatigue Detection based on CNN Eye state recognitions
EP3699808B1 (en) 2017-11-14 2023-10-25 Huawei Technologies Co., Ltd. Facial image detection method and terminal device
CN108021875A (en) * 2017-11-27 2018-05-11 上海灵至科技有限公司 A kind of vehicle driver's personalization fatigue monitoring and method for early warning
CN108491824A (en) * 2018-04-03 2018-09-04 百度在线网络技术(北京)有限公司 model generating method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1889093A (en) * 2005-06-30 2007-01-03 上海市延安中学 Recognition method for human eyes positioning and human eyes opening and closing
CN101350063A (en) * 2008-09-03 2009-01-21 北京中星微电子有限公司 Method and apparatus for locating human face characteristic point
US20090097701A1 (en) * 2007-10-11 2009-04-16 Denso Corporation Sleepiness level determination device for driver
CN101593425A (en) * 2009-05-06 2009-12-02 深圳市汉华安道科技有限责任公司 A kind of fatigue driving monitoring method and system based on machine vision


Also Published As

Publication number Publication date
CN101908152A (en) 2010-12-08

Similar Documents

Publication Publication Date Title
CN101908152B (en) Customization classifier-based eye state identification method
CN101944174B (en) License plate character recognition method
CN106096538B (en) Face recognition method and device based on a sequencing neural network model
CN106295522B (en) Two-stage anti-fraud detection method based on multi-pose faces and environmental information
CN105184309B (en) Polarimetric SAR image classification based on CNN and SVM
CN107273845A (en) Facial expression recognition method based on confidence regions and weighted multi-feature fusion
CN101447020B (en) Pornographic image recognition method based on intuitionistic fuzzy sets
CN106951867A (en) Face recognition method, device, system and equipment based on convolutional neural networks
CN102270308B (en) Facial feature location method based on facial-organ-related AAM (Active Appearance Model)
CN103336973B (en) Eye state recognition method based on multi-feature decision fusion
CN110348416A (en) Multi-task face recognition method based on a multi-scale feature fusion convolutional neural network
CN104091147A (en) Near-infrared eye positioning and eye state recognition method
CN109902560A (en) Fatigue driving early warning method based on deep learning
CN102663413A (en) Multi-pose and cross-age face image authentication method
CN102129574B (en) Face authentication method and system
CN102915453B (en) Vehicle detection method with real-time feedback and updating
CN104463128A (en) Glasses detection method and system for face recognition
CN106056059B (en) Face recognition method based on multi-direction SLGS feature description and performance-cloud weighted fusion
CN102096810A (en) Method and device for detecting the fatigue state of a user in front of a computer
CN103679161B (en) Face recognition method and device
CN101916369B (en) Face recognition method based on kernel nearest subspace
CN113221655B (en) Face spoofing detection method based on feature space constraints
CN106778512A (en) Face recognition method under unconstrained conditions based on LBP and deep learning
CN102880864A (en) Method for capturing human faces from a streaming media file
CN107665361A (en) Passenger flow counting method based on face recognition

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20210518

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy Co.,Ltd.

Address before: 611731, No. 2006, West Avenue, Chengdu hi tech Zone (West District, Sichuan)

Patentee before: University of Electronic Science and Technology of China

CP01 Change in the name or title of a patent holder

Address after: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee after: Houpu clean energy (Group) Co.,Ltd.

Address before: No.3, 11th floor, building 6, no.599, shijicheng South Road, Chengdu hi tech Zone, China (Sichuan) pilot Free Trade Zone, Chengdu, Sichuan 610041

Patentee before: Houpu clean energy Co.,Ltd.