CN106909879A - Fatigue driving detection method and system - Google Patents
Fatigue driving detection method and system
- Publication number
- CN106909879A (application number CN201710021413.1A)
- Authority
- CN
- China
- Prior art keywords
- key point
- human face
- face region
- shape
- default
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses a fatigue driving detection method and system. The method comprises the following steps: inputting captured driver video images, and obtaining a set of face-region images to be detected using a preset facial-feature classifier; normalizing all face images in the set to obtain a mean face, deriving from the mean face the initial key-point shape of a given face image in the set, computing the deviation of the initial key points with a preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined; obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with a preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result. The invention is fast, has a high recognition rate, is particularly suitable for embedded smart devices, and prevents a driver wearing sunglasses from causing misjudgments that reduce the accuracy of fatigue driving detection.
Description
Technical field
The invention belongs to the technical field of computer vision, and in particular relates to a fatigue driving detection method and system.
Background technology
To reduce the incidence of traffic accidents caused by fatigue driving, driver fatigue monitoring systems are widely used in the driving field. Such a system infers the driver's fatigue state from the driver's facial features, eye signals, head movement and the like, raises an alarm and takes corresponding measures. However, once the driver wears sunglasses, the accuracy of the fatigue early-warning system's judgment drops, producing false alarms.
Current driver fatigue monitoring systems generally comprise an infrared illumination element, a filter element and a camera. The camera captures infrared images so that a reasonably clear face image can be obtained even under poor night-time conditions, from which the driver's fatigue state is judged. However, after the driver puts on sunglasses, existing driver fatigue monitoring systems can no longer judge the fatigue state accurately: when a shorter-wavelength infrared source is used, the sunglasses worn by the driver cannot be penetrated, which directly hampers acquisition of the eye state, while prolonged irradiation of the eyes with a longer-wavelength infrared source can cause eye diseases such as cataracts.
Summary of the invention
To remedy the above technical deficiency, the present invention determines, by extracting information around the driver's eyes, whether the driver is wearing sunglasses, and performs fatigue detection according to this state, improving the accuracy of fatigue driving detection and reducing the influence of infrared light sources on the human body.
The invention provides a fatigue driving detection method, comprising the following steps:
inputting captured driver video images, and obtaining a set of face-region images to be detected using a preset facial-feature classifier;
normalizing all face images in the set of face-region images to be detected to obtain a mean face, deriving from the mean face the initial key-point shape of a given face image in the set, computing the deviation of the initial key points with a preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined;
obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with a preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result.
Further, obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with the preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result includes:
if the sunglasses classifier output is below a threshold, taking no action; otherwise analysing the eye closure state from the texture features at the eye key-point positions, comparing the closure state with a fatigue criterion, and judging whether the driver is in a fatigue state.
Further, inputting the captured driver video images and obtaining the set of face-region images to be detected using the preset facial-feature classifier includes:
searching the video image with a sliding-window detection method to generate candidate sub-regions;
feeding each candidate sub-region into the facial-feature classifier and judging whether the output is not less than a classification threshold; if so, marking it as a face region to be detected, otherwise taking no action.
Further, deriving from the mean face the initial key-point shape of a given face image in the set of face-region images to be detected, computing the deviation of the initial key points with the preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined includes:
traversing the set of face-region images to be detected, obtaining the key-point shape of a given face image from the mean face, and extracting the corresponding texture features;
computing the deviation of the face image from the texture features;
updating the key-point shape according to the deviation and outputting the regression result.
Further, updating the key-point shape according to the deviation and outputting the regression result also includes:
judging with a preset identification model whether the regression value of the updated key-point shape is not less than a regression threshold; if so, taking no action, otherwise using the regression result as the deviation for the next regression pass.
The invention also provides a fatigue driving detection system, including:
a face detection module, for inputting captured driver video images and obtaining a set of face-region images to be detected using a preset facial-feature classifier;
a key-point tracking module, for normalizing all face images in the set of face-region images to be detected to obtain a mean face, deriving from the mean face the initial key-point shape of a given face image in the set, computing the deviation of the initial key points with a preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined;
a sunglasses identification module, for obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with a preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result.
Further, the face detection module includes:
a candidate unit, for searching the video image with a sliding-window detection method to generate candidate sub-regions;
an output unit, for feeding each candidate sub-region into the facial-feature classifier and judging whether the output is not less than a classification threshold; if so, marking it as a face region to be detected, otherwise taking no action.
Further, the sunglasses identification module also includes:
a judging unit, for taking no action if the sunglasses classifier output is below the threshold, and otherwise analysing the eye closure state from the texture features at the eye key-point positions, comparing the closure state with the fatigue criterion, and judging whether the driver is in a fatigue state.
Further, the key-point tracking module includes:
a feature extraction unit, for traversing the set of face-region images to be detected, obtaining the key-point shape of a given face image from the mean face, and extracting the corresponding texture features;
a deviation computing unit, for computing the deviation of the face image from the texture features;
a regression output unit, for updating the key-point shape according to the deviation and outputting the regression result.
Further, the regression output unit includes:
a processing subunit, for judging with a preset identification model whether the regression value of the updated key-point shape is not less than the regression threshold; if so, taking no action, otherwise using the regression result as the deviation for the next regression pass.
In summary, the present invention first obtains the face region through face detection, then locates and tracks the face key points within this region, and finally discriminates from the texture features around the eye region whether the driver is wearing sunglasses, thereby preventing a driver wearing sunglasses from causing misjudgments that reduce the accuracy of fatigue driving detection.
Brief description of the drawings
In order to illustrate more clearly of the embodiment of the present application or technical scheme of the prior art, below will be to institute in embodiment
The accompanying drawing for needing to use is briefly described, it should be apparent that, drawings in the following description are only described in the present invention
A little embodiments, for those of ordinary skill in the art, can also obtain other accompanying drawings according to these accompanying drawings.
Fig. 1 is a flow chart of the fatigue driving detection method of the present invention;
Fig. 2 is a flow chart of determining the key-point shape in the fatigue driving detection method of the present invention;
Fig. 3 is a structural diagram of the fatigue driving detection system of the present invention.
Specific embodiment
So that those skilled in the art may better understand the technical solution, the invention is further detailed below through specific embodiments and with reference to the accompanying drawings.
Detecting the driver's eye state typically relies on an infrared light source, commonly one with a wavelength of 1.5 microns; long-term exposure to such a source can cause eye diseases such as cataracts, while choosing a longer-wavelength infrared source brings the problem that sunglasses of some materials cannot be penetrated, so the eye state is not visible in the image and the fatigue state cannot be judged. The problem to be solved by the present invention is therefore to judge whether the driver is wearing sunglasses: if the judgment is that sunglasses are worn, the eye-state detection module is not entered, preventing possible false alarms. To this end, the invention provides a fatigue driving detection method.
As shown in Fig. 1, the method comprises the following steps:
S101: input captured driver video images, and obtain a set of face-region images to be detected using a preset facial-feature classifier.
Each video frame contains a face region as well as non-face regions, the latter including the background and other parts of the body. The purpose of this step is to identify the face region in each video frame, thereby speeding up detection of the face key-point region. The face region identified in this step is generally marked with a bounding rectangle or the like; this marked face region is not an accurate facial contour curve.
The facial-feature classifier of the present invention uses a cascaded decision-tree classifier based on pixel grey-value comparison features. The advantage of this method is its speed and high recognition rate, making it particularly suitable for embedded smart devices.
Further, S101 comprises the following steps:
searching the video image with a sliding-window detection method to generate candidate sub-regions;
feeding each candidate sub-region into the facial-feature classifier and judging whether the output is not less than the classification threshold; if so, marking it as a face region to be detected, otherwise taking no action. A candidate region is output as a face region only if it passes all of the cascaded classifiers.
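As an illustration only (not the patent's implementation), the sliding-window search and thresholded classification of S101 can be sketched as follows, with `classify` as a hypothetical stand-in for the cascaded decision-tree classifier:

```python
import numpy as np

def sliding_window_faces(frame, classify, win=50, stride=10, threshold=0.5):
    """Scan the frame with a fixed-size window; keep windows whose
    classifier score is not less than the classification threshold."""
    h, w = frame.shape[:2]
    candidates = []
    for y in range(0, h - win + 1, stride):
        for x in range(0, w - win + 1, stride):
            patch = frame[y:y + win, x:x + win]
            if classify(patch) >= threshold:   # passed all cascade stages
                candidates.append((x, y, win, win))
    return candidates
```

In practice the window would also be scanned over multiple scales; a single scale is shown to keep the sketch short.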
To further illustrate the present invention, the pixel comparison feature is defined below:
pixel_comparison(I; i1, i2) = 0 if I(i1) <= I(i2);
pixel_comparison(I; i1, i2) = 1 if I(i1) > I(i2);
where i1 and i2 are pixel positions and I(·) denotes pixel intensity; the pixel pair (i1, i2) is used when extracting the pixel-difference feature. The decision-tree classifier is formed by cascading several weak classifiers, each weak classifier consisting of several decision trees, where each decision tree is itself a weak classifier. At detection time, a candidate region image of the input is detected as a face region only if it passes the decision-tree classifiers of all cascade stages. Each decision tree stores the positions of several pixel pairs used to extract features (such as the pixel comparison feature; the extracted features contribute strongly to distinguishing face from non-face), and the confidence (score) of the candidate region image for this decision tree is obtained from the extracted features. If the score is below the threshold of a given decision tree, the region is deemed non-face; otherwise it enters the later classifiers. The pixel-pair positions are obtained by offline training.
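A minimal sketch of the pixel-comparison test and the cascaded decision-tree scoring described above; the tree layout (dicts for internal nodes, floats for leaf scores) is an assumption for illustration, not the patent's data structure:

```python
import numpy as np

def pixel_comparison(img, p1, p2):
    """Binary test from the definition above: 0 if I(i1) <= I(i2), else 1."""
    return 0 if img[p1] <= img[p2] else 1

def tree_score(img, tree):
    """Walk one decision tree whose internal nodes each store a pixel pair;
    leaves store a confidence score (hypothetical layout: dicts with
    'pair'/'left'/'right' for nodes, plain floats for leaves)."""
    node = tree
    while isinstance(node, dict):
        p1, p2 = node['pair']
        node = node['right'] if pixel_comparison(img, p1, p2) else node['left']
    return node

def cascade_is_face(img, stages, thresholds):
    """A region is accepted only if its accumulated score clears every
    stage threshold (early rejection, as in the cascade above)."""
    score = 0.0
    for trees, thr in zip(stages, thresholds):
        score += sum(tree_score(img, t) for t in trees)
        if score < thr:
            return False
    return True
```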
Further, the training process of the facial-feature classifier comprises the following steps:
A: determine the sample set;
B: initialize the weight of each sample in the sample set and generate a decision tree;
C: update the sample weights and iterate over the decision trees, ultimately generating the decision-tree classifier.
In a specific implementation, the following steps are carried out.
For a training set {Is, cs}, Is is the image set and cs ∈ {-1, 1} is the set of labels marking whether an image is a face, where -1 means non-face and 1 means face, with s = 1, 2, 3, ..., n, n being the number of image samples.
(1) First initialize the weight W of each training sample.
(2) For each of the k = 1, 2, 3, ..., K decision trees:
(a) train the decision tree Tk so as to minimize the weight-based mean-square error (WMSE), where Δ and xk are the sets of training samples labelled -1 and 1 at each node of the decision tree, and the two weighted means of the ground-truth labels of the two sets are used;
(b) update the weight of each sample;
(c) normalize the weights so that all weights sum to 1.
(3) Output the cascaded decision trees.
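The node split in step (2)(a) can be illustrated with a single weighted stump over a one-dimensional feature (e.g. a pixel-comparison response); the helper below is a hedged sketch of minimizing the weight-based mean-square error, not the patent's training code:

```python
import numpy as np

def best_stump(scores, labels, weights):
    """Pick the split on a 1-D feature that minimizes the weighted
    mean-square error, as in step (2)(a).
    Returns (threshold, left_mean, right_mean, wmse)."""
    best = None
    for thr in np.unique(scores):
        left, right = scores <= thr, scores > thr
        if not left.any() or not right.any():
            continue
        # weighted means of the +/-1 labels on each side of the split
        ml = np.average(labels[left], weights=weights[left])
        mr = np.average(labels[right], weights=weights[right])
        pred = np.where(left, ml, mr)
        wmse = np.sum(weights * (labels - pred) ** 2)
        if best is None or wmse < best[3]:
            best = (thr, ml, mr, wmse)
    return best
```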
S102: normalize all face images in the set of face-region images to be detected to obtain a mean face; derive from the mean face the initial key-point shape of a given face image in the set; compute the deviation of the initial key points with the preset regressor; update the key-point shape according to the deviation until the key-point shape within the face region is determined.
In a specific implementation, the image of each face region is first normalized for ease of handling; optionally, all face regions are normalized to a unified resolution such as 50*50.
In a specific implementation, the mean face is obtained during offline training, as follows.
The mean-face model comprises, for each key point, its mean offsets mean.x and mean.y from the centre point (the centre of all key points), the mean offset dy of the centre of the whole shape from the centre of the face detection box in the y direction, the mean ratio sx of the key-point shape's width to the face detection box's width, and the mean ratio sy of the key-point shape's height to the face detection box's height. In practical applications, after the face detection box (detect_x, detect_y, detect_w, detect_h) is obtained, the initial key-point shape is computed as follows, for each key point:
x = mean.x * detect_w * sx + detect_x + detect_w / 2;
y = mean.y * detect_h * sy + detect_y + detect_h / 2 + dy.
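The initial-shape formula above translates directly into code; the sketch below assumes `mean_shape` holds the offline mean.x/mean.y offsets as an (N, 2) array:

```python
import numpy as np

def initial_shape(mean_shape, box, sx, sy, dy):
    """Place the offline mean-face key points into a detected face box
    using the formula above:
      x = mean.x * detect_w * sx + detect_x + detect_w / 2
      y = mean.y * detect_h * sy + detect_y + detect_h / 2 + dy
    mean_shape: (N, 2) offsets from the key-point centre; box: (x, y, w, h)."""
    bx, by, bw, bh = box
    pts = np.empty_like(mean_shape, dtype=float)
    pts[:, 0] = mean_shape[:, 0] * bw * sx + bx + bw / 2
    pts[:, 1] = mean_shape[:, 1] * bh * sy + by + bh / 2 + dy
    return pts
```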
Further, S102 includes:
S201: traverse the set of face-region images to be detected, obtain the key-point shape of a given face image from the mean face, and extract the corresponding texture features. In a specific implementation, SIFT features are extracted as the texture features used to compute the deviation of the initial key points. The SIFT extraction optionally uses the formulas below, assigning a direction parameter to each key point from the gradient-direction distribution of the pixels in the key point's neighbourhood, so that the operator possesses rotation invariance:
θ(x, y) = atan2(L(x, y+1) - L(x, y-1), L(x+1, y) - L(x-1, y))
m(x, y) = sqrt((L(x+1, y) - L(x-1, y))^2 + (L(x, y+1) - L(x, y-1))^2)
where θ and m are respectively the direction and the magnitude of the gradient at pixel (x, y), and L is the scale value at the key point; scale information is not considered in this embodiment and defaults to 1. Sampling is performed in a neighbourhood window centred on the key point (the sampled content may be SIFT features, SURF features, aggregate channel features, local binary features, etc.), and the pixel gradient directions within the sub-blocks dividing the neighbourhood are accumulated with a histogram. In practical applications, the gradient histogram spans 0 to 360 degrees and is divided into one bin every 45 degrees, 8 bins in total; finally, the gradient-direction histograms computed for all sub-blocks sampled in the neighbourhood window are normalized and concatenated to form the feature used.
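An illustrative implementation of the gradient direction, magnitude and 8-bin (45-degree) histogram described above, operating on a single sub-block; borders are skipped for simplicity, and this is a simplification of full SIFT extraction:

```python
import numpy as np

def gradient_orientation_histogram(L, bins=8):
    """8-bin (45-degree) gradient-direction histogram of a patch, following
    theta = atan2(L(x,y+1)-L(x,y-1), L(x+1,y)-L(x-1,y)) and the matching
    magnitude formula; votes are weighted by gradient magnitude."""
    L = L.astype(float)
    dy = L[2:, 1:-1] - L[:-2, 1:-1]      # L(x, y+1) - L(x, y-1)
    dx = L[1:-1, 2:] - L[1:-1, :-2]      # L(x+1, y) - L(x-1, y)
    theta = np.arctan2(dy, dx)           # direction in (-pi, pi]
    m = np.sqrt(dx ** 2 + dy ** 2)       # gradient magnitude
    idx = ((theta + np.pi) / (2 * np.pi) * bins).astype(int) % bins
    hist = np.bincount(idx.ravel(), weights=m.ravel(), minlength=bins)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist
```

The per-sub-block histograms would then be concatenated into the final feature vector, as the description states.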
S202: compute the deviation of the face image from the texture features.
S203: update the key-point shape according to the deviation and output the regression result.
The present invention starts from the initial key points of the mean face and performs key-point regression with the pixel features extracted at the initial key-point shape of the target face region, generating the regressed key-point shape; the regression then continues with the key-point shapes of the other face regions until the last face region, and the final face key-point shape is output.
In a specific implementation, the present invention updates the initial face key-point shape with a cascaded regressor based on the supervised descent method, which can be written as:
Δxk = Rk * φk + bk, with xk+1 = xk + Δxk,
where Δxk denotes the deviation of all key points after the k-th regression, Rk and bk (k = 0, 1, ... N) denote the k-th-step regressor learned offline by the supervised descent method, xk denotes the face key-point result after k regressions, and φk denotes the features extracted at the k-th regression, such as SIFT (scale-invariant) features.
Starting from an initial face key-point shape, the present invention first obtains, after one regression, the deviation each key point needs to move; the second regression proceeds from the result of the previous step, and after several regressions the face key points gradually approach the true key-point shape from the initial shape. That is, when the face key-point regression starts, texture features such as SIFT are first extracted around each key point of the initial shape and concatenated in a fixed order into the regression feature φ0; with wk = [Rk, bk] and w0 * φ0 = Δx0, the deviation of each key point obtained from the above formula is used to update the key-point shape; after several iterations, xk is output as the key-point shape result, where the regressor wk = [Rk, bk] is obtained by offline training.
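The cascaded update Δxk = Rk * φk + bk can be sketched as follows, with `extract_features` as a hypothetical placeholder for the concatenated per-key-point texture descriptors and `regressors` holding the offline-trained (Rk, bk) pairs:

```python
import numpy as np

def sdm_cascade(x0, extract_features, regressors):
    """Cascaded shape regression in the supervised-descent style:
    x_{k+1} = x_k + R_k @ phi(x_k) + b_k, where each (R_k, b_k) was
    learned offline; x is the flattened key-point coordinate vector."""
    x = x0.astype(float)
    for R, b in regressors:
        phi = extract_features(x)          # features around current key points
        x = x + R @ phi + b                # apply the k-th step regressor
    return x
```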
Further, after the regression result of the key-point shape updated according to the deviation is output, it is also optional to judge with a preset identification model whether the regression value of the updated key-point shape is not less than the regression threshold; if so, no action is taken, otherwise the regression result is used as the deviation for the next regression pass. Determining the face shape by judging the regression result improves the rate of correctly judging whether sunglasses are worn on the face.
S103: obtain the eye key-point positions from the determined face-region key-point shape, extract the corresponding texture features, judge with the preset sunglasses classifier whether the driver is wearing sunglasses, and perform driver fatigue-state detection according to the judgment result.
Further, S103 includes:
if the sunglasses classifier output is below the threshold, taking no action; otherwise analysing the eye closure state from the texture features at the eye key-point positions, comparing the closure state with the fatigue criterion, and judging whether the driver is in a fatigue state.
After determining the face key-point shape, the present invention obtains the positions of the two eyes, extracts texture features such as SIFT features around them, and then performs the discrimination with a BP artificial-neural-network classifier. In a specific implementation, texture features such as SIFT features are extracted from selected regions around the two eye positions, and an artificial neural network performs the two-class classification; the BP artificial neural network is an existing classification technique and not the focus of this invention, so it is not detailed here. By computing the deviation between the mean face and the initial key points of each face region, the present invention corrects the face key points, then extracts the pixel features at the face key-point positions and thereby judges whether the driver is wearing sunglasses.
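As a hedged illustration of the two-class decision (the patent only states that a BP neural network is used, without specifying its architecture), a minimal forward pass on the eye-region feature vector might look like:

```python
import numpy as np

def mlp_predict(features, W1, b1, W2, b2):
    """Forward pass of a small BP-style neural network performing the
    two-class sunglasses decision; the weights would come from offline
    backpropagation training, and the shapes here are illustrative."""
    h = np.tanh(W1 @ features + b1)        # hidden layer
    z = W2 @ h + b2                        # scalar logit
    p = 1.0 / (1.0 + np.exp(-z))           # probability of "wearing sunglasses"
    return p
```

Thresholding `p` would then gate whether the eye-state (closure) analysis is entered, per S103.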
As shown in Fig. 3, the invention also provides a fatigue driving detection system, including a face detection module 10, a key-point tracking module 20 and a sunglasses identification module 30, wherein:
the face detection module 10 is for inputting captured driver video images and obtaining a set of face-region images to be detected using a preset facial-feature classifier;
the key-point tracking module 20 is for normalizing all face images in the set of face-region images to be detected to obtain a mean face, deriving from the mean face the initial key-point shape of a given face image in the set, computing the deviation of the initial key points with a preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined;
the sunglasses identification module 30 is for obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with a preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result.
Further, the face detection module includes:
a candidate unit, for searching the video image with a sliding-window detection method to generate candidate sub-regions;
an output unit, for feeding each candidate sub-region into the facial-feature classifier and judging whether the output is not less than a classification threshold; if so, marking it as a face region to be detected, otherwise taking no action.
Further, the sunglasses identification module also includes:
a judging unit, for taking no action if the sunglasses classifier output is below the threshold, and otherwise analysing the eye closure state from the texture features at the eye key-point positions, comparing the closure state with the fatigue criterion, and judging whether the driver is in a fatigue state.
Further, the key-point tracking module includes:
a feature extraction unit, for traversing the set of face-region images to be detected, obtaining the key-point shape of a given face image from the mean face, and extracting the corresponding texture features;
a deviation computing unit, for computing the deviation of the face image from the texture features;
a regression output unit, for updating the key-point shape according to the deviation and outputting the regression result.
Further, the regression output unit includes:
a processing subunit, for judging with a preset identification model whether the regression value of the updated key-point shape is not less than the regression threshold; if so, taking no action, otherwise using the regression result as the deviation for the next regression pass.
Only some exemplary embodiments of the invention have been described above by way of explanation. Undoubtedly, those of ordinary skill in the art can modify the described embodiments in various ways without departing from the spirit and scope of the invention. The above drawings and description are therefore inherently illustrative and should not be construed as limiting the claims of the present invention.
Claims (10)
1. A fatigue driving detection method, characterised by comprising the following steps:
inputting captured driver video images, and obtaining a set of face-region images to be detected using a preset facial-feature classifier;
normalizing all face images in the set of face-region images to be detected to obtain a mean face, deriving from the mean face the initial key-point shape of a given face image in the set, computing the deviation of the initial key points with a preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined;
obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with a preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result.
2. The fatigue driving detection method according to claim 1, characterised in that obtaining the eye key-point positions from the determined face-region key-point shape, extracting the corresponding texture features, judging with the preset sunglasses classifier whether the driver is wearing sunglasses, and performing driver fatigue-state detection according to the judgment result includes:
if the sunglasses classifier output is below a threshold, taking no action; otherwise analysing the eye closure state from the texture features at the eye key-point positions, comparing the closure state with a fatigue criterion, and judging whether the driver is in a fatigue state.
3. The fatigue driving detection method according to claim 1, characterised in that inputting the captured driver video images and obtaining the set of face-region images to be detected using the preset facial-feature classifier includes:
searching the video image with a sliding-window detection method to generate candidate sub-regions;
feeding each candidate sub-region into the facial-feature classifier and judging whether the output is not less than a classification threshold; if so, marking it as a face region to be detected, otherwise taking no action.
4. The fatigue driving detection method according to claim 1, characterised in that deriving from the mean face the initial key-point shape of a given face image in the set of face-region images to be detected, computing the deviation of the initial key points with the preset regressor, and updating the key-point shape according to the deviation until the key-point shape within the face region is determined includes:
traversing the set of face-region images to be detected, obtaining the key-point shape of a given face image from the mean face, and extracting the corresponding texture features;
computing the deviation of the face image from the texture features;
updating the key-point shape according to the deviation and outputting the regression result.
5. The fatigue driving detection method according to claim 4, wherein outputting the regression result of the key point shape updated according to the deviation further comprises:
judging, by a preset identification model, whether the regression value of the updated key point shape is not less than a regression threshold; if so, performing no processing; otherwise, using the regression result as the deviation for the next regression.
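The stopping rule of claim 5 — stop once the regression value of the updated shape reaches the regression threshold, otherwise feed the result back into the next regression pass — can be sketched as follows. The identification model is abstracted as a scalar scoring function, and every name here is hypothetical.

```python
def iterate_regression(shape, regress_once, score_model, reg_threshold, max_iter=20):
    """Feed each regression result back as the next deviation until the
    identification model's regression value is not less than the threshold."""
    for _ in range(max_iter):
        shape = shape + regress_once(shape)  # apply the predicted deviation
        if score_model(shape) >= reg_threshold:
            break  # converged: no further processing
    return shape
```

The `max_iter` cap is an added safeguard against non-convergence; the claim itself states only the threshold test.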
6. A fatigue driving detection system, comprising:
a face detection module, configured to obtain a set of face region images to be detected from the input captured driver video image by using a preset face feature classifier;
a key point tracking module, configured to normalize all face images in the set of face region images to be detected to obtain an average face, obtain the initial key point shape of a facial image in the set according to the average face, calculate the deviation of the initial key points by a preset regressor, and update the key point shape according to the deviation result until the key point shape in the face region is determined;
a sunglasses recognition module, configured to obtain the positions of the eye key points according to the determined key point shape of the face region, extract the corresponding texture features, judge whether the user wears glasses by using a preset sunglasses classifier, and perform driver fatigue state detection according to the judgment result.
7. The fatigue driving detection system according to claim 6, wherein the face detection module comprises:
a candidate unit, configured to search the video image by a sliding-window detection method and generate candidate sub-regions;
an output unit, configured to input the candidate sub-regions into the face feature classifier and judge whether the output result is not less than a classification threshold; if so, label the candidate as a face region to be detected; otherwise, perform no processing.
8. The fatigue driving detection system according to claim 6, wherein the sunglasses recognition module further comprises:
a judging unit, configured to perform no processing if the output result of the sunglasses classifier is less than a threshold; otherwise, analyze the eye closure state according to the texture features at the eye key point positions, compare the closure state with a fatigue criterion, and judge whether the driver is in a fatigue state.
9. The fatigue driving detection system according to claim 6, wherein the key point tracking module comprises:
a feature extraction unit, configured to traverse the set of face region images to be detected, obtain the key point shape of a facial image according to the average face, and extract the corresponding texture features;
a deviation calculation unit, configured to calculate the deviation of the facial image according to the texture features;
a regression output unit, configured to output the regression result of the key point shape updated according to the deviation.
10. The fatigue driving detection system according to claim 9, wherein the regression output unit comprises:
a processing subunit, configured to judge, by a preset identification model, whether the regression value of the updated key point shape is not less than a regression threshold; if so, perform no processing; otherwise, use the regression result as the deviation for the next regression.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710021413.1A CN106909879A (en) | 2017-01-11 | 2017-01-11 | A kind of method for detecting fatigue driving and system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106909879A true CN106909879A (en) | 2017-06-30 |
Family
ID=59207429
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710021413.1A Pending CN106909879A (en) | 2017-01-11 | 2017-01-11 | A kind of method for detecting fatigue driving and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106909879A (en) |
2017-01-11: Application CN201710021413.1A filed in China (CN); published as CN106909879A; legal status Pending.
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102799888A (en) * | 2011-05-27 | 2012-11-28 | 株式会社理光 | Eye detection method and eye detection equipment |
CN102324166A (en) * | 2011-09-19 | 2012-01-18 | 深圳市汉华安道科技有限责任公司 | Fatigue driving detection method and device |
CN104361716A (en) * | 2014-10-31 | 2015-02-18 | 新疆宏开电子系统集成有限公司 | Method for detecting and reminding fatigue in real time |
CN104881955A (en) * | 2015-06-16 | 2015-09-02 | 华中科技大学 | Method and system for detecting fatigue driving of driver |
CN106250801A (en) * | 2015-11-20 | 2016-12-21 | 北汽银翔汽车有限公司 | Fatigue detection method based on face detection and human eye state recognition |
CN105426870A (en) * | 2015-12-15 | 2016-03-23 | 北京文安科技发展有限公司 | Face key point positioning method and device |
Non-Patent Citations (1)
Title |
---|
ZHANG, Junchang et al., "Face Detection Based on an Improved AdaBoost Algorithm", Computer Simulation (《计算机仿真》) *
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109803583A (en) * | 2017-08-10 | 2019-05-24 | 北京市商汤科技开发有限公司 | Driver monitoring method and apparatus, and electronic device |
CN107818310A (en) * | 2017-11-03 | 2018-03-20 | 电子科技大学 | Gaze-based driver attention detection method |
CN107818310B (en) * | 2017-11-03 | 2021-08-06 | 电子科技大学 | Driver attention detection method based on gaze |
CN107862285A (en) * | 2017-11-07 | 2018-03-30 | 哈尔滨工业大学深圳研究生院 | Face alignment method |
CN107992815A (en) * | 2017-11-28 | 2018-05-04 | 北京小米移动软件有限公司 | Eyeglass detection method and device |
US11341769B2 (en) | 2017-12-25 | 2022-05-24 | Beijing Sensetime Technology Development Co., Ltd. | Face pose analysis method, electronic device, and storage medium |
CN108460345A (en) * | 2018-02-08 | 2018-08-28 | 电子科技大学 | Facial fatigue detection method based on face key point localization |
CN109241842A (en) * | 2018-08-02 | 2019-01-18 | 平安科技(深圳)有限公司 | Fatigue driving detection method and device, computer equipment and storage medium |
CN109241842B (en) * | 2018-08-02 | 2024-03-05 | 平安科技(深圳)有限公司 | Fatigue driving detection method, device, computer equipment and storage medium |
CN109858553A (en) * | 2019-01-31 | 2019-06-07 | 深圳市赛梅斯凯科技有限公司 | Driving-state monitoring model update method, update device and storage medium |
CN109858553B (en) * | 2019-01-31 | 2023-12-12 | 锦图计算技术(深圳)有限公司 | Method, device and storage medium for updating driving state monitoring model |
CN109858466A (en) * | 2019-03-01 | 2019-06-07 | 北京视甄智能科技有限公司 | Face key point detection method and device based on convolutional neural networks |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106909879A (en) | A kind of method for detecting fatigue driving and system | |
CN109558810B (en) | Target person identification method based on part segmentation and fusion | |
CN103632136B (en) | Human-eye positioning method and device | |
CN108288033B (en) | Safety helmet detection method fusing multiple features based on random ferns | |
CN105512640B (en) | Pedestrian flow statistics method based on video sequences | |
CN103632132B (en) | Face detection and recognition method based on skin color segmentation and template matching | |
Shi et al. | Real-time traffic light detection with adaptive background suppression filter | |
Yang et al. | Sieving regression forest votes for facial feature detection in the wild | |
CN108256459A (en) | Automatic face detection and face database construction algorithm for security gates based on multi-camera fusion | |
CN108197587A (en) | Method for multi-modal face recognition through face depth prediction | |
CN105894701B (en) | Recognition and alarm method for large construction vehicles endangering power transmission lines | |
CN103902962B (en) | Occlusion- and illumination-adaptive face recognition method and device | |
CN105868689A (en) | Face occlusion detection method based on cascaded convolutional neural networks | |
CN107133569A (en) | Multi-granularity labeling method for surveillance video based on large-scale multi-label learning | |
CN103093215A (en) | Eye location method and device | |
CN104091176A (en) | Technique for applying person and portrait comparison to videos | |
CN108537143B (en) | Face recognition method and system based on key-region feature comparison | |
CN106599785A (en) | Method and device for building a human-body 3D feature identity information database | |
CN108960047A (en) | Face deduplication method in video surveillance based on a deep secondary tree | |
CN108256462A (en) | People counting method for mall surveillance video | |
CN103886305A (en) | Specific face search method for grassroots policing, stability maintenance and counter-terrorism | |
CN107992854A (en) | Forest ecology human-computer interaction method based on machine vision | |
CN106203338B (en) | Fast human eye state recognition method based on grid region segmentation and adaptive thresholding | |
CN109697727A (en) | Target tracking method, system and storage medium based on correlation filtering and metric learning | |
CN107862298A (en) | Liveness detection method based on blinking under infrared light | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication |
Application publication date: 20170630 |