Summary of the invention
The fatigue-driving detection methods of the prior art are generally based on a universal model that is independent of the individual, so their accuracy varies considerably between drivers. To address this technical problem, the present invention provides a fatigue-driving detection method based on tracking the driver's driving state.
To achieve the above object, the present invention adopts the following technical solution:
A fatigue-driving detection method, comprising the following steps:
Driver identity authentication: according to the video images of the driver over the current period, authenticating, by face recognition, the identity of the driver currently operating the vehicle;
Driving-time accumulation: according to the driver's identity authentication result, obtaining the driver's driving-time record;
Emotional-state extraction: according to the video images of the driver over the current period, extracting the driver's emotional state over the current period;
Limb-state extraction: according to the video images of the driver over the current period, extracting the driver's limb state over the current period;
Road-information collection: according to the video images captured by a scene camera over the current period, obtaining environment information around the driver's vehicle;
Vehicle-handling-state extraction: combining the road information with the running state of the vehicle, extracting the driver's handling state of the vehicle over the current period;
Fatigue-driving detection: using the driving time accumulated up to the current period as the timestamp of the driver's emotional state, limb state and vehicle-handling state, updating the data that track changes in the driver's driving state, analyzing the state changes of the driver in the updated data, and judging whether the driver is currently driving while fatigued.
In the fatigue-driving detection method provided by the invention, face recognition is used to track how the driver's facial expression and limb actions change over time from a normal state to a fatigued state, for example the eye-closure time lengthening and the frequency of head-nodding increasing; and to track whether the driver's handling of the vehicle declines over time, for example the variance of steering-wheel rotation becoming larger, thereby detecting whether the driver is in a state of fatigued driving. The method avoids presetting a fixed threshold, so its accuracy is not affected by individual differences between drivers.
Further, the driver identity authentication comprises the following steps:
according to the video images of the driver over the current period, extracting the key points of the driver's face region in the start/end-frame or middle-frame video image;
normalizing the size of the face region according to the relative positions of the key points;
extracting, by a convolutional neural network, face features from the size-normalized face region;
calculating, with a probabilistic linear discriminant analysis (PLDA) model, the similarity between the currently extracted face features and the face features of registered drivers; if the similarity is greater than or equal to 0.9, the identity of the driver in the current period is that registered driver; if the similarity is less than 0.9, the driver of the current period needs to be registered, with the face region detected in the middle frame of the current period serving as the identity mark of this driver.
Further, the driving-time accumulation comprises the following steps:
if the interval between the time this driver was last detected driving the vehicle and the current time is greater than or equal to 4 hours, replacing the time of last detection with the current time, resetting the accumulated driving time at the last detection to 120 seconds, and replacing the face region recorded at driver login with the face region detected in the middle frame of the current period;
if the interval between the time this driver was last detected driving the vehicle and the current time is less than 4 hours, replacing the time of last detection with the current time and adding 120 seconds to the accumulated driving time at the last detection.
Further, the emotional-state extraction comprises the following steps:
according to the video images of the driver over the current period, extracting the eye key points and mouth key points of the driver in every video frame;
calculating the shape parameter of the eyelid contour according to the relative positions of the eye key points in every frame;
from the eyelid-contour shape parameters of every frame in the current period, calculating the shape distribution of the driver's eyelid contour over the current period;
judging the open/closed state of the mouth according to the relative positions of the mouth key points in every frame;
counting, from the mouth open/closed state sequence, the number of times the driver yawns in the current period.
Further, the limb-state extraction comprises the following steps:
according to the video images of the driver over the current period, extracting the key points of the driver's face region in every video frame;
comparing the positions of the face key points in every frame with those recorded at driver login, and calculating the driver's head rotation angle relative to registration;
from the per-frame head rotation angles, calculating the time proportions of the driver's head pitch attitudes in the current period.
Further, the road-information collection comprises the following steps:
according to the video images captured by the scene camera over the current period, obtaining the lane-boundary information around the driver's vehicle;
according to the video images captured by the scene camera over the current period, obtaining the vehicle-distribution information around the driver's vehicle;
according to the video images captured by the scene camera over the current period, obtaining the pedestrian-distribution information around the driver's vehicle.
Further, the vehicle-handling-state extraction comprises the following steps:
calculating the mean and standard deviation, over the current period, of the rotation angle and the rotation speed of the steering wheel of the vehicle;
calculating the mean and standard deviation, over the current period, of the rotation angle and the rotation speed of the accelerator pedal of the vehicle;
combining the road information with the running-state parameters of the steering wheel and accelerator pedal to form a 15-dimensional feature vector, inputting this 15-dimensional feature vector into a neural-network regression model, and calculating a score of the driver's vehicle-handling state over the current period.
Further, the fatigue-driving detection comprises the following steps:
using the driving time accumulated up to the current period as the timestamp of the driver's emotional state, limb state and vehicle-handling state, and updating the driving-state value at the currently accumulated driving time into the historical data that track changes in the driver's driving state;
extracting 5 driving-state subsequences from the updated driving-state tracking data; for each subsequence, intercepting from the tracking data the segment corresponding to its time span, extracting the data at 3 equally spaced time points from the intercepted segment, and inserting them into the corresponding subsequence, reconstructing after combination a new driving-state subsequence;
running a time-series model to parse each new driving-state subsequence and judge whether the driver's state is declining within it;
combining the judgment results of the 5 new driving-state subsequences, calculating the fatigue index of the driver at the current time, and judging whether the driver is currently driving while fatigued.
The present invention also provides a fatigue-driving detection system, comprising:
a driver identity authentication unit, adapted to authenticate, by face recognition and according to the video images of the driver over the current period, the identity of the driver currently operating the vehicle;
a driving-time accumulation unit, adapted to obtain the driver's driving-time record according to the driver's identity authentication result;
an emotional-state extraction unit, adapted to extract, according to the video images of the driver over the current period, the driver's emotional state over the current period;
a limb-state extraction unit, adapted to extract, according to the video images of the driver over the current period, the driver's limb state over the current period;
a road-information collection unit, adapted to obtain, according to the video images captured by a scene camera over the current period, environment information around the driver's vehicle;
a vehicle-handling-state extraction unit, adapted to extract, by combining the road information with the running state of the vehicle, the driver's handling state of the vehicle over the current period;
a fatigue-driving detection unit, adapted to use the driving time accumulated up to the current period as the timestamp of the driver's emotional state, limb state and vehicle-handling state, update the data that track changes in the driver's driving state, analyze the state changes of the driver in the updated data, and judge whether the driver is currently driving while fatigued.
In the fatigue-driving detection system provided by the invention, face recognition is used to track how the driver's facial expression and limb actions change over time from a normal state to a fatigued state, for example the eye-closure time lengthening and the frequency of head-nodding increasing; and to track whether the driver's handling of the vehicle declines over time, for example the variance of steering-wheel rotation becoming larger, thereby detecting whether the driver is in a state of fatigued driving. The system avoids presetting a fixed threshold, so its accuracy is not affected by individual differences between drivers.
Embodiment
To make the technical means, creative features, objects and effects of the present invention easy to understand, the invention is further described below with reference to the specific drawings.
As shown in Fig. 1, a fatigue-driving detection method comprises the following steps:
driver identity authentication 1: authenticating, by face recognition and according to the video images of the driver over the current period, the identity of the driver currently operating the vehicle;
driving-time accumulation 2: obtaining the driver's driving-time record according to the driver's identity authentication result;
emotional-state extraction 3: extracting, according to the video images of the driver over the current period, the driver's emotional state over the current period;
limb-state extraction 4: extracting, according to the video images of the driver over the current period, the driver's limb state over the current period;
road-information collection 5: obtaining, according to the video images captured by the scene camera over the current period, environment information around the driver's vehicle;
vehicle-handling-state extraction 6: extracting, by combining the road information with the running state of the vehicle, the driver's handling state of the vehicle over the current period;
fatigue-driving detection 7: using the driving time accumulated up to the current period as the timestamp of the driver's emotional state, limb state and vehicle-handling state, updating the data that track changes in the driver's driving state, analyzing the state changes of the driver in the updated data, and judging whether the driver is currently driving while fatigued.
In the fatigue-driving detection method provided by the invention, face recognition is used to track how the driver's facial expression and limb actions change over time from a normal state to a fatigued state, for example the eye-closure time lengthening and the frequency of head-nodding increasing; and to track whether the driver's handling of the vehicle declines over time, for example the variance of steering-wheel rotation becoming larger, thereby detecting whether the driver is in a state of fatigued driving. The method avoids presetting a fixed threshold, so its accuracy is not affected by individual differences between drivers.
As a specific embodiment, the driver identity authentication comprises the following steps:
Step 10: according to the video images of the driver over the current period, extracting the key points of the driver's face region in the start/end-frame or middle-frame video image. Specifically, in the start/end-frame or middle-frame image of the driver, the Viola-Jones algorithm is used to detect the rectangular region containing the driver's face, and the supervised descent method (SDM) is used within the detected face rectangle to extract 31 face key points. These 31 key points are mainly distributed over the eyes, nose and lips of the face; specifically: 5 key points lie on the nose bridge and nose wings, referred to as the key points of the nose region; 6 key points lie on the upper and lower eyelids of the right eye, referred to as the key points of the right-eye region; 6 key points lie on the upper and lower eyelids of the left eye, referred to as the key points of the left-eye region; 12 key points lie on the outer contour of the mouth, 1 key point lies at the center of the lower edge of the upper lip, and 1 key point lies at the center of the upper edge of the lower lip; these 14 key points are referred to as the key points of the mouth region. The Viola-Jones algorithm and the supervised descent method are well-known technical means in the art and are not described further here.
Step 11: normalizing the size of the face region according to the relative positions of the key points. Specifically, an affine transformation is applied to the 31 key points so that the rectangular region containing the face is mapped into a template of size 128 × 128. The affine transformation is a well-known technical means in the art and is not described further here.
Step 12: extracting, by a convolutional neural network, face features from the size-normalized face region. Specifically, the convolution kernels of the network are applied to the pixel values in the face region: a series of different convolution kernels are convolved with the region to obtain a series of convolution values, which serve as the face feature vector. The convolutional neural network may adopt technical means well known in the art; preferably, the present invention adopts a convolutional neural network composed of 1 input layer, 5 hidden layers and 1 output layer, extracting a face feature vector of dimension 1024.
Step 13: calculating, with a probabilistic linear discriminant analysis (PLDA) model, the similarity between the face features extracted from the start/end frame or middle frame of the current period and the face features of registered drivers. If a registered driver exists whose face features have a similarity (the output value of the PLDA) of at least 0.9 with the extracted features, the identity of the driver in the current period is that registered driver. Otherwise, if no registered driver reaches a similarity of 0.9, i.e. all similarities are less than 0.9, the driver of the current period needs to be registered, with the face region detected in the middle frame of the current period serving as the identity mark of this driver.
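The decision rule of step 13 can be sketched as follows. This is an illustrative sketch only: the function names are hypothetical, and a cosine similarity is substituted for the PLDA output, since both yield a score to which the 0.9 acceptance threshold of the embodiment can be applied.

```python
import math

SIM_THRESHOLD = 0.9  # acceptance threshold from the embodiment

def cosine_similarity(a, b):
    # Stand-in for the PLDA score; any similarity in [0, 1] fits the decision rule.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def authenticate(probe_feature, registry):
    """Return the id of the best-matching registered driver, or None to
    signal that the current driver must be newly registered."""
    best_id, best_sim = None, 0.0
    for driver_id, feature in registry.items():
        sim = cosine_similarity(probe_feature, feature)
        if sim > best_sim:
            best_id, best_sim = driver_id, sim
    return best_id if best_sim >= SIM_THRESHOLD else None
```

When `authenticate` returns None, the caller registers the driver using the face region of the middle frame, as step 13 specifies.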
As a specific embodiment, the driving-time record of the driver is generated at driver login and continuously updated while driving. Specifically, the driving-time record comprises the time this driver was last detected driving the vehicle, and the driving time accumulated up to the last detection. This record thus captures, to some extent, the duration of "continuous driving", and serves as the timestamp for tracking changes in the driver's driving state. In the present invention, however, "continuous driving" is not literally uninterrupted driving: as long as interruptions of driving are shorter than 4 hours, the driving is considered continuous. Accordingly, as one embodiment, when the driving-time record is updated, the driving-time accumulation comprises the following steps:
if the interval between the time this driver was last detected driving the vehicle and the current time is greater than or equal to 4 hours, replacing the time of last detection with the current time, resetting the accumulated driving time at the last detection to 120 seconds, and replacing the face region recorded at driver login with the face region detected in the middle frame of the current period;
if the interval between the time this driver was last detected driving the vehicle and the current time is less than 4 hours, replacing the time of last detection with the current time and adding 120 seconds to the accumulated driving time at the last detection.
In the driving-time accumulation step, the accumulated driving time is incremented by, or reset to, 120 seconds because in the detection method of the present application the driver identity authentication step identifies the driver currently operating the vehicle once every 120 seconds. Of course, on the basis of this interval, those skilled in the art may make corresponding conversions and settings according to the actual detection scheme.
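The two update rules above can be sketched as follows; the record layout and function name are illustrative assumptions, while the 120-second increment and 4-hour gap are taken directly from the embodiment.

```python
RECOGNITION_INTERVAL_S = 120          # identity check runs every 120 seconds
CONTINUOUS_DRIVING_GAP_S = 4 * 3600   # gaps under 4 hours count as continuous driving

def update_driving_record(record, now, current_face):
    """record: dict with 'last_seen' (epoch seconds), 'accumulated' (seconds),
    and 'face' (face region stored at login). Mutates and returns the record."""
    if now - record["last_seen"] >= CONTINUOUS_DRIVING_GAP_S:
        # Long break: restart the continuous-driving clock and refresh the login face.
        record["accumulated"] = RECOGNITION_INTERVAL_S
        record["face"] = current_face
    else:
        # Still continuous driving: extend by one recognition interval.
        record["accumulated"] += RECOGNITION_INTERVAL_S
    record["last_seen"] = now
    return record
```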
As a specific embodiment, the emotional-state extraction comprises the following steps:
Step 30: according to the video images of the driver over the current period, extracting the eye key points and mouth key points of the driver in every video frame. Specifically, in every frame of the driver's video, the same algorithms as in step 10 are adopted: the Viola-Jones algorithm detects the driver's face region, and the supervised descent method then detects the 31 key points, which are mainly distributed over the eyes, nose and lips of the face. The Viola-Jones algorithm and the supervised descent method are well-known technical means and are not described further.
Step 31: calculating the shape parameter of the eyelid contour according to the relative positions of the eye key points in every frame. Specifically, an ellipse is fitted by the least-squares method to the key points on the upper and lower eyelids in every frame, as the contour describing the eyelids; the ratio of the vertical-axis radius to the horizontal-axis radius of the elliptical contour is calculated as the shape parameter describing the driver's eyelid contour. This eyelid-contour shape parameter reflects the change of the driver's eyes from open to closed, and therefore serves as a feature for judging whether the driver is awake, asleep, or falling asleep. When the driver is driving alertly the eyes are fully open, and when the driver falls asleep from fatigue the eyes are fully closed; hence, compared with closed eyes, open eyes have a larger vertical-to-horizontal axis ratio.
Step 32: from the eyelid-contour shape parameters of every frame in the current period, calculating the shape distribution of the driver's eyelid contour over the current period. Specifically, the value range [0, 0.6] of the eyelid-contour shape parameter is divided into four sub-ranges: [0, 0.15], [0.15, 0.3], [0.3, 0.45], [0.45, 0.6]; each value is discretized by the sub-range it falls into, corresponding respectively to four eyelid-contour shapes. For each of the four shapes, the ratio between the number of video frames exhibiting that shape in the current period and the total number of frames in the period is calculated.
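The sub-range discretization of step 32 can be sketched as follows. This is an illustrative sketch: the embodiment does not specify how boundary values between sub-ranges are assigned, so values on a shared boundary are assigned here to the higher sub-range, with the top endpoint 0.6 included in the last one.

```python
EYELID_BINS = [(0.0, 0.15), (0.15, 0.3), (0.3, 0.45), (0.45, 0.6)]

def eyelid_shape_distribution(shape_params):
    """Fraction of frames whose eyelid shape parameter falls in each sub-range."""
    counts = [0] * len(EYELID_BINS)
    for p in shape_params:
        for i, (lo, hi) in enumerate(EYELID_BINS):
            # half-open bins [lo, hi), with 0.6 folded into the last bin
            if lo <= p < hi or (i == len(EYELID_BINS) - 1 and p == hi):
                counts[i] += 1
                break
    total = len(shape_params)
    return [c / total for c in counts] if total else counts
```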
Step 33: judging the open/closed state of the mouth according to the relative positions of the mouth key points in every frame. Specifically, an ellipse is fitted to the key points of the outer lip contour in every frame, as the shape describing the lip contour; the ratio of the vertical-axis radius to the horizontal-axis radius of the ellipse is then calculated as the metric describing the open/closed state of the driver's mouth. Compared with a closed mouth, an open mouth has a larger vertical-to-horizontal axis ratio.
Step 34: counting, from the mouth open/closed state sequence, the number of times the driver yawns in the current period. Specifically, a hidden Markov time-series model is used to parse the mouth open/closed metric in every frame and its change between adjacent frames, thereby detecting, within the open/closed state sequence, the subsequences in which a yawn occurs; the number of yawn subsequences is then counted as the number of yawns in the current period. The hidden Markov time-series model is a well-known technical means and is not described further.
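The yawn counting of step 34 can be illustrated with a simplified stand-in that replaces the hidden Markov parsing with a minimum run length of consecutive "mouth open" frames; both the substitution and the 8-frame default are assumptions made for illustration only.

```python
def count_yawns(open_flags, min_open_frames=8):
    """Count maximal runs of consecutive truthy 'mouth open' frames lasting at
    least min_open_frames; a crude stand-in for HMM-based yawn detection."""
    yawns = 0
    run = 0
    for is_open in open_flags:
        if is_open:
            run += 1
        else:
            if run >= min_open_frames:
                yawns += 1
            run = 0
    if run >= min_open_frames:  # close a run that reaches the end of the period
        yawns += 1
    return yawns
```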
Thus, from the video images of the driver over the current period, the extraction of the driver's emotional state over the current period is completed; the emotional state comprises the eye open/closed state and the mouth open/closed state.
As a specific embodiment, the limb-state extraction comprises the following steps:
Step 40: according to the video images of the driver over the current period, extracting the key points of the driver's face region in every video frame. Specifically, the same algorithms as in step 10 are adopted to detect the rectangular face region of the driver in every frame, and the 31 key points within the face region.
Step 41: comparing the positions of the face key points in every frame with those recorded at driver login, and calculating the driver's head rotation angle relative to registration. Specifically, the POSIT algorithm compares the 31 face key points in every frame with the relative positions of the face-region key points recorded at driver login, and calculates the projection parameters between them, i.e. the rotation angles in three-dimensional space, comprising the pitch angle, yaw angle and roll angle. Relative to the upright-posture video image recorded at driver login, the pitch angle is the angle formed by rotating upward or downward about the horizontal axis of the image plane, the yaw angle is the angle formed by rotating about the vertical axis of the image plane, and the roll angle is the angle formed by rotation within the image plane. The POSIT algorithm is a well-known technical means and is not described further.
Step 42: from the per-frame head rotation angles, calculating the time proportions of the driver's head pitch attitudes in the current period. Specifically, the pitch angle of the head is taken to describe the change of the driver's head pitch attitude. The value range [-180°, 180°] of this metric is divided into 5 sub-ranges: [-180°, -45°], [-45°, -15°], [-15°, 15°], [15°, 45°], [45°, 180°]; each value is discretized by the sub-range it falls into, corresponding respectively to five head-pitch attitudes. For each of the five attitudes, the ratio between the number of video frames exhibiting that attitude in the current period and the total number of frames in the period is calculated, thereby obtaining the time proportions of the driver's head pitch attitudes in the current period.
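The discretization of step 42 mirrors that of step 32, with five angular sub-ranges. As before, boundary handling is an assumption: values on a shared boundary go to the higher sub-range, with 180° folded into the last one.

```python
PITCH_BINS = [(-180, -45), (-45, -15), (-15, 15), (15, 45), (45, 180)]

def pitch_time_proportions(pitch_angles_deg):
    """Fraction of frames whose head pitch angle falls in each sub-range."""
    counts = [0] * len(PITCH_BINS)
    for a in pitch_angles_deg:
        for i, (lo, hi) in enumerate(PITCH_BINS):
            if lo <= a < hi or (i == len(PITCH_BINS) - 1 and a == hi):
                counts[i] += 1
                break
    n = len(pitch_angles_deg)
    return [c / n for c in counts] if n else counts
```

A sustained rise in the proportion of large downward pitch angles corresponds to the increased head-nodding frequency mentioned in the summary.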
As a specific embodiment, the road-information collection comprises the following steps:
Step 50: according to the video images captured by the scene camera over the current period, obtaining the lane-boundary information around the driver's vehicle. Specifically, for each frame from the scene camera, edge points are extracted by the Canny operator, and a quadratic polynomial is then fitted to the edge points by the Hough transform as the lane-boundary equation.
The quadratic coefficient of the lane-boundary equation serves as the metric describing lane curvature.
The linear coefficient of the lane-boundary equation serves as the metric describing lane direction.
The constant term of the lane-boundary equation serves as the metric describing the vehicle's departure from the lane boundary (lane departure).
The Canny operator and the Hough transform are well-known technical means and are not described further.
Step 51: according to the video images captured by the scene camera over the current period, obtaining the vehicle-distribution information around the driver's vehicle. Specifically, for each frame captured by the scene camera, a vehicle-detection algorithm based on Haar-wavelet features and an SVM detection framework detects the vehicles in the image. Based on the imaging principle that near objects appear large and far objects small, the present invention takes the size of the rectangular box (vehicle detection box) enclosing a detected vehicle in the image as the metric describing the distance between that vehicle and the vehicle currently driven by the driver. Among the vehicles detected in each frame, the 3 with the largest detection boxes (i.e. the shortest distances) are selected, and the vehicle density around the driver's vehicle at the moment of that frame is calculated by the following formula (1).
If no vehicle is detected in the frame, the vehicle density is 0. The Haar wavelet and the SVM are well-known technical means and are not described further.
Step 52: according to the video images captured by the scene camera over the current period, obtaining the pedestrian-distribution information around the driver's vehicle. Specifically, for each frame captured by the scene camera, the same detection algorithm based on Haar-wavelet features and an SVM detection framework detects the pedestrians in the image. Based on the imaging principle that near objects appear large and far objects small, the present invention takes the size of the rectangular box (pedestrian detection box) enclosing a detected pedestrian, i.e. the area of the rectangle, as the metric describing the distance between that pedestrian and the vehicle currently driven by the driver. Among the pedestrians detected in each frame, the 3 with the largest detection boxes (i.e. the shortest distances) are selected, and the pedestrian density around the driver's vehicle at the moment of that frame is calculated by the following formula (2).
If no pedestrian is detected in the frame, the pedestrian density is 0.
As a specific embodiment, the vehicle-handling-state extraction comprises the following steps:
Step 60: calculating the mean and standard deviation, over the current period, of the rotation angle and the rotation speed of the steering wheel. Specifically, with each sampling time point of the current period on the horizontal axis and the steering-wheel rotation angle at each sampling point on the vertical axis, the waveform of the steering-wheel rotation angle over time within the current period is obtained. A zero-crossing algorithm extracts from this waveform the crests and troughs corresponding to rapid changes of the rotation angle (e.g. amplitude > 30°, width < 10 s), and the number of crests and troughs in the current period is counted. The zero-crossing algorithm is a well-known technical means and is not described further.
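The crest/trough extraction of step 60 can be sketched as follows: the waveform is split at zero crossings, and each segment counts if its extremum exceeds the amplitude threshold within the width limit. This is a simplified reading of the zero-crossing algorithm; the exact segmentation of the embodiment may differ.

```python
def count_sharp_extrema(samples, amp_deg=30.0, max_width_s=10.0):
    """samples: time-ordered list of (t, angle). Between consecutive zero
    crossings of the angle, count the segment's extremum if its magnitude
    exceeds amp_deg and the segment is shorter than max_width_s (i.e. a
    rapid steering correction)."""
    count = 0
    seg = []
    prev_sign = 0
    for t, a in samples:
        sign = (a > 0) - (a < 0)
        if prev_sign and sign and sign != prev_sign:
            # Zero crossing: evaluate the segment just finished.
            peak = max(abs(x) for _, x in seg)
            width = seg[-1][0] - seg[0][0]
            if peak > amp_deg and width < max_width_s:
                count += 1
            seg = []
        seg.append((t, a))
        if sign:
            prev_sign = sign
    if seg:  # evaluate the trailing segment
        peak = max(abs(x) for _, x in seg)
        width = seg[-1][0] - seg[0][0]
        if peak > amp_deg and width < max_width_s:
            count += 1
    return count
```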
Step 61: calculating the mean and standard deviation, over the current period, of the rotation angle and the rotation speed of the accelerator pedal. Specifically, the mean and standard deviation of the accelerator-pedal rotation angle over the current period, and the mean and standard deviation of its rotation speed over the current period, are calculated in the same way as the steering-wheel statistics of step 60, and are not described further here.
Step 62: combining the road information with the running-state parameters of the steering wheel and accelerator pedal to form a 15-dimensional feature vector, inputting this 15-dimensional feature vector into a neural-network regression model, and calculating a score of the driver's vehicle-handling state over the current period. Specifically, the "lane curvature, lane direction and lane departure" obtained in step 50, the "vehicle density" obtained in step 51, the "pedestrian density" obtained in step 52, the "mean and standard deviation of steering-wheel rotation angle, mean and standard deviation of steering-wheel rotation speed, and number of crests and troughs of the rotation angle" obtained in step 60, and the "mean and standard deviation of accelerator-pedal rotation angle, and mean and standard deviation of accelerator-pedal rotation speed" obtained in step 61, a total of 15 driving-state parameters, form the 15-dimensional feature vector. A neural network serves as the regression model predicting the driver's vehicle-handling state: the 15-dimensional feature vector is its input, and the output is a score in the range 0 to 1 describing the driver's handling of the vehicle, where a larger output value indicates a better handling state. The neural network is a well-known technical means and is not described further.
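The assembly of the feature vector of step 62 can be sketched as follows. Note an assumption made here: the parameters listed in the embodiment total 15 only if the crest count and trough count of step 60 are taken as two separate dimensions, which the text leaves ambiguous; all names are illustrative.

```python
def build_feature_vector(road, vehicle_density, pedestrian_density, wheel, pedal):
    """Concatenate the driving-state parameters into the 15-dimensional input
    of the handling-score regression model.
    road:  (curvature, direction, lane_departure)           -- 3 dims, step 50
    vehicle_density, pedestrian_density                      -- 2 dims, steps 51-52
    wheel: (angle_mean, angle_std, speed_mean, speed_std,
            crest_count, trough_count)                       -- 6 dims, step 60
    pedal: (angle_mean, angle_std, speed_mean, speed_std)    -- 4 dims, step 61"""
    vec = [*road, vehicle_density, pedestrian_density, *wheel, *pedal]
    assert len(vec) == 15, "the embodiment specifies a 15-dimensional vector"
    return vec
```

The resulting list would be fed to the regression network, whose 0-to-1 output is the handling-state score of the current period.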
As a specific embodiment, the fatigue-driving detection comprises the following steps:
Step 70: using the driving time accumulated up to the current period as the timestamp of the driver's emotional state, limb state and vehicle-handling state, and updating the driving-state value at the currently accumulated driving time into the historical data that track changes in the driver's driving state. Specifically, the eyelid-contour shape parameter, mouth open/closed state value, head pitch-attitude value and vehicle-handling state value of the current period are taken as the driving-state value at the currently accumulated driving time and added to the historical data that track changes in the driver's driving state, thereby updating the historical data.
Step 71: extracting 5 driving-state subsequences from the updated driving-state tracking data; for each subsequence, intercepting from the tracking data the segment corresponding to its time span, extracting the data at 3 equally spaced time points from the intercepted segment, and inserting them into the corresponding subsequence, reconstructing after combination a new driving-state subsequence. Specifically, the data at the first five time points of the driving-state data are taken as the starting points of the five subsequences, and the data at the last five time points as their end points. For each subsequence, with its start and end points as endpoints, the segment of the corresponding time span is intercepted from the driving-state data; from each intercepted segment, the data at 3 equally spaced time points are extracted and inserted, in order, between the start-point and end-point data of the subsequence, combining with them to form a new subsequence composed of 5 time points. Because the data forming each new 5-point subsequence are drawn from across the whole span, the authenticity and accuracy of the subsequent judgment are improved.
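The subsequence reconstruction of step 71 can be sketched as follows; deterministic equal spacing of the three interior points is assumed, and the function and variable names are illustrative.

```python
def reconstruct_subsequences(series):
    """series: time-ordered list of (t, value) driving-state samples.
    Builds the five 5-point subsequences of step 71: subsequence i starts at
    the i-th sample and ends at the i-th-from-last sample, filled with the 3
    equally spaced interior samples of that span."""
    n = len(series)
    subs = []
    for i in range(5):
        start, end = i, n - 5 + i
        span = end - start
        # interior samples at 1/4, 1/2 and 3/4 of the span (integer indices)
        interior = [series[start + span * k // 4] for k in (1, 2, 3)]
        subs.append([series[start], *interior, series[end]])
    return subs
```

Each 5-point subsequence would then be parsed by the time-series model of step 72 to test for state decline.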
Step 72: Run a time-series model to parse each new driving-state subsequence and judge whether the driver's state declines within it. Specifically, for each new subsequence, a Hidden Markov time-series model parses the driving state at each time point and the difference of the driving state between adjacent time points, and thereby judges whether the driver's state is declining over the subsequence. If so, the subsequence is called a "state-decline subsequence", and the confidence of that judgment (the state-decline confidence) is obtained from the Hidden Markov model as a metric of the degree of state decline in the subsequence.
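Step 72 relies on a Hidden Markov time-series model, which is not specified in this excerpt. The sketch below therefore substitutes a much simpler decline test, scoring the fraction of negative adjacent-point differences as a confidence-like value; it illustrates the judgment-plus-confidence interface of the step, not the HMM itself.

```python
def detect_decline(subsequence):
    """Judge whether a five-point state subsequence is declining.

    Simplified stand-in for the Hidden Markov scoring of Step 72: the
    fraction of adjacent-point differences that are negative serves both
    as the basis of the decline judgment and as a confidence-like score.
    """
    diffs = [b - a for a, b in zip(subsequence, subsequence[1:])]
    falling = sum(1 for d in diffs if d < 0)
    confidence = falling / len(diffs)
    is_decline = confidence > 0.5
    return is_decline, confidence

is_decline, conf = detect_decline([0.9, 0.8, 0.8, 0.6, 0.5])
# three of the four adjacent differences are negative -> decline
```

A subsequence flagged here corresponds to a "state-decline subsequence", and the returned score plays the role of the state-decline confidence.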
Step 73: Combine the judgments over the five new driving-state subsequences to calculate the driver's fatigue index at the current time, and thereby judge whether the driver is currently in fatigue driving. Specifically, the fatigue index of the driver at the current time is calculated from the number of "state-decline subsequences" and their confidences by the following formula (3).
If the fatigue index is greater than 3, the driver at the current time is judged to be in a fatigue state.
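Formula (3) is not reproduced in this excerpt, so the combination below is purely hypothetical: as one illustration of combining the count and confidences of "state-decline subsequences", it sums the confidences of the subsequences judged to be declining and applies the threshold of 3 from the text.

```python
def fatigue_index(decline_flags, confidences):
    """Hypothetical stand-in for formula (3), which this excerpt omits.

    Sums the state-decline confidences of the subsequences that were
    judged to be 'state-decline subsequences'.
    """
    return sum(c for flag, c in zip(decline_flags, confidences) if flag)

index = fatigue_index([True, True, True, True, False],
                      [0.9, 0.8, 0.75, 0.7, 0.2])
is_fatigued = index > 3  # threshold stated in the patent text
```

With four of five subsequences declining at high confidence, the illustrative index exceeds 3 and the driver would be judged fatigued.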
The present invention also provides a fatigue driving detection system, the system comprising:
a driver identity authentication unit, adapted to authenticate, by face recognition from the video image of the driver in the present period, the identity of the driver currently driving the vehicle;
a driving time accumulation unit, adapted to obtain the driver's driving time record according to the driver's identity authentication result;
an emotional state extraction unit, adapted to extract the driver's emotional state in the present period from the video image of the driver in the present period;
a limb state extraction unit, adapted to extract the driver's limb state in the present period from the video image of the driver in the present period;
a road condition information collection unit, adapted to obtain environmental information around the driver's vehicle from the video images captured by a scene camera in the present period;
a vehicle handling state extraction unit, adapted to extract the driver's handling state of the vehicle in the present period by combining the road condition information with the driving condition of the vehicle;
a fatigue driving detection unit, adapted to use the accumulated driving time of the present period as the timestamp of the driver's emotional state, limb state and vehicle handling state, update the data tracking changes in the driver's working state, analyze the driver's state changes in the updated data, and judge whether the driver is currently in fatigue driving.
In the fatigue driving detection system provided by the invention, face recognition is used to track how the driver's facial expression and limb actions change over time from a normal state toward a fatigue state, for example the eye-closure duration lengthening and the head-nodding frequency increasing; the system also tracks whether the driver's handling of the vehicle declines over time, for example the variance of steering-wheel rotation growing larger, and thereby detects whether the driver is in a fatigue driving state. The fatigue driving detection system provided by the invention avoids presetting a fixed threshold, so its accuracy is not affected by individual differences between drivers.
As a specific embodiment, the implementation of the fatigue driving detection system is similar to that of the foregoing fatigue driving detection method and is not repeated here.
The foregoing are merely embodiments of the present invention and are not thereby intended to limit the scope of the claims of the present invention; every equivalent structure made using the content of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.