CN114973212A - Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention - Google Patents


Info

Publication number
CN114973212A
CN114973212A (application number CN202210379947.2A)
Authority
CN
China
Prior art keywords
fatigue
driver
steering wheel
network model
state
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210379947.2A
Other languages
Chinese (zh)
Inventor
朱贤瑛
项颖
张文慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202210379947.2A priority Critical patent/CN114973212A/en
Publication of CN114973212A publication Critical patent/CN114973212A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition
    • G06F 18/20: Analysing
    • G06F 18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F 18/24: Classification techniques
    • G06F 18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F 18/2415: Classification based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06F 18/25: Fusion techniques
    • G06F 18/253: Fusion techniques of extracted features
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks
    • G06N 3/08: Learning methods
    • G08: SIGNALLING
    • G08B: SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00: Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/02: Alarms for ensuring the safety of persons
    • G08B 21/06: Alarms indicating a condition of sleep, e.g. anti-dozing alarms
    • G08B 7/00: Signalling systems according to more than one of groups G08B 3/00-G08B 6/00; Personal calling systems according to more than one of groups G08B 3/00-G08B 6/00
    • G08B 7/06: Such systems using electric transmission, e.g. involving audible and visible signalling through the use of sound and light sources


Abstract

The invention discloses a method for stopping fatigue driving based on visual features, steering wheel operation detection and active intervention. The method collects the driver's facial state and steering wheel rotation information, identifies the driver's eye state with an SSD convolutional network model, and estimates the driver's current fatigue state from the identified facial features together with the steering wheel rotation information. If the driver is judged to be fatigued, the automobile enters a fatigue intervention mode to reduce the risk posed by fatigue driving. The invention uses the SSD convolutional network model to extract and identify the driver's facial information, acquires steering wheel rotation information from the automobile CAN bus, and feeds both visual features and driving behavior features into the fatigue detection model as inputs; the method therefore achieves higher recognition accuracy and better resistance to environmental interference. At the same time, a dedicated automobile fatigue intervention driving mode shortens the period during which the driver drives while fatigued.

Description

Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention
Technical Field
The invention relates to the field of automatic control and intelligent cabins of automobiles, in particular to a fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention.
Background
Driver fatigue is one of the main causes of motor vehicle accidents: the probability of an accident in a fatigued state is 4-6 times that in an alert state. In daily life, underestimation of driving risk and wishful thinking make fatigue driving common; a survey completed by relevant institutions shows that nearly half of drivers have driven while fatigued. Prolonged continuous driving, insufficient or poor-quality sleep, and simple, repetitive vehicle control can all induce driver fatigue, which in turn leads to inattention, slowed reaction, erroneous operation, or even inability to control the vehicle, easily causing road traffic accidents.
In traffic accidents caused by fatigue driving, the driver's characteristic behavior differs significantly from that of normal driving. Studying detection methods around these characteristic differences and designing a fatigue driving early warning system that actively prevents fatigued driving, so that whether the driver is fatigued can be determined accurately in real time, makes travel safer. This has clear research significance and application prospects for preventing motor vehicle accidents caused by fatigue driving and protecting the life and property of the public.
At present, fatigue detection methods mainly rely on features such as physiological information, driving operation, and computer vision. Methods based on a single feature or a single class of features generally have low recognition accuracy and poor robustness in complex driving environments, and early-warning-only methods cannot effectively reduce fatigue driving behavior; they may even impose a physiological and psychological burden on the driver and fail to truly satisfy users.
Chinese patent CN201310396102.5, "Method for detecting fatigue of automobile driver based on steering wheel angle characteristics", reads steering wheel angle data during vehicle operation with a steering parameter reader; extracts and normalizes feature vectors from the angle data; establishes a driving fatigue detection model based on a support vector machine optimized by a genetic algorithm and the AdaBoost algorithm; develops driving fatigue detection software; and finally detects the driver's fatigue state in real time by feeding steering wheel angle data into that software. However, the fatigue signature in steering wheel parameters is not pronounced, the recognition accuracy is low, and steering parameters cannot serve as an independent judgment condition.
Chinese patent CN201610264968.4, "Fatigue detection method and system based on a video intelligent algorithm", collects video images of workers through fatigue detection equipment, performs fatigue detection on the collected images, judges whether the workers are on duty, measures the opening degree of the workers' eyes and mouths, and computes by weighted averaging how often those opening degrees cross a threshold within a given time; a monitoring client receives the states pushed by multiple fatigue detection devices. But this system cannot stop fatigue driving in time; it can only act as a collector that stores related information.
Disclosure of Invention
To solve the problems raised in the background section, namely the low recognition accuracy of detection methods based on single or similar features and the limited effect of early-warning-only methods, the invention provides a method for stopping fatigue driving based on visual features, steering wheel operation detection and active intervention.
In order to achieve this purpose, the technical scheme of the invention is as follows:
the invention relates to a fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention. The method comprises the steps of collecting the face state and the steering wheel rotation information of a driver, identifying the eye state of the driver by using an SSD convolutional network model, and estimating the current fatigue state of the driver through the identified face features and the steering wheel rotation information. And if the driver is judged to be in the fatigue state, the automobile enters a fatigue intervention mode so as to reduce the risk brought by fatigue driving. The invention adopts the SSD convolutional network model to realize the extraction and identification of the facial information of the driver, acquires the steering wheel rotation information through the automobile CAN bus, and adds visual characteristics and driving behavior characteristics as input factors for the fatigue detection model.
The automobile mode for fatigue driving detection and active intervention based on visual features and steering wheel operation specifically comprises the following steps:
S1, loading the data set FDDB, which is suitable for human eye state calibration, into the visual labeling tool LabelImg, segmenting, labeling and saving the human eye states in the pictures of the data set, and randomly splitting them into training set data and test set data at a ratio of 3:1;
S2, constructing an SSD neural network model according to the scale of the data set and the output requirements, training it with the training data set from S1, and further testing and inspecting the model with the test data set;
S3, acquiring a driver image through a camera on the A-pillar of the driver's side of the automobile, and inputting it into the SSD neural network model of step S2 to obtain a human eye state detection result;
S4, calculating and outputting the standard deviation of the steering wheel angle amplitude and the zero-speed percentage within a time window E, using the steering wheel angle and angular speed information on the automobile CAN bus;
S5, inputting the output of the SSD neural network model in step S3 and the output of step S4 into a D-S evidence fusion model, and expressing the result as a belief probability that serves as the basis for judging fatigue driving;
and S6, evaluating the output of the D-S evidence fusion model in step S5 to assess the driver's current state, entering the fatigue intervention mode if a definite fatigue value is obtained, and applying intervention actions of different depths according to the degree of fatigue.
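The 3:1 random split described in step S1 can be sketched as follows; the annotation file names and the sample count are placeholders for illustration, not values from the patent.

```python
import random

def split_dataset(samples, train_ratio=0.75, seed=42):
    """Randomly split annotated samples into training and test sets
    at the 3:1 ratio used in step S1."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)  # deterministic shuffle
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]

# Hypothetical annotation file names produced by LabelImg:
files = [f"fddb_eye_{i:04d}.xml" for i in range(2845)]
train, test = split_dataset(files)
```

The seed keeps the split reproducible across training runs, which matters when comparing model variants on the same test set.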
The requirements for the training and test data sets in step S1 of the above method are: the amount of data in the awake state and the fatigue state is in a 1:1 ratio, and the sets include data for special conditions such as facial occlusion and deviation from the detection field of view.
Constructing the SSD neural network model in step S2 of the above method means building multiple convolutional layers for the data set scale obtained in step S1, as follows: through transfer learning, a VGG16 convolutional network model is transferred into the network as the SSD pre-training model, with a network training input image size of 300 × 300. The pooling layer is adjusted from 2 × 2 to 3 × 3 (pooling kernel size 3 × 3, stride 1, padding 1), the two fully connected layers are replaced with a 3 × 3 convolutional layer and a 1 × 1 convolutional layer, and four convolutional layers are added to generate feature maps of different scales.
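As a quick sanity check on the layer adjustment above, the standard convolution/pooling output-size formula confirms that a 3 × 3 pooling kernel with stride 1 and padding 1 leaves the 300 × 300 feature map unchanged, whereas the original 2 × 2 stride-2 pooling halves it:

```python
def conv_out_size(n, k, s, p):
    """Output side length of a conv/pool layer:
    floor((n + 2p - k) / s) + 1."""
    return (n + 2 * p - k) // s + 1

# Original 2x2 pooling with stride 2 halves a 300-pixel side:
half = conv_out_size(300, k=2, s=2, p=0)   # 150
# The adjusted 3x3 pooling (stride 1, padding 1) preserves it:
same = conv_out_size(300, k=3, s=1, p=1)   # 300
```

Preserving the spatial size here is the usual reason for this SSD modification: it keeps a higher-resolution feature map available for the added multi-scale detection layers.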
Training the SSD neural network model in step S2 of the above method means putting the training data set from step S1 into the constructed SSD neural network model and continuously adjusting the weights and biases by gradient descent until the final fluctuation of the loss function lies within the allowed error range, at which point a suitable SSD neural network model has been generated.
Testing in step S2 of the above method means putting the test data set into the generated SSD neural network model for detection and checking the test effect. If the test error fluctuates around the training error, the test effect is good; otherwise, the structure or parameters of the SSD neural network model are adjusted further.
In step S4 of the above method, steering wheel rotation information is collected by connecting to the CAN bus through the automobile's OBD interface, grabbing a CAN message from the bus every 100 ms, and calculating within each 1 s window the zero-speed percentage (the ratio of the number of sampling points whose angular speed lies within ±1°/s to the total number of sampling points) and the standard deviation of the steering angle:
σ_θ = sqrt( (1/N) · Σ_i (θ_i - θ_mean)^2 ),   P_0 = (N_0 / N) × 100%
where θ_i is the i-th sampled steering angle, θ_mean the mean angle in the window, N the number of samples in the window, and N_0 the number of samples whose angular speed lies within ±1°/s.
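A minimal sketch of the two window statistics, assuming a 1 s window of samples taken every 100 ms; the example readings are illustrative, not data from the patent:

```python
import statistics

SAMPLE_PERIOD_S = 0.1  # CAN messages grabbed every 100 ms

def steering_features(angles_deg, rates_deg_s, zero_band=1.0):
    """Standard deviation of the steering-wheel angle and the
    zero-speed percentage (share of samples whose angular speed
    lies within +/- zero_band deg/s) over one analysis window."""
    std_angle = statistics.pstdev(angles_deg)  # population std dev
    zero = sum(1 for r in rates_deg_s if abs(r) <= zero_band)
    zero_pct = 100.0 * zero / len(rates_deg_s)
    return std_angle, zero_pct

# One 1 s window -> 10 samples at the 100 ms grab rate:
angles = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5, 0.0, 0.5]
rates = [5.0, 5.0, 0.5, -5.0, -5.0, -5.0, 0.2, 5.0, 5.0, 0.8]
std_angle, zero_pct = steering_features(angles, rates)
```

A fatigued driver tends to leave the wheel still for long stretches and then correct abruptly, so a high zero-speed percentage combined with a large angle standard deviation is the pattern this feature pair is meant to surface.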
in the D-S evidence fusion model in step S5 of the method for stopping fatigue driving based on visual characteristics, steering wheel operation detection and active intervention, the output values in step S3 and step S4 are respectively determined as sample evidence 1 (determined as fatigue state by eye state) and sample evidence 2 (determined as fatigue state by steering wheel rotation parameters), basic probability functions m1 and m2 are set for sample evidences 1 and 2 (weighted number is set), and a fused fatigue probability function is obtained by the following Dempster combination rule
m(A) = (1/K) · Σ_{B∩C=A} m1(B)·m2(C), for A ≠ ∅;  m(∅) = 0
where K is a normalization constant with values in the range (0, 1):
K = Σ_{B∩C≠∅} m1(B)·m2(C) = 1 - Σ_{B∩C=∅} m1(B)·m2(C)
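Dempster's combination rule above can be sketched for the two-element frame {fatigue, awake}; the basic probability assignments below are illustrative values, not ones given in the patent:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments.
    Focal elements are frozensets; the full frame itself represents
    'uncertain'. Returns the fused assignment and the constant K."""
    combined = {}
    conflict = 0.0
    for (b, p1), (c, p2) in product(m1.items(), m2.items()):
        inter = b & c
        if inter:  # compatible evidence: accumulate on intersection
            combined[inter] = combined.get(inter, 0.0) + p1 * p2
        else:      # contradictory evidence contributes to conflict
            conflict += p1 * p2
    K = 1.0 - conflict  # normalization constant, in (0, 1)
    return {a: v / K for a, v in combined.items()}, K

F, A = frozenset({"fatigue"}), frozenset({"awake"})
theta = F | A  # full frame = uncertain
# Illustrative weights (not from the patent):
m_eye = {F: 0.6, A: 0.2, theta: 0.2}       # eye-state evidence
m_steering = {F: 0.5, A: 0.3, theta: 0.2}  # steering-wheel evidence
fused, K = dempster_combine(m_eye, m_steering)
```

With these inputs the fused belief in fatigue rises above either single source (about 0.72), which is exactly the behavior the patent relies on: two weak, independent indications reinforce each other.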
In step S6 of the above method, when the vehicle-mounted fatigue monitoring module judges that the driver is mildly fatigued, it sends a mild fatigue intervention command on the CAN bus; after each module of the vehicle receives the command, the vehicle enters this mode: the air conditioner automatically lowers the temperature and increases the air volume to adjust the air environment in the vehicle; the instrument cluster lights its warning icon and flashes it continuously; and the multimedia system issues a voice alarm. When severe fatigue is judged, the fatigue monitoring module sends a severe fatigue intervention command on the CAN bus, and after each module of the vehicle receives it: the adaptive cruise module adaptively reduces the vehicle speed according to the current road conditions and surrounding vehicles, and the map automatically navigates to the nearest parking spot or rest area.
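The two-level intervention decision might look like the following sketch; the numeric thresholds on the fused fatigue belief are hypothetical, since the patent does not state cut-off values:

```python
from enum import Enum

class Mode(Enum):
    NORMAL = "normal driving"
    LIGHT = "mild fatigue intervention"    # A/C, warning icon, voice alarm
    SEVERE = "severe fatigue intervention"  # decelerate, navigate to rest area

# Hypothetical cut-offs on the fused fatigue belief from D-S fusion:
LIGHT_THRESHOLD = 0.6
SEVERE_THRESHOLD = 0.85

def select_mode(fatigue_belief):
    """Map the fused fatigue belief to an intervention depth."""
    if fatigue_belief >= SEVERE_THRESHOLD:
        return Mode.SEVERE
    if fatigue_belief >= LIGHT_THRESHOLD:
        return Mode.LIGHT
    return Mode.NORMAL
```

In a real vehicle the chosen mode would be broadcast as a CAN command, as the text describes, with each body module reacting to the mode it receives.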
At present, most fatigue driving detection systems rely on a single class of visual feature: effective features are extracted from images for judgment, and the driver is prompted with sound and light, which does little to actually relieve the fatigued state. To detect fatigue in time and shorten the driver's fatigued driving time, the invention therefore acquires visual features and driving operation features synchronously for fusion analysis, judges the driver's current state more accurately, and reduces the risk of fatigue driving through a hard intervention mode.
Compared with the prior art, the method for stopping fatigue driving based on visual characteristics, steering wheel operation detection and active intervention has the following advantages and beneficial effects:
1. The fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention combines visual features with steering wheel steering features to perform a fused evaluation of the fatigue state.
2. The invention performs powerful intervention on the driving operation of the driver in the fatigue state through a special automobile driving mode, effectively relieves the mental fatigue state and reduces the traffic safety risk.
3. Compared with traditional fatigue detection and early warning methods, the method does not affect the driver's normal operation, is non-intrusive, and is easy to popularize at scale.
Drawings
FIG. 1 is a flow chart of the automobile mode of the present invention for detecting fatigue driving and actively intervening based on visual features and steering wheel operation;
FIG. 2 is a diagram of an SSD network architecture;
FIG. 3 is a D-S evidence fusion graph;
fig. 4 is a vehicle-mounted electrical module structure.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments and features of the embodiments of the present application may be combined with each other without conflict.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention, however, the present invention may be practiced in other ways than those specifically described herein, and therefore the scope of the present invention is not limited by the specific embodiments disclosed below.
In this specification, a schematic representation of certain terms does not necessarily refer to the same embodiment or example. Furthermore, the particular features, steps, methods, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples.
The technical solution of the present invention is further described with reference to fig. 1 to 4 and the embodiment.
FIG. 1 is a flow chart of a method for suppressing fatigue driving based on visual characteristics, steering wheel operation detection and active intervention according to the present invention.
FIG. 2 is a diagram of the SSD network architecture. Target features in the input are extracted by the convolutional layers, and the pooling layers compress the input features to reduce computation. A softmax function serves as the activation function, constraining each element to the range 0-1 with all elements summing to 1, yielding the SSD neural network model. The model is trained continuously on the training data set, and whether its construction is qualified is judged from the training error: if qualified, the model is used on the test set; otherwise the network model is rebuilt and the SSD neural network model is retrained.
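The softmax activation mentioned above, which maps each element into (0, 1) with all elements summing to 1, can be written as a small numerically stable sketch:

```python
import math

def softmax(logits):
    """Numerically stable softmax: subtract the max logit before
    exponentiating so large inputs cannot overflow."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Three class scores, e.g. eye open / eye closed / no face:
scores = softmax([2.0, 1.0, 0.1])
```

The outputs can then be read directly as class probabilities for the eye-state detection result that feeds the D-S fusion step.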
FIG. 3 is a D-S evidence fusion diagram. The output of the SSD neural network model and the computed standard deviation and zero-speed percentage of the steering wheel information are fused by the D-S evidence rule, and the final output serves as the basis for judging fatigue driving.
Fig. 4 is a structure of an automotive electrical module.
Example 1
The fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention mainly comprises the following steps:
S1, loading the data set FDDB, which is suitable for human eye state calibration, into the visual labeling tool LabelImg, segmenting, labeling and saving the human eye states in the pictures of the data set, and randomly splitting them into training set data and test set data at a ratio of 3:1.
S2, constructing an SSD neural network model according to the scale of the data set and the output requirements, training it with the training data set from S1, and further testing and checking the model with the test data set.
S3, acquiring a driver image through a camera on the A-pillar of the driver's side of the automobile, and inputting it into the SSD neural network model of step S2 to obtain a human eye state detection result;
S4, calculating and outputting the standard deviation of the steering wheel angle amplitude and the zero-speed percentage within a time window E, using the steering wheel angle and angular speed information on the automobile CAN bus;
S5, inputting the output of the SSD neural network model in step S3 and the output of step S4 into a D-S evidence fusion model (as shown in FIG. 3), and expressing the result as a belief probability that serves as the basis for judging fatigue driving;
A D-S evidence fusion model is constructed: the outputs of step S3 and step S4 are taken as sample evidence 1 (fatigue judged from the eye state) and sample evidence 2 (fatigue judged from the steering wheel rotation parameters), basic probability assignments m1 and m2 (with weighting factors) are set for sample evidence 1 and 2, and the fused fatigue probability function is obtained by the following Dempster combination rule:
m(A) = (1/K) · Σ_{B∩C=A} m1(B)·m2(C), for A ≠ ∅;  m(∅) = 0
(fatigue is evaluated by integrating the eye-state and steering-wheel-parameter results)
where K is a normalization constant with values in the range (0, 1):
K = Σ_{B∩C≠∅} m1(B)·m2(C) = 1 - Σ_{B∩C=∅} m1(B)·m2(C)
As exemplified in Table 1:
[Table 1: example basic probability assignments m1 and m2; given only as an image in the original publication]
Calculating the normalization constant K:
[numeric calculation given only as an image in the original publication]
Calculating the fused fatigue probability function:
[numeric calculation given only as an image in the original publication]
And S6, evaluating the output of the D-S evidence fusion model in step S5 to assess the driver's current state, entering the fatigue intervention mode if a definite fatigue value is obtained, and applying intervention actions of different depths according to the degree of fatigue.
Example 2
In this embodiment, a method for suppressing fatigue driving based on visual features, detection of steering wheel operation and active intervention is provided, which specifically includes the following steps:
S1, loading the FDDB data set, which is suitable for human eye state calibration, into the visual labeling tool LabelImg, segmenting, labeling and saving the human eye states in the pictures of the data set, and randomly splitting them into training set and test set data at a ratio of 3:1.
S2: constructing an SSD neural network model according to the scale of the data set and the output requirements, training it with the training data set from S1, and further testing and checking the model with the test data set.
Training: put the training data set from S1 into the constructed SSD neural network model and continuously adjust the weights and biases by gradient descent; stop training when the ratio of the final error to the feature count reaches 1/10000 or the number of training iterations exceeds 10000. Cross-check the accuracy of the trained network using the training sample set by randomly drawing 20% of the training samples for cross-validation; the closer the training-set accuracy is to 100%, the better the theoretical classification effect. If the check fails, modify the network parameters and retrain.
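The two stopping criteria described above (a small error tolerance, or more than 10000 iterations) can be sketched with a generic gradient-descent loop; the toy quadratic loss below stands in for the network's actual loss function:

```python
def train(grad, loss, w0, lr=0.1, tol=1e-4, max_iters=10_000):
    """Gradient descent with the two stopping criteria from the text:
    loss below a tolerance, or an iteration cap."""
    w = w0
    for i in range(max_iters):
        if loss(w) < tol:
            return w, i  # converged within tolerance
        w -= lr * grad(w)
    return w, max_iters  # hit the iteration cap

# Toy stand-in for the network loss: f(w) = (w - 3)^2, minimum at w = 3.
w, iters = train(grad=lambda w: 2 * (w - 3),
                 loss=lambda w: (w - 3) ** 2,
                 w0=0.0)
```

The same skeleton applies to the SSD model, with `w` replaced by the full weight and bias tensors and the gradient supplied by backpropagation.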
Testing: put the test data set into the generated SSD network model for detection and check the test effect. If the test error fluctuates around the training error, the test effect is good; otherwise, the structure or parameters of the SSD neural network model need further adjustment, for example by increasing the number of convolutional layers or adjusting the convolution kernel size and stride.
S3, acquiring a driver image through a camera on the A-pillar of the driver's side of the automobile, and inputting it into the SSD neural network model of step S2 to obtain a human eye state detection result;
The trained SSD network model is ported into the vehicle-mounted fatigue detection module; the module drives the camera to acquire an image, applies filtering and denoising preprocessing, and inputs the image to the SSD model for eye-state recognition.
S4, calculating and outputting the standard deviation of the steering wheel angle amplitude and the zero-speed percentage within a time window E, using the steering wheel angle and angular speed information on the automobile CAN bus;
S5, inputting the output of the SSD neural network model in step S3 and the output of step S4 into a D-S evidence fusion model (as shown in FIG. 3), and expressing the result as a belief probability that serves as the basis for judging fatigue driving;
And S6, evaluating the output of the D-S evidence fusion model in step S5 to assess the driver's current state; if a definite fatigue value is obtained, the automobile enters the fatigue intervention mode and applies intervention actions of different depths according to the degree of fatigue.
Example 3
This embodiment refines step S6 of embodiment 1 as follows:
Corresponding intervention means are applied according to the judged degree of fatigue. If the D-S evidence fusion model yields an alert state, the driver's current state continues to be detected in real time. If it yields a mild fatigue state, the fatigue monitoring module sends a CAN command that puts the whole vehicle into the mild fatigue intervention mode: the air conditioner temperature is automatically lowered and the air volume increased, the instrument warning icon flashes periodically, and the multimedia system gives a voice reminder; once the driver is detected to be awake again, the mild fatigue intervention mode is exited, the air conditioner temperature and air volume return to their defaults, and the instrument and multimedia cancel the alarm. If it yields a severe fatigue state, the fatigue monitoring module sends a CAN command that puts the whole vehicle into the severe fatigue intervention mode: the adaptive cruise system automatically starts sensing the surroundings, judges the current road and vehicle conditions, slowly decelerates the vehicle, and turns on the hazard lights, while the map automatically navigates to the nearest rest area or parking spot with continuous voice guidance. The fatigue detection module runs continuously in real time, so the normal driving mode, mild fatigue intervention mode, and severe fatigue intervention mode are switched according to the current detection result; the structure of the relevant vehicle electrical modules is shown in fig. 4.
Furthermore, various embodiments or examples and features of different embodiments or examples described in this specification can be combined and combined by one skilled in the art without contradiction.
It should be noted that the above embodiments of the present invention are merely examples given to illustrate the invention clearly and are not intended to limit its embodiments. Other variations and modifications will be apparent to persons skilled in the art in light of the above description; it is neither necessary nor possible to exhaust all embodiments here. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the claims of the present invention.

Claims (10)

1. The fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention is characterized by comprising the following steps of: collecting the face state and steering wheel rotation information of a driver, identifying the eye and mouth states of the driver by using an SSD convolutional network model, and estimating the current fatigue state of the driver through the identified face features and steering wheel rotation information; and if the driver is judged to be in the fatigue state, the automobile enters a fatigue intervention mode so as to reduce the risk brought by fatigue driving.
2. The method of claim 1, wherein: the method specifically comprises the following steps:
S1, loading the FDDB data set, which is suitable for human eye state calibration, into the visual annotation tool LabelImg; segmenting, labeling and saving the human eye states in the data set images; and randomly splitting them into training data and test data at a ratio of 3:1;
S2, constructing an SSD neural network model according to the scale of the data set and the output requirements, training it with the training data from step S1, and then testing and evaluating the model with the test data;
S3, acquiring driver images through a camera on the driver-side A-pillar of the automobile and feeding them into the SSD neural network model of step S2 to obtain the eye-state detection result;
S4, using the steering wheel angle and angular velocity information on the automobile CAN bus, calculating and outputting the standard deviation of the steering wheel angle amplitude and the zero-speed percentage within a time window E;
S5, feeding the output of the SSD neural network model in step S3 and the output of step S4 into a D-S evidence fusion model, whose output is expressed as a belief probability and serves as the basis for judging driver fatigue;
S6, evaluating the output of the D-S evidence fusion model in step S5 to assess the driver's current state; if a definite fatigue result is obtained, entering the fatigue intervention mode and performing intervention actions of different depths according to the degree of fatigue.
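Steps S1–S6 above can be sketched as a processing pipeline. Every function below is a hypothetical stub standing in for the real component (the trained SSD model, the CAN reader, the D-S fusion model); only the data flow between the steps is illustrated, and the stub numbers and thresholds are assumptions.

```python
# Skeleton of the S1-S6 pipeline in claim 2; all stubs are hypothetical.

def detect_fatigue_once(frame, can_samples):
    eye_state = ssd_eye_mouth_state(frame)            # S3: SSD inference
    sa_sd, zero_pct = steering_features(can_samples)  # S4: steering metrics
    belief = ds_fuse(eye_state, (sa_sd, zero_pct))    # S5: evidence fusion
    return classify(belief)                           # S6: awake/mild/severe

# Stub implementations so the skeleton runs end to end.
def ssd_eye_mouth_state(frame):
    return {"closed": 0.7, "open": 0.3}               # fake SSD output

def steering_features(samples):
    mean = sum(samples) / len(samples)
    sd = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    zero_pct = sum(1 for s in samples if abs(s) < 1e-6) / len(samples)
    return sd, zero_pct

def ds_fuse(eye, steering):
    # Toy fusion: average eye-closure belief with the zero-speed fraction.
    return (eye["closed"] + steering[1]) / 2

def classify(belief):
    if belief < 0.4:
        return "awake"
    return "mild" if belief < 0.7 else "severe"
```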
3. The method of claim 2, wherein: the requirements on the training and test data sets in step S1 are: the amount of awake-state data and fatigue-state data is in a 1:1 ratio, and the sets include data for special working conditions such as facial occlusion and deviation from the detection field of view.
4. The method of claim 2, wherein: the construction of the SSD neural network model in step S2 consists of building a plurality of convolutional layers according to the scale of the data set obtained in step S1, specifically: through transfer learning, the VGG16 convolutional network structure is modified to serve as the SSD pre-training model: the network training input image size is 300 x 300, the pooling layer is adjusted from 2 x 2 to 3 x 3, the two fully-connected layers are replaced by a 3 x 3 convolutional layer and a 1 x 1 convolutional layer, and four convolutional layers are appended to generate feature maps at different scales.
5. The method of claim 2, wherein: in step S2, the SSD neural network model is trained by feeding the training data set from step S1 into the constructed model and continuously adjusting the weights and biases by gradient descent until the final fluctuation of the loss function lies within the allowable error range [1.1-2.6], at which point a suitable SSD neural network model has been generated.
6. The method of claim 2, wherein: the test in step S2 consists of feeding the test data set into the generated SSD neural network model and checking the detection results: if the test error fluctuates around the training error, the test result is good; otherwise, the structure or parameters of the SSD neural network model are adjusted further.
7. The method of claim 2, wherein: in step S4, the steering wheel rotation information is collected by connecting to the CAN bus through the automobile's OBD interface and capturing the CAN messages on the bus every 100 ms, and the zero-speed percentage and the standard deviation of the steering angle within 1 s are calculated as:
$$SA_{sd} = \sqrt{\frac{1}{N}\sum_{i=1}^{N}\left(SA_i - SA_m\right)^2}$$
wherein N is the number of sampling points in the sample, SA_i is the i-th steering wheel angle sample, and SA_m is the mean steering wheel angle over the window, calculated as follows:
$$SA_m = \frac{1}{N}\sum_{i=1}^{N} SA_i$$
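The two steering-angle statistics of claim 7 translate directly into code. With the 100 ms sampling period the claim specifies, a 1 s window holds N = 10 samples; the `zero_eps` threshold below (the angular rate under which a sample counts as "zero speed") is an assumed value, since the claim does not state one.

```python
import math

def steering_angle_stats(angles, rates, zero_eps=0.1):
    """Mean and standard deviation of the steering wheel angle, plus the
    zero-speed percentage, over one sampling window (claim 7: 100 ms
    sampling, so a 1 s window has N = 10 samples).
    zero_eps is an assumed deg/s threshold for 'zero' angular rate."""
    n = len(angles)
    sa_m = sum(angles) / n                            # mean angle SA_m
    sa_sd = math.sqrt(sum((a - sa_m) ** 2 for a in angles) / n)
    zero_pct = 100.0 * sum(1 for r in rates if abs(r) <= zero_eps) / len(rates)
    return sa_m, sa_sd, zero_pct
```

Note that the formula uses the population standard deviation (divide by N, not N-1), matching the equation above.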
8. The method of claim 2, wherein: in the D-S evidence fusion model of step S5, the output values of step S3 and step S4 are taken as sample evidence 1 and sample evidence 2 respectively, basic probability assignments m1 and m2 are set for them, and the fused fatigue probability function m(A) is obtained by the following Dempster combination rule:
$$m(A) = \frac{1}{K}\sum_{B \cap C = A} m_1(B)\, m_2(C), \quad A \neq \varnothing; \qquad m(\varnothing) = 0$$
wherein K is a normalization constant with value range (0, 1):
$$K = \sum_{B \cap C \neq \varnothing} m_1(B)\, m_2(C)$$
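Dempster's combination rule from claim 8 can be implemented for mass functions over a small frame of discernment. The two-hypothesis frame below (F = fatigued, W = awake) and the evidence values are assumed examples, not values from the patent.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for two basic probability assignments.
    Masses are dicts mapping frozenset hypotheses to mass; the combined
    mass of A sums m1(B)*m2(C) over all pairs with B intersect C == A,
    normalized by K, the total mass on non-empty intersections."""
    combined, k = {}, 0.0
    for (b, mb), (c, mc) in product(m1.items(), m2.items()):
        a = b & c
        if a:                       # non-empty intersection only
            combined[a] = combined.get(a, 0.0) + mb * mc
            k += mb * mc
    if k == 0:
        raise ValueError("total conflict: evidence cannot be combined")
    return {a: v / k for a, v in combined.items()}

# Assumed two-hypothesis frame: F = fatigued, W = awake.
F, W = frozenset("F"), frozenset("W")
eye_evidence = {F: 0.7, W: 0.3}        # e.g. from the SSD eye-state output
wheel_evidence = {F: 0.6, W: 0.4}      # e.g. from the steering-wheel features
fused = dempster_combine(eye_evidence, wheel_evidence)
```

With these example masses, the conflicting cross terms (0.7 x 0.4 and 0.3 x 0.6) are discarded and K = 0.54, so the fused belief in fatigue rises above either single source, which is the behavior the fusion step relies on.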
9. The method of claim 8, wherein: said fatigue probability function m(A) is the parameter, integrating the eye state and the steering wheel rotation parameters, by which fatigue is judged:
$$m(A) = \frac{1}{K}\sum_{B \cap C = A} m_1(B)\, m_2(C), \quad A \neq \varnothing$$
wherein K is a normalization constant with value range (0, 1):
$$K = \sum_{B \cap C \neq \varnothing} m_1(B)\, m_2(C)$$
10. The method of claim 2, wherein: in step S6, when the vehicle-mounted fatigue monitoring module judges that the driver is mildly fatigued, it sends a command on the CAN bus to enter mild fatigue intervention, and after receiving the command each module of the whole vehicle enters that mode: the air conditioner automatically lowers the temperature and increases the air volume to adjust the in-vehicle air environment; the instrument cluster lights up the alarm icon and flashes it continuously; the multimedia system issues a voice alarm. When severe fatigue is judged, the fatigue monitoring module sends a command on the CAN bus to enter severe fatigue intervention, and after each module of the whole vehicle receives the command: the adaptive cruise module adaptively reduces the vehicle speed in light of the current road conditions and surrounding vehicles, and the map automatically navigates to the nearest parking spot or rest area.
CN202210379947.2A 2022-04-12 2022-04-12 Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention Pending CN114973212A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210379947.2A CN114973212A (en) 2022-04-12 2022-04-12 Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210379947.2A CN114973212A (en) 2022-04-12 2022-04-12 Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention

Publications (1)

Publication Number Publication Date
CN114973212A true CN114973212A (en) 2022-08-30

Family

ID=82976726

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210379947.2A Pending CN114973212A (en) 2022-04-12 2022-04-12 Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention

Country Status (1)

Country Link
CN (1) CN114973212A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2024131400A1 (en) * 2022-12-21 2024-06-27 虹软科技股份有限公司 Vision-based early fatigue detection method and apparatus, and storage medium


Similar Documents

Publication Publication Date Title
CN111274881B (en) Driving safety monitoring method and device, computer equipment and storage medium
CN112389448B (en) Abnormal driving behavior identification method based on vehicle state and driver state
Doshi et al. A comparative exploration of eye gaze and head motion cues for lane change intent prediction
US20220089163A1 (en) Lane change maneuver intention detection systems and methods
CN111753674A (en) Fatigue driving detection and identification method based on deep learning
CN111311914A (en) Vehicle driving accident monitoring method and device and vehicle
CN113635897B (en) Safe driving early warning method based on risk field
CN111231971A (en) Automobile safety performance analysis and evaluation method and system based on big data
CN113753059B (en) Method for predicting takeover capacity of driver under automatic driving system
CN110781872A (en) Driver fatigue grade recognition system with bimodal feature fusion
CN115534994A (en) Man-machine driving sharing control right self-adaptive switching method based on cooperative sensing inside and outside vehicle
CN115690750A (en) Driver distraction detection method and device
CN112070927A (en) Highway vehicle microscopic driving behavior analysis system and analysis method
CN114973212A (en) Fatigue driving stopping method based on visual features, steering wheel operation detection and active intervention
CN111301428A (en) Motor vehicle driver distraction detection warning method and system and motor vehicle
CN111563468A (en) Driver abnormal behavior detection method based on attention of neural network
CN110992709A (en) Active speed limiting system based on fatigue state of driver
CN109886100A (en) A kind of pedestrian detecting system based on Area generation network
CN116453345B (en) Bus driving safety early warning method and system based on driving risk feedback
CN111027859B (en) Driving risk prevention method and system based on motor vehicle state monitoring data mining
CN110705416B (en) Safe driving early warning method and system based on driver face image modeling
CN110263836B (en) Bad driving state identification method based on multi-feature convolutional neural network
CN111775948B (en) Driving behavior analysis method and device
CN112329566A (en) Visual perception system for accurately perceiving head movements of motor vehicle driver
Altunkaya et al. Design and implementation of a novel algorithm to smart tachograph for detection and recognition of driving behaviour

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination