CN115690750A - Driver distraction detection method and device - Google Patents

Driver distraction detection method and device

Info

Publication number
CN115690750A
Authority
CN
China
Prior art keywords
driver
distraction
network
discrete
labels
Prior art date
Legal status
Pending
Application number
CN202211293932.0A
Other languages
Chinese (zh)
Inventor
徐新民
李健卫
华迎凯
Current Assignee
Jinhua Research Institute Of Zhejiang University
Zhejiang University ZJU
Original Assignee
Jinhua Research Institute Of Zhejiang University
Zhejiang University ZJU
Priority date
Filing date
Publication date
Application filed by Jinhua Research Institute Of Zhejiang University and Zhejiang University ZJU
Priority to CN202211293932.0A
Publication of CN115690750A
Legal status: Pending (current)

Abstract

The invention discloses a driver distraction detection method and device. The application provides a driver distraction detection method and a detection device deployed in a vehicle-mounted terminal. A driver face image data set is preprocessed; the images are fed into a trained distraction detection network model, which outputs the driver gaze estimation result, namely the discrete-label probabilities of the gaze pitch angle and azimuth angle, from which the pitch and azimuth angles are computed; the pitch and azimuth angles are then mapped onto pre-divided driver regions of interest, whether the gaze has deviated from the normal region for a prolonged time is judged, and an early warning of driver distraction is issued. The invention significantly reduces the computation load of the deep-learning distraction detection network model while maintaining high estimation accuracy, thereby lowering the processor performance requirement of the terminal device and laying the groundwork for further analysis of the driver's behavior and state from subsequent gaze estimation results.

Description

Driver distraction detection method and device
Technical Field
The invention relates to the technical field of image processing, in particular to a driver distraction detection method and device.
Background
As the number of automobiles increases, driving safety has drawn growing public attention, and with the popularization of vehicle-mounted terminal devices and mobile phones, more and more irrelevant information competes for drivers' attention while driving. Research indicates that the growing number of distraction sources makes drivers more prone to distraction and brings serious safety hazards; driving safety is most compromised when the driver's gaze leaves the road. The main manifestations of visual distraction include inattention, drowsiness, glancing left and right, and fatigue, as well as secondary behaviors the driver readily engages in while driving. Therefore, recognizing distraction behavior from the driver's visual characteristics and giving an early warning can effectively help prevent traffic accidents.
In recent years, gaze-estimation-based distraction detection using deep learning has become a research hotspot. It offers many advantages absent from conventional approaches: 1) high-level abstract gaze features can be extracted from high-dimensional images; 2) it learns a highly non-linear mapping function from eye appearance to gaze; 3) compared with traditional appearance-based methods, deep-learning-based methods better cope with interference such as illumination changes, occlusion by glasses, and head movement.
However, although deep neural networks are more sensitive to feature information, they have many parameters and a large computation load, which raises the cost of distraction detection.
Disclosure of Invention
In view of the deficiencies of the prior art, the invention aims to provide a driver distraction detection method and device.
The purpose of the invention is realized by the following technical scheme: a driver distraction detection method comprising the steps of:
step 1: preprocessing a data set: dividing a driver face image data set into a training set and a test set, wherein each image in the data set carries expected result labels, namely a continuous label and a discrete classification label, and then scaling and normalizing the images;
step 2: constructing a distraction detection network based on gaze estimation, wherein the network comprises a feature extraction part and two parallel fully connected layers, and the feature extraction part combines a residual structure, depthwise separable convolution and channel hierarchical convolution to realize lightweight multi-scale feature extraction;
step 3: inputting a training sample of the preprocessed training set into the feature extraction part of the distraction detection network to obtain a feature map; performing global average pooling on the feature map to obtain a total feature vector, and feeding the total feature vector into the two fully connected layers of the gaze-estimation distraction detection network to obtain the pitch-angle and azimuth-angle discrete-label probabilities;
step 4: calculating a loss function by fusing the mean square error and the cross entropy of the pitch-angle and azimuth-angle discrete-label probabilities, and updating the network parameters according to the loss function;
step 5: selecting other training samples of the training set and adjusting the network parameters by repeating steps 3-4 until a final detection model with a gaze estimation error within a preset threshold is obtained;
step 6: selecting any image to be detected from the test set, inputting it into the final detection model obtained in step 5 to obtain the pitch-angle and azimuth-angle discrete-label probabilities, computing the estimated pitch and azimuth angles of the driver's gaze, pre-dividing the cockpit into regions of interest, mapping the estimation result to the regions of interest, and making a distraction detection judgment according to the distraction judgment conditions of the pre-divided regions of interest.
Further, the expected result labels in step 1 are divided into continuous labels and discrete classification labels; the continuous labels are the gaze pitch angle and azimuth angle, and the discrete classification labels are obtained by mapping the continuous labels: specifically, the 0-180° angle range is divided into equal intervals, each continuous label is mapped to the nearest interval, and the interval label is the discrete classification label.
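As a minimal illustration of this mapping (90 bins over 0-180° follow the embodiment described later; the function name and the use of NumPy are assumptions, not taken from the patent):

```python
import numpy as np

def to_discrete_label(angle_deg, num_bins=90, angle_range=180.0):
    """Map a continuous gaze angle in degrees (assumed within [0, 180)) to the
    index of the nearest of num_bins equally divided intervals; this index is
    the discrete classification label."""
    bin_width = angle_range / num_bins              # 2 degrees when num_bins = 90
    return int(np.clip(angle_deg // bin_width, 0, num_bins - 1))

# Example: a pitch angle of 37.3 degrees falls into interval 18 (36-38 degrees)
print(to_discrete_label(37.3))  # -> 18
```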
Further, the specific network structure of the gaze estimation distraction detection network feature extraction part in step 2 comprises:
the first layer adopts 7 × 7 convolution kernels with a stride of 1, and the activation layer adopts the ReLU nonlinear activation function;
the subsequent network adopts an 8-layer, 10-layer or 12-layer structure in which strides of 2 and 1 are used alternately, and includes the lightweight multi-scale feature extraction Backbone.
Further, the lightweight multi-scale feature extraction Backbone adopts depthwise separable convolution instead of ordinary convolution to reduce the computation load and achieve a lightweight design; a residual structure is adopted to enhance network performance and accuracy; and channel hierarchical convolution is used for multi-scale feature extraction.
Further, the channel hierarchical convolution is: dividing the feature map channels equally into four groups C1, C2, C3 and C4; C1 is left unprocessed; C2 undergoes one convolution; C3 is added to the output of C2 and then undergoes one convolution; C4 is added to the output of C3 and then undergoes one convolution; the feature maps of the groups are then re-concatenated, and multi-scale features are obtained through feature fusion.
Further, the loss function calculation in step 4 comprises: passing the output discrete-label probabilities through Softmax and then calculating a weighted sum of the mean square error and the cross entropy;
softmax is calculated as follows, where z_i denotes the i-th output of the corresponding fully connected layer:
y_i = exp(z_i) / Σ_{j=1..N} exp(z_j)
the cross entropy is calculated as follows:
H(y, p) = −Σ_{i=1..N} p_i · log(y_i)
the predicted angle value is calculated from the discrete-label probabilities as their weighted sum over the bin angles:
angle = Σ_{i=1..N} y_i · i · (180 / N)
the mean square error is calculated against the continuous label angle angle*:
MSE(angle, p) = (angle − angle*)²
the final loss function is:
Loss = H(y, p) + α · MSE(angle, p)
where p is the set of all expected discrete classification labels, p_i is the value of the expected discrete classification label, N is the number of discrete labels, y is the vector of discrete-label probabilities output by the network, y_i is the i-th value of the vector y (i = 1…N), and α adjusts the weight of the mean square error.
Further, if in step 5 the neural network model does not meet the expected gaze estimation requirement, the data set is replaced and training continues.
Further, comparing the estimation result with the pre-divided driver regions of interest in step 6 comprises the following sub-steps:
step 6-1, dividing the driver regions of interest, and selecting the left front window, right front window, left rear-view mirror, right rear-view mirror, middle rear-view mirror, instrument panel, center console and gear-shift-lever areas;
step 6-2, collecting the gaze pitch angle and azimuth angle corresponding to each area, and constructing a distraction judgment data set;
step 6-3, training an SVM classifier on the gaze angles to classify them and judge the area where the gaze dwells, and recording and warning of distraction behavior that deviates from the normal area for more than 2 seconds, wherein the number of frames per second is determined by the sampling frequency.
Another aspect of the present specification provides a driver distraction detection apparatus, comprising: a camera, a processor and a memory;
the camera is preferably mounted directly facing the driver, and the driver regions of interest are divided with reference to the camera;
the memory is used for storing the network model structure, the network model parameters and the corresponding mobile-terminal deployment framework NCNN, so that they can be run by the processor;
the processor is used for reading the computer program stored in the memory and executing the distraction detection and judgment of step 6.
Further, according to usage requirements, the device also comprises an optional display screen or loudspeaker for giving a timely early warning after distraction is detected.
The invention has the beneficial effects that:
Through channel-wise multi-scale feature extraction, depthwise separable convolution, a residual network structure, and a loss function that combines continuous and discrete labels, a multi-scale lightweight gaze estimation network is realized. The computation load of the deep-learning distraction detection network model is reduced while high estimation accuracy is maintained, so the processor performance requirement of the terminal device can be lowered.
Drawings
FIG. 1 is a general flow chart of a method for detecting driver distraction according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a distraction detection network according to an embodiment of the present invention;
fig. 3 is a structural diagram of feature extraction of a distraction detection network according to an embodiment of the present invention;
FIG. 4 is a structural diagram of the lightweight multi-scale feature extraction Backbone provided by an embodiment of the present invention;
FIG. 5 is a schematic diagram of a deep separation convolution according to an embodiment of the present invention;
FIG. 6 is a schematic diagram of feature fusion and loss function calculation according to an embodiment of the present invention;
FIG. 7 is a diagram illustrating the effect of gaze estimation provided by an embodiment of the present invention;
FIG. 8 is a driver interest area division diagram provided by an embodiment of the present invention;
fig. 9 is a schematic diagram of a vehicle-mounted terminal device according to an embodiment of the present invention.
Detailed Description
In order to more specifically describe the present invention, the following detailed description is provided for the technical solution of the present invention with reference to the accompanying drawings and the specific embodiments.
Referring to FIG. 1, a general flow chart of the driver distraction detection method of the present invention is shown, which comprises the following steps:
S101, preprocessing a data set to obtain a model training set and a test set.
In this step, the images in the data set are first divided into a training set and a test set, each image in the data set carries expected result labels for guiding model training, and the images are then scaled and normalized. Specifically, the angle range is divided into equal intervals, each continuous label is mapped to the nearest interval, and the interval label obtained is the discrete classification label.
In this embodiment, the data set images are scaled to 144 × 144, and there are 90 discrete classification labels, obtained by dividing the 0-180° range equally.
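A hedged sketch of this preprocessing step (the helper name, the BGR input assumption and the simple [0, 1] normalization are assumptions; the patent only fixes the 144 × 144 size):

```python
import cv2
import numpy as np

def preprocess_face(img_bgr, size=144):
    """Scale a driver face image to 144 x 144 and normalize it; returns a
    channel-first float array ready to feed the network."""
    img = cv2.resize(img_bgr, (size, size))
    img = img.astype(np.float32) / 255.0            # normalization to [0, 1]
    return np.transpose(img, (2, 0, 1))             # shape (3, 144, 144)
```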
S102, after the distraction detection network is constructed, the preprocessed face image is input into the feature extraction part of the distraction detection network to realize the lightweight multi-scale feature extraction function;
As shown in fig. 2, a schematic diagram of the distraction detection network structure, the network outputs the discrete-label probabilities of the azimuth angle and the pitch angle; during training the mean square error and the cross entropy are fused to calculate the loss function, and during inference the predicted azimuth and pitch angles are obtained through weighted summation.
Fig. 3 is a structural diagram of the lightweight multi-scale feature extraction part of the distraction detection network in this embodiment, in which the first layer adopts a 7 × 7 convolution kernel with a stride of 1 and the activation layer adopts the ReLU nonlinear activation function; the subsequent network adopts an 8-layer structure in which strides of 2 and 1 are used alternately and includes the lightweight multi-scale feature extraction Backbone.
As shown in fig. 4, the lightweight multi-scale feature extraction Backbone selected in this embodiment first expands the feature map to the expected number of output channels, divides the channels equally into four groups C1, C2, C3 and C4, leaves C1 unprocessed, applies one convolution to C2, adds the output of C2 to C3 and applies one convolution, adds the output of C3 to C4 and applies one convolution, re-concatenates the feature maps of the groups, obtains multi-scale features through feature fusion, and uses a residual structure as a bypass to enhance network performance and accuracy.
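A minimal PyTorch sketch of the channel hierarchical convolution just described (the class name and the 3 × 3 kernel size are assumptions; the patent fixes only the four-way split, the cumulative additions and the re-concatenation):

```python
import torch
import torch.nn as nn

class ChannelHierarchicalConv(nn.Module):
    """Split channels into C1..C4; C1 passes through, C2/C3/C4 are each convolved
    once, with the previous group's output added first, then all groups are
    re-concatenated for feature fusion."""
    def __init__(self, channels):
        super().__init__()
        assert channels % 4 == 0
        g = channels // 4
        self.convs = nn.ModuleList(
            [nn.Conv2d(g, g, kernel_size=3, padding=1, bias=False) for _ in range(3)]
        )

    def forward(self, x):
        c1, c2, c3, c4 = torch.chunk(x, 4, dim=1)
        y1 = c1                                    # C1: no processing
        y2 = self.convs[0](c2)                     # C2: one convolution
        y3 = self.convs[1](c3 + y2)                # C3: add C2 output, then convolve
        y4 = self.convs[2](c4 + y3)                # C4: add C3 output, then convolve
        return torch.cat([y1, y2, y3, y4], dim=1)  # re-concatenate the groups
```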
In the lightweight multi-scale feature extraction Backbone provided in this embodiment, a 64 × 144 × 144 feature map is input, a stride of 2 is selected, and the number of output channels is 128, so that a 128 × 72 × 72 feature map is obtained.
Specifically, FIG. 5 is a schematic diagram of the depthwise separable convolution used to replace conventional convolution. The depthwise separable convolution decomposes a conventional convolution into an n × n depthwise convolution and a 1 × 1 pointwise convolution, reducing the computation load and achieving a lightweight design.
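A corresponding sketch of the depthwise separable convolution (the kernel size n = 3 and the class name are assumptions):

```python
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    """Replace an n x n standard convolution with an n x n depthwise convolution
    (one filter per input channel) followed by a 1 x 1 pointwise convolution."""
    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size, stride=stride,
                                   padding=kernel_size // 2, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)

    def forward(self, x):
        return self.pointwise(self.depthwise(x))
```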
S103, as shown in FIG. 6, global average pooling is performed on the feature map to compress its dimensionality, and the total feature vector is fed into the two fully connected layers to obtain the discrete-label probabilities of the estimated gaze azimuth angle and pitch angle;
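The two-branch output head of S103 might look like the following sketch (90 bins per angle follow the embodiment; the 512-channel feature dimension and the class name are assumptions):

```python
import torch.nn as nn

class GazeHead(nn.Module):
    """Global average pooling followed by two parallel fully connected layers:
    one producing pitch-bin logits, one producing azimuth(yaw)-bin logits."""
    def __init__(self, feat_channels=512, num_bins=90):
        super().__init__()
        self.gap = nn.AdaptiveAvgPool2d(1)
        self.fc_pitch = nn.Linear(feat_channels, num_bins)
        self.fc_yaw = nn.Linear(feat_channels, num_bins)

    def forward(self, feat_map):
        v = self.gap(feat_map).flatten(1)          # total feature vector
        return self.fc_pitch(v), self.fc_yaw(v)    # two sets of discrete-label logits
```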
s104, calculating a loss function by fusing the mean square error and the cross entropy of the discrete label probability of the azimuth angle and the pitch angle, and updating network parameters according to the loss function; the specific calculation process of the loss function is as follows:
after Softmax, calculating the weighted sum of the mean square error (l 2 loss) and the cross entropy of the discrete label probability of the azimuth angle and the pitch angle.
Softmax is calculated as follows:
y_i = exp(z_i) / Σ_{j=1..N} exp(z_j), where z_i denotes the i-th output of the corresponding fully connected layer;
the cross entropy is calculated as follows:
H(y, p) = −Σ_{i=1..N} p_i · log(y_i)
the predicted angle value is calculated from the discrete-label probabilities as their weighted sum over the bin angles:
angle = Σ_{i=1..N} y_i · i · (180 / N)
the mean square error is calculated against the continuous label angle angle*:
MSE(angle, p) = (angle − angle*)²
the final loss function is:
Loss = H(y, p) + α · MSE(angle, p)
where p is the set of all expected discrete classification labels, p_i is the value of the expected discrete classification label, N is the number of discrete labels, y is the vector of discrete-label probabilities output by the network, y_i is the i-th value of the vector y (i = 1…N), and α adjusts the weight of the mean square error.
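Under the bin-expectation reading of the formulas above, one angle branch of the loss could be computed as in this sketch (the exact bin-angle offset and the function name are assumptions):

```python
import torch
import torch.nn.functional as F

def gaze_loss(logits, bin_label, angle_label, num_bins=90, alpha=1.0):
    """Cross entropy on the discrete bin label plus alpha-weighted MSE between
    the probability-weighted sum of bin angles and the continuous angle label."""
    prob = F.softmax(logits, dim=1)                    # y in the equations above
    ce = F.cross_entropy(logits, bin_label)            # H(y, p)
    bin_angles = torch.arange(num_bins, dtype=prob.dtype,
                              device=logits.device) * (180.0 / num_bins)
    pred_angle = (prob * bin_angles).sum(dim=1)        # weighted-sum predicted angle
    mse = F.mse_loss(pred_angle, angle_label)          # MSE(angle, p)
    return ce + alpha * mse
```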
S105, other training samples of the data set are selected and the model parameters are adjusted by repeating the above steps until the gaze estimation error of the network model falls within the preset error threshold range, giving the final detection model.
In the present embodiment, the initial learning rate is set to 0.001, the image batch size is set to 16, and the learning rate is reduced to 0.0001 after 20 epochs of training. Meanwhile, to improve accuracy, an Adam optimizer is used for parameter optimization. The error threshold range may be set according to practical application requirements, for example a gaze estimation error of less than 12°.
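Sketched with the stated hyper-parameters (the model, data set objects, label layout and the gaze_loss helper from the previous sketch are placeholders/assumptions):

```python
import torch

optimizer = torch.optim.Adam(model.parameters(), lr=0.001)     # initial learning rate
scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20], gamma=0.1)
loader = torch.utils.data.DataLoader(train_set, batch_size=16, shuffle=True)

for epoch in range(num_epochs):
    for images, bin_labels, angle_labels in loader:            # pitch/yaw in columns 0/1
        pitch_logits, yaw_logits = model(images)
        loss = gaze_loss(pitch_logits, bin_labels[:, 0], angle_labels[:, 0]) \
             + gaze_loss(yaw_logits, bin_labels[:, 1], angle_labels[:, 1])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()   # drops the learning rate from 0.001 to 0.0001 after 20 epochs
```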
S106, any image to be detected is input into the model to obtain the driver gaze estimation result, which is mapped onto the pre-divided driver regions of interest to produce the distraction detection judgment.
Fig. 7 shows the gaze estimation effect of this embodiment; the gaze direction is visualized according to the pitch angle Pitch and the azimuth angle Yaw predicted by the network.
As shown in fig. 8, the driver regions of interest are divided; the left/right front window, left rear-view mirror, right rear-view mirror, middle rear-view mirror, instrument panel, center console and gear-shift-lever areas are selected, and the gaze pitch angle and azimuth angle corresponding to each area are collected to construct the judgment data set. An SVM classifier is trained on the gaze angles to classify them, and distraction behavior that deviates from the normal area for more than 2 s is recorded and warned of; in this embodiment the number of images within the 2 s window is 60.
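A hedged sketch of the region classification and the 2-second deviation check (the scikit-learn SVC, the 30 fps rate implied by 60 frames per 2 s, and the region-ID convention are assumptions):

```python
from collections import deque
from sklearn.svm import SVC

svm = SVC(kernel="rbf")
svm.fit(train_angles, train_region_ids)   # (pitch, yaw) pairs -> region-of-interest IDs

FPS = 30                                  # assumed sampling frequency (60 frames / 2 s)
window = deque(maxlen=2 * FPS)            # rolling record of the last 2 seconds

def update(pitch, yaw, normal_region=0):
    """Classify the gaze region for the newest frame and warn when the gaze has
    left the normal (road-facing) region for the entire 2-second window."""
    region = int(svm.predict([[pitch, yaw]])[0])
    window.append(region != normal_region)
    if len(window) == window.maxlen and all(window):
        print("Distraction warning: gaze off the road region for more than 2 s")
```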
Fig. 9 is a schematic diagram of the vehicle-mounted terminal apparatus according to the present invention, comprising a camera, a processor, a memory, and an optional display screen or speaker. The camera is preferably mounted directly facing the driver, and accordingly performs the division of the driver regions of interest and the acquisition of the driver's facial image data;
the memory is used for storing the network model structure, the network model parameters and the corresponding mobile-terminal deployment framework NCNN, so that they can be run by the processor; the processor is used for reading the computer program stored in the memory and performing the distraction detection and judgment; and the optional display screen or speaker is used for giving a timely early warning after distraction is detected.
The above-described embodiments are intended to illustrate rather than to limit the invention, and any modifications and variations of the present invention are within the spirit of the invention and the scope of the appended claims.

Claims (10)

1. A driver distraction detection method, comprising the steps of:
step 1: preprocessing a data set: dividing a driver face image data set into a training set and a test set, wherein each image in the data set carries expected result labels, namely a continuous label and a discrete classification label, and then scaling and normalizing the images;
step 2: constructing a distraction detection network based on gaze estimation, wherein the network comprises a feature extraction part and two parallel fully connected layers, and the feature extraction part combines a residual structure, depthwise separable convolution and channel hierarchical convolution to realize lightweight multi-scale feature extraction;
step 3: inputting a training sample of the preprocessed training set into the feature extraction part of the distraction detection network to obtain a feature map; performing global average pooling on the feature map to obtain a total feature vector, and feeding the total feature vector into the two fully connected layers of the gaze-estimation distraction detection network to obtain the pitch-angle and azimuth-angle discrete-label probabilities;
step 4: calculating a loss function by fusing the mean square error and the cross entropy of the pitch-angle and azimuth-angle discrete-label probabilities, and updating the network parameters according to the loss function;
step 5: selecting other training samples of the training set and adjusting the network parameters by repeating steps 3-4 until a final detection model with a gaze estimation error within a preset threshold is obtained;
step 6: selecting any image to be detected from the test set, inputting it into the final detection model obtained in step 5 to obtain the pitch-angle and azimuth-angle discrete-label probabilities, computing the estimated azimuth and pitch angles of the driver's gaze, pre-dividing the cockpit into regions of interest, mapping the estimation result to the regions of interest, and making a distraction detection judgment according to the distraction judgment conditions of the pre-divided regions of interest.
2. The method as claimed in claim 1, wherein the expected result labels in step 1 are divided into continuous labels and discrete classification labels; the continuous labels are the gaze pitch angle and azimuth angle, and the discrete classification labels are obtained by mapping the continuous labels: specifically, the 0-180° angle range is divided into equal intervals, each continuous label is mapped to the nearest interval, and the interval label is the discrete classification label.
3. A driver distraction detection method according to claim 1, characterized in that the specific network structure of the gaze-estimation distraction detection network feature extraction part in step 2 comprises:
the first layer adopts 7 × 7 convolution kernels with a stride of 1, and the activation layer adopts the ReLU nonlinear activation function;
the subsequent network adopts an 8-layer, 10-layer or 12-layer structure in which strides of 2 and 1 are used alternately, and includes the lightweight multi-scale feature extraction Backbone.
4. The method for detecting driver distraction according to claim 3, wherein the lightweight multi-scale feature extraction Backbone adopts depthwise separable convolution instead of ordinary convolution to reduce the computation load and achieve a lightweight design; a residual structure is adopted to enhance network performance and accuracy; and channel hierarchical convolution is used for multi-scale feature extraction.
5. The driver distraction detection method of claim 4, wherein the channel hierarchical convolution is: dividing the feature map channels equally into four groups C1, C2, C3 and C4; C1 is left unprocessed; C2 undergoes one convolution; C3 is added to the output of C2 and then undergoes one convolution; C4 is added to the output of C3 and then undergoes one convolution; the feature maps of the groups are then re-concatenated, and multi-scale features are obtained through feature fusion.
6. The driver distraction detection method of claim 1, wherein the loss function calculation in step 4 comprises: passing the output discrete-label probabilities through Softmax and then calculating the weighted sum of the mean square error and the cross entropy;
softmax is calculated as follows, where z_i denotes the i-th output of the corresponding fully connected layer:
y_i = exp(z_i) / Σ_{j=1..N} exp(z_j)
the cross entropy is calculated as follows:
H(y, p) = −Σ_{i=1..N} p_i · log(y_i)
the predicted angle value is calculated from the discrete-label probabilities as their weighted sum over the bin angles:
angle = Σ_{i=1..N} y_i · i · (180 / N)
the mean square error is calculated against the continuous label angle angle*:
MSE(angle, p) = (angle − angle*)²
final loss function calculation:
Loss = H(y, p) + α · MSE(angle, p)
where p is the set of all expected discrete classification labels, p_i is the value of the expected discrete classification label, N is the number of discrete labels, y is the vector of discrete-label probabilities output by the network, y_i is the i-th value of the vector y (i = 1…N), and α adjusts the weight of the mean square error.
7. The method as claimed in claim 1, wherein in step 5, if the neural network model does not meet the expected line-of-sight estimation requirement, the training is continued by replacing the data set.
8. The method as claimed in claim 1, wherein the step 6 of comparing the estimation result with the pre-divided driver interest area comprises the sub-steps of:
step 6-1, dividing the driver regions of interest, and selecting the left front window, right front window, left rear-view mirror, right rear-view mirror, middle rear-view mirror, instrument panel, center console and gear-shift-lever areas;
step 6-2, collecting the gaze pitch angle and azimuth angle corresponding to each area, and constructing a distraction judgment data set;
step 6-3, training an SVM classifier on the gaze angles to classify them and judge the area where the gaze dwells, and recording and warning of distraction behavior that deviates from the normal area for more than 2 seconds, wherein the number of frames per second is determined by the sampling frequency.
9. A driver distraction detection apparatus for implementing the method of any one of claims 1 to 8, comprising: a camera, a processor and a memory, and an optional display screen or speaker;
the camera is preferably mounted directly facing the driver, and is used for dividing the driver regions of interest and acquiring the driver's facial image;
the memory is used for storing a network model structure, network model parameters and a corresponding mobile terminal deployment architecture NCNN so as to be operated in the processor;
and the processor is used for reading the computer program stored in the memory and executing the distraction detection and judgment in the step 6.
10. A driver distraction detection device according to claim 9, further comprising, according to usage requirements, an optional display screen or speaker for giving a timely warning after distraction is detected.
CN202211293932.0A 2022-10-21 2022-10-21 Driver distraction detection method and device Pending CN115690750A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211293932.0A CN115690750A (en) 2022-10-21 2022-10-21 Driver distraction detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211293932.0A CN115690750A (en) 2022-10-21 2022-10-21 Driver distraction detection method and device

Publications (1)

Publication Number Publication Date
CN115690750A true CN115690750A (en) 2023-02-03

Family

ID=85067243

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211293932.0A Pending CN115690750A (en) 2022-10-21 2022-10-21 Driver distraction detection method and device

Country Status (1)

Country Link
CN (1) CN115690750A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116311181A (en) * 2023-03-21 2023-06-23 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving
CN116311181B (en) * 2023-03-21 2023-09-12 重庆利龙中宝智能技术有限公司 Method and system for rapidly detecting abnormal driving
CN116543419A (en) * 2023-07-06 2023-08-04 浙江大学金华研究院 Hotel health personnel wearing detection method and system based on embedded platform
CN116543419B (en) * 2023-07-06 2023-11-07 浙江大学金华研究院 Hotel health personnel wearing detection method and system based on embedded platform

Similar Documents

Publication Publication Date Title
US20210357670A1 (en) Driver Attention Detection Method
US20230169867A1 (en) Vehicle collision alert system and method for detecting driving hazards
Alkinani et al. Detecting human driver inattentive and aggressive driving behavior using deep learning: Recent advances, requirements and open challenges
CN111723596B (en) Gaze area detection and neural network training method, device and equipment
KR102060662B1 (en) Electronic device and method for detecting a driving event of vehicle
CN115690750A (en) Driver distraction detection method and device
CN105654753A (en) Intelligent vehicle-mounted safe driving assistance method and system
JP2019533209A (en) System and method for driver monitoring
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN105354986A (en) Driving state monitoring system and method for automobile driver
CN110781718B (en) Cab infrared vision system and driver attention analysis method
CN110222596B (en) Driver behavior analysis anti-cheating method based on vision
CN112406704A (en) Virtual mirror with automatic zoom based on vehicle sensor
CN205230272U (en) Driver drive state monitoring system
CN111506057A (en) Automatic driving auxiliary glasses for assisting automatic driving
Fan et al. Gazmon: Eye gazing enabled driving behavior monitoring and prediction
EP3956807A1 (en) A neural network for head pose and gaze estimation using photorealistic synthetic data
CN111860427A (en) Driving distraction identification method based on lightweight class eight-dimensional convolutional neural network
CN111062300A (en) Driving state detection method, device, equipment and computer readable storage medium
CN114299473A (en) Driver behavior identification method based on multi-source information fusion
CN116935361A (en) Deep learning-based driver distraction behavior detection method
CN116012822B (en) Fatigue driving identification method and device and electronic equipment
CN113743163A (en) Traffic target recognition model training method, traffic target positioning method and device
CN113343903B (en) License plate recognition method and system in natural scene
Yuan et al. Predicting drivers’ eyes-off-road duration in different driving scenarios

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination