CN112417945A - Distracted driving real-time monitoring method and device based on special neural network

Distracted driving real-time monitoring method and device based on special neural network

Info

Publication number
CN112417945A
Authority
CN
China
Prior art keywords
driving
neural network
distraction
driver
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010940231.6A
Other languages
Chinese (zh)
Inventor
Li Rongkuan (李荣宽)
Li Ming (李明)
Tan Jie (谭杰)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiaxing Najie Microelectronic Technology Co ltd
Original Assignee
Jiaxing Najie Microelectronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiaxing Najie Microelectronic Technology Co ltd
Priority to CN202010940231.6A
Publication of CN112417945A
Legal status: Pending (current)

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/044Recurrent networks, e.g. Hopfield networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2218/00Aspects of pattern recognition specially adapted for signal processing
    • G06F2218/12Classification; Matching

Abstract

An embodiment of the present application provides a method and a device for real-time monitoring of distracted driving based on a special neural network. The method comprises: constructing a driver distracted driving dynamic recognition model of a special neural network comprising a deep convolutional neural network and a recurrent neural network; acquiring the running speed of the current vehicle in real time, and generating a trigger signal for real-time distracted driving monitoring when the running speed exceeds a preset threshold; when the trigger signal is received, acquiring a multi-frame driving video of the driver within a preset time length, and calling the driver distracted driving dynamic recognition model to monitor driver distraction in real time based on the multi-frame driving video; and determining the level of driver distraction based on the monitoring result. Because the deep CNN is used to extract spatial structure features and the RNN is used to connect spatio-temporal dynamic information during real-time monitoring, the RNN captures the changing characteristics of the driver before and after distraction on top of the recognition accuracy of the deep CNN, which further improves the accuracy of distracted driving recognition.

Description

Distracted driving real-time monitoring method and device based on special neural network
Technical Field
The application belongs to the field of neural networks, and particularly relates to a distracted driving real-time monitoring method and device based on a special neural network.
Background
Machine learning has made great progress in various applications, especially in the field of computer vision, mainly due to the rise of deep learning. In recent years, the accuracy and robustness of image recognition algorithms based on deep convolutional neural networks (deep CNNs) have improved greatly.
In driver distracted driving recognition for motor vehicles, existing recognition systems based on deep convolutional neural networks process each single frame independently, so they cannot capture the valuable dynamic information of driver distraction, and they are therefore poorly suited to analysing the driver's real-time distraction state. Although systems based on large deep convolutional neural networks can learn useful features from video data, in practice such methods can only run with a delay and cannot achieve real-time monitoring of distracted driving.
Disclosure of Invention
In order to overcome the defects and shortcomings of the prior art, a method for real-time monitoring of distracted driving based on a special neural network is provided, which improves recognition accuracy by means of a special neural network comprising a deep convolutional neural network and a recurrent neural network.
Specifically, in one aspect, the method for real-time monitoring of distracted driving based on a proprietary neural network provided by an embodiment of the present application includes:
constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
acquiring the running speed of a current vehicle in real time, and generating a triggering signal for real-time monitoring of the distracted driving when the running speed is higher than a preset threshold value;
when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length, and calling a driver distraction driving dynamic recognition model to carry out real-time monitoring on driver distraction driving based on the multi-frame driving video;
and determining the level of the driver distraction based on the monitoring result.
Optionally, the constructing a driver distracted driving real-time monitoring model of a proprietary neural network including a deep convolutional neural network and a recurrent neural network includes:
decomposing a driving video of a driver into picture frames, and dividing the obtained picture frames into static identification data, dynamic identification data and test data;
performing a distraction driving static recognition training based on a deep convolutional neural network based on the static recognition data to obtain a distraction driving static recognition model;
training the distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
performing model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
Optionally, the static recognition data-based distraction driving static recognition training based on the deep convolutional neural network is performed to obtain a distraction driving static recognition model, including:
static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
evaluating the probability of classification by using a Loss1 function;
carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
and iterating the steps for m times to obtain a distraction driving static identification model.
Optionally, the training the static recognition model of the split driving based on the dynamic recognition data to obtain the dynamic recognition model of the split driving includes:
providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
estimating the distraction probability of the driver at all times by using a Loss2 function;
updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
and iterating the steps for k times to obtain a distraction driving dynamic identification model.
Optionally, when the trigger signal is received, obtaining a multi-frame driving video of the driver within a preset time period, and calling a driver distraction driving dynamic recognition model to perform real-time monitoring of driver distraction driving based on the multi-frame driving video, including:
when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length;
dividing a multi-frame driving video into picture frames, and introducing the obtained picture frames into a driver distraction driving dynamic identification model frame by frame to judge whether the condition of driver distraction driving exists in the multi-frame driving video;
a picture frame is obtained which is marked as the presence of driver distraction.
Optionally, the determining the level of driver distraction based on the monitoring result includes:
calculating the proportion of the picture frames marked as the distracted driving of the driver in the multi-frame driving video;
the level of driver distraction is determined based on the proportional result.
In another aspect, an embodiment of the present application further provides a distracted driving real-time monitoring device based on a proprietary neural network, the device including:
the model construction unit is used for constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
the signal generating unit is used for acquiring the running speed of the current vehicle in real time and generating a triggering signal for monitoring the distracted driving in real time when the running speed is higher than a preset threshold value;
the real-time monitoring unit is used for acquiring multi-frame driving videos of a driver within a preset time length when a trigger signal is received, and calling a driver distraction driving dynamic recognition model to carry out real-time monitoring on driver distraction driving based on the multi-frame driving videos;
and the grade dividing unit is used for judging the level of the distraction degree of the driver based on the monitoring result.
Optionally, the model building unit includes:
the data dividing subunit is used for decomposing the driving video of the driver into picture frames and dividing the obtained picture frames into static identification data, dynamic identification data and test data;
the static model subunit is used for performing the distraction driving static identification training based on the deep convolutional neural network based on the static identification data to obtain a distraction driving static identification model;
the dynamic model subunit is used for training the distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
the model training subunit is used for carrying out model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
Optionally, the static model subunit includes:
static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
evaluating the probability of classification by using a Loss1 function;
carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
and iterating the steps for m times to obtain a distraction driving static identification model.
Optionally, the dynamic model subunit includes:
providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
estimating the distraction probability of the driver at all times by using a Loss2 function;
updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
and iterating the steps for k times to obtain a distraction driving dynamic identification model.
The technical solution provided by the present application brings the following beneficial effects:
because the deep CNN is used to extract spatial structure features and the RNN is used to connect spatio-temporal dynamic information during real-time monitoring, the RNN captures the changing characteristics of the driver before and after distraction on top of the recognition accuracy of the deep CNN, which further improves the accuracy of distracted driving recognition.
Drawings
In order to more clearly illustrate the technical solutions of the present application, the drawings needed in the description of the embodiments are briefly introduced below. The drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flowchart of a method for monitoring distraction driving in real time based on a proprietary neural network according to an embodiment of the present application;
FIG. 2 is a flowchart of a static recognition training of distracted driving proposed by an embodiment of the present application;
FIG. 3 is a flowchart of a dynamic distracted driving recognition training proposed in an embodiment of the present application;
FIG. 4 is a frame of a driver driving gesture picture according to an embodiment of the present disclosure;
Fig. 5 is a structural diagram of the trained distracted driving dynamic recognition model implemented on a Raspberry Pi development board according to an embodiment of the present application;
Fig. 6 is a schematic structural diagram of a distracted driving real-time monitoring device based on a dedicated neural network according to an embodiment of the present application.
Detailed Description
To make the structure and advantages of the present application clearer, the structure of the present application will be further described with reference to the accompanying drawings.
Example one
The method for monitoring the distracted driving in real time based on the proprietary neural network, as shown in fig. 1, includes:
11. constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
12. acquiring the running speed of a current vehicle in real time, and generating a triggering signal for real-time monitoring of the distracted driving when the running speed is higher than a preset threshold value;
13. when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length, and calling a driver distraction driving dynamic recognition model to carry out real-time monitoring on driver distraction driving based on the multi-frame driving video;
14. and determining the level of the driver distraction based on the monitoring result.
In implementation, the distracted driving real-time monitoring method provided by the embodiment of the present application first constructs a driver distracted driving dynamic identification model of a proprietary neural network comprising a deep convolutional neural network and a recurrent neural network, and then, using the running speed of the current vehicle as the trigger condition for real-time monitoring, monitors in real time the multi-frame driving video of the driver acquired within a preset time length. Finally, the driver's distraction level is determined according to the monitoring result.
Because the special neural network used comprises a deep convolutional neural network (CNN) and a recurrent neural network (RNN), the deep CNN's ability to extract spatial structure features and the RNN's ability to connect spatio-temporal dynamic information are exploited simultaneously during real-time monitoring, so that the RNN captures the changing characteristics of the driver before and after distraction on top of the recognition accuracy of the deep CNN, further improving the accuracy of distracted driving recognition.
Optionally, the constructing a driver distracted driving real-time monitoring model of a proprietary neural network including a deep convolutional neural network and a recurrent neural network, that is, step 11, includes:
111. decomposing a driving video of a driver into picture frames, and dividing the obtained picture frames into static identification data, dynamic identification data and test data;
112. performing a distraction driving static recognition training based on a deep convolutional neural network based on the static recognition data to obtain a distraction driving static recognition model;
113. training the distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
114. performing model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
In implementation, video data of the driver's driving is collected, each frame of the collected video is stored as a picture frame, the picture frames are divided into static identification data, dynamic identification data and test data, all pictures are resized to a standard size, and the resized pictures are written sequentially into the Tfrecords1, Tfrecords2 and Tfrecords3 files in the order of static identification data, dynamic identification data and test data.
The static identification data written in Tfrecords1 is used to derive a static identification model for distracted driving,
the dynamic identification data written in Tfrecords2 is used to derive a distraction driving dynamic identification model,
the test data written into Tfrecords3 are used to test the accuracy of the distraction driving dynamic recognition model; the tested distraction driving recognition model can then be used for real-time monitoring of the driver's distracted driving.
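For illustration only, the following sketch shows one way this preprocessing could be carried out, assuming OpenCV and TensorFlow, a standard frame size of 224x224 pixels, integer frame labels and a 60/30/10 split into the three files; the file names match the text above, but the frame size, labels and split ratios are assumptions, since the application does not fix them.

```python
# Minimal preprocessing sketch (assumed details: 224x224 frames, integer labels,
# a 60/30/10 split into static / dynamic / test data). Not the patented implementation.
import cv2
import tensorflow as tf

def video_to_frames(video_path, size=(224, 224)):
    """Decompose a driving video into resized picture frames."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, size))
    cap.release()
    return frames

def write_tfrecords(frames, labels, path):
    """Write (frame, label) pairs into one TFRecords file."""
    with tf.io.TFRecordWriter(path) as writer:
        for frame, label in zip(frames, labels):
            example = tf.train.Example(features=tf.train.Features(feature={
                "image": tf.train.Feature(bytes_list=tf.train.BytesList(
                    value=[cv2.imencode(".jpg", frame)[1].tobytes()])),
                "label": tf.train.Feature(int64_list=tf.train.Int64List(value=[label])),
            }))
            writer.write(example.SerializeToString())

frames = video_to_frames("driver.mp4")              # hypothetical input video
labels = [0] * len(frames)                          # hypothetical labels: 0 = normal, 1 = distracted
n1, n2 = int(0.6 * len(frames)), int(0.9 * len(frames))
write_tfrecords(frames[:n1], labels[:n1], "Tfrecords1")      # static identification data
write_tfrecords(frames[n1:n2], labels[n1:n2], "Tfrecords2")  # dynamic identification data
write_tfrecords(frames[n2:], labels[n2:], "Tfrecords3")      # test data
```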
Based on the above description of the three file types, step 112 specifically includes:
1121. static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
1122. evaluating the probability of classification by using a Loss1 function;
1123. carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
1124. and iterating the steps for m times to obtain a distraction driving static identification model.
In implementation, the Tfrecords1 static identification data are shuffled and fed in batches to the deep convolutional neural network, and the classification probabilities are output by the softmax layer. The Loss1 function evaluates the classification probabilities, the parameters of the deep CNN are updated by optimizing the model parameters, Loss1 is recalculated, and this process is iterated m times to obtain the distraction driving static recognition model.
The optimization process involving the deep CNN, the softmax layer, the Loss1 function and the model parameters is shown in Fig. 2. Following this flow, an optimization model of the deep CNN, the softmax layer, the Loss1 function and the model parameters is established, the parameters are initialized, and the preprocessed static recognition data are used for the distraction driving static recognition training; the purpose of this stage is to extract features of the driver's driving posture.
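As a concrete illustration of this stage, the sketch below builds and trains a small Keras CNN on the Tfrecords1 data, with a softmax output layer and sparse categorical cross-entropy standing in for the Loss1 function; the network architecture, number of posture classes, loss choice and value of m are assumptions, not details taken from the application.

```python
# Static recognition training sketch (assumed: small CNN, 10 posture classes,
# cross-entropy as Loss1, m = 20 iterations). A minimal illustration only.
import tensorflow as tf
from tensorflow.keras import layers

def build_static_model(num_classes=10):
    return tf.keras.Sequential([
        layers.Input(shape=(224, 224, 3)),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),   # softmax layer outputs classification probabilities
    ])

def parse_example(serialized):
    feat = tf.io.parse_single_example(serialized, {
        "image": tf.io.FixedLenFeature([], tf.string),
        "label": tf.io.FixedLenFeature([], tf.int64),
    })
    image = tf.image.resize(tf.io.decode_jpeg(feat["image"], channels=3), (224, 224)) / 255.0
    return image, feat["label"]

static_dataset = (tf.data.TFRecordDataset("Tfrecords1")     # static identification data
                  .map(parse_example).shuffle(1024).batch(32))

static_model = build_static_model()
static_model.compile(
    optimizer="adam",                         # model-parameter optimization
    loss="sparse_categorical_crossentropy",   # plays the role of the Loss1 function
    metrics=["accuracy"],
)
m = 20                                        # assumed number of training iterations
static_model.fit(static_dataset, epochs=m)
```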
Step 113 specifically includes:
1131. providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
1132. the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
1133. the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
1134. estimating the distraction probability of the driver at all times by using a Loss2 function;
1135. updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
1136. and iterating the steps for k times to obtain a distraction driving dynamic identification model.
In implementation, the Tfrecords2 dynamic recognition data are provided in batches to the distraction driving static recognition model, the features of each picture frame are extracted by the static recognition model, and the extracted per-frame features are fed in sequence to the successive stages of the RNN as the inputs of the distraction driving dynamic recognition training.
In steps 1131-1136, the n-level RNN extracts the driving state information at each time instant as the input of the Logistic layer, and the Logistic layer then outputs the probability of driver distraction at each time instant. The Loss2 function evaluates the driver distraction probabilities at all time instants, the parameters of the n-level RNN are updated by optimizing the model parameters, Loss2 is recalculated, and this process is iterated k times to obtain the distraction driving dynamic recognition model.
The optimization process involving the n-level RNN, the Logistic layer, the Loss2 function and the model parameters follows the flow given in Fig. 3: the preprocessed dynamic recognition data are used for the distraction driving dynamic recognition training. The function of the distraction driving dynamic recognition model module is to obtain the trained model parameters for subsequent testing.
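The sketch below illustrates how this dynamic recognition stage could be wired, reusing the static_model from the previous sketch (minus its softmax layer) as a per-frame feature extractor, with an LSTM standing in for the n-level recurrent network, a per-instant sigmoid output as the Logistic layer, and binary cross-entropy as Loss2; all of these are assumed stand-ins for details the application leaves open.

```python
# Dynamic recognition training sketch (assumed: CNN backbone as frozen feature
# extractor, one LSTM standing in for the n-level RNN, sigmoid "Logistic layer",
# binary cross-entropy as Loss2, clips of 10 frames). Illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

SEQ_LEN = 10  # assumed number of frames per driving video clip

# Reuse the static model up to its feature layer as the per-frame extractor.
feature_extractor = tf.keras.Model(
    inputs=static_model.input,
    outputs=static_model.layers[-2].output)   # drop the softmax layer
feature_extractor.trainable = False

dynamic_model = tf.keras.Sequential([
    layers.Input(shape=(SEQ_LEN, 224, 224, 3)),
    layers.TimeDistributed(feature_extractor),                       # per-frame spatial features
    layers.LSTM(64, return_sequences=True),                          # recurrent stage over time
    layers.TimeDistributed(layers.Dense(1, activation="sigmoid")),   # Logistic layer: P(distracted) per instant
])

dynamic_model.compile(
    optimizer="adam",
    loss="binary_crossentropy",   # plays the role of the Loss2 function
    metrics=["accuracy"],
)
# dynamic_dataset: batches of (SEQ_LEN-frame clips, per-frame 0/1 labels) parsed from Tfrecords2
# k = 15                          # assumed number of iterations
# dynamic_model.fit(dynamic_dataset, epochs=k)
```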
Optionally, step 13 includes:
131. when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length;
132. dividing a multi-frame driving video into picture frames, and introducing the obtained picture frames into a driver distraction driving dynamic identification model frame by frame to judge whether the condition of driver distraction driving exists in the multi-frame driving video;
133. a picture frame is obtained which is marked as the presence of driver distraction.
In implementation, Fig. 4 shows the driver driving posture pictures monitored by the system over a certain period of time. In this sequence, the first 3 time instants correspond to normal driving; at the 4th time instant the driver picks up a mobile phone to answer a call, puts the phone down at the 9th time instant, and resumes normal driving at the 10th time instant. With 0 denoting normal driving and 1 denoting distracted driving, the monitoring data show that the monitoring result matches the driver's real-time driving state exactly; real-time monitoring of distracted driving is thus achieved, and the time instants at which the driver is in a distracted driving state are accurately identified.
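To make the frame-by-frame monitoring step concrete, the following sketch splits a captured clip into frames and feeds it through the dynamic model from the earlier sketch, producing a 0/1 label per time instant like the sequence described above; the 0.5 decision threshold and the preprocessing are assumptions.

```python
# Real-time monitoring sketch: label each frame of a captured clip as
# 0 (normal) or 1 (distracted). Threshold and preprocessing are assumed.
import cv2
import numpy as np

def monitor_clip(video_path, dynamic_model, seq_len=10):
    frames = []
    cap = cv2.VideoCapture(video_path)
    while len(frames) < seq_len:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, (224, 224)) / 255.0)
    cap.release()
    clip = np.expand_dims(np.array(frames, dtype=np.float32), axis=0)  # (1, seq_len, 224, 224, 3)
    probs = dynamic_model.predict(clip)[0, :, 0]      # per-instant P(distracted)
    return (probs > 0.5).astype(int)                  # e.g. [0 0 0 1 1 1 1 1 1 0]

# labels = monitor_clip("clip.mp4", dynamic_model)    # hypothetical captured clip
```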
Optionally, the determining the level of driver distraction based on the monitoring result in step 14 includes:
141. calculating the proportion of the picture frames marked as the distracted driving of the driver in the multi-frame driving video;
142. the level of driver distraction is determined based on the proportional result.
In practice, to make the real-time monitoring result obtained after steps 11-13 easier to interpret, step 14 may also be performed to output the level of driver distraction. The simplest way to output this level is to calculate the proportion of picture frames marked as driver distracted driving in the multi-frame driving video and round that proportion value as the output result. For example, a proportion of 60% yields a driver distraction level of 6; a larger value indicates a greater degree of distraction and a higher risk.
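A minimal sketch of this grading step follows, assuming the 0/1 frame labels produced above and the ten-point scale implied by the 60%-to-level-6 example:

```python
# Distraction grading sketch: level = rounded share of distracted frames on an assumed 0-10 scale.
def distraction_level(frame_labels):
    ratio = sum(frame_labels) / len(frame_labels)   # proportion of frames marked as distracted driving
    return round(ratio * 10)

print(distraction_level([0, 0, 0, 1, 1, 1, 1, 1, 1, 0]))  # 6 distracted frames out of 10 -> level 6
```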
In actual use, as shown in Fig. 5, the trained distracted driving dynamic recognition model is implemented on a Raspberry Pi development board. A speedometer module measures the vehicle's running speed in real time, a camera module collects the driver's driving posture picture data in real time, and the collected speed data and driver posture picture data are both stored in the ROM of the Raspberry Pi development board. By checking the driving speed stored in the ROM, the Raspberry Pi development board controls whether the driver driving posture picture data are passed to the dedicated distracted driving dynamic recognition model module. Based on the input driver driving posture picture data, the system monitors the driver's real-time driving state through the calculations of the dedicated distracted driving dynamic recognition model module and produces the monitoring data. An alarm module, which combines an indicator lamp and sound, is driven according to the monitoring data to give the driver an appropriate warning.
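The on-board workflow described above could be organized roughly as in the following sketch; the speed-reading stub, the 60 km/h trigger threshold, the camera index and the alarm stub are all assumptions standing in for the speedometer, camera and alarm modules named in the application, and dynamic_model refers to the model from the earlier sketches.

```python
# On-board monitoring loop sketch for a Raspberry Pi-class device (assumed wiring).
import time
import cv2
import numpy as np

SPEED_THRESHOLD_KMH = 60      # assumed preset threshold
SEQ_LEN = 10                  # frames per monitored clip

def read_speed_kmh():
    """Stub for the speedometer module; replace with the real CAN/GPIO reading."""
    return 72.0

def raise_alarm(level):
    """Stub for the alarm module (indicator lamp plus sound)."""
    print(f"ALARM: driver distraction level {level}")

camera = cv2.VideoCapture(0)  # assumed camera index
while True:
    if read_speed_kmh() > SPEED_THRESHOLD_KMH:        # trigger signal for real-time monitoring
        frames = []
        while len(frames) < SEQ_LEN:
            ok, frame = camera.read()
            if not ok:
                break
            frames.append(cv2.resize(frame, (224, 224)) / 255.0)
        if len(frames) < SEQ_LEN:
            time.sleep(1.0)
            continue
        clip = np.expand_dims(np.array(frames, dtype=np.float32), 0)
        probs = dynamic_model.predict(clip)[0, :, 0]  # dynamic recognition model from the sketches above
        labels = (probs > 0.5).astype(int)
        level = round(labels.mean() * 10)
        if level > 0:
            raise_alarm(level)
    time.sleep(1.0)
```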
Example two
On the basis of the first embodiment, the present embodiment provides a distracted driving real-time monitoring device 6 based on a proprietary neural network, as shown in fig. 6, specifically including:
the model construction unit 61 is used for constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
the signal generating unit 62 is used for acquiring the running speed of the current vehicle in real time, and generating a triggering signal for the real-time monitoring of the distracted driving when the running speed is higher than a preset threshold value;
the real-time monitoring unit 63 is used for acquiring multi-frame driving videos of a driver within a preset time length when receiving the trigger signal, and calling a driver distraction driving dynamic identification model to perform real-time monitoring of driver distraction driving based on the multi-frame driving videos;
and a ranking unit 64 for determining a level of driver distraction based on the monitoring result.
In implementation, the distracted driving real-time monitoring device provided by the embodiment of the present application first constructs a driver distracted driving dynamic recognition model of a proprietary neural network comprising a deep convolutional neural network and a recurrent neural network, and then, using the running speed of the current vehicle as the trigger condition for real-time monitoring, monitors in real time the multi-frame driving video of the driver acquired within a preset time length. Finally, the driver's distraction level is determined according to the monitoring result.
Because the special neural network used comprises a deep convolutional neural network (CNN) and a recurrent neural network (RNN), the deep CNN's ability to extract spatial structure features and the RNN's ability to connect spatio-temporal dynamic information are exploited simultaneously during real-time monitoring, so that the RNN captures the changing characteristics of the driver before and after distraction on top of the recognition accuracy of the deep CNN, further improving the accuracy of distracted driving recognition.
Optionally, the model building unit 61 includes:
the data dividing subunit 611 is configured to decompose the driving video of the driver into picture frames, and divide the obtained picture frames into three types, namely static identification data, dynamic identification data, and test data;
the static model subunit 612 is configured to perform, based on the static identification data, a distraction driving static identification training based on the deep convolutional neural network to obtain a distraction driving static identification model;
a dynamic model subunit 613, configured to train a distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
the model training subunit 614 is configured to perform model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
In implementation, video data of the driver's driving is collected, each frame of the collected video is stored as a picture frame, the picture frames are divided into static identification data, dynamic identification data and test data, all pictures are resized to a standard size, and the resized pictures are written sequentially into the Tfrecords1, Tfrecords2 and Tfrecords3 files in the order of static identification data, dynamic identification data and test data.
The static identification data written in Tfrecords1 is used to derive a static identification model for distracted driving,
the dynamic identification data written in Tfrecords2 is used to derive a distraction driving dynamic identification model,
the test data written into Tfrecords3 are used to test the accuracy of the distraction driving dynamic recognition model; the tested distraction driving recognition model can then be used for real-time monitoring of the driver's distracted driving.
Based on the above description of the three file types, the relevant subunits are detailed as follows:
optionally, the static model subunit 612 includes:
static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
evaluating the probability of classification by using a Loss1 function;
carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
and iterating the steps for m times to obtain a distraction driving static identification model.
In implementation, the Tfrecords1 static identification data are shuffled and fed in batches to the deep convolutional neural network, and the classification probabilities are output by the softmax layer. The Loss1 function evaluates the classification probabilities, the parameters of the deep CNN are updated by optimizing the model parameters, Loss1 is recalculated, and this process is iterated m times to obtain the distraction driving static recognition model.
The optimization process involving the deep CNN, the softmax layer, the Loss1 function and the model parameters is shown in Fig. 2. Following this flow, an optimization model of the deep CNN, the softmax layer, the Loss1 function and the model parameters is established, the parameters are initialized, and the preprocessed static recognition data are used for the distraction driving static recognition training; the purpose of this stage is to extract features of the driver's driving posture.
Optionally, the dynamic model subunit 613 includes:
providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
estimating the distraction probability of the driver at all times by using a Loss2 function;
updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
and iterating the steps for k times to obtain a distraction driving dynamic identification model.
In implementation, the Tfrecords2 dynamic recognition data are provided in batches to the distraction driving static recognition model, the features of each picture frame are extracted by the static recognition model, and the extracted per-frame features are fed in sequence to the successive stages of the RNN as the inputs of the distraction driving dynamic recognition training.
In steps 1131-1136, the n-level RNN extracts the driving state information at each time instant as the input of the Logistic layer, and the Logistic layer then outputs the probability of driver distraction at each time instant. The Loss2 function evaluates the driver distraction probabilities at all time instants, the parameters of the n-level RNN are updated by optimizing the model parameters, Loss2 is recalculated, and this process is iterated k times to obtain the distraction driving dynamic recognition model.
The optimization process involving the n-level RNN, the Logistic layer, the Loss2 function and the model parameters follows the flow given in Fig. 3: the preprocessed dynamic recognition data are used for the distraction driving dynamic recognition training. The function of the distraction driving dynamic recognition model module is to obtain the trained model parameters for subsequent testing.
Optionally, the real-time monitoring unit 63 includes:
631. when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length;
632. dividing a multi-frame driving video into picture frames, and introducing the obtained picture frames into a driver distraction driving dynamic identification model frame by frame to judge whether the condition of driver distraction driving exists in the multi-frame driving video;
633. a picture frame is obtained which is marked as the presence of driver distraction.
In implementation, Fig. 4 shows the driver driving posture pictures monitored by the system over a certain period of time. In this sequence, the first 3 time instants correspond to normal driving; at the 4th time instant the driver picks up a mobile phone to answer a call, puts the phone down at the 9th time instant, and resumes normal driving at the 10th time instant. With 0 denoting normal driving and 1 denoting distracted driving, the monitoring data show that the monitoring result matches the driver's real-time driving state exactly; real-time monitoring of distracted driving is thus achieved, and the time instants at which the driver is in a distracted driving state are accurately identified.
Optionally, the determining the level of the driver's distraction based on the monitoring result by the ranking unit 64 includes:
641. calculating the proportion of the picture frames marked as the distracted driving of the driver in the multi-frame driving video;
642. the level of driver distraction is determined based on the proportional result.
In implementation, to make the real-time monitoring result obtained after the model construction unit 61, the signal generation unit 62 and the real-time monitoring unit 63 perform their operations easier to interpret, the level of driver distraction output by the ranking unit 64 is also used. The simplest way to output this level is to calculate the proportion of picture frames marked as driver distracted driving in the multi-frame driving video and round that proportion value as the output result. For example, a proportion of 60% yields a driver distraction level of 6; a larger value indicates a greater degree of distraction and a higher risk.
In actual use, as shown in Fig. 5, the trained distracted driving dynamic recognition model is implemented on a Raspberry Pi development board. A speedometer module measures the vehicle's running speed in real time, a camera module collects the driver's driving posture picture data in real time, and the collected speed data and driver posture picture data are both stored in the ROM of the Raspberry Pi development board. By checking the driving speed stored in the ROM, the Raspberry Pi development board controls whether the driver driving posture picture data are passed to the dedicated distracted driving dynamic recognition model module. Based on the input driver driving posture picture data, the system monitors the driver's real-time driving state through the calculations of the dedicated distracted driving dynamic recognition model module and produces the monitoring data. An alarm module, which combines an indicator lamp and sound, is driven according to the monitoring data to give the driver an appropriate warning.
The sequence numbers in the above embodiments are merely for description, and do not represent the sequence of the assembly or the use of the components.
The above description is only exemplary of the present application and should not be taken as limiting the present application, as any modification, equivalent replacement, or improvement made within the spirit and principle of the present application should be included in the protection scope of the present application.

Claims (10)

1. The distracted driving real-time monitoring method based on the special neural network is characterized by comprising the following steps of:
constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
acquiring the running speed of a current vehicle in real time, and generating a triggering signal for real-time monitoring of the distracted driving when the running speed is higher than a preset threshold value;
when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length, and calling a driver distraction driving dynamic recognition model to carry out real-time monitoring on driver distraction driving based on the multi-frame driving video;
and determining the level of the driver distraction based on the monitoring result.
2. The special neural network-based distracted driving real-time monitoring method as claimed in claim 1, wherein the constructing of the driver distracted driving real-time monitoring model of the special neural network including the deep convolutional neural network and the recurrent neural network comprises:
decomposing a driving video of a driver into picture frames, and dividing the obtained picture frames into static identification data, dynamic identification data and test data;
performing a distraction driving static recognition training based on a deep convolutional neural network based on the static recognition data to obtain a distraction driving static recognition model;
training the distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
performing model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
3. The method for real-time monitoring of distracted driving based on the proprietary neural network as claimed in claim 2, wherein the performing of the distraction driving static recognition training based on the deep convolutional neural network using the static recognition data to obtain the distraction driving static recognition model comprises the following steps:
static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
evaluating the probability of classification by using a Loss1 function;
carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
and iterating the steps for m times to obtain a distraction driving static identification model.
4. The method for monitoring the distracted driving based on the proprietary neural network in real time as claimed in claim 2, wherein the training of the static recognition model of the distracted driving based on the dynamic recognition data to obtain the dynamic recognition model of the distracted driving comprises:
providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
estimating the distraction probability of the driver at all times by using a Loss2 function;
updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
and iterating the steps for k times to obtain a distraction driving dynamic identification model.
5. The method for monitoring the driver distracted driving based on the proprietary neural network in real time according to claim 1, wherein when the trigger signal is received, a multi-frame driving video of the driver within a preset time length is acquired, and a dynamic recognition model of the driver distracted driving is called to perform the real-time monitoring of the driver distracted driving based on the multi-frame driving video, and the method comprises the following steps:
when a trigger signal is received, acquiring a multi-frame driving video of a driver within a preset time length;
dividing a multi-frame driving video into picture frames, and introducing the obtained picture frames into a driver distraction driving dynamic identification model frame by frame to judge whether the condition of driver distraction driving exists in the multi-frame driving video;
a picture frame is obtained which is marked as the presence of driver distraction.
6. The method for monitoring the distracted driving based on the proprietary neural network in real time as claimed in claim 1, wherein the determining the level of the driver's distraction based on the monitoring result comprises:
calculating the proportion of the picture frames marked as the distracted driving of the driver in the multi-frame driving video;
the level of driver distraction is determined based on the proportional result.
7. Distracted driving real-time monitoring equipment based on a special neural network is characterized by comprising the following components:
the model construction unit is used for constructing a driver distraction driving dynamic identification model of a special neural network comprising a deep convolutional neural network and a recurrent neural network;
the signal generating unit is used for acquiring the running speed of the current vehicle in real time and generating a triggering signal for monitoring the distracted driving in real time when the running speed is higher than a preset threshold value;
the real-time monitoring unit is used for acquiring multi-frame driving videos of a driver within a preset time length when a trigger signal is received, and calling a driver distraction driving dynamic recognition model to carry out real-time monitoring on driver distraction driving based on the multi-frame driving videos;
and the grade dividing unit is used for judging the level of the distraction degree of the driver based on the monitoring result.
8. The private neural network-based real-time driving distraction monitoring device of claim 7, wherein the model construction unit comprises:
the data dividing subunit is used for decomposing the driving video of the driver into picture frames and dividing the obtained picture frames into static identification data, dynamic identification data and test data;
the static model subunit is used for performing the distraction driving static identification training based on the deep convolutional neural network based on the static identification data to obtain a distraction driving static identification model;
the dynamic model subunit is used for training the distraction driving static recognition model based on the dynamic recognition data to obtain a distraction driving dynamic recognition model;
the model training subunit is used for carrying out model training on the distraction driving dynamic recognition model based on the test data;
wherein the size of the picture frame is modified to a standard size.
9. The private neural network-based real-time driving distraction monitoring device of claim 8, wherein the static model subunit comprises:
static identification data are imported into a softmax layer in the deep convolutional neural network, and the probability of output classification of the softmax layer is obtained;
evaluating the probability of classification by using a Loss1 function;
carrying out parameter updating on the deep convolutional neural network by optimizing the model parameters, and recalculating the evaluation value of Loss1 according to the updated parameters;
and iterating the steps for m times to obtain a distraction driving static identification model.
10. The private neural network-based real-time driving distraction monitoring device of claim 8, wherein the dynamic model subunit comprises:
providing the dynamic identification data to a distraction driving static identification model in batches in sequence, and extracting the characteristics of each frame of picture in the dynamic identification data through the static identification model;
the extracted features of each frame of picture are taken as the input of each level of the recurrent neural network, and are input to each level of the recurrent neural network in sequence;
the driving state information of each moment is extracted by using the n-level recurrent neural network and is used as the input of a Logistic layer in the recurrent neural network, and the probability of the distraction of a driver at each moment is output through the Logistic layer;
estimating the distraction probability of the driver at all times by using a Loss2 function;
updating the parameters of the n-level recurrent neural network, and recalculating the evaluation value of Loss2 according to the updated parameters;
and iterating the steps for k times to obtain a distraction driving dynamic identification model.
CN202010940231.6A 2020-09-09 2020-09-09 Distracted driving real-time monitoring method and device based on special neural network Pending CN112417945A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010940231.6A CN112417945A (en) 2020-09-09 2020-09-09 Distracted driving real-time monitoring method and device based on special neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010940231.6A CN112417945A (en) 2020-09-09 2020-09-09 Distracted driving real-time monitoring method and device based on special neural network

Publications (1)

Publication Number Publication Date
CN112417945A true CN112417945A (en) 2021-02-26

Family

ID=74854239

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010940231.6A Pending CN112417945A (en) 2020-09-09 2020-09-09 Distracted driving real-time monitoring method and device based on special neural network

Country Status (1)

Country Link
CN (1) CN112417945A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007265377A (en) * 2006-03-01 2007-10-11 Toyota Central Res & Dev Lab Inc Driver state determining device and driving support device
CN108694367A (en) * 2017-04-07 2018-10-23 北京图森未来科技有限公司 A kind of method for building up of driving behavior model, device and system
CN110615001A (en) * 2019-09-27 2019-12-27 汉纳森(厦门)数据股份有限公司 Driving safety reminding method, device and medium based on CAN data
CN111178272A (en) * 2019-12-30 2020-05-19 东软集团(北京)有限公司 Method, device and equipment for identifying driver behavior

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Li Ming: "Research and Implementation of an Artificial Intelligence Algorithm for Distracted Driving Monitoring" (一种用于分心驾驶监测的人工智能算法的研究与实现), University of Electronic Science and Technology of China, pages 1-54 *

Similar Documents

Publication Publication Date Title
CN107862270B (en) Face classifier training method, face detection method and device and electronic equipment
CN111414813A (en) Dangerous driving behavior identification method, device, equipment and storage medium
CN111738044B (en) Campus violence assessment method based on deep learning behavior recognition
CN103164993B (en) Digital teaching system and screen monitoring method thereof
CN115272182A (en) Lane line detection method, lane line detection device, electronic device, and computer-readable medium
CN112417945A (en) Distracted driving real-time monitoring method and device based on special neural network
CN111626197A (en) Human behavior recognition network model and recognition method
CN107844777B (en) Method and apparatus for generating information
CN110598716A (en) Personnel attribute identification method, device and system
EP4156115A1 (en) Method and apparatus for identifying product that has missed inspection, electronic device, and storage medium
US11954955B2 (en) Method and system for collecting and monitoring vehicle status information
CN112990892A (en) Video information acquisition method and image processing system for teaching evaluation
CN114005054A (en) AI intelligence system of grading
CN113239894A (en) Crowd sensing system based on crowd behavior analysis assistance
CN112329566A (en) Visual perception system for accurately perceiving head movements of motor vehicle driver
CN116935366B (en) Target detection method and device, electronic equipment and storage medium
CN117132430B (en) Campus management method and device based on big data and Internet of things
CN114445711B (en) Image detection method, image detection device, electronic equipment and storage medium
CN114170421B (en) Image detection method, device, equipment and storage medium
CN117830044B (en) Interactive teaching data management system and method based on cloud computing
CN112101279B (en) Target object abnormality detection method, target object abnormality detection device, electronic equipment and storage medium
CN114241474A (en) Violation behavior identification method based on convolution and graph convolution
CN113762218A (en) Distraction driving real-time monitoring system based on neural network
CN114937185A (en) Image sample acquisition method and device, electronic equipment and storage medium
CN116625702A (en) Vehicle collision detection method and device and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination