CN112836669B - Driver distraction driving detection method - Google Patents

Driver distraction driving detection method

Info

Publication number
CN112836669B
Authority
CN
China
Prior art keywords
image
driver
neural network
convolutional neural
layer
Prior art date
Legal status
Active
Application number
CN202110199459.9A
Other languages
Chinese (zh)
Other versions
CN112836669A (en)
Inventor
秦斌斌
钱江波
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Priority date
Filing date
Publication date
Application filed by Ningbo University filed Critical Ningbo University
Priority to CN202110199459.9A priority Critical patent/CN112836669B/en
Publication of CN112836669A publication Critical patent/CN112836669A/en
Application granted granted Critical
Publication of CN112836669B publication Critical patent/CN112836669B/en


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/30Noise filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Biophysics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Biomedical Technology (AREA)
  • Evolutionary Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to a driver distraction detection method comprising the following steps. Step 1: construct a training set and a test set by converting multiple frames of driver images captured in the vehicle cab into grayscale images, sequentially applying normalization, preprocessing and contour-texture extraction to obtain a second image for each frame of driver image, and dividing the second images into a training set and a test set. Step 2: construct and train a convolutional neural network whose convolution kernel size decreases layer by layer to obtain a trained convolutional neural network. Step 3: select from the test set the second image corresponding to a driver image to be tested and input it into the convolutional neural network trained in Step 2 to obtain the action category corresponding to that driver image. Advantages: extracting the contour texture simplifies the content to be learned, and reducing the convolution kernel size layer by layer enables fast extraction of driver information and reduces the number of network parameters.

Description

Driver distraction driving detection method
Technical Field
The invention relates to the field of image processing, in particular to a driver distraction driving detection method.
Background
In recent years, as the number of private cars has grown, traffic accidents have increased, and a large proportion of them are caused by driver distraction: distracted actions such as answering a phone call, drinking water or reaching for objects while driving easily lead to accidents. It is therefore necessary to detect the driver's actions in real time and promptly remind a distracted driver, which can effectively prevent safety accidents.
In order to detect driver distraction, Chinese patent application No. CN201910532626.X (application publication No. CN110363093A), for example, discloses a driver action recognition method and device comprising: acquiring an image of the current driver; inputting the image into a pre-trained two-dimensional convolutional neural network and a pre-trained three-dimensional convolutional neural network respectively to obtain a first recognition result and a second recognition result of the driver's action; and comparing the first recognition result with the second recognition result to determine the action category of the driver's action. The two-dimensional convolutional neural network recognizes the driver's action at a single moment of the driving gesture, the three-dimensional convolutional neural network recognizes the driver's action over the course of the driving gesture, and combining the two networks to jointly recognize the driving gesture improves the accuracy of driver action recognition. However, the huge number of network parameters makes the trained convolutional neural network model difficult to use for real-time detection, and such a deep neural network is prone to overfitting.
In addition, for the same receptive field, stacking several small convolution kernels in existing convolutional neural networks requires fewer parameters and less computation than using a single large kernel; for example, three 3×3 convolution layers cover the receptive field of one 7×7 convolution layer while requiring less computation (with C input and C output channels per layer, three 3×3 layers use 3·(3·3·C·C) = 27C² weights, versus 49C² for a single 7×7 layer). Existing convolutional neural networks therefore generally stack several small kernels instead of using one large kernel. Of course, smaller is not always better: small kernels cannot represent the features of particularly sparse data, while adopting large kernels greatly increases the complexity. However, current convolutional neural networks place no constraint on the kernel sizes of their convolution layers and, to reduce computation, mostly choose 3×3 convolution layers, so convolutional neural networks in the prior art cannot be used for real-time detection.
Disclosure of Invention
In view of the above state of the art, the invention provides a driver distraction driving detection method that reduces the number of network parameters, improves accuracy and speeds up image processing, so that real-time detection becomes possible.
The technical solution adopted by the invention to solve the above technical problem is a driver distraction driving detection method, characterized in that it comprises the following steps:
step 1, constructing a training set and a testing set; the method comprises the following specific steps:
step 1-1, converting multiple frames of driver images captured inside the vehicle cab into grayscale images of size N×M, wherein N and M are positive integers;
step 1-2, respectively carrying out normalization processing and preprocessing on the gray level images of N x M of each frame in the step 1-1 in sequence to obtain a first image corresponding to each frame of driver image;
step 1-3, extracting the outline texture of each frame of first image to obtain a second image corresponding to each frame of driver image;
step 1-4, dividing all the second images into a training set and a testing set, wherein each training sample in the training set comprises a second image corresponding to each frame of driver image and an action type label corresponding to the driver image;
step 2, constructing a convolutional neural network and training the convolutional neural network to obtain the trained convolutional neural network, wherein the specific steps are as follows:
step 2-1, constructing a convolutional neural network comprising q sequentially connected convolution layers, wherein the convolution kernel of the i-th convolution layer has size m_i×m_i, i = 1, 2, …, q, q is a positive integer, and m_1 > m_2 > … > m_q;
Step 2-2, inputting a second image corresponding to a certain frame of driver image in the training set into the convolutional neural network constructed in the step 2-1 to obtain the actual action category of the driver;
step 2-3, calculating a loss function in the convolutional neural network according to the action type label corresponding to the driver image in the training sample and the actual action type of the driver obtained in the step 2-2, and updating initialization parameters in the convolutional neural network according to the loss function to obtain the convolutional neural network after training and updating;
step 2-4, selecting other training samples in sequence, and training the convolutional neural network after training update by using the same method in step 2-2 and step 2-3 in sequence to finally obtain the convolutional neural network after training;
and step 3, arbitrarily selecting from the test set the second image corresponding to a driver image to be tested and inputting it into the convolutional neural network trained in step 2, so as to obtain the action category corresponding to the driver image to be tested.
Preferably, the second image in step 1-3 is obtained as follows: HOG features are extracted from each frame of first image to obtain the HOG feature vector corresponding to each frame of first image, and the HOG feature vector of each frame of first image is converted into an image to obtain a second image containing the contour texture.
Further, the convolutional neural network constructed in step 2-1 has a pyramid shape. As the network gets deeper, the pyramid-shaped convolutional neural network gradually increases the number of feature maps while reducing the spatial resolution of the learned feature maps, so that the information in the feature maps is preserved.
In order to avoid overfitting of the convolutional neural network, the convolutional neural network constructed in step 2-1 further comprises a Dropout layer, a global average pooling layer and a Softmax classification layer connected after the q-th convolution layer, and the rear end of each convolution layer is further connected in sequence to an activation layer, a batch regularization layer and a pooling layer.
Preferably, the activation layer adopts a ReLU nonlinear activation function, and the pooling layer adopts a maximum pooling method.
Compared with the prior art, the invention has the following advantages. First, grayscale conversion, normalization, preprocessing and contour-texture extraction filter useless information out of the driver image, which simplifies the content the convolutional neural network has to learn. Second, processing an image containing only contour texture with a larger convolution kernel yields a larger receptive field, and reducing the kernel size layer by layer enables fast extraction of driver information and reduces the number of network parameters, so the method can be used for real-time detection.
Drawings
FIG. 1 is a block diagram of distraction detection for a single driver image in an embodiment of the present invention;
FIG. 2 is a graph of the convolution kernel size of a single convolution layer on a GPU and CPU versus the time required for the convolution layer to process a single image in an embodiment of the present invention;
FIG. 3 is a graph of the number of layers of a convolution layer versus the time required for the convolution layer on the GPU to process a single image in an embodiment of the present invention;
fig. 4 is a graph showing the relationship between the number of layers of the convolution layer and the time required for the convolution layer on the CPU to process a single image in an embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and embodiments.
As shown in fig. 1, a driver distraction driving detection method includes the steps of:
step 1, constructing a training set and a testing set; the method comprises the following specific steps:
step 1-1, converting multiple frames of driver images captured inside the vehicle cab into grayscale images of size N×M, wherein N and M are positive integers; the driver images can be captured in real time by a camera installed in the vehicle cab;
in nature, color is easily affected by illumination and RGB values vary greatly, whereas gradient information conveys the more essential content, and converting three channels into a single channel greatly reduces the amount of computation. The environment in which the driver drives is often affected by light, so the driver image is converted into a grayscale image; in this embodiment, the converted grayscale image is resized to 224×224;
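A minimal sketch of this grayscale-conversion step is given below, assuming OpenCV as the image library (the patent does not name one); the helper name to_gray_224 is a hypothetical choice for illustration.

```python
# A minimal sketch of step 1-1, assuming OpenCV is available; the helper
# name to_gray_224 and the use of cv2 are illustrative assumptions.
import cv2

def to_gray_224(frame_bgr):
    """Convert one cab-camera frame to a 224x224 grayscale image."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)  # 3 channels -> 1 channel
    return cv2.resize(gray, (224, 224))                 # N x M = 224 x 224
```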
step 1-2, respectively carrying out normalization processing and preprocessing on the gray level images of N x M of each frame in the step 1-1 in sequence to obtain a first image corresponding to each frame of driver image;
in this embodiment, in order to improve the training effect, the samples are subjected to data preprocessing and data enhancement as follows:
(1) Subtract from each sample its own mean value so that the sample is centered at 0: the average value of all pixels in the whole frame is subtracted from every pixel of the normalized grayscale image;
(2) Regularizing each sample;
(3) Apply basic image-processing techniques, such as a Gaussian filter, to randomly blur the image; this adds noisy data and improves the robustness of the model.
(4) Crop a series of sub-regions from the original image at random to serve as new training samples. Random cropping effectively establishes a weight relation between each factor feature and the corresponding category and weakens the weights of background (or noise) factors, which produces a better learning effect and improves model stability.
These data-enhancement methods create more data from the original data, increase the amount of training data and improve the generalization ability of the model; a sketch of this preprocessing and augmentation is given below.
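The following is a minimal sketch of items (1)-(4), assuming NumPy and OpenCV; the blur kernel and the 200×200 crop size are illustrative choices, not values fixed by the patent.

```python
# A minimal sketch of the preprocessing and data enhancement in step 1-2,
# assuming NumPy/OpenCV; blur kernel and crop size are illustrative choices.
import numpy as np
import cv2

def preprocess(gray_224):
    """Normalize and mean-center one grayscale image (items (1)-(2))."""
    x = gray_224.astype(np.float32) / 255.0   # normalization to [0, 1]
    return x - x.mean()                       # zero-center the whole frame

def augment(gray_224, rng=np.random.default_rng()):
    """Random Gaussian blur and random crop (items (3)-(4))."""
    out = cv2.GaussianBlur(gray_224, (5, 5), 0) if rng.random() < 0.5 else gray_224
    top, left = rng.integers(0, 25, size=2)   # pick a 200x200 sub-region
    crop = out[top:top + 200, left:left + 200]
    return cv2.resize(crop, (224, 224))       # back to the network input size
```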
Step 1-3, extracting the outline texture of each frame of first image to obtain a second image corresponding to each frame of driver image;
in this embodiment, the second image acquisition method includes: extracting HOG features from each frame of first image to obtain HOG feature vectors corresponding to each frame of first image, and converting the HOG feature vectors corresponding to each frame of first image into images to obtain a second image comprising contour textures; the method for converting the HOG feature vector into the image can directly call the function packaged in the existing system library, wherein the image size of the second image is the same as the gray image size corresponding to the driver image and is 224 multiplied by 224;
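A minimal sketch of this step is given below, assuming scikit-image plays the role of the "existing library" mentioned above; the HOG cell and block sizes are illustrative assumptions, since the patent does not fix them.

```python
# A minimal sketch of step 1-3: extract HOG features and keep the rendered
# HOG image as the "second image". Cell/block sizes are assumptions.
from skimage.feature import hog
from skimage.transform import resize

def contour_texture_image(first_image_224):
    """Return the HOG visualization (second image) of a first image."""
    _, hog_image = hog(first_image_224,
                       orientations=9,
                       pixels_per_cell=(8, 8),
                       cells_per_block=(2, 2),
                       visualize=True)        # also return the rendered image
    return resize(hog_image, (224, 224))      # keep the 224x224 size
```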
step 1-4, dividing all the second images into a training set and a testing set, wherein each training sample in the training set comprises a second image corresponding to each frame of driver image and an action type label corresponding to the driver image;
the action-category label corresponding to each driver image is assigned by manual identification;
step 2, constructing a convolutional neural network and training the convolutional neural network to obtain the trained convolutional neural network, wherein the specific steps are as follows:
step 2-1, constructing a convolutional neural network comprising q sequentially connected convolution layers, wherein the convolution kernel of the i-th convolution layer has size m_i×m_i, i = 1, 2, …, q, q is a positive integer, and m_1 > m_2 > … > m_q;
Preferably, the constructed convolutional neural network has a pyramid shape, i.e. the number of convolution kernels increases layer by layer, so that as the network gets deeper the number of feature maps increases layer by layer while the spatial resolution of the learned feature maps decreases, thereby preserving the information in the feature maps;
the constructed convolutional neural network further comprises a Dropout layer, a global average pooling layer and a Softmax classifying layer which are connected with the q-th convolutional layer, wherein the rear end of each convolutional layer is further connected with an activating layer, a batch regularization layer and a pooling layer in sequence.
In this embodiment, as shown in FIG. 1, the convolutional neural network is built from 4 convolution layers: the first convolution layer has 64 convolution kernels of size 12×12; the second has 128 kernels of size 9×9; the third has 256 kernels of size 6×6; and the fourth has 512 kernels of size 3×3.
Each convolution layer is followed by a ReLU nonlinear activation function, L2 weight regularization and a max-pooling layer, and a Dropout layer is further placed after the max-pooling layer of the fourth convolution layer. The L2 weight regularization suppresses the overfitting that overly large convolution-kernel parameters would cause; in addition, since deeper neural networks overfit more easily, the Dropout layer (with a rate of 0.5) is added to avoid overfitting and strengthen the generalization ability of the network. Finally, Softmax classification is performed to obtain the driver's final action category;
the specific layer number and parameters of the convolutional neural network are adjusted according to the actual image processing condition, so that a better processing effect is obtained;
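A minimal PyTorch sketch of this embodiment's network is given below; the number of action classes (10) and the final linear layer mapping the pooled features to class scores are assumptions not stated here, and the L2 weight regularization is applied through the optimizer's weight_decay rather than as a separate layer.

```python
# A minimal PyTorch sketch of the 4-layer pyramid network described above.
# num_classes=10 and the final linear layer are assumptions; L2 weight
# regularization is handled by weight_decay in the optimizer, not a layer.
import torch
import torch.nn as nn

class DistractionNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        cfg = [(1, 64, 12), (64, 128, 9), (128, 256, 6), (256, 512, 3)]
        blocks = []
        for c_in, c_out, k in cfg:               # kernel size shrinks layer by layer
            blocks += [nn.Conv2d(c_in, c_out, kernel_size=k),
                       nn.ReLU(inplace=True),
                       nn.MaxPool2d(2)]
        self.features = nn.Sequential(*blocks)
        self.dropout = nn.Dropout(0.5)            # Dropout after the last pooling
        self.gap = nn.AdaptiveAvgPool2d(1)        # global average pooling
        self.fc = nn.Linear(512, num_classes)

    def forward(self, x):                         # x: (batch, 1, 224, 224)
        x = self.dropout(self.features(x))
        x = self.gap(x).flatten(1)
        return self.fc(x)                         # Softmax is applied in the loss
```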
step 2-2, inputting a second image corresponding to a certain frame of driver image in the training set into the convolutional neural network constructed in the step 2-1 to obtain the actual action category of the driver;
step 2-3, calculating a loss function in the convolutional neural network according to the action type label corresponding to the driver image in the training sample and the actual action type of the driver obtained in the step 2-2, and updating initialization parameters in the convolutional neural network according to the loss function to obtain the convolutional neural network after training and updating;
step 2-4, selecting other training samples in sequence, and training the convolutional neural network after training update by using the same method in step 2-2 and step 2-3 in sequence to finally obtain the convolutional neural network after training;
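A minimal sketch of the training loop in steps 2-2 to 2-4 is shown below; the cross-entropy loss, SGD optimizer and hyper-parameters are assumptions (the patent only states that a loss function is computed and the parameters are updated), and weight_decay stands in for the L2 weight regularization of the embodiment.

```python
# A minimal training-loop sketch for steps 2-2 to 2-4, using the hypothetical
# DistractionNet above. Loss, optimizer and hyper-parameters are assumptions.
import torch
import torch.nn as nn

def train(model, train_loader, epochs=20, device="cuda"):
    model.to(device).train()
    criterion = nn.CrossEntropyLoss()            # Softmax + negative log-likelihood
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3,
                                momentum=0.9, weight_decay=1e-4)
    for _ in range(epochs):
        for second_images, labels in train_loader:         # steps 2-2 and 2-4
            second_images, labels = second_images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(second_images), labels)  # step 2-3: loss
            loss.backward()                                 # update parameters
            optimizer.step()
    return model
```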
and step 3, arbitrarily selecting from the test set the second image corresponding to a driver image to be tested and inputting it into the convolutional neural network trained in step 2, so as to obtain the action category corresponding to the driver image to be tested.
The test set is used only to verify the effect of the trained convolutional neural network; once the recognition accuracy on the test set reaches a set value, the trained convolutional neural network can be used to monitor the driver in the cab in real time, so that the driver's distraction state is known in real time and an alarm or warning can be given promptly to prevent accidents.
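A minimal real-time inference sketch is given below, chaining the hypothetical helpers introduced above (to_gray_224, preprocess, contour_texture_image, DistractionNet); it is an illustration of step 3, not the patent's prescribed implementation.

```python
# A minimal inference sketch for step 3, built on the hypothetical helpers
# sketched above; the returned integer is the predicted action-class index.
import torch

@torch.no_grad()
def detect_distraction(model, frame_bgr, device="cuda"):
    model.eval()                                          # disable Dropout
    gray = to_gray_224(frame_bgr)                         # step 1-1
    second = contour_texture_image(preprocess(gray))      # steps 1-2 and 1-3
    x = torch.as_tensor(second, dtype=torch.float32,
                        device=device).unsqueeze(0).unsqueeze(0)
    probs = torch.softmax(model(x), dim=1)                # Softmax classification
    return int(probs.argmax(dim=1))                       # predicted action class
```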
On the one hand, because the original driver image contains much useless background information, such as the color of the driver's clothes and the light intensity inside the vehicle, while only the driver's actions matter, the contour texture of the driver image is extracted first to reduce the number of network parameters and improve accuracy. For example, in this embodiment HOG feature vectors are extracted from the grayscale images and converted back into images; the converted images filter out the useless background information and retain only the contour texture of the driver's actions, which removes useless information, simplifies what the network has to learn and benefits training. On the other hand, the image converted from the HOG feature vector contains large blank, useless regions, so when the convolutional neural network is constructed a large convolution kernel is needed to give the network a large receptive field. This helps a shallow layer quickly acquire the driver's contour information and focuses the network's attention on that contour; as the network becomes deeper it gradually "inspects more closely" and produces effective feature information for the next layer. The invention therefore gradually and linearly reduces the convolution kernel size to fine-tune the network and further reduce the number of network parameters.
The computation within a single convolution layer can be parallelized on a GPU (graphics processing unit) or CPU, but different convolution layers cannot run in parallel; moreover, driver-distraction detection must meet a real-time requirement, so a larger convolution kernel is chosen to construct the network. A larger kernel alone is not sufficient, however, because it leads to too many training parameters and possible overfitting. The invention therefore builds the first convolution layer with a larger kernel, so that processing the second image containing the driver's contour texture through this layer extracts the driver information quickly, while the decreasing-kernel network structure reduces the number of network parameters, which increases the image-processing speed and makes the method usable for real-time detection.
To demonstrate the effect of choosing a larger convolution kernel for the first layer of the model when processing an image that contains only contour texture, the MNIST data set (a data set commonly used in image processing) was processed with the PyTorch 1.7 framework. FIG. 2 shows the time needed by a single convolution layer with 256 convolution kernels to process one image on a GPU and on a CPU: as the kernel size grows, the time cost of the convolution on the GPU stays essentially stable and the cost on the CPU does not change much either (the abscissa value 3 in FIG. 2 corresponds to a kernel size of 3×3, and likewise for the other values), which shows that computation within a convolution layer can be parallelized. FIG. 3 and FIG. 4 show the time needed by the GPU and the CPU, respectively, to process a single image with one or several convolution layers. Although three 3×3 convolution layers give the receptive field of one 7×7 convolution layer, FIGS. 3 and 4 show that three convolution layers with 3×3 kernels take longer on the GPU than a single convolution layer with a 7×7 kernel, because computation can be parallelized within a single convolution layer but not between convolution layers, especially on the GPU. A convolution layer with a single large kernel therefore achieves a processing speed fast enough for the application, and decreasing the kernel size layer by layer effectively prevents overfitting and extracts effective feature information more quickly, meeting the requirement of real-time detection.
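A minimal PyTorch sketch of this timing comparison is given below; the input size, channel counts and repetition count are illustrative assumptions rather than the exact settings behind FIGS. 2-4.

```python
# A minimal PyTorch timing sketch comparing one 7x7 convolution layer with
# three stacked 3x3 layers; sizes and repetitions are illustrative.
import time
import torch
import torch.nn as nn

def time_module(module, x, reps=100):
    if x.is_cuda:
        torch.cuda.synchronize()
    start = time.perf_counter()
    with torch.no_grad():
        for _ in range(reps):
            module(x)
    if x.is_cuda:
        torch.cuda.synchronize()
    return (time.perf_counter() - start) / reps           # seconds per image

device = "cuda" if torch.cuda.is_available() else "cpu"
x = torch.randn(1, 1, 224, 224, device=device)
single_7x7 = nn.Conv2d(1, 256, kernel_size=7, padding=3).to(device)
three_3x3 = nn.Sequential(nn.Conv2d(1, 256, 3, padding=1),
                          nn.Conv2d(256, 256, 3, padding=1),
                          nn.Conv2d(256, 256, 3, padding=1)).to(device)
print("one 7x7 layer :", time_module(single_7x7, x))
print("three 3x3     :", time_module(three_3x3, x))
```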
The foregoing is merely a preferred embodiment of the present invention. It should be noted that those skilled in the art can make several modifications and variations without departing from the technical principle of the present invention, and these modifications and variations should also be regarded as falling within the scope of the invention.

Claims (5)

1. A driver distraction driving detection method is characterized in that: the method comprises the following steps:
step 1, constructing a training set and a testing set; the method comprises the following specific steps:
step 1-1, converting multiple frames of driver images captured inside the vehicle cab into grayscale images of size N×M, wherein N and M are positive integers;
step 1-2, respectively carrying out normalization processing and preprocessing on the gray level images of N x M of each frame in the step 1-1 in sequence to obtain a first image corresponding to each frame of driver image;
step 1-3, extracting the outline texture of each frame of first image to obtain a second image corresponding to each frame of driver image;
step 1-4, dividing all the second images into a training set and a testing set, wherein each training sample in the training set comprises a second image corresponding to each frame of driver image and an action type label corresponding to the driver image;
step 2, constructing a convolutional neural network and training the convolutional neural network to obtain the trained convolutional neural network, wherein the specific steps are as follows:
step 2-1, constructing a convolutional neural network comprising q sequentially connected convolution layers, wherein the convolution kernel of the i-th convolution layer has size m_i×m_i, i = 1, 2, …, q, q is a positive integer, and m_1 > m_2 > … > m_q;
Step 2-2, inputting a second image corresponding to a certain frame of driver image in the training set into the convolutional neural network constructed in the step 2-1 to obtain the actual action category of the driver;
step 2-3, calculating a loss function in the convolutional neural network according to the action type label corresponding to the driver image in the training sample and the actual action type of the driver obtained in the step 2-2, and updating initialization parameters in the convolutional neural network according to the loss function to obtain the convolutional neural network after training and updating;
step 2-4, selecting other training samples in sequence, and training the convolutional neural network after training update by using the same method in step 2-2 and step 2-3 in sequence to finally obtain the convolutional neural network after training;
and step 3, arbitrarily selecting from the test set the second image corresponding to a driver image to be tested and inputting it into the convolutional neural network trained in step 2, so as to obtain the action category corresponding to the driver image to be tested.
2. The driver distraction detection method according to claim 1, wherein: the second image acquisition method in the step 1-3 comprises the following steps: extracting HOG features from each frame of first image to obtain HOG feature vectors corresponding to each frame of first image, and converting the HOG feature vectors corresponding to each frame of first image into images to obtain second images with outline textures.
3. The driver distraction detection method according to claim 1, wherein: the convolutional neural network constructed in the step 2-1 is in a pyramid shape.
4. The driver distraction detection method according to claim 1, wherein: the convolutional neural network constructed in the step 2-1 further comprises a Dropout layer, a global average pooling layer and a Softmax classification layer which are connected with the q-th convolutional layer, wherein the rear end of each convolutional layer is further sequentially connected with an activation layer, a batch regularization layer and a pooling layer.
5. The driver distraction detection method according to claim 4, wherein: the activation layer adopts a ReLU nonlinear activation function, and the pooling layer adopts a maximum pooling method.
CN202110199459.9A 2021-02-22 2021-02-22 Driver distraction driving detection method Active CN112836669B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110199459.9A CN112836669B (en) 2021-02-22 2021-02-22 Driver distraction driving detection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110199459.9A CN112836669B (en) 2021-02-22 2021-02-22 Driver distraction driving detection method

Publications (2)

Publication Number Publication Date
CN112836669A CN112836669A (en) 2021-05-25
CN112836669B true CN112836669B (en) 2023-12-12

Family

ID=75932974

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110199459.9A Active CN112836669B (en) 2021-02-22 2021-02-22 Driver distraction driving detection method

Country Status (1)

Country Link
CN (1) CN112836669B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114255454A (en) * 2021-12-16 2022-03-29 杭州电子科技大学 Training method of distraction detection model, distraction detection method and device

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242127A (en) * 2020-01-15 2020-06-05 上海应用技术大学 Vehicle detection method with granularity level multi-scale characteristics based on asymmetric convolution
WO2020156028A1 (en) * 2019-01-28 2020-08-06 南京航空航天大学 Outdoor non-fixed scene weather identification method based on deep learning
CN111565978A (en) * 2018-01-29 2020-08-21 华为技术有限公司 Primary preview area and gaze-based driver distraction detection
CN111985403A (en) * 2020-08-20 2020-11-24 中再云图技术有限公司 Distracted driving detection method based on face posture estimation and sight line deviation

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10769461B2 (en) * 2017-12-14 2020-09-08 COM-IoT Technologies Distracted driver detection
US10974728B2 (en) * 2018-09-11 2021-04-13 Lightmetrics Technologies Pvt. Ltd. Methods and systems for facilitating drive related data for driver monitoring
US11138469B2 (en) * 2019-01-15 2021-10-05 Naver Corporation Training and using a convolutional neural network for person re-identification

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111565978A (en) * 2018-01-29 2020-08-21 华为技术有限公司 Primary preview area and gaze-based driver distraction detection
WO2020156028A1 (en) * 2019-01-28 2020-08-06 南京航空航天大学 Outdoor non-fixed scene weather identification method based on deep learning
CN111242127A (en) * 2020-01-15 2020-06-05 上海应用技术大学 Vehicle detection method with granularity level multi-scale characteristics based on asymmetric convolution
CN111985403A (en) * 2020-08-20 2020-11-24 中再云图技术有限公司 Distracted driving detection method based on face posture estimation and sight line deviation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
基于RGB相机的驾驶员注视区域估计 (Driver gaze region estimation based on an RGB camera); 刘觅涵; 代欢欢; 现代计算机 (Modern Computer), No. 36; full text *

Also Published As

Publication number Publication date
CN112836669A (en) 2021-05-25

Similar Documents

Publication Publication Date Title
CN104537393B (en) A kind of traffic sign recognition method based on multiresolution convolutional neural networks
US20170270653A1 (en) Retinal image quality assessment, error identification and automatic quality correction
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN109359661B (en) Sentinel-1 radar image classification method based on convolutional neural network
CN111611851B (en) Model generation method, iris detection method and device
Shaikh et al. A novel approach for automatic number plate recognition
CN109871792B (en) Pedestrian detection method and device
CN111832461A (en) Non-motor vehicle riding personnel helmet wearing detection method based on video stream
CN113256624A (en) Continuous casting round billet defect detection method and device, electronic equipment and readable storage medium
CN111046793A (en) Tomato disease identification method based on deep convolutional neural network
CN112836669B (en) Driver distraction driving detection method
CN114926374A (en) Image processing method, device and equipment based on AI and readable storage medium
CN114202747A (en) Real-time lane line segmentation method, device, terminal and storage medium
CN112132104A (en) ISAR ship target image domain enhancement identification method based on loop generation countermeasure network
CN111626186A (en) Driver distraction detection method
Neagoe et al. Drunkenness diagnosis using a neural network-based approach for analysis of facial images in the thermal infrared spectrum
CN107832723B (en) Smoke identification method and system based on LBP Gaussian pyramid
CN115457614B (en) Image quality evaluation method, model training method and device
CN115424163A (en) Lip-shape modified counterfeit video detection method, device, equipment and storage medium
CN116958615A (en) Picture identification method, device, equipment and medium
CN114648738A (en) Image identification system and method based on Internet of things and edge calculation
CN106846366A (en) Use the TLD video frequency motion target trackings of GPU hardware
AU2021102962A4 (en) A system for driver behavior analysis based on mood detection
CN113808055B (en) Plant identification method, device and storage medium based on mixed expansion convolution

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant