CN111626186A - Driver distraction detection method

Info

Publication number
CN111626186A
Authority
CN
China
Prior art keywords
image
driver
convolutional
layer
neural network
Prior art date
2020-05-25
Legal status
Pending
Application number
CN202010449852.4A
Other languages
Chinese (zh)
Inventor
秦斌斌
钱江波
陈叶芳
严迪群
董一鸿
Current Assignee
Ningbo University
Original Assignee
Ningbo University
Priority date
2020-05-25
Filing date
2020-05-25
Publication date
2020-09-04
Application filed by Ningbo University
Priority to CN202010449852.4A
Publication of CN111626186A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/59 - Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
    • G06V20/597 - Recognising the driver's state or behaviour, e.g. attention or drowsiness
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/21 - Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 - Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/50 - Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis

Abstract

A driver distraction detection method: each frame of a driver image is converted into a grayscale image and then normalized and preprocessed. A training sample is input into an initialized convolutional neural network; in parallel, the HOG features extracted from the grayscale image corresponding to that sample are batch-regularized and passed through a fully connected layer to obtain an HOG feature vector. The output of each convolutional layer is globally mean-pooled, and the resulting feature vectors are combined with the HOG feature vector into a total feature vector, which is classified sequentially by a fully connected layer and Softmax in the convolutional neural network to obtain the driver's actual action category; the network parameters are then updated accordingly. The remaining training samples update the convolutional neural network in the same way, and the trained network finally yields the action category for each driver image in the test set. The detection results of this method are more accurate, and the network structure has fewer network parameters.

Description

Driver distraction detection method
Technical Field
The invention relates to the field of image processing, in particular to a driver distraction detection method.
Background
In recent years, as the number of private cars has grown, traffic accidents have increased, and a large share of them are caused by driver distraction: accidents are far more likely when the driver is making a phone call, drinking water, reaching for objects, or performing similar actions. It is therefore necessary to detect the driver's actions in real time and promptly alert a distracted driver, so as to effectively prevent safety accidents.
For example, Chinese invention patent application CN201910532626.X (application publication number CN110363093A) discloses a driver action recognition method and device comprising: acquiring an image of the current driver; inputting the image separately into a pre-trained two-dimensional convolutional neural network and a three-dimensional convolutional neural network to obtain a first and a second recognition result for the driver's action; and comparing the two results to determine the action category. The two-dimensional network recognizes the driver's action at a single instant of the driving gesture, the three-dimensional network recognizes the action over the course of the gesture, and combining the two improves the accuracy of driver action recognition. However, the huge number of network parameters makes the trained model difficult to use for real-time detection, and existing convolutional neural network models attend only to the output of the last layer, failing to exploit the output features of the intermediate layers, even though those features in fact contain much useful information. Further improvement is therefore desirable.
Disclosure of Invention
The technical problem the invention aims to solve, given the state of the prior art, is to provide a driver distraction detection method with fewer network parameters and higher detection accuracy.
The technical solution adopted by the invention to solve this problem is a driver distraction detection method, characterized in that it comprises the following steps:
step 1, forming a data set from multiple frames of driver images taken in the vehicle cab;
step 2, converting each frame of driver image in the data set into a grayscale image of size N × M, then sequentially normalizing and preprocessing each N × M grayscale image, finally obtaining a preprocessed data set, where N and M are positive integers;
step 3, dividing the preprocessed data set into a training set and a test set, where each training sample in the training set comprises a preprocessed driver image and the action category label corresponding to that image;
step 4, randomly selecting one training sample from the training set and inputting it into the initialized convolutional neural network, extracting the output result of each convolutional layer, and applying global mean pooling to each output to obtain a feature vector for each convolutional layer; the feature vector A_i corresponding to the i-th convolutional layer has size 1 × m_i, where m_i is the number of convolution kernels of the i-th convolutional layer, i = 1, 2, ..., q, and q is the total number of convolutional layers;
step 5, extracting HOG features from the original N × M grayscale image corresponding to the training sample used in step 4, and performing batch regularization on the extracted HOG features to obtain batch-regularized HOG features;
step 6, passing the batch-regularized HOG features through a fully connected layer to obtain a 1 × n HOG feature vector, where n is a positive integer;
step 7, combining the feature vector of each convolutional layer obtained in step 4 with the 1 × n HOG feature vector obtained in step 6 to form a total feature vector;
step 8, classifying the total feature vector sequentially through a fully connected layer and Softmax in the convolutional neural network to obtain the driver's actual action category;
step 9, computing the loss function of the convolutional neural network from the action category label of the driver image in the training sample and the actual action category obtained in step 8, and updating the network's initialization parameters according to the loss function to obtain an updated convolutional neural network;
step 10, selecting the remaining training samples in turn and training the updated convolutional neural network by the method of steps 4-9 to finally obtain the trained convolutional neural network;
step 11, randomly selecting a driver image to be tested from the test set, inputting it into the convolutional neural network trained in step 10, and obtaining the action category of that image by the method of steps 4-8.
Specifically, preprocessing the images of the data set in step 2 comprises the following steps:
step 2-1, subtracting from each pixel of each normalized grayscale image the mean value of all pixels in that frame, obtaining a first image;
step 2-2, regularizing the first image to obtain a second image;
step 2-3, adding noise to the second image to obtain a third image;
step 2-4, randomly cropping the third image to obtain a plurality of images.
Preferably, the convolutional neural network in step 4 is pyramid-shaped, and each of the first q-1 convolutional layers is followed by an activation layer, a pooling layer, and a Dropout layer connected in sequence.
Specifically, the activation layer adopts the ReLU nonlinear activation function, and the pooling layer adopts max pooling.
Preferably, the convolutional neural network in step 4 comprises 4 convolutional layers: the first has 64 convolution kernels, the second 128, the third 256, and the fourth 512.
Compared with the prior art, the invention has the following advantages: HOG features are extracted from each input image, and the resulting HOG feature vector is fused with the feature vector extracted from each convolutional layer of the convolutional neural network to form a total feature vector. This enriches the feature output of the network and makes the detection result more accurate, while the network structure retains good detection performance with fewer network parameters.
Drawings
Fig. 1 is a block diagram of distraction detection for a single driver image in an embodiment of the invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawing and embodiment.
A driver distraction detection method comprising the steps of:
step 1, forming a data set from multiple frames of driver images taken in the vehicle cab;
step 2, converting each frame of driver image in the data set into a grayscale image of size N × M, then sequentially normalizing and preprocessing each N × M grayscale image, finally obtaining a preprocessed data set, where N and M are positive integers;
the color of the driver is very easily influenced by illumination in nature, RGB (red, green and blue) changes greatly, gradient information can provide more essential information, and the environment of a driver driving a vehicle is often influenced by light, so that the calculation amount is greatly reduced after the driver image with three channels is converted into the gray image with one channel; in addition, if the converted grayscale image size is too large, it takes a long training time to input into the convolutional neural network, and the requirement for computer configuration is high, and if the converted grayscale image size is too small, the information of the image is lost; in this embodiment, the size of the converted grayscale image is 224 × 224;
the specific steps of preprocessing the images in the data set are as follows:
step 2-1, subtracting the average value of all pixel points in the whole frame image from each pixel point in each frame of normalized gray level image to obtain a first image;
2-2, performing regularization processing on the first image to obtain a second image;
step 2-3, adding noise data in the second image to obtain a third image;
and 2-4, randomly cutting the third image to obtain a plurality of images.
In this embodiment, step 2-3 adds noise to the second image by randomly blurring it with a Gaussian filter, improving the robustness of the trained convolutional neural network; step 2-4 randomly crops the third image to create more data, increasing the amount of training data and improving the generalization ability of the convolutional neural network;
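For illustration, the step 2-1 to 2-4 pipeline could be sketched in Python as below. The blur kernel, noise strength, and 200 × 200 crop size are assumptions for illustration only (the embodiment fixes just the 224 × 224 grayscale size), and a single random crop is returned for brevity:

```python
import cv2
import numpy as np

def preprocess_frame(bgr_frame, size=224, crop=200, rng=np.random.default_rng()):
    # Convert the 3-channel driver image to a 1-channel grayscale image and normalize.
    gray = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, (size, size)).astype(np.float32) / 255.0
    # Step 2-1: subtract the mean of all pixels in the frame.
    first = gray - gray.mean()
    # Step 2-2: regularize (here scaled to unit variance, an assumed choice).
    second = first / (first.std() + 1e-8)
    # Step 2-3: add noise by randomly blurring with a Gaussian filter.
    third = cv2.GaussianBlur(second, (5, 5), sigmaX=rng.uniform(0.5, 1.5))
    # Step 2-4: random crop (one crop shown; repeat for more training images),
    # resized back so the network input size stays fixed.
    y, x = rng.integers(0, size - crop + 1, size=2)
    return cv2.resize(third[y:y + crop, x:x + crop], (size, size))
```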
step 3, dividing the preprocessed data set into a training set and a testing set, wherein each training sample in the training set comprises a preprocessed driver image and an action category label corresponding to the driver image;
the action type label corresponding to the driver image is marked by judging through a manual identification method;
step 4, randomly selecting one training sample from the training set and inputting it into the initialized convolutional neural network, extracting the output result of each convolutional layer, and applying global mean pooling to each output to obtain a feature vector for each convolutional layer; the feature vector A_i corresponding to the i-th convolutional layer has size 1 × m_i, where m_i is the number of convolution kernels of the i-th convolutional layer, i = 1, 2, ..., q, and q is the total number of convolutional layers;
In this embodiment, the convolutional neural network of step 4 is pyramid-shaped, and each of the first q-1 convolutional layers is followed by an activation layer, a pooling layer, and a Dropout layer connected in sequence; the Dropout layers prevent over-fitting and enhance the generalization ability of the network. Because deeper networks overfit more easily, the Dropout rates in this embodiment increase linearly from front to back, and all convolution kernels are 3 × 3. The activation layers adopt the ReLU nonlinear activation function and the pooling layers adopt max pooling; in addition, L2 weight regularization is applied to the weights of each convolutional layer to suppress over-fitting caused by excessively large convolution kernel parameters;
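A minimal Keras sketch of such a pyramid backbone follows. The concrete Dropout rates and the L2 coefficient are assumptions; the embodiment specifies only 3 × 3 kernels, ReLU, max pooling, linearly increasing Dropout, and L2 weight regularization:

```python
import tensorflow as tf
from tensorflow.keras import layers, regularizers

def build_backbone(input_shape=(224, 224, 1), widths=(64, 128, 256, 512),
                   dropouts=(0.1, 0.2, 0.3)):
    inputs = layers.Input(input_shape)
    x, gap_feats = inputs, []
    for i, w in enumerate(widths):
        # 3x3 convolution with L2 weight regularization on the kernel.
        x = layers.Conv2D(w, 3, padding="same",
                          kernel_regularizer=regularizers.l2(1e-4))(x)
        # Tap the output of every convolutional layer with global mean pooling.
        gap_feats.append(layers.GlobalAveragePooling2D()(x))
        if i < len(widths) - 1:  # only the first q-1 layers get ReLU/pool/Dropout
            x = layers.ReLU()(x)
            x = layers.MaxPooling2D()(x)
            x = layers.Dropout(dropouts[i])(x)  # rates grow linearly with depth
    # Concatenate the per-layer vectors: 64 + 128 + 256 + 512 = 1 x 960.
    return tf.keras.Model(inputs, layers.Concatenate()(gap_feats),
                          name="pyramid_backbone")
```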
step 5, extracting HOG features from the original N × M grayscale image corresponding to the training sample used in step 4, and performing batch regularization on the extracted HOG features to obtain batch-regularized HOG features;
HOG feature extraction is an existing algorithm, and batch regularization of the extracted HOG features speeds up network training;
step 6, passing the batch-regularized HOG features through a fully connected layer to obtain a 1 × n HOG feature vector, where n is a positive integer;
For the two kinds of features, the HOG feature vector and the feature vector extracted by the convolutional neural network, to carry equal weight in the final decision on the driver's action category, their sizes should be kept as close as possible to a 1:1 ratio. The fully connected layer used here is not the one inside the convolutional neural network; its purpose is precisely to bring the HOG feature vector to roughly the same size as the feature vector extracted by the convolutional neural network (illustrated by the Dense projection in the fusion sketch further below);
step 7, combining the feature vector of each convolutional layer obtained in step 4 with the 1 × n HOG feature vector obtained in step 6 to form a total feature vector;
step 8, classifying the total feature vector sequentially through a fully connected layer and Softmax in the convolutional neural network to obtain the driver's actual action category;
step 9, computing the loss function of the convolutional neural network from the action category label of the driver image in the training sample and the actual action category obtained in step 8, and updating the network's initialization parameters according to the loss function to obtain an updated convolutional neural network;
step 10, selecting the remaining training samples in turn and training the updated convolutional neural network by the method of steps 4-9 to finally obtain the trained convolutional neural network (a training-step sketch follows the step list below);
step 11, randomly selecting a driver image to be tested from the test set, inputting it into the convolutional neural network trained in step 10, and obtaining the action category of that image by the method of steps 4-8.
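Steps 8-10 can be sketched as a per-sample (or minibatch) gradient update. The cross-entropy loss and Adam optimizer are assumed choices, since the patent states only that a loss function is computed and the parameters are updated; the model argument is the two-branch fusion model sketched after the dimension walkthrough below:

```python
import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()  # assumed loss
optimizer = tf.keras.optimizers.Adam(1e-4)                 # assumed optimizer

@tf.function
def train_step(model, image, hog_feat, label):
    with tf.GradientTape() as tape:
        # Forward pass: fused features -> fully connected layer -> Softmax.
        probs = model([image, hog_feat], training=True)
        loss = loss_fn(label, probs)  # compare with the manual action label
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```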
To illustrate the detection method, a single driver image is processed as shown in Fig. 1. The convolutional neural network has 4 convolutional layers: the first with 64 convolution kernels, the second with 128, the third with 256, and the fourth with 512. The driver image is fed into two feature-extraction paths: the upper path produces a first feature vector extracted by the convolutional neural network, and the lower path produces a second feature vector obtained by HOG feature extraction. The two vectors are then fused into a total feature vector, which is classified sequentially by a fully connected layer and Softmax in the convolutional neural network to obtain the driver's actual action category.
The first feature vector is extracted as follows: global mean pooling is applied separately to the outputs of the four convolutional layers, yielding a 1 × 64 feature vector from the first layer, 1 × 128 from the second, 1 × 256 from the third, and 1 × 512 from the fourth; these are combined into a first feature vector of size 1 × 960. The second feature vector is extracted as follows: HOG features are extracted from the grayscale image of the driver and batch-regularized, giving a feature vector of size 1 × 2916. Since this size differs greatly from that of the first feature vector, and the two should be kept as close as possible to a 1:1 ratio, the 1 × 2916 vector is passed through a fully connected layer to obtain a second feature vector of size 1 × 1024. Finally, the 1 × 960 first feature vector and the 1 × 1024 second feature vector are merged into a total feature vector of size 1 × 1984.
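Putting the pieces together, a sketch of the two-branch model of Fig. 1 follows, reusing the build_backbone sketch above; the ReLU on the 1 × 1024 projection and the choice of 10 action classes are assumptions not fixed by the patent:

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_fusion_model(num_classes=10):
    img_in = layers.Input((224, 224, 1), name="image")
    hog_in = layers.Input((2916,), name="hog")
    cnn_vec = build_backbone()(img_in)                   # first feature vector, 1 x 960
    h = layers.BatchNormalization()(hog_in)              # batch regularization of HOG
    hog_vec = layers.Dense(1024, activation="relu")(h)   # second feature vector, 1 x 1024
    total = layers.Concatenate()([cnn_vec, hog_vec])     # total feature vector, 1 x 1984
    out = layers.Dense(num_classes, activation="softmax")(total)  # FC + Softmax
    return tf.keras.Model([img_in, hog_in], out, name="distraction_detector")
```

An unseen test image is then classified, as in step 11, by computing its HOG descriptor and calling the trained model on the image/HOG pair.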
An overly deep conventional CNN model risks over-fitting and ignores the output features of its intermediate layers, even though those layers learn distinctive information such as colors and edges that can raise recognition accuracy. HOG, meanwhile, is a feature descriptor used for object detection in computer vision and image processing; it builds feature information by computing and accumulating histograms of gradient orientations over local regions of an image and has been very successful in detection tasks. Fusing the output features of every convolutional layer with the HOG features therefore enriches the information available to the convolutional neural network; the network structure has few parameters and a simple model, yet retains good detection performance and can effectively recognize driver distraction.

Claims (5)

1. A driver distraction detection method is characterized in that: the method comprises the following steps:
step 1, forming a data set by a plurality of frames of driver images in a vehicle cab;
step 2, converting each frame of driver image in the data set into a gray image with the size of N x M, sequentially carrying out normalization processing and preprocessing on each frame of N x M gray image in the data set, and finally obtaining a preprocessed data set, wherein N and M are positive integers;
step 3, dividing the preprocessed data set into a training set and a testing set, wherein each training sample in the training set comprises a preprocessed driver image and an action category label corresponding to the driver image;
step 4, randomly selecting one of the training samples in the training set to be input into the initialized convolutional neural network, respectively extracting the output result of each convolutional layer, and performing global mean pooling on the output result of each convolutional layer to obtain a feature vector corresponding to each convolutional layer; wherein the feature vector A_i corresponding to the i-th convolutional layer has size 1 × m_i, m_i being the number of convolution kernels corresponding to the i-th convolutional layer, i = 1, 2, ..., q, and q being the total number of convolutional layers;
step 5, extracting HOG characteristics from the original N x M gray level image corresponding to the training sample used in the step 4, and performing batch regularization on the extracted HOG characteristics to obtain HOG characteristics after batch regularization;
step 6, carrying out full-connection layer connection on the HOG features after batch regularization to obtain 1 x n HOG feature vectors; n is a positive integer;
step 7, combining the feature vector corresponding to each convolution layer obtained in the step 4 and the HOG feature vector of 1 x n obtained in the step 6 to form a total feature vector;
step 8, classifying the total feature vectors sequentially through a full connection layer and Softmax in a convolutional neural network to obtain the actual action category of the driver;
step 9, calculating a loss function in the convolutional neural network according to the action category label corresponding to the driver image in the training sample and the actual action category of the driver obtained in the step 8, and updating the initialization parameter in the convolutional neural network according to the loss function to obtain the convolutional neural network after training updating;
step 10, sequentially selecting other training samples, and sequentially training the convolutional neural network after the training is updated by using the same method in the steps 4-9 to finally obtain the trained convolutional neural network;
and 11, randomly selecting a driver image to be tested in the test set, inputting the driver image to the convolutional neural network trained in the step 10, and obtaining the action type corresponding to the driver image to be tested according to the same method in the steps 4-8.
2. The driver distraction detection method according to claim 1, wherein: the specific steps of preprocessing the images in the data set in the step 2 are as follows:
step 2-1, subtracting the average value of all pixel points in the whole frame image from each pixel point in each frame of normalized gray level image to obtain a first image;
step 2-2, performing regularization processing on the first image to obtain a second image;
step 2-3, adding noise data in the second image to obtain a third image;
step 2-4, randomly cutting the third image to obtain a plurality of images.
3. The driver distraction detection method according to claim 1, wherein: the convolutional neural network in the step 4 is in a pyramid shape, and the back end of each convolutional layer in the first q-1 convolutional layers is further provided with an activation layer, a pooling layer and a Dropout layer which are sequentially connected.
4. The driver distraction detection method of claim 3, wherein: the activation layer adopts a ReLU nonlinear activation function, and the pooling layer adopts a maximum pooling method.
5. The driver distraction detection method of claim 3, wherein: the convolutional neural network in step 4 includes 4 convolutional layers, the first convolutional layer is 64 convolutional kernels, the second convolutional layer is 128 convolutional kernels, the third convolutional layer is 256 convolutional kernels, and the fourth convolutional layer is 512 convolutional kernels.
CN202010449852.4A (filed 2020-05-25, priority 2020-05-25): Driver distraction detection method. Status: Pending.

Priority Applications (1)

Application Number: CN202010449852.4A; Priority Date: 2020-05-25; Filing Date: 2020-05-25; Title: Driver distraction detection method

Publications (1)

CN111626186A, published 2020-09-04

Family

ID=72260733

Family Applications (1)

CN202010449852.4A (CN111626186A, en): Driver distraction detection method, Pending

Country Status (1)

Country Link
CN (1) CN111626186A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108960029A (en) * 2018-03-23 2018-12-07 北京交通大学 A kind of pedestrian diverts one's attention behavioral value method
CN109993093A (en) * 2019-03-25 2019-07-09 山东大学 Road anger monitoring method, system, equipment and medium based on face and respiratory characteristic
CN110532878A (en) * 2019-07-26 2019-12-03 中山大学 A kind of driving behavior recognition methods based on lightweight convolutional neural networks

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
MD RIFAT AREFIN et al.: "Aggregating CNN and HOG features for Real-Time Distracted Driver Detection", 2019 IEEE International Conference on Consumer Electronics (ICCE), pages 1-3 *
LIANG Zhaode (梁昭德): "Research on driver fatigue detection algorithms based on convolutional neural networks" (基于卷积神经网络的驾驶人疲劳检测算法研究), no. 12, pages 035-153 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112884025A (en) * 2021-02-01 2021-06-01 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN112884025B (en) * 2021-02-01 2022-11-04 安徽大学 Tea disease classification system based on multi-feature sectional type training
CN114255454A (en) * 2021-12-16 2022-03-29 杭州电子科技大学 Training method of distraction detection model, distraction detection method and device

Similar Documents

Publication Title
CN109977812B (en) Vehicle-mounted video target detection method based on deep learning
CN104050471B (en) Natural scene character detection method and system
CN110555465B (en) Weather image identification method based on CNN and multi-feature fusion
CN108009518A (en) A kind of stratification traffic mark recognition methods based on quick two points of convolutional neural networks
CN109284669A (en) Pedestrian detection method based on Mask RCNN
CN111738064B (en) Haze concentration identification method for haze image
CN108108761A (en) A kind of rapid transit signal lamp detection method based on depth characteristic study
CN108229458A (en) A kind of intelligent flame recognition methods based on motion detection and multi-feature extraction
CN106650786A (en) Image recognition method based on multi-column convolutional neural network fuzzy evaluation
CN112132156A (en) Multi-depth feature fusion image saliency target detection method and system
CN112163511B (en) Method for identifying authenticity of image
CN104766071B (en) A kind of traffic lights fast algorithm of detecting applied to pilotless automobile
CN104077577A (en) Trademark detection method based on convolutional neural network
CN110569782A (en) Target detection method based on deep learning
CN106023257A (en) Target tracking method based on rotor UAV platform
CN109543632A (en) A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features
CN110738160A (en) human face quality evaluation method combining with human face detection
CN110969171A (en) Image classification model, method and application based on improved convolutional neural network
CN110222604A (en) Target identification method and device based on shared convolutional neural networks
CN113870263B (en) Real-time monitoring method and system for pavement defect damage
CN112733815B (en) Traffic light identification method based on RGB outdoor road scene image
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN111626186A (en) Driver distraction detection method
CN112308005A (en) Traffic video significance prediction method based on GAN
CN110570469B (en) Intelligent identification method for angle position of automobile picture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination