CN109871778B - Lane keeping control method based on transfer learning - Google Patents
- Publication number
- CN109871778B (application CN201910065082.0A)
- Authority
- CN
- China
- Prior art keywords
- data
- model
- training
- steering
- vehicle
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Abstract
The invention discloses a lane keeping control method based on transfer learning. Videos recorded by the front camera of a driving recorder and the vehicle's steering control signals are first collected as training data, and the data are preprocessed by changing brightness, resizing, adding shadows, and similar augmentations; a VGGNet trained on the ImageNet data set is used as the feature extraction network, and fully connected layers are added on top to construct an end-to-end lane keeping control network model. The model is further trained with the collected driving video and steering signals, and the robustness of the network model is finally evaluated. Because the feature extraction network is a VGGNet trained on the ImageNet data set, the method fits the steering angle of automatic driving well even when vehicle-mounted computing resources and data sets are limited; it also shows effectiveness and reliability in the generalization of the network model and can be widely applied to lane keeping task systems for automatic driving.
Description
Technical Field
The invention belongs to the technical field of computers, and particularly relates to a lane keeping control method based on transfer learning.
Background
Conventional pattern recognition methods based on computer vision often require features to be designed by hand. Manually designed features are prone to omissions, and for an automatic driving automobile, a programming oversight in certain situations may cause serious consequences. In recent years, deep learning methods represented by the Convolutional Neural Network (CNN) have become one of the key technologies in the field of automatic driving due to their adaptability to complex environments. The successful application of convolutional neural networks depends on two factors: large public data sets (such as ImageNet) and high-performance computing platforms (such as GPU clusters). However, some technical difficulties in applying deep learning to automatic driving still urgently need to be solved.
The first difficulty is that deep learning models place high demands on computing resources, which makes training on a vehicle-mounted computing platform impractical. Training a large deep neural network requires substantial computing resources, but because a vehicle-mounted platform is constrained in power consumption and size, it can host only a modest deep learning environment; this has become one of the bottlenecks hindering the wide application of deep learning in automatic driving. For example, training a 16-layer convolutional neural network on the ImageNet data set takes about 10 days on an NVIDIA Tesla K80 GPU; on an ordinary PC (a GTX 970, for example) it could take more than a year. The second difficulty is that driving data sets are hard to collect, and models trained on small data sets generalize poorly in real, complex environments.
Therefore, with limited computing resources and data sets, efficiently obtaining a reliable model is the main difficulty in applying deep learning to automatic driving. Representative conventional lane keeping algorithms include the lane keeping and automatic centering system and method based on a deviation prediction algorithm proposed by Chongqing Changan Automobile Co., Ltd. (filing date: 2014-10-31, application number: CN 201410613860, publication number: CN 104442814B) and the lane line detection and stabilization algorithm proposed by the University of Electronic Science and Technology of China (filing date: 2017-8-18, application number: CN 201710712894, publication number: CN 107516078A). Both methods can detect lane lines effectively to a certain extent. However, both essentially separate lane recognition from the generation of the steering control signal and then perform lane keeping according to fixed rules; such rule-based algorithms can hardly satisfy driving demands under complicated conditions.
Disclosure of Invention
The invention aims to provide a lane keeping control method based on transfer learning, which adopts an end-to-end model that takes the raw front-view video of a driving recorder as input and directly outputs a vehicle steering control signal, thereby keeping the lane and solving the problems in the prior art.
In order to achieve the above object, the present invention adopts the following technical solution: a lane keeping control method based on transfer learning, comprising the following specific steps:
S1, collecting video data and steering data during driving;
S2, segmenting and converting the data collected in S1, and dividing the converted data set into a training set and a test set;
S3, constructing an end-to-end lane keeping control model, specifically:
S31, initializing the model by transfer learning: using a VGGNet trained on the ImageNet data set as the feature extraction network and freezing the weights of the first 8 layers of VGG16; adding a Flatten layer between the convolutional layers and the fully connected layers; on this basis, changing the original 3 fully connected layers of VGG16 into 5 fully connected layers; adding a fully connected layer that outputs the steering angle value, thereby constructing an end-to-end lane keeping control model;
S32, optimizing the model constructed in S31 at the algorithm level with Max-pooling, Batch Normalization, and TruncatedNormal (truncated Gaussian) initialization;
S4, training the model constructed in S3 with the training set separated in S2 to obtain the trained model and weight parameters;
S5, during driving, using the trained model and weight parameters obtained in S4 to control the vehicle's driving lane in real time according to the road information monitored by the vehicle's driving recorder.
In S1, the video data during driving come from the video of a driving recorder, and the steering data are the steering control signals recorded while the vehicle is driven.
In S2, the video data from S1 are encoded in H264/MKV format at a resolution of 1280 × 720.
In S2, the video data from S1 are extracted frame by frame into pictures and then preprocessed by changing brightness, resizing, and adding shadows.
In S3, Max pooling is adopted to reduce the error of feature extraction, and a Bernoulli function randomly generates a vector of 0s and 1s with probability P, so that a given neuron stops working with probability P;
Batch Normalization adds normal standardization in the middle layers of the deep network, while the network is constrained to automatically adjust the standardization strength during training.
In S3, every layer in the network model except the last uses the rectified linear unit ReLU as its activation function.
In S3, TruncatedNormal initialization with a truncated Gaussian distribution is adopted: data beyond two standard deviations from the mean are discarded and regenerated, re-forming the truncated distribution.
In S4, the operating environment of the model is: NVIDIA Tesla K80, 12 GB GPU memory, 61 GB RAM, and TensorFlow-GPU 1.10.0.
The robustness and accuracy of the training model obtained in S4 are evaluated with the MSE (Mean Squared Error); the MSE reflects the robustness and accuracy of a training model through the degree to which predictions deviate from the data, and the smaller its value, the higher the model's accuracy. The MSE is:

MSE = (1/N) Σ_{i=1..N} (y_i − ŷ_i)²

where y_i is the true steering value of the ith data point in the test set obtained in S2, ŷ_i is the prediction of the model obtained in S4 for the ith steering signal given the image data input from the test set, and N is the size of the data set. When MSE < 3°, the robustness and accuracy of the model obtained in S4 meet the requirements; when MSE ≥ 3°, S3 and S4 are repeated until MSE < 3°.
Compared with the prior art, the invention has at least the following beneficial effects:
Firstly, the neural network algorithm adopted in the invention integrates feature extraction into the algorithm itself, which improves the efficiency of building the model. It thereby avoids the problems of traditional pattern recognition methods that extract hand-designed features from raw data: images have too many pixels and too high a dimension, risking the curse of dimensionality, and designing features requires substantial experience, which becomes ever harder as data volumes grow. Secondly, traditional lane keeping methods usually treat lane line recognition and the generation of steering control signals separately: lane lines are identified first, and steering control then follows specific rules to keep the lane; such rule-based algorithms can hardly meet driving requirements under complex conditions. The invention instead adopts an end-to-end control model whose last fully connected layer directly outputs the steering angle value, so that video data acquired by a driving recorder are input and a vehicle steering control angle is output directly, simplifying the control scheme. Thirdly, deep learning models demand heavy computing resources and depend strongly on data; in contrast, the invention builds on transfer learning, using a VGGNet trained on the ImageNet data set as the feature extraction network, and thus provides a way to obtain an efficient and reliable model when vehicle-mounted computing resources and data sets are limited. Simulation results show that the method fits the steering angle of automatic driving well, is effective and reliable in model generalization, and can be widely applied to lane keeping task systems for automatic driving.
Drawings
FIG. 1 is a block diagram of an implementation flow of the present invention;
FIG. 2(a) and FIG. 2(b) show the acquired image data, and FIG. 2(c) and FIG. 2(d) show intermediate data from the preprocessing.
Fig. 3 (a) is a general neural network training process, and fig. 3 (b) is a training process using a Dropout neural network.
FIG. 4 is a diagram of the rectified linear unit activation function ReLU(x);
FIG. 5 is an algorithmic model architecture diagram of the present invention;
FIG. 6 is a graph comparing the performance of the test set with the predicted results of the model of the present invention.
Detailed Description
First, videos recorded by the front camera of a driving recorder and the vehicle's steering control signals are collected as training data, and the data are preprocessed by changing brightness, resizing, adding shadows, and similar augmentations; a VGGNet trained on the ImageNet data set is used as the feature extraction network, and fully connected layers are added on top to construct an end-to-end lane keeping control model. The model is then further trained with the collected driving video and steering signals. Finally, the robustness of the model is evaluated with the MSE.
Referring to the attached figure 1, the specific implementation steps of the invention are as follows:
S1: Collecting data sets
First, videos recorded by the front camera of a driving recorder and the vehicle's steering control signals are collected as training data. The data set gives the steering angle corresponding to each frame of image. The input independent variable X is a single frame captured by the camera, and the output dependent variable Y is the steering wheel angle. The essence of the problem is to train an end-to-end model F with a Convolutional Neural Network (CNN): a video image x is input, and the model predicts the steering wheel angle, i.e. y = F(x);
S2: Data preprocessing
The video file is encoded in H264/MKV format at a resolution of 1280 × 720; the video contains many interference factors, such as the sky, so the data set needs to be segmented and converted.
First, frames are extracted from the video one by one as pictures to facilitate further image processing. Referring to FIG. 2, specifically FIG. 2(a), FIG. 2(b), FIG. 2(c), and FIG. 2(d), the data are preprocessed by changing brightness, resizing, adding shadows, etc., artificially enlarging the training set when the data set is limited, as sketched below.
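As a concrete illustration only, the following minimal Python sketch shows frame extraction and brightness augmentation with OpenCV; the function names, the 320 × 160 working size, and the brightness range are assumptions for illustration, not values fixed by the patent:

```python
import cv2
import numpy as np

def extract_frames(video_path, out_size=(320, 160)):
    # Decode the H264/MKV recording frame by frame and resize each
    # frame; out_size is an assumed working resolution.
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(cv2.resize(frame, out_size))
    cap.release()
    return frames

def jitter_brightness(img, low=0.4, high=1.2, rng=None):
    # Randomly scale the V channel in HSV to imitate lighting changes.
    rng = np.random.default_rng() if rng is None else rng
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV).astype(np.float32)
    hsv[:, :, 2] = np.clip(hsv[:, :, 2] * rng.uniform(low, high), 0, 255)
    return cv2.cvtColor(hsv.astype(np.uint8), cv2.COLOR_HSV2BGR)
```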
Besides illumination, another common condition on real roads is shadow, such as the shadows of buildings, trees, and other vehicles, which can interfere with the algorithm's result. The method therefore simulates this situation by randomly adding shadows: 3-4 points are randomly selected on the image and the tone of the enclosed area is dimmed, implemented by adding a shadow mask on the second (lightness) channel in the HLS color space; a sketch follows. The processed data set is then segmented into a training set and a test set.
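A minimal sketch of this shadow augmentation, assuming OpenCV and the HLS representation described above; the dimming range and helper name are illustrative assumptions:

```python
import cv2
import numpy as np

def add_random_shadow(img, rng=None):
    # Pick 3-4 random vertices and dim the lightness (second) channel
    # of the HLS image inside that polygon, imitating shadows cast by
    # buildings, trees, or other vehicles.
    rng = np.random.default_rng() if rng is None else rng
    h, w = img.shape[:2]
    n = int(rng.integers(3, 5))  # 3 or 4 points, as in the text
    pts = np.stack([rng.integers(0, w, n), rng.integers(0, h, n)], axis=1)
    mask = np.zeros((h, w), dtype=np.uint8)
    cv2.fillPoly(mask, [pts.astype(np.int32)], 255)
    hls = cv2.cvtColor(img, cv2.COLOR_BGR2HLS).astype(np.float32)
    hls[:, :, 1][mask == 255] *= rng.uniform(0.4, 0.7)  # darken the area
    return cv2.cvtColor(np.clip(hls, 0, 255).astype(np.uint8),
                        cv2.COLOR_HLS2BGR)
```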
S3: modeling
S31, model initialization: a VGGNet trained on the ImageNet data set is used as the feature extraction network; the weights of the first 8 layers of VGG16 are frozen; a Flatten layer is added between the convolutional layers and the fully connected layers; the original 3 fully connected layers of VGG16 are changed into 5 fully connected layers, the last of which directly outputs the steering angle value, thereby constructing an end-to-end lane keeping control model;
Because the automatic driving data set is small, the method builds on transfer learning and initializes the network model by transfer learning. The weights of the first 8 layers of the VGGNet model, i.e. the layers with the strongest generalization ability, are frozen, and S32 is then applied to the remaining network to reach the desired effect. Specifically: first, the VGG16 'no-top' weights trained on ImageNet (the convolutional base without the original classifier) are imported; this network model is well suited to feature extraction. A regularization layer is added at the bottom of the network model to speed up its operation. The weight parameters of the first 8 layers are frozen to preserve their generalization ability. A Flatten layer is added between the convolutional layers and the fully connected layers, the original 3 fully connected layers of VGG16 are changed into 5 fully connected layers, and the last fully connected layer directly outputs the predicted angle value, realizing end-to-end prediction. The mathematical expression of a fully connected layer is Y = WH + b, where W is the weight matrix between the fully connected layer and the preceding network, b is the bias matrix, and Y is the layer's output. The Flatten layer flattens the parameters, i.e. turns the multi-dimensional input into one dimension.
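The construction just described can be sketched in Keras as follows. The frozen first 8 layers, the Flatten layer, the 5 fully connected layers, and the single steering-angle output follow the text; the layer widths and input shape are assumptions, since the patent does not list them:

```python
from tensorflow.keras.applications import VGG16
from tensorflow.keras.initializers import TruncatedNormal
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.models import Sequential

def build_model(input_shape=(160, 320, 3)):
    # VGG16 'no-top' weights pre-trained on ImageNet serve as the
    # feature extraction network; the first 8 layers are frozen to
    # keep the most general features.
    base = VGG16(weights="imagenet", include_top=False,
                 input_shape=input_shape)
    for layer in base.layers[:8]:
        layer.trainable = False
    init = TruncatedNormal(stddev=0.05)  # truncated Gaussian init (see S325)
    return Sequential([
        base,
        Flatten(),  # flattens the multi-dimensional features to one dimension
        Dense(1024, activation="relu", kernel_initializer=init),
        Dense(256, activation="relu", kernel_initializer=init),
        Dense(64, activation="relu", kernel_initializer=init),
        Dense(16, activation="relu", kernel_initializer=init),
        Dense(1, kernel_initializer=init),  # last FC layer: steering angle
    ])
```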
S32, optimizing a network model:
S321, the error of feature extraction mainly comes from two sources: first, the limited size of the neighborhood increases the variance of the estimate; second, errors in the convolutional layer parameters shift the estimated mean. Mean-pooling reduces the first error and preserves more of the image's background information; max-pooling reduces the second error and preserves more texture information. Since the method mainly needs to recognize lane line texture, max-pooling is adopted to reduce the error of feature extraction.
S322, a large neural network model has two drawbacks: long training time and easy overfitting. Referring to FIG. 3, the invention adopts Dropout during training of the deep network: neural network units are temporarily dropped from the model with a certain probability. A Bernoulli function randomly generates a vector of 0s and 1s with probability P, so that a given neuron stops working with probability P, i.e. its activation value becomes 0.
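For illustration, a minimal NumPy sketch of this Bernoulli masking; the rescaling by 1/(1 − P) is the standard inverted-dropout convention and is an addition not spelled out in the text:

```python
import numpy as np

def dropout(activations, p, rng=None):
    # Bernoulli draw: each unit is zeroed (stops working) with
    # probability p, i.e. kept with probability 1 - p.
    rng = np.random.default_rng() if rng is None else rng
    keep = rng.binomial(1, 1.0 - p, size=activations.shape)
    # Rescale so the expected activation matches test time.
    return activations * keep / (1.0 - p)
```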
S323, Batch Normalization is adopted to add normal standardization in the middle layers of the network model as BN layers, while the model is constrained to automatically adjust the standardization strength during training, which speeds up training and reduces the cost of weight initialization;
S324, every layer of the network model except the last uses the rectified linear unit ReLU as its activation function, so that the model converges faster and does not saturate, avoiding the vanishing gradient problem. ReLU is applied to each pixel, setting all values below 0 in the feature map to zero; this introduces nonlinearity into the ConvNet and converges far faster than sigmoid. In the negative part, ReLU outputs 0 and its gradient is also 0, so that part is not updated during training, as shown in FIG. 4;
S325, TruncatedNormal initialization with a truncated Gaussian distribution is adopted: data beyond two standard deviations from the mean are discarded and regenerated, forming the truncated distribution; the resulting model structure is shown in FIG. 5.
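A minimal sketch of the truncation rule in S325, written with explicit resampling so the two-standard-deviation cutoff is visible; the mean and standard deviation are assumed values:

```python
import numpy as np

def truncated_normal(shape, mean=0.0, std=0.05, rng=None):
    # Draw Gaussian samples, then discard and regenerate any value
    # lying more than two standard deviations from the mean.
    rng = np.random.default_rng() if rng is None else rng
    x = rng.normal(mean, std, size=shape)
    while True:
        bad = np.abs(x - mean) > 2.0 * std
        if not bad.any():
            return x
        x[bad] = rng.normal(mean, std, size=int(bad.sum()))
```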
S4, model training: the model constructed in S3 is trained with the training set separated in S2, yielding a Keras-exported training model (JSON format) and weight parameters (HDF5 format); the operating environment is NVIDIA Tesla K80, 12 GB GPU memory, 61 GB RAM, and TensorFlow-GPU 1.10.0;
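A sketch of this training and export step, assuming build_model from the sketch above and that x_train/y_train hold the preprocessed frames and steering angles from S2; the optimizer, learning rate, batch size, and epoch count are assumptions, while the JSON/HDF5 export formats follow the text:

```python
import numpy as np
from tensorflow.keras.optimizers import Adam

# Placeholder arrays standing in for the preprocessed data set.
x_train = np.zeros((128, 160, 320, 3), dtype=np.float32)
y_train = np.zeros((128,), dtype=np.float32)

model = build_model()
model.compile(optimizer=Adam(1e-4), loss="mse")
model.fit(x_train, y_train, batch_size=64, epochs=20,
          validation_split=0.1)

with open("model.json", "w") as f:  # Keras architecture (JSON format)
    f.write(model.to_json())
model.save_weights("weights.h5")    # weight parameters (HDF5 format)
```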
S5, during driving, the trained model and weight parameters obtained in S4 are used to control the vehicle's driving lane in real time according to the road information monitored by the vehicle's driving recorder: the recorder data undergo the same processing as in S2 and are then fed as input to the model obtained in S4, controlling the vehicle within the lane in real time.
The robustness and accuracy of the training model obtained in S4 are evaluated with the MSE (Mean Squared Error); the smaller the MSE, the better the prediction model describes the experimental data:

MSE = (1/N) Σ_{i=1..N} (y_i − ŷ_i)²

where y_i is the true steering value of the ith data point, ŷ_i is the model's prediction for the ith signal, and N is the size of the data set. When MSE < 3°, the robustness and accuracy of the training model meet the requirements. The comparison between the model's predictions and the test set performance is shown in FIG. 6; the training model obtained by the method meets the requirements for robustness and accuracy.
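A minimal sketch of this acceptance test; the function name is illustrative, and the threshold of 3 follows the text:

```python
import numpy as np

def passes_mse_check(y_true, y_pred, threshold=3.0):
    # Mean squared error between true and predicted steering values;
    # the model is accepted when MSE < 3, otherwise S3 and S4 repeat.
    y_true = np.asarray(y_true, dtype=np.float64)
    y_pred = np.asarray(y_pred, dtype=np.float64)
    mse = np.mean((y_true - y_pred) ** 2)
    return bool(mse < threshold)
```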
Claims (6)
1. A lane keeping control method based on transfer learning, characterized by comprising the following specific steps:
S1, collecting video data and steering data during driving;
S2, segmenting and converting the data collected in S1, and dividing the converted data set into a training set and a test set;
S3, constructing an end-to-end lane keeping control model, specifically:
S31, initializing the model by transfer learning: using a VGGNet trained on the ImageNet data set as the feature extraction network and freezing the weights of the first 8 layers of VGG16; adding a Flatten layer between the convolutional layers and the fully connected layers; on this basis, changing the original 3 fully connected layers of VGG16 into 5 fully connected layers; adding a fully connected layer that outputs the steering angle value, thereby constructing an end-to-end lane keeping control model;
S32, optimizing the model constructed in S31 at the algorithm level with Max-pooling, Batch Normalization, and TruncatedNormal initialization, specifically: adopting Max pooling to reduce the error of feature extraction; adopting a Bernoulli function to randomly generate a vector of 0s and 1s with probability P, so that a given neuron stops working with probability P;
adopting Batch Normalization to add normal standardization in the middle layers of the deep network, while constraining the network to automatically adjust the standardization strength during training;
using the rectified linear unit ReLU as the activation function for every layer of the network model except the last;
initializing with a truncated Gaussian distribution using TruncatedNormal, discarding and regenerating data beyond two standard deviations from the mean to re-form the truncated distribution;
S4, training the model constructed in S3 with the training set separated in S2 to obtain the trained model and weight parameters;
S5, during driving, using the trained model and weight parameters obtained in S4 to control the vehicle's driving lane in real time according to the road information monitored by the vehicle's driving recorder.
2. The method for controlling lane keeping based on transfer learning of claim 1, wherein in S1, the video data in driving is from a video in a drive recorder, and the steering data is a steering control signal in the driving process of the vehicle.
3. The method of claim 1, wherein in S2, the video data in S1 is encoded in H264/MKV format at 1280 x 720 resolution.
4. The method for controlling lane keeping based on transfer learning of claim 1, wherein in S2, the video data in S1 is pre-processed by changing brightness, resizing and increasing shadow after being extracted frame by frame.
5. The transfer learning-based lane keeping control method according to claim 1, wherein in S4 the operating environment of the model is: NVIDIA Tesla K80, 12 GB GPU memory, 61 GB RAM, and TensorFlow-GPU 1.10.0.
6. The transfer learning-based lane keeping control method according to claim 1, wherein the robustness and accuracy of the training model obtained in S4 are evaluated using the MSE:

MSE = (1/N) Σ_{i=1..N} (y_i − ŷ_i)²

where y_i is the true steering value of the ith data point in the test set obtained in S2, ŷ_i is the prediction of the model obtained in S4 for the ith steering signal given the image data input from the test set, and N is the size of the data set; when MSE < 3°, the robustness and accuracy of the model obtained in S4 meet the requirements, and when MSE ≥ 3°, S3 and S4 are repeated until MSE < 3°.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910065082.0A CN109871778B (en) | 2019-01-23 | 2019-01-23 | Lane keeping control method based on transfer learning |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910065082.0A CN109871778B (en) | 2019-01-23 | 2019-01-23 | Lane keeping control method based on transfer learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109871778A CN109871778A (en) | 2019-06-11 |
CN109871778B true CN109871778B (en) | 2022-11-15 |
Family
ID=66917899
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910065082.0A Active CN109871778B (en) | 2019-01-23 | 2019-01-23 | Lane keeping control method based on transfer learning |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109871778B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109161932A (en) * | 2018-10-22 | 2019-01-08 | 中南大学 | A kind of extracting method of aluminium cell acute conjunctivitis video behavioral characteristics |
CN111353644B (en) * | 2020-02-27 | 2023-04-07 | 成都美云智享智能科技有限公司 | Prediction model generation method of intelligent network cloud platform based on reinforcement learning |
CN111439259B (en) * | 2020-03-23 | 2020-11-27 | 成都睿芯行科技有限公司 | Agricultural garden scene lane deviation early warning control method and system based on end-to-end convolutional neural network |
CN111559321A (en) * | 2020-05-11 | 2020-08-21 | 四川芯合利诚科技有限公司 | Control method of automobile camera |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10068140B2 (en) * | 2016-12-02 | 2018-09-04 | Bayerische Motoren Werke Aktiengesellschaft | System and method for estimating vehicular motion based on monocular video data |
- 2019-01-23: Application CN201910065082.0A filed; patent CN109871778B granted, status Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018171109A1 (en) * | 2017-03-23 | 2018-09-27 | 北京大学深圳研究生院 | Video action detection method based on convolutional neural network |
CN106971563A (en) * | 2017-04-01 | 2017-07-21 | 中国科学院深圳先进技术研究院 | Intelligent traffic lamp control method and system |
CN108009524A (en) * | 2017-12-25 | 2018-05-08 | 西北工业大学 | A kind of method for detecting lane lines based on full convolutional network |
CN109002807A (en) * | 2018-07-27 | 2018-12-14 | 重庆大学 | A kind of Driving Scene vehicle checking method based on SSD neural network |
Non-Patent Citations (2)
Title |
---|
Autonomous driving policy learning method based on deep reinforcement learning; Xia Wei et al.; Journal of Integration Technology; 2017-05-15 (Issue 03); full text *
Research on automatic steering of intelligent vehicles based on end-to-end deep learning; Zou Bin et al.; Application Research of Computers; 2017-10-10 (Issue 09); full text *
Also Published As
Publication number | Publication date |
---|---|
CN109871778A (en) | 2019-06-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||