CN112434588A - Inference method for end-to-end driver expressway lane change intention - Google Patents
Inference method for end-to-end driver expressway lane change intention
- Publication number
- CN112434588A (application number CN202011289274.9A)
- Authority
- CN
- China
- Prior art keywords
- lane
- driver
- data
- training
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06V20/597—Recognising the driver's state or behaviour, e.g. attention or drowsiness
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06N3/044—Recurrent networks, e.g. Hopfield networks
- G06N3/045—Combinations of networks
- G06N3/08—Learning methods
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/168—Feature extraction; Face representation
Abstract
The invention discloses an inference method for the end-to-end driver expressway lane-change intention, comprising the following steps: S1, data processing and feature extraction, including driver behavior analysis, road scene analysis and CAN bus data filtering; S2, feature fusion and synchronization, in which the features extracted in step S1 are synchronized by interpolating the feature matrices against the vehicle CAN bus time combined with the video stream timestamps, obtaining synchronized feature data; S3, model training and testing; and S4, system output. The method establishes a feature association model between the current moment and the preceding 6 seconds and accurately infers the driver's intention by combining driver behavior with traffic environment features. In a highway scenario it achieves a prediction accuracy above 90% while predicting the lane change at least 1 second before it begins.
Description
Technical Field
The invention relates to advanced driver assistance systems for intelligent vehicles, and in particular to a method for cognitively modeling and recognizing a driver's lane-change intention based on the combination of machine vision and deep learning.
Background
At present, most technologies for identifying driver intention adopt methods based on machine learning and machine vision. The driver's intention should be inferred comprehensively by combining the driver's overall behavior characteristics over a period of time with the road environment characteristics. While an intention is being generated and confirmed, the driver produces a series of behaviors, such as checking the rear-view mirrors; the features reflecting the driver's intention therefore include the driver's head, eye and body features. Meanwhile, as the stimulus that triggers the intention, the road environment features reflect the driver's judgment of the current road and are an important clue for intention inference. The SVM algorithm adopted in patent 2014107811549 belongs to discriminative machine learning, can only consider sparse time-series data, and has difficulty detecting the driver's intention early. The vehicle dynamics information adopted in patent 2013102421104 only reflects vehicle control after the driver has started to execute the intention, so the intention cannot be predicted. The eye-gaze direction feature used in patent 201510115355X does not adequately represent the driver's intention preparation process, and ignoring traffic scene information is detrimental to early detection of driver intention. At present, most patented techniques for driver intention recognition, especially lane-change intention recognition, adopt shallow, small-capacity machine learning models and are poor at capturing long-term video features.
Disclosure of Invention
The intention inference method, which combines a deep recurrent neural network with deep convolutional neural networks, enables an end-to-end model training process, reduces manual intervention during model construction, and improves model accuracy. The deep recurrent neural network combines time-series information about the driver and the road scene to discover the driver's latent behavioral features and driving patterns in time, so that the intention can be inferred and recognized before the lane-change maneuver begins. The technical solution comprises the following steps.
An inference method for the end-to-end driver expressway lane-change intention comprises the following steps:
S1. Data processing and feature extraction, including driver behavior analysis, road scene analysis and CAN bus data filtering;
S11. The driver behavior analysis uses camera one and camera two to acquire, respectively, the driving-behavior-oriented facial features and limb features of the driver, and constructs two deep convolutional neural networks, DCNN1 and DCNN2;
S12. The road scene analysis uses camera three to acquire the positions and types of the lane lines and the distance to the vehicle ahead, and obtains a road scene feature vector;
S13. The CAN bus data filtering collects vehicle speed information and applies median filtering to the vehicle speed;
S2. Feature fusion and synchronization
The features extracted in step S1 are synchronized: taking the vehicle CAN bus time as the reference and combining the video stream timestamps, the feature matrices are interpolated to obtain synchronized feature data;
S3. Model training and testing, specifically comprising
S31. labeling the training data,
S32. training the recurrent neural network,
S33. testing the recurrent neural network;
S4. System output.
Further, the DCNN1 model in step S11 processes video data from camera one, which directly faces the driver, and the DCNN2 model processes video data from camera two on the front-side A-pillar of the vehicle; both the DCNN1 and DCNN2 models are configured for GPU training, with the number of iterations and the minimum batch size specified.
Further, in step S11, the specific steps of constructing the DCNN1 and DCNN2 models are as follows:
S110. Picture data containing driver behavior labels are manually annotated from the model data, the driver behavior labels comprising checking the left rear-view mirror, checking the right rear-view mirror, looking ahead, and checking the interior rear-view mirror;
S111. The deep convolutional network structures of DCNN1 and DCNN2 are fine-tuned by transfer learning;
S112. Based on the local data, the number of neurons in the last three fully connected layers of the model is halved to accelerate training, and the output of the last fully connected layer is set to four classes to meet the requirement of the behavior recognition model;
S113. While ensuring behavior recognition accuracy, the activation output of the model's ReLU5 layer is extracted as the feature vector.
Further, in step S12, the steps of obtaining the road scene feature vector are as follows:
S120. Camera three is mounted at the center of the front windshield, near the rear-view mirror;
S121. The positions of the lane lines on both sides of the image are obtained with an image processing algorithm, a virtual vehicle center line is established, and the positions of the lane lines on both sides relative to the center line are calculated;
S122. Scan points and scan lines are established on the detected lane lines, the lane line type is obtained from pixel and edge information, and solid lines are distinguished from dashed lines and yellow lines from white lines;
S123. The vehicle ahead is detected with a deep convolutional neural network, and by combining the calibration of camera three with inverse perspective transformation, the distances from the ego vehicle to the vehicles ahead in the own lane, the left lane and the right lane are obtained;
S124. The road scene feature vector (x1, y1, x2, y2, d1, d2, d3) is constructed, where x1 and y1 denote the distance from the driving center line to the left lane line and the type of the left lane line, x2 and y2 denote the distance from the driving center line to the right lane line and the type of the right lane line, and d1, d2 and d3 denote the distances to the vehicles ahead in the own lane, the left lane and the right lane respectively; when there is no vehicle ahead, the corresponding distance is set to 0.
Further, in step S31, the specific steps of labeling the training data are as follows:
S310. Several skilled drivers are invited to drive freely on the expressway, and the hundreds of collected lane-change samples are labeled according to the drivers' lane-change moments; the training data comprise three classes: lane change to the left, lane change to the right, and lane keeping;
S311. The data of each lane change are cut out, going back 6 seconds from the lane-change moment; the time-series feature data within these 6 seconds are the training and testing data of the model.
Further, in step S32, the specific steps of training the recurrent neural network are as follows:
S320. A deep bidirectional recurrent neural network is established with long short-term memory (LSTM) structures between layers; GPU training is configured, and the number of iterations and the base learning rate are set;
S321. The training program is started to obtain a trained model, whose accuracy is verified on the test set; the neural network is continuously trained and optimized until network parameters meeting the prediction accuracy are obtained, and the trained network model is written into the processor unit for real-time detection.
Further, in step S33, the testing of the recurrent neural network includes an offline test and an online verification; the online verification specifically comprises:
S330. Data from multiple sensors are combined with a sliding time window: a feature extraction window of 6 seconds is designed, and each time new data arrive, the feature matrix is updated by removing the feature vector of the earliest moment (beyond 6 seconds) and appending the features of the latest moment;
S331. When the number of lane-change intentions inferred within 0.5 second exceeds the set threshold, the corresponding lane-change probability of the driver is reported.
Furthermore, in step S4, the lane-change intention is inferred by combining the lane-change probability with the number of consecutive predictions, and the result is transmitted to the ADAS in time, so that the vehicle starts region-of-interest detection and early warning in advance, reducing the occurrence of traffic accidents.
Advantageous effects
1. Different driver state features (head and upper-limb features) can be identified end to end for subsequent driver behavior and intention inference.
2. The driver's lane-change intention can be predicted in advance, so that on-board ADAS functions such as a lane-change assistance system monitor the region of interest ahead of time and issue early warnings, which significantly improves driving safety.
Drawings
FIG. 1 is a system flow diagram;
FIG. 2 is an illustration of the camera mounting positions.
Detailed Description
The following detailed description is exemplary and is intended to provide further explanation of the invention as claimed. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. It is noted that the terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments according to the present application.
An inference method for the end-to-end driver expressway lane-change intention is characterized by comprising the following steps.
S1. Data processing and feature extraction part
The data processing and feature extraction part is mainly responsible for processing the raw data acquired by the on-board sensors and extracting feature data that are effective for driver intention inference. It can be subdivided into three parts: driver behavior analysis, road scene analysis, and CAN bus data filtering, whose respective functions are described as follows.
S11. Driver behavior analysis
This part uses the labeled driver data to build two deep convolutional neural networks, DCNN1 and DCNN2. The DCNN1 model is responsible for processing video data from camera one, which faces the driver, and the DCNN2 model is responsible for processing video data from camera two on the front-side A-pillar. The two cameras are used to extract, respectively, the driving-behavior-oriented facial features and limb features of the driver.
S110. The driver lane-change intention inference in this patent takes the driver's active lane changes as its basis and does not include passive lane-change intentions caused by interference from surrounding vehicles. The driving behaviors relevant to the lane-change intention comprise four actions: checking the left rear-view mirror, checking the right rear-view mirror, looking ahead, and checking the interior rear-view mirror. The data for training and testing the driver behavior model are taken from driving video recordings of multiple drivers in a real environment, and the drivers' observed actions are labeled by replaying the video data.
S111. The manually labeled picture data containing driver action labels are used to train the DCNN1 and DCNN2 models. DCNN1 and DCNN2 receive video data from camera one and camera two respectively and adopt the same deep convolutional network structure (AlexNet). To shorten model construction time and improve prediction accuracy, a pre-trained AlexNet model is fine-tuned by transfer learning.
S112. Based on the local data, the number of neurons in the last three fully connected layers is halved to accelerate training, the output of the last fully connected layer is changed from 1000 classes to 4 classes to meet the requirement of the behavior recognition model, and the modified AlexNet model is trained and tested.
S113. While ensuring behavior recognition accuracy, the activation output of the model's ReLU5 layer is extracted as the feature vector.
Camera two adopts the same parameter settings as camera one, whose sampling rate is 25 fps, i.e., 25 frames of video per second; the difference is that camera two is mounted near the right-front A-pillar and is responsible for detecting the driver's upper-limb features. The video data collected by camera two are fed into the DCNN2 model, and the DCNN2 ReLU5-layer features are extracted to obtain the driver's upper-limb skeleton-point features. Training of the DCNN1 and DCNN2 models is configured on the GPU, with the number of iterations set to 10, the minimum batch size to 20, and the base learning rate to 0.001.
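The network modification in S112-S113 can be illustrated with a short sketch. This is a minimal PyTorch example, assuming the torchvision AlexNet layout; the halved fully connected sizes and the four-class output follow the description above, while the function names and the use of `features[:12]` as the "ReLU5" activation are illustrative assumptions rather than details given by the patent.

```python
# Minimal sketch (assumption): fine-tuning a pretrained AlexNet for the
# 4-class driver-behavior task and exposing a ReLU5-style feature vector.
import torch
import torch.nn as nn
from torchvision import models

def build_behavior_dcnn(num_classes=4):
    # Transfer learning: start from ImageNet-pretrained AlexNet weights.
    net = models.alexnet(weights=models.AlexNet_Weights.DEFAULT)
    # Halve the neurons of the last three fully connected layers (4096 -> 2048)
    # and set the final output to four behavior classes.
    net.classifier = nn.Sequential(
        nn.Dropout(), nn.Linear(256 * 6 * 6, 2048), nn.ReLU(inplace=True),
        nn.Dropout(), nn.Linear(2048, 2048), nn.ReLU(inplace=True),
        nn.Linear(2048, num_classes),
    )
    return net

def relu5_features(net, frames):
    # Use the activation after the fifth convolution (index 11 in
    # torchvision's AlexNet `features`) as the per-frame feature vector.
    # `frames` is assumed to be a normalized (N, 3, 224, 224) tensor.
    with torch.no_grad():
        x = net.features[:12](frames)          # (N, 256, 13, 13)
        return torch.flatten(x, start_dim=1)   # (N, 256*13*13)

dcnn1 = build_behavior_dcnn()   # camera one: facial features
dcnn2 = build_behavior_dcnn()   # camera two: upper-limb features
```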
S12. Road scene analysis
The road scene information consists of the positions and types of the lane lines and the distances to vehicles ahead; camera three adopts the same parameter settings as camera one.
S120. Camera three is mounted at the center of the front windshield, near the rear-view mirror, and this forward-looking camera collects the road scene information.
S121. The positions of the lane lines on both sides of the image are obtained with an image processing algorithm, a virtual vehicle center line (i.e., the image center line) is established, and the positions of the lane lines on both sides relative to the center line are calculated.
S122. Scan points and scan lines are established on the detected lane lines, the lane line type is obtained from pixel and edge information, and solid lines are distinguished from dashed lines and yellow lines from white lines.
S123. The vehicle ahead is detected with a deep convolutional neural network, and by combining camera calibration with inverse perspective transformation, the distances from the ego vehicle to the vehicles ahead (in the own lane, the left lane and the right lane) are obtained.
S124. The road scene feature vector can be described as (x1, y1, x2, y2, d1, d2, d3), where x1 and y1 denote the distance from the driving center line to the left lane line and the type of the left lane line, x2 and y2 are the corresponding information for the right lane, and d1, d2 and d3 denote the distances to the vehicles ahead in the own lane, the left lane and the right lane respectively; when there is no vehicle ahead, the corresponding distance is set to 0.
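As a concrete illustration of the seven-element scene vector, the following sketch assembles it from hypothetical per-frame detection outputs; the zero placeholder for a missing leading vehicle follows S124, but the `LaneLine` structure, field names and type encoding are assumptions made only for illustration.

```python
# Sketch (assumed data structures): packing the road scene feature vector
# (x1, y1, x2, y2, d1, d2, d3) described in S124.
from dataclasses import dataclass
from typing import Optional
import numpy as np

@dataclass
class LaneLine:
    offset_m: float      # lateral distance from the virtual vehicle center line
    line_type: int       # assumed encoding, e.g. 0 = dashed white, 1 = solid white, 2 = yellow

def scene_feature_vector(left: LaneLine, right: LaneLine,
                         d_own: Optional[float],
                         d_left: Optional[float],
                         d_right: Optional[float]) -> np.ndarray:
    def dist(d):                      # no leading vehicle -> 0 (S124)
        return 0.0 if d is None else float(d)
    return np.array([left.offset_m,  left.line_type,
                     right.offset_m, right.line_type,
                     dist(d_own), dist(d_left), dist(d_right)], dtype=np.float32)
```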
S13. CAN bus data filtering
Only vehicle speed information is collected from the CAN bus, and median filtering is applied to the speed. In contrast to previous lane-change intention work based on vehicle steering angle or steering wheel angle, this method aims at look-ahead intention inference rather than recognition after the lane-change maneuver has already started.
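A possible realization of the speed filtering is sketched below, assuming the CAN speed arrives as a one-dimensional array; the five-sample window is an illustrative choice, not a value specified by the patent.

```python
# Sketch: median filtering of the CAN-bus vehicle speed signal (S13).
import numpy as np
from scipy.signal import medfilt

def filter_speed(speed_kmh: np.ndarray, kernel: int = 5) -> np.ndarray:
    # medfilt requires an odd kernel size; 5 samples is an assumed window.
    return medfilt(speed_kmh, kernel_size=kernel)
```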
S2. Feature fusion and synchronization
The feature fusion and synchronization part synchronizes the features extracted in the previous part: taking the vehicle CAN bus time as the reference and combining the video stream timestamps, each feature matrix is interpolated to obtain synchronized feature data.
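The synchronization step can be sketched as linear interpolation of each camera-derived feature channel onto the CAN time base; the timestamp arrays and the feature layout are assumptions for illustration.

```python
# Sketch: aligning video-derived feature matrices to the CAN-bus time base
# by per-channel linear interpolation (S2).
import numpy as np

def sync_to_can(can_t: np.ndarray,          # CAN timestamps, shape (T,)
                video_t: np.ndarray,        # video frame timestamps, shape (M,)
                video_feat: np.ndarray      # video-derived features, shape (M, F)
                ) -> np.ndarray:
    # Interpolate every feature channel onto the CAN timestamps.
    return np.stack([np.interp(can_t, video_t, video_feat[:, j])
                     for j in range(video_feat.shape[1])], axis=1)  # (T, F)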
S3. Model training and testing
S31. Labeling the training data
S310. Several skilled drivers are invited to drive freely on the expressway; they are not told the real purpose of the test, so that they drive in a natural manner. The hundreds of collected lane-change samples are labeled according to the drivers' lane-change moments. Note that in many cases the driver does not switch on the turn indicator, so the lane-change start time is taken as the moment the driver first turns the steering wheel. The training data of the lane-change intention inference model comprise three classes: lane change to the left, lane change to the right, and lane keeping.
S311. The data of each lane change are cut out, going back 6 seconds from the lane-change moment; the time-series feature data within these 6 seconds are the training and testing data of the model, with 80% of the samples used for training and 20% for testing.
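A sketch of the windowing and split follows. It assumes the features are already synchronized at a fixed rate (25 fps is taken from the camera setting above); the label encoding and helper names are illustrative assumptions.

```python
# Sketch: cutting a 6-second window ending at each labelled lane-change moment
# and splitting the windows 80/20 into training and test sets (S310-S311).
import numpy as np

FPS = 25                      # camera sampling rate stated in the description
WIN = 6 * FPS                 # 6-second feature window

def make_windows(features, change_frames, labels):
    """features: (N, F) synchronized features; change_frames: indices of
    lane-change start moments; labels: 0 = keep lane, 1 = left, 2 = right."""
    X, y = [], []
    for frame, lab in zip(change_frames, labels):
        if frame >= WIN:
            X.append(features[frame - WIN:frame])   # the 6 s before the change
            y.append(lab)
    return np.stack(X), np.array(y)

def split_80_20(X, y, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))
    return X[idx[:cut]], y[idx[:cut]], X[idx[cut:]], y[idx[cut:]]
```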
S32. Training the recurrent neural network
S320. A deep bidirectional recurrent neural network is established with long short-term memory (LSTM) structures between layers; GPU training is configured, the number of iterations is set to 10000 and the base learning rate to 0.001.
S321. The training program is started to obtain a trained model, whose accuracy is verified on the test set. By continuously training and optimizing the neural network, network parameters meeting the prediction accuracy are obtained, and the trained network model is written into the processor unit for real-time detection.
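The recurrent model can be sketched as a stacked bidirectional LSTM over the 6-second feature sequence. The three-class output, the 10000 iterations and the 0.001 learning rate follow S320; the hidden size, number of layers and the choice of the Adam optimizer are assumptions made only for this sketch.

```python
# Sketch: a deep bidirectional LSTM classifier for the 6-second feature
# sequences (S320-S321). Hidden size, depth and optimizer are assumed.
import torch
import torch.nn as nn

class LaneChangeBiLSTM(nn.Module):
    def __init__(self, feat_dim, hidden=128, layers=2, num_classes=3):
        super().__init__()
        self.rnn = nn.LSTM(feat_dim, hidden, num_layers=layers,
                           batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                 # x: (batch, 150, feat_dim) for 6 s at 25 fps
        out, _ = self.rnn(x)
        return self.head(out[:, -1, :])   # class logits from the last time step

def train(model, loader, iters=10000, lr=1e-3, device="cuda"):
    # device="cuda" assumes GPU training as in S320; use "cpu" otherwise.
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=lr)   # base learning rate 0.001
    loss_fn = nn.CrossEntropyLoss()
    it = 0
    while it < iters:
        for X, y in loader:
            X, y = X.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(X), y)
            loss.backward()
            opt.step()
            it += 1
            if it >= iters:
                break
    return model
```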
S33. Testing the recurrent neural network
The model test is divided into an offline test and an online verification; the online verification is performed after good offline test results are obtained. The online verification comprises the following specific steps.
S330. Online verification combines data from multiple sensors with a sliding time window. A feature extraction window of 6 seconds is designed; each time new data arrive, the feature matrix is updated by removing the feature vector of the earliest moment (beyond 6 seconds) and appending the features of the latest moment.
S331. When the number of lane-change intentions inferred within 0.5 second exceeds the set threshold (more than half), the corresponding lane-change probability of the driver is reported.
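A sketch of the online verification loop: a 6-second buffer holds the fused feature vectors, and a lane-change report is issued only when more than half of the inferences within the last 0.5 second agree on a lane change, as in S330-S331. The per-frame inference rate, the buffer types and the helper names are assumptions.

```python
# Sketch: online sliding-window inference and the 0.5-second voting rule
# (S330-S331). Rates and helper names are assumptions.
from collections import deque
import numpy as np
import torch

FPS = 25                      # assumed per-frame inference rate
WIN = 6 * FPS                 # 6-second feature window
VOTE = int(0.5 * FPS)         # inferences made within the last 0.5 s

window = deque(maxlen=WIN)    # drops the oldest feature vector automatically
recent = deque(maxlen=VOTE)   # most recent per-step class decisions

def step(model, fused_feature, threshold=VOTE // 2):
    """fused_feature: one synchronized feature vector for the newest moment."""
    window.append(fused_feature)
    if len(window) < WIN:
        return None                           # wait until the window is full
    x = torch.tensor(np.stack(window), dtype=torch.float32).unsqueeze(0)
    with torch.no_grad():
        prob = torch.softmax(model(x), dim=1).squeeze(0)   # [keep, left, right]
    cls = int(prob.argmax())
    recent.append(cls)
    votes = sum(c != 0 for c in recent)       # class 0 = lane keeping
    if votes > threshold:                     # "more than half" rule (S331)
        return {"intent": ("keep", "left", "right")[cls],
                "probability": float(prob[cls])}
    return None
```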
S4. System output
Finally, the lane-change intention is inferred by combining the lane-change probability with the number of consecutive predictions, and the result is transmitted to the ADAS in time, so that the vehicle starts region-of-interest detection and early warning in advance, reducing the occurrence of traffic accidents.
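The final decision stage can be sketched as combining the reported probability with a count of consecutive identical forecasts before notifying the ADAS; the probability and count thresholds and the `adas_notify` callback are assumptions, since the patent does not fix these values.

```python
# Sketch: S4 system output. Combine the lane-change probability with the number
# of consecutive identical forecasts before alerting the ADAS. Thresholds and
# the notification callback are assumed for illustration.
class IntentOutput:
    def __init__(self, adas_notify, min_prob=0.8, min_consecutive=3):
        self.notify = adas_notify
        self.min_prob = min_prob
        self.min_consecutive = min_consecutive
        self.last_intent, self.count = None, 0

    def update(self, intent, probability):
        self.count = self.count + 1 if intent == self.last_intent else 1
        self.last_intent = intent
        if (intent in ("left", "right") and probability >= self.min_prob
                and self.count >= self.min_consecutive):
            # ADAS starts region-of-interest detection and early warning.
            self.notify(side=intent, probability=probability)
```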
The key point of the method is an end-to-end driver lane-change intention recognition approach based on the combination of a deep recurrent neural network and deep convolutional neural networks: the deep convolutional networks provide end-to-end detection of driver behavior in video data, and the lane-change intention is inferred by combining road scene information, realizing accurate prediction before the lane-change maneuver starts. The main differences from previous patents are: first, a driver behavior detection model is established with a deep convolutional neural network; second, an end-to-end time-series intention inference model is established with a deep recurrent neural network. Equivalents derivable from the foregoing description are intended to fall within the scope of the invention.
Claims (8)
1. An inference method for the end-to-end driver expressway lane-change intention, characterized by comprising the following steps:
S1. Data processing and feature extraction, including driver behavior analysis, road scene analysis and CAN bus data filtering;
S11. The driver behavior analysis uses camera one and camera two to acquire, respectively, the driving-behavior-oriented facial features and limb features of the driver, and constructs two deep convolutional neural networks, DCNN1 and DCNN2;
S12. The road scene analysis uses camera three to acquire the positions and types of the lane lines and the distance to the vehicle ahead, and obtains a road scene feature vector;
S13. The CAN bus data filtering collects vehicle speed information and applies median filtering to the vehicle speed;
S2. Feature fusion and synchronization
The features extracted in step S1 are synchronized: taking the vehicle CAN bus time as the reference and combining the video stream timestamps, the feature matrices are interpolated to obtain synchronized feature data;
S3. Model training and testing, specifically comprising
S31. labeling the training data,
S32. training the recurrent neural network,
S33. testing the recurrent neural network;
S4. System output.
2. The method as claimed in claim 1, wherein the DCNN1 model in step S11 processes video data from camera one, which faces the driver, the DCNN2 model processes video data from camera two on the front-side A-pillar of the vehicle, and both the DCNN1 and DCNN2 models are configured for GPU training with the number of iterations and the minimum batch size specified.
3. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 2, wherein in step S11 the specific steps of constructing the DCNN1 and DCNN2 models are as follows:
S110. Picture data containing driver behavior labels are manually annotated from the model data, the driver behavior labels comprising checking the left rear-view mirror, checking the right rear-view mirror, looking ahead, and checking the interior rear-view mirror;
S111. The deep convolutional network structures of DCNN1 and DCNN2 are fine-tuned by transfer learning;
S112. Based on the local data, the number of neurons in the last three fully connected layers of the model is halved to accelerate training, and the output of the last fully connected layer is set to four classes to meet the requirement of the behavior recognition model;
S113. While ensuring behavior recognition accuracy, the activation output of the model's ReLU5 layer is extracted as the feature vector.
4. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 1, wherein in step S12 the steps of obtaining the road scene feature vector are as follows:
S120. Camera three is mounted at the center of the front windshield, near the rear-view mirror;
S121. The positions of the lane lines on both sides of the image are obtained with an image processing algorithm, a virtual vehicle center line is established, and the positions of the lane lines on both sides relative to the center line are calculated;
S122. Scan points and scan lines are established on the detected lane lines, the lane line type is obtained from pixel and edge information, and solid lines are distinguished from dashed lines and yellow lines from white lines;
S123. The vehicle ahead is detected with a deep convolutional neural network, and by combining the calibration of camera three with inverse perspective transformation, the distances from the ego vehicle to the vehicles ahead in the own lane, the left lane and the right lane are obtained;
S124. The road scene feature vector (x1, y1, x2, y2, d1, d2, d3) is constructed, where x1 and y1 denote the distance from the driving center line to the left lane line and the type of the left lane line, x2 and y2 denote the distance from the driving center line to the right lane line and the type of the right lane line, and d1, d2 and d3 denote the distances to the vehicles ahead in the own lane, the left lane and the right lane respectively; when there is no vehicle ahead, the corresponding distance is set to 0.
5. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 1, wherein in step S31 the specific steps of labeling the training data are as follows:
S310. Several skilled drivers are invited to drive freely on the expressway, and the hundreds of collected lane-change samples are labeled according to the drivers' lane-change moments; the training data comprise three classes: lane change to the left, lane change to the right, and lane keeping;
S311. The data of each lane change are cut out, going back 6 seconds from the lane-change moment; the time-series feature data within these 6 seconds are the training and testing data of the model.
6. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 1, wherein in step S32 the specific steps of training the recurrent neural network are as follows:
S320. A deep bidirectional recurrent neural network is established with long short-term memory (LSTM) structures between layers; GPU training is configured, and the number of iterations and the base learning rate are set;
S321. The training program is started to obtain a trained model, whose accuracy is verified on the test set; the neural network is continuously trained and optimized until network parameters meeting the prediction accuracy are obtained, and the trained network model is written into the processor unit for real-time detection.
7. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 1, wherein in step S33 the testing of the recurrent neural network includes an offline test and an online verification, the online verification specifically comprising:
S330. Data from multiple sensors are combined with a sliding time window: a feature extraction window of 6 seconds is designed, and each time new data arrive, the feature matrix is updated by removing the feature vector of the earliest moment (beyond 6 seconds) and appending the features of the latest moment;
S331. When the number of lane-change intentions inferred within 0.5 second exceeds the set threshold, the corresponding lane-change probability of the driver is reported.
8. The inference method for the end-to-end driver expressway lane-change intention as claimed in claim 1, wherein in step S4 the lane-change intention is inferred by combining the lane-change probability with the number of consecutive predictions, and the result is transmitted to the ADAS in time, so that the vehicle starts region-of-interest detection and early warning in advance, reducing the occurrence of traffic accidents.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011289274.9A CN112434588A (en) | 2020-11-18 | 2020-11-18 | Inference method for end-to-end driver expressway lane change intention |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011289274.9A CN112434588A (en) | 2020-11-18 | 2020-11-18 | Inference method for end-to-end driver expressway lane change intention |
Publications (1)
Publication Number | Publication Date |
---|---|
CN112434588A (en) | 2021-03-02 |
Family
ID=74700362
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011289274.9A Pending CN112434588A (en) | 2020-11-18 | 2020-11-18 | Inference method for end-to-end driver expressway lane change intention |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112434588A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117485348A (en) * | 2023-11-30 | 2024-02-02 | 长春汽车检测中心有限责任公司 | Driver intention recognition method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106971194A (en) * | 2017-02-16 | 2017-07-21 | 江苏大学 | A kind of driving intention recognition methods based on the double-deck algorithms of improvement HMM and SVM |
CN109858359A (en) * | 2018-12-28 | 2019-06-07 | 青岛科技大学 | A kind of motorist driving intention discrimination method considering emotion |
WO2019179094A1 (en) * | 2018-03-23 | 2019-09-26 | 广州汽车集团股份有限公司 | Method and apparatus for maintaining driverless driveway, computer device, and storage medium |
CN110427850A (en) * | 2019-07-24 | 2019-11-08 | 中国科学院自动化研究所 | Driver's super expressway lane-changing intention prediction technique, system, device |
CN110991353A (en) * | 2019-12-06 | 2020-04-10 | 中国科学院自动化研究所 | Early warning method for recognizing driving behaviors of driver and dangerous driving behaviors |
US20200242382A1 (en) * | 2019-01-25 | 2020-07-30 | Fujitsu Limited | Deep learning model used for driving behavior recognition and training apparatus and method thereof |
CN111476283A (en) * | 2020-03-31 | 2020-07-31 | 上海海事大学 | Glaucoma fundus image identification method based on transfer learning |
- 2020-11-18: Application CN202011289274.9A filed (CN); publication CN112434588A, status Pending
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20210302 |