CN115273460A - Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium - Google Patents

Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium

Info

Publication number
CN115273460A
CN115273460A
Authority
CN
China
Prior art keywords
lane change
vehicle
change prediction
target vehicle
image
Prior art date
Legal status
Withdrawn
Application number
CN202210742575.5A
Other languages
Chinese (zh)
Inventor
李开兴
梁斯硕
陆思宇
Current Assignee
Chongqing Changan Automobile Co Ltd
Original Assignee
Chongqing Changan Automobile Co Ltd
Priority date
Filing date
Publication date
Application filed by Chongqing Changan Automobile Co Ltd filed Critical Chongqing Changan Automobile Co Ltd
Priority to CN202210742575.5A priority Critical patent/CN115273460A/en
Publication of CN115273460A publication Critical patent/CN115273460A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0125Traffic data processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/049Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/80Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V10/806Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/588Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/01Detecting movement of traffic to be counted or controlled
    • G08G1/0104Measuring and analyzing of parameters relative to traffic conditions
    • G08G1/0137Measuring and analyzing of parameters relative to traffic conditions for specific applications
    • GPHYSICS
    • G08SIGNALLING
    • G08GTRAFFIC CONTROL SYSTEMS
    • G08G1/00Traffic control systems for road vehicles
    • G08G1/16Anti-collision systems
    • G08G1/167Driving aids for lane monitoring, lane changing, e.g. blind spot detection

Abstract

The invention discloses a multi-mode perception fusion vehicle lane change prediction method, computer equipment and a storage medium. The prediction method comprises the following steps: S1: synchronously acquiring feature information of a target vehicle and a sequence of images captured by a forward-facing camera of the host vehicle; S2: inputting the image sequence into an image feature extraction network to obtain image features of the sequence; S3: splicing the feature information of the target vehicle with the image features obtained in step S2, and inputting the spliced features into an attention-based feature fusion network to obtain fused features; S4: inputting the fused features into a pre-trained lane change prediction model to obtain the lane change intention of the target vehicle. The prediction method performs feature fusion with an attention mechanism, which effectively improves the accuracy of vehicle lane change prediction.

Description

Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium
Technical Field
The invention belongs to the technical field of automatic driving, and particularly relates to a multi-mode perception fusion vehicle lane change prediction method, computer equipment and a storage medium.
Background
With the rapid growth of automobile consumption in recent years, China's vehicle ownership reached 302 million in 2021. While automobiles bring convenience to daily life, they also increase the risk of traffic accidents, and traffic congestion is one of the important contributing factors. Accurately predicting the intention of surrounding vehicles to cut into the lane of the host vehicle, and sending early warning signals to the driver in advance, can effectively reduce the probability of traffic accidents.
Chinese patent application CN201910984614.0 proposes a vehicle lane change intention prediction method that comprises: inputting multiple types of vehicle driving information into a lane change intention prediction network, which predicts the lane change intention of a vehicle in a driving state; extracting features from each type of driving information through a dedicated sub-network and outputting the feature extraction results; and fusing the feature extraction results output by the sub-networks and predicting the lane change intention of the vehicle from the fused features. That method addresses the lane change intention prediction problem to a certain extent and improves prediction accuracy. However, it has the following disadvantages: 1. it relies on road network information, which is complex, changeable and often not updated in time, degrading the accuracy of lane change intention prediction; 2. it does not use raw visual image information, which also limits prediction accuracy.
Disclosure of Invention
In view of the above shortcomings of the prior art, the present invention aims to provide a multi-mode perception fusion vehicle lane change prediction method, computer equipment and a storage medium. The prediction method performs feature fusion with an attention mechanism, which effectively improves the accuracy of vehicle lane change prediction.
The technical scheme of the invention is realized as follows:
a multi-mode perception fusion vehicle lane change prediction method comprises the following steps:
S1: synchronously acquiring feature information of a target vehicle and a sequence of images captured by a forward-facing camera of the host vehicle;
S2: inputting the image sequence into an image feature extraction network to obtain image features of the sequence;
S3: splicing the feature information of the target vehicle with the image features obtained in step S2, and inputting the spliced features into an attention-based feature fusion network to obtain fused features;
S4: inputting the fused features into a pre-trained lane change prediction model to obtain the lane change intention of the target vehicle.
Further, if the time axes of the acquired image sequence and the target-vehicle feature information in step S1 are not synchronized, their sampling times are first synchronized so that their sampling periods are consistent, and resampling is then performed.
Further, the feature information includes the lateral velocity, longitudinal velocity, lateral acceleration and longitudinal acceleration of the target vehicle relative to the host vehicle.
Further, the feature information of the target vehicle is collected by multiple sensors of the host vehicle.
Further, these sensors include a millimeter-wave radar, a laser radar (lidar) and an ultrasonic radar.
Further, the image feature extraction network is a deep residual network ResNet50.
Further, the lane change prediction model is constructed based on an LSTM lane change prediction algorithm.
The invention further provides an electronic device comprising a processor and a memory, the memory storing executable instructions of the processor; the processor is configured to execute the multi-mode perception fusion vehicle lane change prediction method described above.
The invention also provides a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the multi-mode perception fusion vehicle lane change prediction method described above.
Compared with the prior art, the invention has the following beneficial effects:
1. The invention fuses the image features with the feature information of the target vehicle and performs lane change prediction on the fused features. This effectively improves the accuracy of lane change prediction and avoids prediction errors caused by road network information, which is complex, changeable and often not updated in time.
2. The method makes the sampling periods of the image sequence and the target-vehicle feature information consistent before resampling, ensuring the real-time performance and accuracy of vehicle lane change prediction.
Drawings
FIG. 1 is a schematic flow chart of the present invention.
FIG. 2 is a schematic structural diagram of the feature fusion network.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
A multi-mode perception fusion vehicle lane change prediction method, whose flow is shown in FIG. 1, specifically comprises the following steps:
S1: synchronously acquiring feature information of a target vehicle and a sequence of images captured by a forward-facing camera of the host vehicle;
S2: inputting the image sequence into an image feature extraction network to obtain image features of the sequence;
S3: splicing the feature information of the target vehicle with the image features obtained in step S2, and inputting the spliced features into an attention-based feature fusion network to obtain fused features;
S4: inputting the fused features into a pre-trained lane change prediction model to obtain the lane change intention of the target vehicle.
Target vehicles are the vehicles in the left and right adjacent lanes ahead of the host vehicle; a lane change by such a vehicle into the path of the host vehicle may cause a safety accident. Predicting their lane change intention can therefore effectively reduce the possibility of accidents and improve the safety of automated driving.
While the host vehicle is driving, the feature information of the target vehicle and the image sequence captured by the forward-facing camera are collected at the same sampling instants. The image features and the feature information of the target vehicle are fused, and lane change prediction is performed on the fused features. This effectively improves the accuracy of lane change prediction and avoids prediction errors caused by road network information that is complex, changeable and not updated in time.
After the feature information of the target vehicle is spliced with the image features, three feature vectors (the query Q, key K and value V) derived from the original input X are fed into the feature fusion network (whose structure is shown in FIG. 2) to obtain the fused feature Y. The lane change prediction model then predicts the lane change intention of the target vehicle from Y.
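As an illustration only, the attention step can be sketched as scaled dot-product self-attention over the spliced feature vectors. The patent does not disclose the internal structure of the fusion network beyond FIG. 2, so the identity Q/K/V projections and the toy feature values below are assumptions:

```python
import math

def attention_fuse(X):
    """Scaled dot-product self-attention over spliced feature vectors.
    Q, K and V are identity projections of X here (an assumed
    simplification; in a trained network they would be learned)."""
    d = len(X[0])
    fused = []
    for q in X:
        # attention score of q against every key, scaled by sqrt(d)
        scores = [sum(a * b for a, b in zip(q, k)) / math.sqrt(d) for k in X]
        # numerically stable softmax over the scores
        m = max(scores)
        exps = [math.exp(s - m) for s in scores]
        total = sum(exps)
        weights = [e / total for e in exps]
        # fused row: softmax-weighted sum of the value vectors
        fused.append([sum(w * v[t] for w, v in zip(weights, X))
                      for t in range(d)])
    return fused

# spliced input X: image features concatenated with target-vehicle kinematics
X = [[0.2, 0.1, 0.0, 0.3],   # frame t-1
     [0.5, 0.4, 0.1, 0.0]]   # frame t
Y = attention_fuse(X)        # fused features
```

Each fused row of Y is a convex (softmax-weighted) combination of the input rows, which is how the attention mechanism lets the image-derived and kinematic components of the spliced features reweight one another.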
In a specific implementation, if the time axes of the acquired image sequence and the target-vehicle feature information in step S1 are not synchronized, their sampling times are first synchronized so that their sampling periods are consistent, and resampling is then performed.
The sampling instants of the image sequence and the target-vehicle feature information may not coincide: for example, an image frame may be acquired every 100 ms while the target-vehicle feature information is acquired every 50 ms. The two streams are then not aligned at any common time point, and predicting from misaligned data cannot guarantee the real-time performance and accuracy of lane change prediction. Synchronous processing is therefore needed to make the sampling periods of the image sequence and the target-vehicle feature information consistent, which in turn ensures the real-time performance and accuracy of vehicle lane change prediction.
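A minimal sketch of one way to perform this alignment, assuming the 100 ms image period and 50 ms feature period from the example above (the patent does not specify the exact resampling scheme):

```python
def synchronize(image_ts, feat_ts, feats):
    """For each image timestamp, pick the most recent target-vehicle
    feature sample at or before it (zero-order hold). Timestamps are in
    milliseconds and assumed sorted ascending."""
    aligned = []
    j = 0
    for t in image_ts:
        # advance to the last feature sample not later than t
        while j + 1 < len(feat_ts) and feat_ts[j + 1] <= t:
            j += 1
        aligned.append(feats[j])
    return aligned

# image frames every 100 ms, feature samples every 50 ms
image_ts = [0, 100, 200]
feat_ts = [0, 50, 100, 150, 200]
feats = ["f0", "f50", "f100", "f150", "f200"]
aligned = synchronize(image_ts, feat_ts, feats)  # → ['f0', 'f100', 'f200']
```

Linear interpolation between the two neighbouring feature samples would be an equally reasonable choice when the kinematic signals vary quickly between samples.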
In a specific implementation, the feature information includes the lateral velocity, longitudinal velocity, lateral acceleration and longitudinal acceleration of the target vehicle relative to the host vehicle.
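These four relative kinematic quantities form the per-frame feature vector that is later spliced with the image features; a small container for them might look like this (field names, units and sign conventions are illustrative, not from the patent):

```python
from dataclasses import astuple, dataclass

@dataclass
class TargetVehicleFeatures:
    """Per-frame kinematics of a target vehicle relative to the host
    vehicle, ordered as they would appear in the spliced input."""
    lateral_velocity: float           # m/s, relative to the host vehicle
    longitudinal_velocity: float      # m/s
    lateral_acceleration: float       # m/s^2
    longitudinal_acceleration: float  # m/s^2

    def as_vector(self):
        """Flatten to the list form consumed by the fusion network."""
        return list(astuple(self))

f = TargetVehicleFeatures(0.4, 22.0, 0.1, -0.3)
vec = f.as_vector()  # → [0.4, 22.0, 0.1, -0.3]
```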
In a specific implementation, the feature information of the target vehicle is acquired by multiple sensors of the host vehicle.
In a specific implementation, these sensors include a millimeter-wave radar, a laser radar (lidar) and an ultrasonic radar.
In this way, the vehicle-mounted forward-facing camera provides the image sequence, while the vehicle-mounted millimeter-wave radar, laser radar, ultrasonic radar and other sensors provide the feature information of the target vehicle, allowing the lane change intention of the target vehicle to be predicted and the prediction accuracy to be further improved.
In a specific implementation, the image feature extraction network is the deep residual network ResNet50.
In specific implementation, the lane change prediction model is constructed based on an LSTM lane change prediction algorithm.
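To make the temporal part concrete, here is a single standard LSTM cell step applied to a short sequence of fused feature vectors, with a sigmoid head turning the final hidden state into a lane-change probability. The hidden size of 1 and all numeric weights are illustrative stand-ins for trained parameters; the patent does not disclose the model's dimensions or training details:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h, c, W):
    """One step of a standard LSTM cell (hidden size 1 for brevity).
    W maps each gate name to (input weights, recurrent weight, bias)."""
    def gate(name, act):
        wx, wh, b = W[name]
        return act(sum(wi * xi for wi, xi in zip(wx, x)) + wh * h + b)
    i = gate('i', sigmoid)      # input gate
    f = gate('f', sigmoid)      # forget gate
    o = gate('o', sigmoid)      # output gate
    g = gate('g', math.tanh)    # candidate cell state
    c_new = f * c + i * g       # update cell state
    return o * math.tanh(c_new), c_new

# illustrative (untrained) weights, one set per gate
W = {k: ([0.2, -0.1, 0.3, 0.1], 0.5, 0.0) for k in ('i', 'f', 'o', 'g')}

h = c = 0.0
for x in [[0.3, 0.1, 0.0, 0.2], [0.4, 0.2, 0.1, 0.1]]:  # fused features per frame
    h, c = lstm_step(x, h, c, W)

p_lane_change = sigmoid(2.0 * h)  # hypothetical binary classifier head
```

In practice such a model would be trained end to end on labelled lane-change sequences, with the fused feature vector of each frame fed in chronological order.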
An electronic device comprising a processor and a memory for storing executable instructions of the processor; the processor is configured to execute a multi-modal perception fusion vehicle lane change prediction method as described above.
A computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a multimodal perception fusion vehicle lane change prediction method as set forth above.
Finally, it should be noted that the above examples are intended only to illustrate the invention and not to limit its embodiments. Variations and modifications will occur to those skilled in the art upon reading the foregoing description; it is neither necessary nor possible to enumerate all embodiments here. Obvious changes and modifications derived from the invention remain within its scope of protection.

Claims (9)

1. A multi-mode perception fusion vehicle lane change prediction method, characterized by comprising the following steps:
S1: synchronously acquiring feature information of a target vehicle and a sequence of images captured by a forward-facing camera of the host vehicle;
S2: inputting the image sequence into an image feature extraction network to obtain image features of the sequence;
S3: splicing the feature information of the target vehicle with the image features obtained in step S2, and inputting the spliced features into an attention-based feature fusion network to obtain fused features;
S4: inputting the fused features into a pre-trained lane change prediction model to obtain the lane change intention of the target vehicle.
2. The multi-mode perception fusion vehicle lane change prediction method according to claim 1, wherein if time axes of the collected sequence images and the target vehicle feature information are not synchronous in S1, sampling time of the collected sequence images and the target vehicle feature information is synchronized first, so that sampling periods of the sequence images and the target vehicle feature information are consistent, and then resampling is performed.
3. The method according to claim 1, wherein the characteristic information includes lateral velocity, longitudinal velocity, lateral acceleration and longitudinal acceleration of the target vehicle relative to the host vehicle.
4. The method according to claim 1 or 3, wherein the characteristic information of the target vehicle is collected by a multi-sensor of the host vehicle.
5. The multi-mode perception fusion vehicle lane change prediction method according to claim 4, wherein the multiple sensors comprise a millimeter-wave radar, a laser radar and an ultrasonic radar.
6. The method according to claim 1, wherein the image feature extraction network is the deep residual network ResNet50.
7. The multi-mode perception fusion vehicle lane change prediction method according to claim 1, wherein the lane change prediction model is constructed based on an LSTM lane change prediction algorithm.
8. An electronic device comprising a processor and a memory, the memory for storing executable instructions of the processor; the processor is used for executing the multi-mode perception fusion vehicle lane change prediction method as claimed in any one of claims 1-7.
9. A computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the multi-mode perception fusion vehicle lane change prediction method according to any one of claims 1 to 7.
CN202210742575.5A 2022-06-28 2022-06-28 Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium Withdrawn CN115273460A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210742575.5A CN115273460A (en) 2022-06-28 2022-06-28 Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium


Publications (1)

Publication Number Publication Date
CN115273460A true CN115273460A (en) 2022-11-01

Family

ID=83763658

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210742575.5A Withdrawn CN115273460A (en) 2022-06-28 2022-06-28 Multi-mode perception fusion vehicle lane change prediction method, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115273460A (en)

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050213794A1 (en) * 2004-03-26 2005-09-29 Omron Corporation Vehicle detection apparatus and method
CN107042824A (en) * 2015-10-23 2017-08-15 哈曼国际工业有限公司 System and method for detecting the accident in vehicle
CN110949395A (en) * 2019-11-15 2020-04-03 江苏大学 Curve ACC target vehicle identification method based on multi-sensor fusion
CN111950467A (en) * 2020-08-14 2020-11-17 清华大学 Fusion network lane line detection method based on attention mechanism and terminal equipment
CN112130153A (en) * 2020-09-23 2020-12-25 的卢技术有限公司 Method for realizing edge detection of unmanned vehicle based on millimeter wave radar and camera
CN112614373A (en) * 2020-12-29 2021-04-06 厦门大学 BiLSTM-based weekly vehicle lane change intention prediction method
CN112801928A (en) * 2021-03-16 2021-05-14 昆明理工大学 Attention mechanism-based millimeter wave radar and visual sensor fusion method
CN113065590A (en) * 2021-03-26 2021-07-02 清华大学 Vision and laser radar multi-mode data fusion method based on attention mechanism
CN113449650A (en) * 2021-06-30 2021-09-28 南京航空航天大学 Lane line detection system and method
CN114298142A (en) * 2021-11-22 2022-04-08 理工雷科智途(泰安)汽车科技有限公司 Multi-source heterogeneous sensor information fusion method and device for camera and millimeter wave radar
CN114332494A (en) * 2021-12-22 2022-04-12 北京邮电大学 Three-dimensional target detection and identification method based on multi-source fusion under vehicle-road cooperation scene
CN114590275A (en) * 2022-03-30 2022-06-07 重庆长安汽车股份有限公司 Method for predicting lane change intention of vehicle based on composite model



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20221101