CN117002530A - Method and device for predicting future motion trail of vehicle and unmanned equipment - Google Patents
- Publication number
- CN117002530A (application CN202310972110.3A)
- Authority
- CN
- China
- Prior art keywords
- vehicle
- feature
- predicted
- features
- vehicles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W60/00—Drive control systems specially adapted for autonomous road vehicles
- B60W60/001—Planning or execution of driving tasks
- B60W60/0027—Planning or execution of driving tasks using trajectory prediction for other traffic participants
- B60W60/00276—Planning or execution of driving tasks using trajectory prediction for other traffic participants for two or more other traffic participants
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W50/0097—Predicting future conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/16—Image acquisition using multiple overlapping images; Image stitching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/761—Proximity, similarity or dissimilarity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/80—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
- G06V10/806—Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level of extracted features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/588—Recognition of the road, e.g. of lane markings; Recognition of the vehicle driving pattern in relation to the road
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B60—VEHICLES IN GENERAL
- B60W—CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
- B60W50/00—Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
- B60W2050/0001—Details of the control system
- B60W2050/0043—Signal treatments, identification of variables or parameters, parameter estimation or state estimation
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02T—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
- Y02T10/00—Road transport of goods or passengers
- Y02T10/10—Internal combustion engine [ICE] based vehicles
- Y02T10/40—Engine management systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Artificial Intelligence (AREA)
- Computing Systems (AREA)
- Software Systems (AREA)
- Databases & Information Systems (AREA)
- Medical Informatics (AREA)
- Human Computer Interaction (AREA)
- Transportation (AREA)
- Automation & Control Theory (AREA)
- Mechanical Engineering (AREA)
- Life Sciences & Earth Sciences (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Computational Linguistics (AREA)
- Data Mining & Analysis (AREA)
- Molecular Biology (AREA)
- General Engineering & Computer Science (AREA)
- Mathematical Physics (AREA)
- Traffic Control Systems (AREA)
- Management, Administration, Business Operations System, And Electronic Commerce (AREA)
Abstract
The application relates to the technical field of unmanned driving, and discloses a method and a device for predicting the future motion trail of a vehicle, and unmanned equipment. The method comprises the following steps: collecting information of all vehicles in a driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines on which they are located, to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature; performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and splicing the own-vehicle planning feature with the own-vehicle enhancement feature to obtain the own-vehicle fusion feature; performing feature interaction to obtain the interaction feature of the vehicle to be predicted; and finally outputting the future motion trail of the vehicle to be predicted. By introducing the planned track of the own vehicle and modeling its influence on the vehicle to be predicted, the method completes the advance interaction between the own vehicle and the other vehicles in the yielding scene, and improves the prediction precision of the future motion trail of the vehicle to be predicted.
Description
Technical Field
The application relates to the technical field of unmanned driving, and in particular to a method and a device for predicting the future motion trail of a vehicle, and to unmanned equipment.
Background
For unmanned driving technology, safety is critical. In part of existing urban driving environments, unmanned vehicles coexist with manned vehicles. To ensure driving safety, unmanned vehicles generally adopt a conservative strategy: in yielding scenes such as lane changing, lane cutting and merging by manned vehicles, the unmanned vehicle preferentially decelerates to yield so that the manned vehicle passes first. For unmanned equipment, accurately predicting the future travel tracks of the other vehicles in a yielding scene makes it possible to judge in advance and perform more reasonable path planning, which effectively ensures the safety of the unmanned equipment while travelling on the road and improves its intelligence. It is therefore necessary to model the other vehicles in the yielding scene and predict their possible future motion tracks, so as to provide a reference for the planning and decision-making of the unmanned vehicle and guarantee its safe running.
In the prior art, when predicting the track of a motor vehicle running on a road, features of the motor vehicle and of the lane lines are usually obtained, and prediction is performed by a rule-based or model-based method; the model-based methods often use a neural network to encode the historical track and the lane lines. In a yielding scene, the unmanned vehicle knows its own future motion track, and this track, as the future travel route of the own vehicle, influences the yielding vehicle to a certain extent; however, the existing model-based methods do not fully consider introducing this planned track.
The patent with publication number CN116001810A, entitled lane changing auxiliary method, system and storage medium based on vehicle interaction, acquires the driving information of the own vehicle and of other adjacent vehicles within a safety range, exchanges driving information with the adjacent vehicles and determines the distance between them; when the distance between an adjacent vehicle and the own vehicle is smaller than the safety distance, an early warning prompt is issued to both. Based on an interactive button, a lane change request is sent to the adjacent vehicle; the own vehicle changes lanes after receiving a lane change confirmation signal from the adjacent vehicle, and gives a safety warning if no confirmation signal is received. That application enhances the interaction with other vehicles during driving by means of manual operation, cannot liberate the driver, and is not suitable for unmanned vehicles and unmanned scenes.
The patent with publication number CN113954864A, entitled intelligent automobile track prediction system and method integrating peripheral vehicle interaction information, discloses a graph convolutional neural network that considers the interaction of peripheral vehicles, overcoming the problem that existing track prediction algorithms ignore the information interaction of peripheral vehicles. That scheme proposes extracting map information from a high-definition vector map instead of a bird's eye view, using the vector map to define the lane geometry and reducing the prediction discretization caused by resolution. It also proposes a way of fusing the spatio-temporal relationship between the vehicle and the driving scene, introducing new lane features to represent the generalized geometric relationship between the vehicle and the lanes, which effectively improves the accuracy of track prediction for lanes of different shapes and numbers. That application uses a graph convolutional neural network to model complex driving situations and automatically predicts the driving intention and track of the vehicle. However, it does not fully utilize the planned track of the own vehicle for further advance interaction modeling when predicting the track of the yielding vehicle, so there is still room for improvement.
Disclosure of Invention
In view of the above defects in the prior art, the application provides a method and a device for predicting the future motion trail of a vehicle, and unmanned equipment.
In a first aspect, an embodiment of the present application provides a method for predicting a future motion trajectory of a vehicle, including the steps of:
step 1, collecting information of all vehicles in a driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; if a vehicle to be predicted is in the yielding scene, it is defined as a yielding vehicle, otherwise as a non-yielding vehicle;
step 2, performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines on which they are located, to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature;
step 3, performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and splicing the own-vehicle planning feature with the own-vehicle enhancement feature and performing feature compression to obtain the own-vehicle fusion feature;
step 4, performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature, to obtain the interaction feature of the vehicle to be predicted;
step 5, if the vehicle to be predicted is a yielding vehicle, obtaining its future motion trail through neural network regression using the interaction feature; if it is a non-yielding vehicle, obtaining its future motion trail through neural network regression using its vehicle enhancement feature.
In one embodiment, in step 2, performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines on which they are located, to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature, further includes:
step 21, feeding the historical tracks of all vehicles in the driving scene and each lane line into sub-networks for feature extraction;
step 22, performing feature fusion using a graph neural network to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature.
In one embodiment, in step 21, feeding the historical tracks of all vehicles in the driving scene and each lane line into sub-networks for feature extraction further includes:
extracting the features of all historical tracks in the driving scene through a sub-network to obtain the vehicle features;
assuming there are R vehicles in the yielding scene, the historical track of each vehicle comprises the track points within the current time T, and each track point comprises the world coordinates x, y at its moment; thus the vehicle information is S = {S_0, …, S_i, …, S_R}, S_i = {S_i0, …, S_ij, …, S_iT}, S_ij = {x_ij, y_ij}, and the vehicle features S′ = {S′_0, …, S′_i, …, S′_R} are obtained after sub-network feature extraction;
extracting the features of each lane line through a sub-network to obtain the lane line features;
assuming there are M lane lines in the driving scene, each lane line comprises N lane line coordinate points, and each coordinate point comprises the world coordinates x, y; thus the lane line information is C = {C_0, …, C_i, …, C_M}, C_i = {C_i0, …, C_ij, …, C_iN}, C_ij = {x_ij, y_ij}, and the lane line features C′ = {C′_0, …, C′_i, …, C′_M} are obtained after sub-network feature extraction;
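The sub-network feature extraction of step 21 can be sketched as follows. The patent does not specify the sub-network architecture, so this minimal NumPy sketch assumes a PointNet-style encoder (a per-point MLP followed by max-pooling over the polyline points); the sizes R, T, M, N and all weights are illustrative, not taken from the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_encode(points, w1, b1, w2, b2):
    """Encode a polyline of (x, y) points into one feature vector:
    per-point linear + ReLU, max-pool over points, then a second layer."""
    h = np.maximum(points @ w1 + b1, 0.0)   # (num_points, hidden)
    pooled = h.max(axis=0)                  # (hidden,) permutation-invariant pool
    return pooled @ w2 + b2                 # (feat_dim,)

R, T = 3, 10          # vehicles and history steps (illustrative sizes)
M, N_pts = 4, 8       # lane lines and points per line
hidden, feat = 16, 32

# Shared illustrative weights; a real model would learn these.
w1, b1 = rng.normal(size=(2, hidden)), np.zeros(hidden)
w2, b2 = rng.normal(size=(hidden, feat)), np.zeros(feat)

tracks = rng.normal(size=(R, T, 2))       # S: world (x, y) per time step
lanes  = rng.normal(size=(M, N_pts, 2))   # C: lane-line coordinate points

S_feat = np.stack([mlp_encode(s, w1, b1, w2, b2) for s in tracks])  # S'
C_feat = np.stack([mlp_encode(c, w1, b1, w2, b2) for c in lanes])   # C'
print(S_feat.shape, C_feat.shape)  # (3, 32) (4, 32)
```

The same encoder is reused for tracks and lane lines here only for brevity; the patent describes separate sub-networks for the two inputs.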
In step 22, performing feature fusion using the graph neural network to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature further includes:
the R vehicle features and the M lane line features transfer information to each other using a graph attention network to complete the feature fusion;
each vehicle feature and each lane line feature is abstracted as a node, and all nodes are spliced to form the input of the graph attention network;
self-attention calculation is performed among the nodes, so that the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature are obtained after fusion.
In one embodiment, performing the self-attention calculation among the nodes to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature after fusion includes:
after each node's query Q computes the similarity with the keys K of the other nodes, the information needing to be enhanced is gathered from the values V of the other nodes; the feature of the i-th vehicle node after self-attention calculation is expressed as:
S″_i = S′_i + f( Σ_j s(Q_i, K_j)/√N · V_j )
wherein Q_i, K_j, V_j are obtained from the node features S′ by linear transformation, f() represents a neural network transformation, s() represents the similarity calculation, obtained by matrix multiplication, and N represents the feature dimension; the vehicle features are thus enhanced after feature fusion, forming the vehicle enhancement features of the vehicles to be predicted S″ = {S″_0, …, S″_i, …, S″_R}.
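The self-attention calculation over the spliced vehicle and lane-line nodes can be sketched as below. The scaled dot-product similarity and residual enhancement follow the description above; the softmax normalization and the weight scales are assumptions, since the patent only specifies matrix-multiplication similarity and division by the square root of the feature dimension.

```python
import numpy as np

def self_attention(nodes, wq, wk, wv):
    """Self-attention over graph nodes (vehicle + lane-line features):
    each node gathers information from all nodes weighted by similarity,
    and the result is added back to the node (residual enhancement)."""
    Q, K, V = nodes @ wq, nodes @ wk, nodes @ wv
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                     # pairwise similarity
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)                 # softmax per node (assumed)
    return nodes + w @ V                              # S'' = S' + f(...)

rng = np.random.default_rng(1)
feat = 32
nodes = rng.normal(size=(7, feat))   # e.g. 3 vehicle + 4 lane-line features
wq, wk, wv = (rng.normal(size=(feat, feat)) * 0.1 for _ in range(3))
enhanced = self_attention(nodes, wq, wk, wv)
print(enhanced.shape)  # (7, 32)
```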
In one embodiment, in step 3, performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, splicing the own-vehicle planning feature with the own-vehicle enhancement feature and performing feature compression to obtain the own-vehicle fusion feature further includes:
step 31, performing feature dimension lifting on the planned track points using a neural network in the process of extracting features from the planned track of the own vehicle;
assuming the planned track of the own vehicle comprises L track points after the current time T, and each track point comprises the world coordinates x, y, the planned track is E = {E_0, …, E_i, …, E_L}, E_i = {x_i, y_i}; the own-vehicle planning feature E′ is obtained through a neural network consisting of two MLP layers;
step 32, splicing the own-vehicle planning feature and the own-vehicle enhancement feature, and then performing feature compression to obtain the own-vehicle fusion feature.
In one embodiment, in step 4, performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature, to obtain the interaction feature of the vehicle to be predicted, further includes:
step 41, given the vehicle enhancement feature S″_i of a certain vehicle to be predicted and the own-vehicle fusion feature E′, performing similarity calculation between the two;
step 42, multiplying the own-vehicle fusion feature E′ by the similarity matrix, to extract the information to be transferred to the vehicle enhancement feature S″_i of the vehicle to be predicted;
step 43, adding said information to the vehicle enhancement feature S″_i of the vehicle to be predicted, thereby completing the feature interaction and obtaining the interaction feature S‴_i of the vehicle to be predicted.
In one embodiment, the process of feature interaction is expressed as:
S‴_i = S″_i + f( Σ_j s(S″_i, E′_j)/√N · E′_j )
wherein f() represents a neural network transformation, s() represents the similarity calculation, obtained by matrix multiplication, N represents the feature dimension, S″_i represents the vehicle enhancement feature of the i-th vehicle to be predicted, and E′_j represents the own-vehicle fusion feature in feature dimension j.
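The interaction process described above can be sketched as a cross-attention with a single key/value token. Treating the own-vehicle fusion feature as one token reduces the similarity to a single scaled dot product; the sigmoid gate squashing that scalar is an assumption, since the patent only specifies a similarity by matrix multiplication followed by a neural-network transformation.

```python
import numpy as np

def feature_interaction(s_i, e, wq, wk, wv):
    """Cross-attention sketch: the similarity between the predicted vehicle's
    enhanced feature S''_i and the ego fusion feature E' gates how much
    ego-plan information is added back (residual)."""
    q, k, v = s_i @ wq, e @ wk, e @ wv
    sim = float(q @ k) / np.sqrt(len(q))   # scaled dot-product similarity
    gate = 1.0 / (1.0 + np.exp(-sim))      # squash to (0, 1); an assumption
    return s_i + gate * v                  # S''' = S'' + f(...)

rng = np.random.default_rng(3)
feat = 32
s_i = rng.normal(size=feat)                # S''_i of one vehicle to be predicted
e   = rng.normal(size=feat)                # own-vehicle fusion feature E'
wq, wk, wv = (rng.normal(size=(feat, feat)) * 0.05 for _ in range(3))
s_int = feature_interaction(s_i, e, wq, wk, wv)
print(s_int.shape)  # (32,)
```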
In one embodiment, in step 5, if the vehicle to be predicted is a yielding vehicle, obtaining its future motion trail through neural network regression using the interaction feature, and if it is a non-yielding vehicle, obtaining its future motion trail through neural network regression using its vehicle enhancement feature, further includes:
step 51, given the vehicle enhancement feature or the interaction feature of a vehicle to be predicted, flattening it in the feature dimension to form the neural network input;
step 52, performing weighted calculation based on the neural network input to obtain a plurality of preset coordinate regression values, namely the future motion trail of the vehicle to be predicted.
In a second aspect, an embodiment of the present application further provides a device for predicting a future motion trajectory of a vehicle, where the device includes:
the information acquisition module is used for collecting information of all vehicles in a driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; if a vehicle to be predicted is in the yielding scene, it is defined as a yielding vehicle, otherwise as a non-yielding vehicle;
the vehicle enhancement feature acquisition module is used for performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines on which they are located, to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature;
the own-vehicle fusion feature acquisition module is used for performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, splicing the own-vehicle planning feature with the own-vehicle enhancement feature and performing feature compression to obtain the own-vehicle fusion feature;
the feature interaction module is used for performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature, to obtain the interaction feature of the vehicle to be predicted;
the track output module is used for obtaining the future motion trail of the vehicle to be predicted through neural network regression using the interaction feature if it is a yielding vehicle, and through neural network regression using its vehicle enhancement feature if it is a non-yielding vehicle.
In a third aspect, an embodiment of the present application further provides an unmanned apparatus, where the unmanned apparatus is provided with:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being configured to perform part or all of the steps of the method for predicting the future motion trail of a vehicle described above when the program is executed.
In a fourth aspect, embodiments of the present application further provide a computer-readable storage medium having computer-executable instructions stored therein; the computer-executable instructions, when executed by a processor, perform part or all of the steps of the method for predicting the future motion trail of a vehicle described above.
The beneficial technical effects of the application are as follows:
the method for predicting the future movement track of the vehicle models the vehicle information and the lane line information of all vehicles in the yielding scene, introduces the own vehicle planning track, divides the vehicles to be predicted into yielding vehicles and non-yielding vehicles, and performs information mining in different modes, so that the track prediction of the vehicles to be predicted is completed under the combined action of the history track information, the lane line information and the planning track information of the vehicles to be predicted. Compared with other schemes for completing prediction at the lane line/track level, the scheme has the advantages that the planned track of the own vehicle is introduced, the influence of the planned track on the vehicle to be predicted is modeled, the advanced interaction between the own vehicle and other vehicles in the yielding scene is completed, the prediction precision of the future track of the vehicle to be predicted is improved, and therefore the track prediction scheme is well applied to the track prediction of the vehicle to be predicted in the yielding scene. In addition, the method and the device repeatedly utilize the enhanced characteristic information of the vehicle to be predicted, avoid the consumption of calculation resources caused by repeated calculation, have advantages in feasibility, prediction effect and calculation power consumption, can be effectively applied to the situation of yielding driving, and have practical use significance for future motion trail prediction of the vehicle.
It can be appreciated that the future motion trail prediction device for a vehicle, the unmanned equipment and the computer-readable storage medium provided in the second to fourth aspects also have the above-mentioned beneficial effects, which are not repeated here.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to the structures shown in these drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flowchart of a method for predicting a future motion trajectory of a vehicle according to an embodiment of the present application;
FIG. 2 is a flow chart of acquiring vehicle enhancement features of all vehicles in a driving scenario in accordance with a first embodiment of the present application;
FIG. 3 is a flowchart of feature extraction for all vehicles in a driving scene and lane lines thereof according to a first embodiment of the present application;
FIG. 4 is a flowchart of acquiring a vehicle fusion feature according to a first embodiment of the present application;
FIG. 5 is a flowchart of acquiring interactive features of a vehicle to be predicted according to a first embodiment of the present application;
FIG. 6 is a flow chart of information transfer between a planned trajectory of a host vehicle and a vehicle to be predicted according to an embodiment of the present application;
FIG. 7 is a flowchart of outputting a future motion trajectory of a vehicle to be predicted according to a first embodiment of the present application;
FIG. 8 is a block diagram illustrating a future motion trajectory prediction apparatus for a vehicle according to a second embodiment of the present application;
fig. 9 is a partial block diagram of the unmanned equipment in the third embodiment of the present application.
The achievement of the objects, functional features and advantages of the present application will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are only some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
It should be noted that all directional indicators (such as up, down, left, right, front, and rear … …) in the embodiments of the present application are merely used to explain the relative positional relationship, movement, etc. between the components in a particular posture (as shown in the drawings), and if the particular posture is changed, the directional indicator is changed accordingly.
In addition, the technical solutions of the embodiments of the present application may be combined with each other, but it is necessary to be based on the fact that those skilled in the art can implement the technical solutions, and when the technical solutions are contradictory or cannot be implemented, the combination of the technical solutions should be considered as not existing, and not falling within the scope of protection claimed by the present application.
In a yielding scene, modeling the relationship between the unmanned vehicle and the vehicle to be predicted when predicting the vehicle track has important practical significance, and the existing methods cannot adapt well to this complexity. The application therefore develops a scheme for predicting the future track of the vehicle to be predicted in the yielding scene: the planned track of the own vehicle is introduced, its influence on the vehicle to be predicted is modeled, the advance interaction between the own vehicle and the other vehicles in the yielding scene is completed, and the prediction precision of the future track of the vehicle to be predicted is improved, so that the track prediction scheme is perfected and better applied to track prediction in yielding scenes.
Example 1
In order to cope with the uncertainty in predicting the future motion trajectory of a vehicle to be predicted caused by the interaction of the own vehicle and other vehicles in a driving scene, and to ensure the accuracy of the prediction result, as shown in fig. 1, the embodiment of the application provides a method for predicting the future motion trajectory of a vehicle. The method is mainly applied to unmanned equipment and specifically comprises the following steps 1 to 5:
step 1, collecting information of all vehicles in the driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; if a vehicle to be predicted is in a yielding scene, it is defined as a yielding vehicle, otherwise as a non-yielding vehicle.
In this step, for each target vehicle to be predicted, its historical track coordinates up to the current prediction time T are obtained, and the target vehicle is classified according to whether it is in a yielding scene: if so, it is classified as a yielding vehicle, otherwise as a non-yielding vehicle. Thus, all vehicles in the driving scenario (i.e. the vehicles in the road environment that need to be modeled) can be classified into the own vehicle, yielding vehicles, and non-yielding vehicles.
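For illustration, the selection and classification in this step can be sketched as follows. The data layout, the `in_yield_scene` flag, and the distance threshold are assumptions for the sketch, not specifics from the patent:

```python
import math

def classify_vehicles(vehicles, ego_pos, max_dist):
    """Split surrounding vehicles into yielding / non-yielding prediction targets.

    vehicles: list of dicts with 'pos' (x, y) and 'in_yield_scene' (bool) keys.
    ego_pos:  (x, y) world coordinates of the own vehicle.
    max_dist: preset distance range for selecting vehicles to be predicted.
    """
    yielding, non_yielding = [], []
    for v in vehicles:
        dx = v["pos"][0] - ego_pos[0]
        dy = v["pos"][1] - ego_pos[1]
        if math.hypot(dx, dy) > max_dist:
            continue  # outside the preset range: not a prediction target
        (yielding if v["in_yield_scene"] else non_yielding).append(v)
    return yielding, non_yielding
```

Vehicles outside the preset range are simply skipped; the two returned lists then drive the branch taken later in step 5.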
And step 2, performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines where they are located, to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
In this step, the feature extraction is done by a neural network. As shown in fig. 2 to 3, this step further includes the steps of:
step 21, feeding the historical tracks of all vehicles in the driving scene and each lane line into a sub-network for feature extraction;
and step 22, performing feature fusion with a graph neural network to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
The sub-network feature extraction covers each lane line and the historical tracks of all vehicles in the driving scene, producing vehicle features and lane line features.
More specifically, feeding the historical tracks of all vehicles in the driving scene and each lane line into the sub-network for feature extraction further includes:
extracting features from the historical tracks of all vehicles in the driving scene through the sub-network to obtain vehicle features;
assuming there are R vehicles in the yielding scene, the historical track of each vehicle comprises track points within the current prediction time T, and each track point carries the world coordinates x, y at its time. The vehicle information is therefore S = {S_0, …, S_i, …, S_R}, with S_i = {S_i0, …, S_ij, …, S_iT} and S_ij = {x_ij, y_ij}. After sub-network feature extraction, the vehicle features S′ = {S′_0, …, S′_i, …, S′_R} are obtained; there are R vehicle features, corresponding to the R vehicles.
Extracting features from each lane line through the sub-network to obtain lane line features;
assuming there are M lane lines in the traffic scene, each lane line comprises N lane line coordinate points, and each coordinate point carries the world coordinates x, y. The lane line information is therefore C = {C_0, …, C_i, …, C_M}, with C_i = {C_i0, …, C_ij, …, C_iN} and C_ij = {x_ij, y_ij}. After sub-network feature extraction, the lane line features C′ = {C′_0, …, C′_i, …, C′_M} are obtained; there are M lane line features, corresponding to the M lane lines.
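The sub-network encoding of vehicle tracks and lane polylines can be sketched with a small shared point-wise MLP followed by pooling. The layer sizes, the ReLU activation, and the mean-pooling are illustrative assumptions; the patent does not fix the sub-network architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(d_in, d_hidden, d_out):
    # Random weights stand in for trained parameters in this sketch.
    return (rng.normal(size=(d_in, d_hidden)) * 0.1,
            rng.normal(size=(d_hidden, d_out)) * 0.1)

def encode_polyline(points, params):
    """Encode a polyline of (x, y) points into a single feature vector.

    points: (P, 2) array of world coordinates (track points or lane points).
    Each point is lifted point-wise by a two-layer MLP, then pooled over P.
    """
    w1, w2 = params
    h = np.maximum(points @ w1, 0.0)  # point-wise lift with ReLU
    return (h @ w2).mean(axis=0)      # pool over points -> (d_out,)

# R vehicle tracks and M lane lines share the same encoding scheme here.
params = mlp_params(2, 64, 128)
tracks = [rng.normal(size=(20, 2)) for _ in range(3)]  # R = 3 vehicles, T = 20 points
lanes = [rng.normal(size=(10, 2)) for _ in range(5)]   # M = 5 lanes, N = 10 points
S_feat = np.stack([encode_polyline(t, params) for t in tracks])  # (R, 128)
C_feat = np.stack([encode_polyline(l, params) for l in lanes])   # (M, 128)
```

The resulting R vehicle features and M lane line features are exactly the node inputs that the fusion stage below consumes.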
In fig. 3, the arrowed lines on the left represent the historical tracks of all vehicles in the driving scene and each lane line, the light rectangular blocks in the middle represent the extracted vehicle features and lane line features, and the dark rectangular blocks on the right represent the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
More specifically, performing feature fusion with the graph neural network to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature further includes:
the R vehicle features and the M lane line features exchange information through a graph attention network to complete feature fusion;
first, each vehicle feature and each lane line feature is abstracted into a node, and all nodes are concatenated to form the input of the graph attention network;
then, self-attention is computed among the nodes to obtain the fused vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
More specifically, in the self-attention calculation, each node Q obtains the information to be reinforced from the other nodes V after performing a similarity calculation with the other nodes K. The feature of the i-th vehicle node to be predicted after self-attention is expressed as:

S″_i = f( Σ_j ( s(Q_i, K_j) / √N ) · V_j )

where Q_i, K_j, and V_j are obtained from the node features S′ by linear transformation, f() denotes a neural-network transformation, s() denotes the similarity calculation (implemented as matrix multiplication), and N is the feature dimension. The vehicle features are thus reinforced by the feature fusion, forming the vehicle enhancement features S″ = {S″_0, …, S″_i, …, S″_R} of the vehicles to be predicted. The dimension of each vehicle enhancement feature and of the own-vehicle enhancement feature is 128.
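A minimal numeric sketch of the self-attention fusion over the R + M nodes follows. Random matrices stand in for the learned linear maps, and scaled dot-product similarity with a row-wise softmax is an assumed concrete form of the similarity calculation s(); the patent only states that s() is obtained by matrix multiplication:

```python
import numpy as np

rng = np.random.default_rng(1)

def self_attention(nodes, wq, wk, wv):
    """nodes: (R + M, N) stacked vehicle and lane-line features."""
    q, k, v = nodes @ wq, nodes @ wk, nodes @ wv
    n = nodes.shape[1]
    sim = q @ k.T / np.sqrt(n)                      # s(Q_i, K_j) / sqrt(N)
    attn = np.exp(sim - sim.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)         # row-wise softmax weights
    return attn @ v                                 # enhanced node features

n_dim = 128
nodes = rng.normal(size=(8, n_dim))                 # e.g. R = 3 vehicles + M = 5 lanes
wq, wk, wv = (rng.normal(size=(n_dim, n_dim)) * 0.05 for _ in range(3))
enhanced = self_attention(nodes, wq, wk, wv)        # (8, 128) enhanced features
```

Each row of `enhanced` is one node's feature after gathering information from every other node, matching the role of the vehicle enhancement features and the own-vehicle enhancement feature described above.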
And step 3, extracting features from the planned track of the own vehicle to obtain the own-vehicle planning feature, then splicing the own-vehicle planning feature with the own-vehicle enhancement feature and compressing the result to obtain the own-vehicle fusion feature.
As shown in fig. 4, this step further includes:
step 31, performing feature dimension lifting on the planned track points with a neural network during feature extraction of the own vehicle's planned track; this feature extraction mainly concerns the track the own vehicle is about to drive.
Assuming the planned track of the own vehicle comprises L track points after the current time T, each carrying the world coordinates x, y, the planned track is E = {E_0, …, E_i, …, E_L} with E_i = {x_i, y_i}. Passing E through a neural network consisting of two MLP (fully connected) layers yields the own-vehicle planning feature E′.
And step 32, splicing the own-vehicle planning feature E′ with the own-vehicle enhancement feature and performing feature compression to obtain the own-vehicle fusion feature, whose dimension is 128.
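Steps 31 and 32 can be sketched as below. The two-layer MLP mirrors the description; the hidden size, the pooling over the L points, and the concatenate-then-compress layer shape are assumptions for the sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

def plan_fusion(plan_xy, ego_enh):
    """plan_xy: (L, 2) planned track points of the own vehicle.
    ego_enh: (128,) own-vehicle enhancement feature.
    Returns the (128,) own-vehicle fusion feature."""
    w1 = rng.normal(size=(2, 64)) * 0.1
    w2 = rng.normal(size=(64, 128)) * 0.1
    plan_feat = np.maximum(plan_xy @ w1, 0.0) @ w2   # two-layer MLP, point-wise lift
    plan_feat = plan_feat.mean(axis=0)               # pool L points -> (128,) feature E'
    concat = np.concatenate([plan_feat, ego_enh])    # splice -> (256,)
    w3 = rng.normal(size=(256, 128)) * 0.1           # feature compression back to 128
    return concat @ w3

fused = plan_fusion(rng.normal(size=(30, 2)), rng.normal(size=128))
```

The compression layer keeps the fused feature at the same 128 dimensions as the enhancement features, so the interaction step can compare them directly.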
And step 4, performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature to obtain the interaction feature of the vehicle to be predicted.
As shown in fig. 5, this step further includes:
step 41, given the vehicle enhancement feature S″_i of a certain vehicle to be predicted and the own-vehicle fusion feature E′, compute the similarity between the two;
step 42, multiply the own-vehicle fusion feature E′ by the similarity matrix to extract the information that can be passed to the vehicle enhancement feature S″_i of the vehicle to be predicted;
step 43, add this information to the vehicle enhancement feature S″_i of the vehicle to be predicted, completing the feature interaction and obtaining the interaction feature S″_ii of the vehicle to be predicted.
With S″_ii being a 128-dimensional feature, the feature-interaction process is expressed as:

S″_ii = S″_i + f( Σ_j ( s(S″_i, E′_j) / √N ) · E′_j )

where f() denotes a neural-network transformation, s() denotes the similarity calculation (obtained by matrix multiplication), N denotes the feature dimension, S″_i denotes the vehicle enhancement feature of the i-th vehicle to be predicted, and E′_j denotes the own-vehicle fusion feature at feature dimension j.
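Steps 41 to 43 can be sketched as a single similarity-weighted update. This simplified variant uses one scalar dot-product similarity between the two vectors; the exact similarity form is an assumption, since the patent only specifies similarity via matrix multiplication:

```python
import numpy as np

def interact(vehicle_enh, ego_fused):
    """Step 41: similarity between the enhancement feature and the fusion feature;
    step 42: weight the fusion feature by it to extract passable information;
    step 43: add the extracted information back onto the enhancement feature."""
    n = vehicle_enh.shape[0]
    sim = float(vehicle_enh @ ego_fused) / np.sqrt(n)  # matrix-multiplication similarity
    info = sim * ego_fused                             # information passed to the vehicle
    return vehicle_enh + info                          # interaction feature

rng = np.random.default_rng(3)
s_i = rng.normal(size=128)   # vehicle enhancement feature of one vehicle to be predicted
e = rng.normal(size=128)     # own-vehicle fusion feature
s_ii = interact(s_i, e)      # (128,) interaction feature
```

The residual-style addition in step 43 means a vehicle far from the own vehicle's plan (low similarity) keeps its feature nearly unchanged, while a strongly affected vehicle absorbs more of the plan information.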
FIG. 6 is a flow chart of the feature interaction of a vehicle to be predicted, in which the light rectangular block on the left represents the vehicle information and the dark rectangular block on the left represents the vehicle enhancement feature of the vehicle to be predicted; the dark rectangular block on the right represents the interaction feature of the vehicle to be predicted obtained after the feature interaction.
Step 5, if the vehicle is a yielding vehicle, the future motion trajectory of the vehicle to be predicted is obtained by neural-network regression from its interaction feature; if it is a non-yielding vehicle, the future motion trajectory is obtained by neural-network regression from its vehicle enhancement feature. Yielding and non-yielding vehicles share the same regression network.
As shown in fig. 7, this step further includes:
step 51, given the vehicle enhancement feature or interaction feature of a vehicle to be predicted, flatten it along the feature dimension to form the neural network input;
and step 52, perform a weighted calculation on the neural network input to obtain a preset number of coordinate regression values, i.e. the future motion trajectory of the vehicle to be predicted.
The number of output nodes of the neural network is twice the number of preset regression coordinates; every two output nodes correspond to the X and Y coordinate values of one regression coordinate.
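The regression head in steps 51 and 52 can be sketched as a single weighted (linear) layer whose output size is twice the number of predicted coordinates, with each (X, Y) pair read off from two adjacent output nodes. The layer shapes and the number of future points are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)

def regress_trajectory(feature, n_points=30):
    """feature: flattened enhancement or interaction feature of one vehicle.
    Returns an (n_points, 2) array of future (X, Y) coordinates."""
    w = rng.normal(size=(feature.shape[0], 2 * n_points)) * 0.05
    b = np.zeros(2 * n_points)
    out = feature @ w + b              # weighted calculation on the flattened input
    return out.reshape(n_points, 2)   # adjacent node pairs -> (X, Y) per future step

traj = regress_trajectory(rng.normal(size=128))  # (30, 2) predicted trajectory
```

Because yielding and non-yielding vehicles share this regression network, only the input feature differs between the two branches of step 5.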
In one embodiment, the method for predicting a future motion trajectory of a vehicle further includes:
and step 6, transmitting the future motion trail of the vehicle to be predicted. The sending of the future motion trail of the vehicle to be predicted refers to the subsequent process of applying the obtained future motion trail of the vehicle to be predicted to the self-driving business.
In summary, step 1 selects the vehicles to be predicted, so that at each moment a number of target vehicles to be predicted are formed. These target vehicles enter the trajectory-generation flow in turn, producing their future motion trajectories. The generated trajectories serve as positions to be avoided in subsequent own-vehicle path planning, giving the own vehicle predictive capability and improving the rationality of path planning.
Example two
As shown in fig. 8, corresponding to the method for predicting a future motion trajectory of a vehicle described in the above embodiments, the embodiment of the present application further provides a device for predicting a future motion trajectory of a vehicle, where the device includes an information acquisition module 201, a vehicle enhancement feature acquisition module 202, an own vehicle fusion feature acquisition module 203, a feature interaction module 204, and a trajectory output module 205; wherein,
the information acquisition module 201 is used for acquiring information of all vehicles in a driving scene and selecting vehicles within a preset distance range as vehicles to be predicted; if the vehicle to be predicted is in the yielding scene, defining as yielding vehicles, otherwise defining as non-yielding vehicles;
the vehicle enhancement feature obtaining module 202 is configured to perform feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines where they are located, to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature;
the own-vehicle fusion feature acquisition module 203 is configured to perform feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and to splice the own-vehicle planning feature with the own-vehicle enhancement feature and compress the result to obtain the own-vehicle fusion feature;
the feature interaction module 204 is configured to perform feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature, to obtain the interaction feature of the vehicle to be predicted;
the track output module 205 is configured to obtain the future motion trajectory of the vehicle to be predicted by neural-network regression from the interaction feature if the vehicle is a yielding vehicle, and from the vehicle enhancement feature of the vehicle to be predicted if it is a non-yielding vehicle.
In one embodiment, the device for predicting the future motion trajectory of a vehicle further includes a trajectory sending module (not shown) configured to send the future motion trajectory of the vehicle to be predicted for use in the subsequent processes of the autonomous-driving service.
It should be noted that, because the content of information interaction and execution process between the above devices/modules is based on the same concept as the method embodiment of the present application, specific functions and technical effects thereof may be referred to in the method embodiment section, and will not be described herein.
Example III
As shown in fig. 9, an unmanned apparatus disclosed in this embodiment may be an automatic or semi-automatic unmanned apparatus such as an unmanned vehicle, a mobile robot, or the like, on which a transmitter 302, a receiver 301, a memory 304, and a processor 303 are disposed. The transmitter 302 is configured to transmit instructions and data, the receiver 301 is configured to receive instructions and data, the memory 304 is configured to store computer-executable instructions, and the processor 303 is configured to execute the computer-executable instructions stored in the memory 304, so as to implement some or all of the steps performed by the future motion trail prediction method of the vehicle. The specific implementation process is the same as the method for predicting the future motion trail of the vehicle.
It should be noted that the memory 304 may be separate or integrated with the processor 303. When the memory is provided separately, the unmanned device further comprises a bus for connecting the memory 304 and the processor 303.
Example IV
The embodiment also discloses a computer readable storage medium, wherein computer executable instructions are stored in the computer readable storage medium, and when the processor executes the computer executable instructions, part or all of the steps executed by the future motion trail prediction method of the vehicle are realized.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the scope of the application, and all equivalent structural changes made by the description of the present application and the accompanying drawings or direct/indirect application in other related technical fields are included in the scope of the application.
Claims (10)
1. A method for predicting a future motion profile of a vehicle, the method comprising the steps of:
step 1, collecting information of all vehicles in a driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; if a vehicle to be predicted is in a yielding scene, defining it as a yielding vehicle, otherwise as a non-yielding vehicle;
step 2, performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines where they are located, to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature;
step 3, performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and splicing the own-vehicle planning feature with the own-vehicle enhancement feature and compressing the result to obtain the own-vehicle fusion feature;
step 4, performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature to obtain the interaction feature of the vehicle to be predicted;
step 5, if the vehicle is a yielding vehicle, obtaining the future motion trajectory of the vehicle to be predicted by neural-network regression from the interaction feature; if it is a non-yielding vehicle, obtaining the future motion trajectory of the vehicle to be predicted by neural-network regression from its vehicle enhancement feature.
2. The method for predicting the future motion trajectory of a vehicle according to claim 1, wherein in step 2, performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines where they are located, to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature, further comprises:
step 21, feeding the historical tracks of all vehicles in the driving scene and each lane line into a sub-network for feature extraction;
and step 22, performing feature fusion with a graph neural network to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
3. The method for predicting the future motion trajectory of a vehicle according to claim 2, wherein in step 21, feeding the historical tracks of all vehicles in the driving scene and each lane line into the sub-network for feature extraction further comprises:
extracting features from the historical tracks of all vehicles in the driving scene through the sub-network to obtain vehicle features;
assuming there are R vehicles in the yielding scene, the historical track of each vehicle comprises track points within the current prediction time T, and each track point carries the world coordinates x, y, so that the vehicle information is S = {S_0, …, S_i, …, S_R}, with S_i = {S_i0, …, S_ij, …, S_iT} and S_ij = {x_ij, y_ij}; after sub-network feature extraction, the vehicle features S′ = {S′_0, …, S′_i, …, S′_R} are obtained;
extracting features from each lane line through the sub-network to obtain lane line features;
assuming there are M lane lines in the traffic scene, each lane line comprises N lane line coordinate points, and each coordinate point carries the world coordinates x, y, so that the lane line information is C = {C_0, …, C_i, …, C_M}, with C_i = {C_i0, …, C_ij, …, C_iN} and C_ij = {x_ij, y_ij}; after sub-network feature extraction, the lane line features C′ = {C′_0, …, C′_i, …, C′_M} are obtained;
in step 22, performing feature fusion with the graph neural network to obtain the vehicle enhancement feature of each vehicle to be predicted and the own-vehicle enhancement feature further comprises:
the R vehicle features and the M lane line features exchanging information through a graph attention network to complete feature fusion;
abstracting each vehicle feature and each lane line feature into a node, and concatenating all nodes to form the input of the graph attention network;
and computing self-attention among the nodes to obtain the fused vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature.
4. The method for predicting the future motion trajectory of a vehicle according to claim 3, wherein computing self-attention among the nodes to obtain the fused vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature comprises:
each node Q obtaining the information to be reinforced from the other nodes V after a similarity calculation with the other nodes K; the feature of the i-th vehicle node to be predicted after self-attention being expressed as:

S″_i = f( Σ_j ( s(Q_i, K_j) / √N ) · V_j )

where Q_i, K_j, and V_j are obtained from the node features S′ by linear transformation, f() denotes a neural-network transformation, s() denotes the similarity calculation (obtained by matrix multiplication), and N denotes the feature dimension; the vehicle features are thus reinforced by the feature fusion, forming the vehicle enhancement features S″ = {S″_0, …, S″_i, …, S″_R} of the vehicles to be predicted.
5. The method for predicting the future motion trajectory of a vehicle according to claim 4, wherein in step 3, performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and splicing the own-vehicle planning feature with the own-vehicle enhancement feature and performing feature compression to obtain the own-vehicle fusion feature further comprises:
step 31, performing feature dimension lifting on the planned track points with a neural network during feature extraction of the own vehicle's planned track;
assuming the planned track of the own vehicle comprises L track points after the current time T, each carrying the world coordinates x, y, so that the planned track is E = {E_0, …, E_i, …, E_L} with E_i = {x_i, y_i}; passing E through a neural network consisting of two MLP layers yields the own-vehicle planning feature E′;
and step 32, splicing the own-vehicle planning feature with the own-vehicle enhancement feature and then performing feature compression to obtain the own-vehicle fusion feature.
6. The method for predicting the future motion trajectory of a vehicle according to claim 5, wherein in step 4, performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature to obtain the interaction feature of the vehicle to be predicted further comprises:
step 41, given the vehicle enhancement feature S″_i of a certain vehicle to be predicted and the own-vehicle fusion feature E′, computing the similarity between the two;
step 42, multiplying the own-vehicle fusion feature E′ by the similarity matrix to extract the information to be passed to the vehicle enhancement feature S″_i of the vehicle to be predicted;
step 43, adding this information to the vehicle enhancement feature S″_i of the vehicle to be predicted, completing the feature interaction and obtaining the interaction feature S″_ii of the vehicle to be predicted.
7. The method for predicting the future motion trajectory of a vehicle according to claim 6, wherein the feature-interaction process is expressed as:

S″_ii = S″_i + f( Σ_j ( s(S″_i, E′_j) / √N ) · E′_j )

where f() denotes a neural-network transformation, s() denotes the similarity calculation (obtained by matrix multiplication), N denotes the feature dimension, S″_i denotes the vehicle enhancement feature of the i-th vehicle to be predicted, and E′_j denotes the own-vehicle fusion feature at feature dimension j.
8. The method for predicting the future motion trajectory of a vehicle according to claim 7, wherein in step 5, obtaining the future motion trajectory of the vehicle to be predicted by neural-network regression from the interaction feature if the vehicle is a yielding vehicle, and from the vehicle enhancement feature of the vehicle to be predicted if it is a non-yielding vehicle, further comprises:
step 51, given the vehicle enhancement feature or interaction feature of a vehicle to be predicted, flattening it along the feature dimension to form the neural network input;
and step 52, performing a weighted calculation on the neural network input to obtain a preset number of coordinate regression values, i.e. the future motion trajectory of the vehicle to be predicted.
9. A future motion trajectory prediction apparatus of a vehicle, the apparatus comprising:
the information acquisition module is used for collecting information of all vehicles in a driving scene and selecting the vehicles within a preset distance range as vehicles to be predicted; if a vehicle to be predicted is in a yielding scene, defining it as a yielding vehicle, otherwise as a non-yielding vehicle;
the vehicle enhancement feature acquisition module is used for performing feature extraction and feature fusion on the historical tracks of all vehicles in the driving scene and the lane lines where they are located, to obtain the vehicle enhancement features of the vehicles to be predicted and the own-vehicle enhancement feature;
the own-vehicle fusion feature acquisition module is used for performing feature extraction on the planned track of the own vehicle to obtain the own-vehicle planning feature, and splicing the own-vehicle planning feature with the own-vehicle enhancement feature and compressing the result to obtain the own-vehicle fusion feature;
the feature interaction module is used for performing feature interaction between the vehicle enhancement feature of the vehicle to be predicted and the own-vehicle fusion feature, to obtain the interaction feature of the vehicle to be predicted;
the track output module is used for obtaining the future motion trajectory of the vehicle to be predicted by neural-network regression from the interaction feature if the vehicle is a yielding vehicle, and from the vehicle enhancement feature of the vehicle to be predicted if it is a non-yielding vehicle.
10. An unmanned device, characterized in that the unmanned device is provided with:
a memory for storing a program;
a processor for executing the program stored in the memory, the processor being configured, when the program is executed, to perform some or all of the steps of the method for predicting the future motion trajectory of a vehicle according to any one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310972110.3A CN117002530A (en) | 2023-08-03 | 2023-08-03 | Method and device for predicting future motion trail of vehicle and unmanned equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117002530A true CN117002530A (en) | 2023-11-07 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117576950A (en) * | 2024-01-16 | 2024-02-20 | 长沙行深智能科技有限公司 | Method and device for predicting vehicle to select crossing entrance and crossing exit |
CN117576950B (en) * | 2024-01-16 | 2024-04-09 | 长沙行深智能科技有限公司 | Method and device for predicting vehicle to select crossing entrance and crossing exit |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |