CN113033899A - Unmanned adjacent vehicle track prediction method - Google Patents

Unmanned adjacent vehicle track prediction method

Info

Publication number
CN113033899A
CN113033899A (application CN202110331661.2A)
Authority
CN
China
Prior art keywords
vehicle
lstm
unmanned
data
behavior
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110331661.2A
Other languages
Chinese (zh)
Other versions
CN113033899B (en)
Inventor
程久军
毛其超
原桂远
魏超
周爱国
Current Assignee (The listed assignees may be inaccurate.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion.)
Filing date
Publication date
Application filed by Tongji University filed Critical Tongji University
Priority to CN202110331661.2A priority Critical patent/CN113033899B/en
Publication of CN113033899A publication Critical patent/CN113033899A/en
Application granted granted Critical
Publication of CN113033899B publication Critical patent/CN113033899B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G06F18/211 Selection of the most significant subset of features
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G06F18/2321 Non-hierarchical clustering techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/2415 Classification techniques based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • Y02T10/40 Engine management systems

Abstract

The invention relates to a method for predicting the trajectories of vehicles neighboring an unmanned vehicle, in the field of unmanned driving. First, the set of vehicles around an unmanned neighboring vehicle and the road condition information are extracted from video data and point cloud data by the LK-DBSCAN (Limit-K DBSCAN) algorithm, and influential latent features are constructed through feature engineering, enhancing the ability of the data features to express complex road conditions. Then, the real-time behavior of the vehicle is predicted with a long short-term memory neural network (LSTM). Finally, the vehicle trajectory is predicted by B-LSTM (Behavior-based LSTM), combining the vehicle behavior predicted by the LSTM with the vehicle's historical behavior data. The method improves the accuracy of behavior prediction for the unmanned vehicle and reduces trajectory prediction error, thereby improving the accuracy and efficiency of the unmanned vehicle's operational decision-making.

Description

Unmanned adjacent vehicle track prediction method
Technical Field
The present invention relates to the field of unmanned driving.
Background
The prediction function of an unmanned vehicle is to predict the state of neighboring vehicles at future times based on the sensed surrounding data, which is of great practical significance for avoiding potential safety risks while the unmanned vehicle drives on the road. At present, vehicle trajectory prediction methods fall mainly into two categories: methods based on dynamic models and methods based on deep learning. Specifically:
(1) Trajectory prediction methods based on dynamic models: these rest primarily on Newton's first law, considering only the inertia of the object and the applied forces. Most researchers build kinematics-related models to approximately predict the vehicle's trajectory over a future time period. The approach achieves satisfactory precision over a short time horizon, but it ignores interaction between vehicles, so model prediction accuracy is low in environments with many interfering factors.
(2) Trajectory prediction methods based on deep learning: driven by the growth in computing power and the availability of large-scale labeled samples (e.g., KITTI and H3D), machine learning and deep learning have been studied extensively owing to their fast, scalable learning frameworks. These methods are data-driven: the extracted vehicle features and the vehicle's historical trajectory data serve as model input, and the predicted trajectory data are output after training and fitting on large-scale data. The approach has high accuracy and strong extensibility, particularly with the LSTM network as the prediction model. However, such models use only raw data, which makes the features of the unmanned vehicle's surroundings difficult to characterize; they reflect only the real-time behavior of surrounding vehicles and do not consider future information, leading to large prediction errors.
The data sources of both trajectory prediction methods depend mainly on the unmanned vehicle's video and point clouds. Video data provide only time-varying color information about surrounding vehicles and no relative depth information; point cloud data capture the depth of the surroundings relative to the unmanned vehicle through laser reflection, but lack color information, making it difficult to accurately infer the environment around the unmanned vehicle.
Disclosure of Invention
The purpose of the invention is as follows:
although the prediction of the trajectory of the unmanned vehicle has received extensive attention from academia and achieved considerable results, the following problems still exist:
(1) the raw data is insufficient to characterize the environment surrounding the unmanned vehicle;
(2) interaction among vehicles is ignored, so that the model prediction accuracy is low in an environment with a plurality of interference factors;
(3) the data source is single and the reliability is lacked.
Therefore, the invention specifically provides the following technical scheme:
the invention provides a prediction method of the track of an unmanned adjacent vehicle. Firstly, extracting a vehicle set and road condition information around an unmanned adjacent vehicle from video data and point cloud data through an LK-DBSCAN (Limit-K DBSCAN) algorithm, constructing potential characteristics with influence by adopting characteristic engineering, and enhancing the expression capacity of data characteristics on complex road conditions; then, predicting the real-time behavior of the vehicle by using a long-term and short-term memory neural network (LSTM); and finally, predicting the track of the vehicle through B-LSTM (Behavior-based LSTM) by combining the predicted vehicle Behavior of the LSTM with the historical Behavior data of the vehicle. The method solves the problems, can improve the accuracy of behavior prediction of the unmanned vehicle, and reduces errors of trajectory prediction, thereby providing accuracy and efficiency for decision-making of the operation behavior of the unmanned vehicle.
The method for predicting the track of the unmanned adjacent vehicle specifically comprises the following steps:
step 1. correlation definition
Step2, preprocessing data of unmanned adjacent vehicles
Step 2.1 neighboring vehicle data population and Filtering
Step 2.2 vehicle feature extraction
Step 2.3 neighboring vehicle set extraction
Step3, unmanned adjacent vehicle behavior prediction algorithm
Step4, unmanned adjacent vehicle track prediction algorithm
Advantageous effects
Considering that the raw data in current unmanned-driving environments are coarse and insufficient to characterize the environment around the unmanned vehicle, the invention aims to disclose a method that improves the trajectory prediction accuracy for unmanned neighboring vehicles by constructing the set of influencing vehicles around a neighboring vehicle, extracting higher-dimensional feature information, and providing a behavior prediction algorithm for unmanned neighboring vehicles, making application under complex environments possible.
Description of the attached tables
TABLE 1 description of symbols in the present invention
TABLE 2 Experimental parameters
TABLE 3 H3D data set
Drawings
FIG. 1 is a schematic view of an unmanned neighboring vehicle and its perception range
FIG. 2 is a detailed flow chart of a vehicle set extraction algorithm for neighboring vehicles
FIG. 3 is a block diagram of a neighboring vehicle behavior prediction architecture
FIG. 4 flow chart of an unmanned neighboring vehicle behavior prediction algorithm
FIG. 5 architecture diagram of a model network for predicting unmanned neighboring vehicle trajectories
FIG. 6 flow chart of unmanned neighboring vehicle trajectory prediction algorithm
FIG. 7 accuracy of each classification algorithm and LSTM method on different input data
FIG. 8 kappa coefficient for each classification algorithm and LSTM method on different input data
FIG. 9 HAM distances between classification algorithms and LSTM method on different input data
FIG. 10 Recall, F1 and Limit-recycle of each classification algorithm and the LSTM method on different input data
FIG. 11 Mean square error distribution of B-LSTM and LSTM over time in different directions
FIG. 12 Mean absolute error distribution of B-LSTM and LSTM over time in different directions
FIG. 13 R2 coefficient-of-determination distribution of B-LSTM and LSTM over time in different directions
FIG. 14 Trajectory fitting curves of B-LSTM and LSTM over time in the X-direction of a single sample
FIG. 15 Trajectory fitting curves of B-LSTM and LSTM over time in the Y-direction of a single sample
FIG. 16 is a flow chart of the method of the present invention
Detailed Description
The specific implementation process of the invention is shown in fig. 16, and comprises the following 5 aspects:
I. Correlation definitions
II. Unmanned neighboring vehicle data preprocessing
III. Unmanned neighboring vehicle behavior prediction algorithm
IV. Unmanned neighboring vehicle trajectory prediction algorithm
V. Simulation experiment verification
Correlation definition
The main symbols required in constructing the prediction method for unmanned neighboring vehicle trajectories are shown in Table 1.
In order to research the construction method of the prediction method of the unmanned adjacent vehicle track, the relevant definitions are as follows:
Definition 1: Neighboring-vehicle attribute set AttriV (Attributes of Vehicles): the attributes of a neighboring vehicle comprise inherent attributes such as size and type, and dynamic attributes such as speed, acceleration and the lane occupied. AttriV_i(t) is the set of attributes of neighboring vehicle i at time t, as shown in formula (1):

AttriV_i(t) = {S_i(t), A_i(t), P_i(t), Movedir_i(t), TY_i, L_i, W_i}    (1)

where S_i(t), A_i(t), P_i(t) and Movedir_i(t) represent the speed, acceleration, position and direction of travel of neighboring vehicle i at time t, respectively; TY_i represents the specific type of the neighboring vehicle: in real life, small and medium vehicles such as cars are agile and tend to change lanes more readily, so they influence surrounding vehicles more, while heavier vehicles keep straight more often, which is why vehicle type is included among the attributes; L_i and W_i represent the length and width of the vehicle, respectively.
Definition 2: Inherent attribute set AttriE of the environment around a neighboring vehicle (Attributes of Environment): AttriE_i(t) represents the set of inherent attributes of the surroundings of neighboring vehicle i at time t, as in equation (2):

AttriE_i(t) = {Lane_i(t), CL_allowable}    (2)

where Lane_i(t) represents the lane occupied by i, with lanes numbered from left to right and converted into one-hot codes; CL_allowable indicates whether lane changing is permitted under the traffic rules, a value of 1 meaning that a lane change is possible, and vice versa.
Definition 3: The total attribute set fv_i(t) of a neighboring vehicle is defined as follows:

fv_i(t) = AttriV_i(t) ∪ AttriE_i(t)    (3)

i.e., fv_i(t) is the union of the vehicle's own attribute set AttriV_i(t) and its environmental attribute set AttriE_i(t).
Definition 4: Neighboring vehicle trajectory Tra (Trajectory of Vehicles): a particular neighboring vehicle i has multiple sampling points within a continuous time period in the data set; shifting time so that the minimum timestamp becomes time 0, the trajectory of vehicle i can be defined as in (4):

Tra_i = {P_i(t_0), P_i(t_1), …, P_i(t_{k-1}), P_i(t_k)}    (4)

where {t_0, t_1, …, t_{k-1}, t_k} is the sequence of sampling time points and P_i(t) is the position of vehicle i at time t, consisting of its lateral x and longitudinal y coordinates in a two-dimensional coordinate system: P_i(t) = (x, y).
Definition 5: Set of influencing factors SuV (Surrounding Vehicles) around a neighboring vehicle: the unmanned vehicle's perception has a limited range, so only environmental features within a fixed area can be sensed, and the unmanned vehicle can extract relevant information only from limited knowledge when making behavior decisions. Since the data set mixes the unmanned vehicle's sampled information over the entire time period, the data must be extracted and processed by time. SuV(t) is the set of features of the vehicles around neighboring vehicle i perceived by the unmanned vehicle at time t; its mathematical expression is:

SuV(t) = {fv_j(t), …, fv_l(t) | {j, …, l} ∈ SuE(t)}    (5)

where fv_j(t) is the feature set of vehicle j in the set, and SuE(t) is the set of surrounding vehicles {j, …, l} that influence vehicle i.
Unmanned neighboring vehicle data preprocessing
Since the original H3D data set provides only low-dimensional information (vehicle speed, acceleration and vehicle type from video analysis, plus the depth information of the point cloud data), it does not provide higher-dimensional inter-vehicle associations or feature-crossing information. The invention therefore processes the raw data and obtains higher-dimensional information through feature extraction and feature engineering to enhance the effectiveness of the features.
(1) Proximity vehicle data population and filtering
1) Missing data padding
Since the GPS/IMU and other related hardware of the sampling vehicle lose part of the data during sampling, the missing data must be filled in some cases. Among the fields, the sampling timestamp has a large influence on the vehicle trajectory prediction model. Although the sampling frequency is 10 Hz (0.1 s), the timestamps provided by the IMU show some fluctuation, so discontinuous missing timestamps are filled by linear interpolation; a missing timestamp t̂_k can be calculated from equation (6):

t̂_k = (t_{k-1} + t_{k+1}) / 2    (6)

where t̂_k is the missing sample timestamp, and t_{k-1} and t_{k+1} are the sample times at the instants immediately before and after the missing one. In addition, some information about surrounding vehicles may be lost while sampling vehicle perceptions; data lost within one sampling interval are likewise approximately fitted by linear interpolation.
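The timestamp-filling step can be sketched as follows, assuming missing samples are marked with `None` (the function name and the `None` convention are illustrative, not part of the patent):

```python
def fill_missing_timestamps(timestamps):
    """Fill isolated gaps in a timestamp sequence by linear interpolation.

    A single missing value between t_{k-1} and t_{k+1} is reconstructed
    as their midpoint, matching equation (6).
    """
    filled = list(timestamps)
    for k, t in enumerate(filled):
        if t is None:
            prev_t, next_t = filled[k - 1], filled[k + 1]
            filled[k] = (prev_t + next_t) / 2.0
    return filled

# A nominally 10 Hz sequence with one dropped sample:
repaired = fill_missing_timestamps([0.0, 0.1, None, 0.3])
```

The same midpoint idea extends to interpolating a surrounding vehicle's position or speed lost within one sampling interval.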
2) Proximity vehicle identification data filtering
In the H3D data set, the researchers annotated the different states (dynamic/static) of the vehicles, where static vehicles are generally those parked on both sides of a lane and have no influence on the research content. Therefore, vehicle data marked Static are filtered out, and scenes in which more than 80% of vehicles are static are removed. In addition, the vehicle numbers in the data set are unique identifiers, but their specific numeric values carry no vehicle attributes; they are therefore not used as training data during model training and serve only as unique identifiers when sessions are divided by time.
(2) Vehicle feature extraction
Since the vehicle information in the original dataset is based on a single vehicle, the correlation between two different vehicles is not given in the dataset, for example: relative speed, relative distance, relative position and relative acceleration of the two vehicles, and the like. Therefore, reasonable extraction processing of the original features is required.
The lanes in the data set are typically two-way, and it is the vehicles in nearby lanes that directly affect a given vehicle's driving environment, so only neighboring vehicles in adjacent lanes are considered during feature extraction.
1) Relative features (Relative features):
For the relative features, a vector representation is used; the specific calculation is shown in equation (7):

F_ij = (S_ij, A_ij, P_ij)    (7)

where F_ij is the relative feature vector of vehicle i and vehicle j, comprising the relative velocity S_ij, the relative acceleration A_ij, and the relative distance P_ij. S_ij and A_ij are calculated as the differences of the corresponding feature vectors of the two vehicles, and the relative distance P_ij is calculated as the 2-norm of the distance vector between them.
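The relative-feature computation of equation (7) can be sketched as below, assuming 2-D velocity, acceleration and position tuples (the function name and return format are illustrative):

```python
import numpy as np

def relative_features(s_i, a_i, p_i, s_j, a_j, p_j):
    """Relative features of vehicles i and j per equation (7):
    vector differences for velocity and acceleration, and the
    2-norm of the position difference for the relative distance."""
    s_ij = np.subtract(s_i, s_j)                          # relative velocity
    a_ij = np.subtract(a_i, a_j)                          # relative acceleration
    p_ij = float(np.linalg.norm(np.subtract(p_i, p_j)))   # relative distance (2-norm)
    return s_ij, a_ij, p_ij

# Two vehicles 3 m apart laterally and 4 m longitudinally:
s_ij, a_ij, p_ij = relative_features((10, 0), (0, 0), (0, 0),
                                     (8, 0), (0, 0), (3, 4))
```

For the example pair, the relative distance is the Euclidean norm of (-3, -4), i.e. 5 m.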
2) Vehicle behavior (DB, Drive Behaviors):
The behavior of a vehicle on a road is mainly classified into three types: going straight, changing lanes to the left (including left turns), and changing lanes to the right (including right turns). Taking the first frame as the origin, a relative coordinate system is established, and the vehicle behavior DriBehavior_i(t) at time t can be expressed as in equation (8):

DriBehavior_i(t) = left,     if Δx_i(t) < -δ
DriBehavior_i(t) = straight, if |Δx_i(t)| ≤ δ    (8)
DriBehavior_i(t) = right,    if Δx_i(t) > δ

where x_i(t) denotes the projection of the position vector P_i(t) on the abscissa at time t, y_i(t) its projection on the ordinate, and Δx_i(t) the change of the abscissa projection. During driving, owing to the vehicle's maneuverability, the vehicle sways left and right within its lane, so classifying behavior directly from the raw lateral displacement would yield unstable behavior labels; the limit threshold δ is therefore added.
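The thresholded labeling of equation (8) can be sketched as follows; the function name and the threshold value delta = 0.3 m are illustrative assumptions:

```python
def label_behavior(x_prev, x_curr, delta=0.3):
    """Label driving behavior from the change in the lateral (abscissa)
    projection of the position vector; the limit threshold delta
    suppresses in-lane sway (the value 0.3 is assumed)."""
    dx = x_curr - x_prev
    if dx < -delta:
        return "left"     # left lane change (including left turn)
    if dx > delta:
        return "right"    # right lane change (including right turn)
    return "straight"
```

Small oscillations inside the lane stay below the threshold and are labeled straight, which is the stability property the limit is meant to provide.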
3) Data normalization:
Different features have different dimensions and units, which complicates data analysis; to eliminate the dimensional influence between features, the data must be standardized. Therefore, to reduce the influence on the model of features with wide numeric ranges and to accelerate model convergence, Z-Score normalization (zero-mean normalization) is applied to each feature using its mean and variance, as shown in formula (9):

x' = (x - μ) / σ    (9)

where x is sample data in the data feature set, μ is the mean, and σ is the standard deviation.
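A minimal sketch of equation (9) applied column-wise, assuming a NumPy array of feature rows (the function name is illustrative):

```python
import numpy as np

def z_score(features):
    """Z-Score (zero-mean) normalization of equation (9), applied
    column-wise so each feature ends up with mean 0 and variance 1."""
    x = np.asarray(features, dtype=float)
    return (x - x.mean(axis=0)) / x.std(axis=0)

# Two features on very different scales end up comparable:
scaled = z_score([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
```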
(3) Vehicle set extraction around neighboring vehicles
The unmanned vehicle has a limited sensing range and can perceive only information about the vehicles around it; the unmanned neighboring vehicle and its perception range are shown schematically in fig. 1. Therefore, when studying a particular neighboring vehicle, the set of vehicles that influence it can be extracted from the limited data using the LK-DBSCAN algorithm provided by the invention.
The data are processed by the LK-DBSCAN (Limit-K DBSCAN) algorithm of the invention, which screens out the information of the vehicles neighboring the vehicle selected as the research object. First, the whole data set is divided by sampling time point into Data_t, and the sampling set Set(t) corresponding to each time point is extracted. On this basis, all vehicles i in Set(t) are traversed; for each other vehicle k, the distance Dis_ki between k and i is calculated and put into the set Dis_i. Once the complete distance set Dis_i of research object i has been obtained, Min(∈, β, Dis_i) is applied to solve for the set of vehicles satisfying all constraints, where ∈ is the visual perception distance of the unmanned vehicle and β is the upper limit on the number of influential surrounding vehicles. Min(∈, β, Dis_i) is defined as follows:

Min(∈, β, Dis_i) = {k, …, l | Dis_ki ≤ ∈ and k ∈ Top_min-β(Dis_i)}    (10)
the algorithm LK-DBSCAN is shown as algorithm 1, and the specific flow is as follows:
step 1: input vehicle raw Data set DataH3D
Step 2: dividing the whole Data set according to time to generate Datat
Step 3: there is set (t) epsilon DatatAnd not accessed, go to (4); otherwise, turn to (7)
Step 4: if the vehicle i belongs to set (t) and is not selected as a research node, turning to (5); otherwise, turn to (3)
Step 5: traversing set (t), calculating the distance Dis between the two when the vehicle k is different from i and is not traversedkiPutting the calculation result into DisiTurning to (5); otherwise, turn to (6)
Step 6: adopting Min (epsilon, beta, Dis)i) Screening out the results meeting the definition and generating
Figure BDA0002996330340000082
Put into the partition set Result, turn (3)
Step 7: outputting a set of results of vehicles having an influence on the presence of neighboring vehicles
Figure BDA0002996330340000083
Figure BDA0002996330340000091
a detailed flow chart of the vehicle set extraction algorithm around the neighboring vehicle is shown in fig. 2.
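Steps 4 to 6 can be sketched for a single sampling set Set(t) as below; the function names, the dict-of-positions input format, and the default values for ∈ and β are illustrative assumptions:

```python
import numpy as np

def min_constraint(dis_i, eps, beta):
    """Min(eps, beta, Dis_i) of equation (10): vehicles within the
    perception distance eps, capped at the beta nearest ones."""
    in_range = {k: d for k, d in dis_i.items() if d <= eps}
    return set(sorted(in_range, key=in_range.get)[:beta])

def lk_dbscan_frame(frame, eps=30.0, beta=3):
    """One sampling set Set(t): frame maps vehicle id -> (x, y).
    For each vehicle i, build the distance set Dis_i and apply
    the Min constraint to obtain its influential neighbors."""
    result = {}
    for i, p_i in frame.items():
        dis_i = {k: float(np.linalg.norm(np.subtract(p_i, p_k)))
                 for k, p_k in frame.items() if k != i}
        result[i] = min_constraint(dis_i, eps, beta)
    return result

# Vehicle 3 is 95+ m away from the others, beyond the 30 m perception range:
neighbors = lk_dbscan_frame({1: (0.0, 0.0), 2: (5.0, 0.0), 3: (100.0, 0.0)})
```

Running this over every Set(t) in Data_t and collecting the outputs corresponds to the partition set Result of the full algorithm.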
Unmanned neighboring vehicle behavior prediction algorithm
While driving on a road, a vehicle, whether human-driven or unmanned, exhibits various behaviors such as going straight, changing lanes, and turning left or right, and these behaviors make the road scene more complicated. Accurately predicting vehicle behavior therefore narrows the solution space of vehicle trajectory prediction and helps guarantee the safety of unmanned driving decisions.
Vehicle behavior is both continuous and real-time. Continuity means that over a certain time period the behavior has a specific tendency; for example, when a vehicle changes lanes to the left, all of its behaviors over the following period are left lane changes. At the same time, vehicle behavior is real-time: it is the behavior taken in response to the current surroundings.
According to Definition 5, at time t the surrounding driving environment of the vehicle i whose behavior is to be predicted is SuV(t). Since the historical trajectory of a vehicle uses coordinates in the same coordinate system, its specific values do not directly influence behavior. Moreover, different vehicles differ in the coordinate systems used when modeling themselves, and the vehicle trajectory can also be represented as the change of behavior over the period; therefore, in addition to the vehicle's historical trajectory, the sequence of behavior changes over the period is added when predicting behavior.
Because the vehicle information has clear time-series characteristics, an LSTM-based Driving Behavior Prediction Method is proposed: the current surrounding-environment information of the vehicle to be predicted and its historical behaviors are used as model input, and the SoftMax function yields the probability values of the three behaviors (keeping straight, changing lane left, changing lane right):

D = SoftMax(p(d_1|I), p(d_2|I), p(d_3|I))

where p denotes the mapping function of the model from input to output, d_1, d_2, d_3 are the three behaviors above, and I is the model input. If the time interval is [a, b], then I can be calculated from equation (11):

I = {SuV(t), DriBehavior_i(t) | t ∈ [a, b]}    (11)

This is a multi-classification problem, so the cross-entropy function is selected as the loss function; the loss L over a single batch equals:

L = -(1/m) Σ_{i=1}^{m} Σ_{k=1}^{3} y_(i,k) log h_{θ,k}(I^(i))    (12)

where m denotes the number of samples per batch, y_(i,k) is the true label of the k-th behavior of sample i, and h_{θ,k}(I^(i)) is the probability of the k-th behavior predicted by the model.
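As a numerical illustration (not the trained LSTM itself), the SoftMax mapping and the batch cross-entropy loss of equation (12) can be sketched as:

```python
import numpy as np

def softmax(logits):
    """Row-wise SoftMax over the three behavior logits."""
    z = logits - logits.max(axis=1, keepdims=True)  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def cross_entropy_loss(probs, one_hot):
    """Equation (12): average over the m batch samples of
    -sum_k y_(i,k) * log h_k(I_i)."""
    m = probs.shape[0]
    return float(-np.sum(one_hot * np.log(probs)) / m)

# Uninformative (all-zero) logits give a loss of ln(3) for 3 classes:
probs = softmax(np.zeros((2, 3)))
labels = np.array([[1, 0, 0], [0, 1, 0]])
loss = cross_entropy_loss(probs, labels)
```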
The behavior prediction model of the unmanned adjacent vehicle is shown in fig. 3, and the specific algorithm steps are as follows:
step 1: model weights w are initialized. If the time length t is 0, …, k, the input I of the model is obtained as { I ═ I according to equation (11)0,…,Ik}。
Step 2: the training set input is divided into a plurality of equal-sized buckets to obtain a set (batch).
Step 3: presence of batchiE set (batch), and is not accessed, step 4; otherwise, step6 is switched
Step 4: computing batchiAll samples in the system are subjected to model and predicted value output after the SoftMax layer, and step5 is transferred.
Step 5: calculate batch according to equation (12)iThe cross entropy loss L of the predicted value and the real label; and (4) reversely obtaining the derivative according to the chain rule, updating the parameter w of the whole model, and turning to step 3.
Step 6: and saving the trained model and the weight parameter w thereof.
a flow chart of the unmanned neighboring vehicle behavior prediction algorithm is shown in fig. 4.
Prediction algorithm for unmanned adjacent vehicle track
Existing deep-learning-based vehicle trajectory prediction methods mainly predict the trajectory over the next continuous time period from the vehicle's historical trajectory coordinates. However, these methods have a significant disadvantage: neither the vehicle's own behavior nor the influence of its neighboring vehicles on the trajectory is taken into account. When predicting the trajectory of a neighboring vehicle, the invention therefore adds the prediction of the vehicle's behavior and reuses the inter-vehicle association features together with the vehicle's historical trajectory.
The unmanned neighboring-vehicle trajectory prediction model is shown in fig. 5. The model is divided into a neighboring-vehicle behavior prediction module, the model input, an encoder part, a decoder part, and the final trajectory output. In the encoder part, the model compresses the neighboring-vehicle information and its related influencing factors into a 128-dimensional vector via the LSTM model. The behavior prediction part adopts the trained behavior prediction model from the unmanned neighboring vehicle behavior prediction above, maps the output behavior to 128 dimensions through a fully connected layer, and concatenates it with the output of the encoder part as the input of the decoder part. Through l LSTM cells, the neighboring-vehicle offset information P = {P_{k+1}, …, P_{k+l}} at the next l time points is output as the final trajectory, and the mean-square-error (MSE) loss function is adopted.
The specific algorithm flow is as follows:
step 1: the model weight parameters w are initialized. If the time length t is 0, …, k, the input I of the model is obtained as { I ═ I according to equation (11)0,…,Ik}。
Step 2: the training set input is divided into a plurality of equal-sized buckets to obtain a set (batch).
Step 3: presence of batchiE set (batch), and is not accessed, step 4; otherwise, step7 is switched
Step 4: will batchiAll samples in (1) are input to the encoder, step5, of the neighboring vehicle behavior prediction module and the trajectory prediction module.
Step 5: the output2 of the behavior prediction module (not passing through SoftMax here) is mapped to 128 dimensions through the full link layer FC and spliced with the compressed output1 of the encoder part as the input of the decoder part, and then translated to step 6.
Step 6: calculating to obtain batchiCorresponding predicted trajectory output P ═ Pk,…,Pl}; calculating loss L according to a mean square error formula, and updating a parameter w according to a back propagation algorithm; step3 is transferred.
Step 7: and saving the trained model and the weight parameter w thereof.
(The formal algorithm listing and its Chinese description are provided as images in the original document.)
A flow chart of the unmanned neighboring vehicle trajectory prediction algorithm is shown in fig. 6.
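Steps 1 to 7 amount to a generic mini-batch training loop with an MSE loss. The sketch below substitutes a plain linear map for the encoder-decoder forward pass and hand-written gradient descent for back-propagation, so it illustrates only the control flow of the algorithm, not the patented model itself.

```python
import numpy as np

rng = np.random.default_rng(2)

def make_batches(data, batch_size):
    """Step 2: split the training inputs into equal-sized batches."""
    return [data[i:i + batch_size]
            for i in range(0, len(data) - batch_size + 1, batch_size)]

def train(X, y, w, lr=0.1, epochs=300, batch_size=4):
    for _ in range(epochs):
        # Steps 3-6: visit each batch once, predict, compute the MSE loss,
        # and update w by a gradient step.
        for xb, yb in zip(make_batches(X, batch_size),
                          make_batches(y, batch_size)):
            pred = xb @ w                    # placeholder forward pass
            err = pred - yb
            grad = 2 * xb.T @ err / len(xb)  # gradient of the MSE w.r.t. w
            w = w - lr * grad                # parameter update
    return w                                 # step 7: trained weights

X = rng.normal(size=(20, 3))                 # toy inputs
w_true = np.array([[1.0], [-2.0], [0.5]])
y = X @ w_true                               # toy linear targets
w = train(X, y, np.zeros((3, 1)))            # recovers w_true approximately
```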
Simulation experiment verification
The unmanned neighboring vehicle trajectory prediction method is implemented experimentally to further verify the accuracy and effectiveness of the trajectory prediction algorithm.
(1) Experimental Environment and data set
1) Experimental Environment
The experimental parameters are shown in table 2.
2) Data set
The experiments use the H3D autonomous-driving public dataset, collected and labeled by the Honda Research Institute, which contains 443 scenarios, mainly urban. Each scenario is divided into sensor data and label data according to the sampling duration, mainly containing the sampled vehicle speed, the sampling timestamp, the local vehicle position, the unique IDs of surrounding vehicles, and so on. The sampling frequency is 10 Hz (i.e., the time interval between two consecutive sampling points is 0.1 s). Detailed information is shown in table 3.
3) Evaluation index
The classification metrics adopted in the experiments mainly comprise accuracy, the Kappa consistency coefficient, the Hamming (HAM) distance, the F-1 score, and recall.
(a) Accuracy: the proportion of predictions that agree with their true labels, calculated by equation (13):

acc = |TPre| / |D| (13)

where |TPre| represents the number of accurately predicted samples (True Prediction) and |D| the number of samples in the whole sample space.
(b) Kappa consistency coefficient: used to evaluate the accuracy of a classification model, calculated by the following formula:

kappa = (acc - pe) / (1 - pe) (14)

where acc is the accuracy and pe is calculated from the following equation:

pe = Σk |rk| * |pk| / |D|^2 (15)

where |rk| represents the true number of class-k samples, |pk| the number of class-k samples in the prediction result, and |D| the number of samples in the whole sample space. The closer the Kappa consistency coefficient is to 1, the more accurate the classification model.
(c) Hamming (HAM) distance: since there is a numerical difference between the predicted probability and the true label, the Hamming distance is also suitable for measuring the quality of a multi-class model, reflecting the distance between the predicted labels and the true sample labels; its value usually lies between 0 and 1.
(d) Recall: generally applied to binary classification problems; its main purpose is to reflect the proportion of a specific class that the binary model predicts accurately, calculated by the following formula:

Recall = TP / (TP + FN) (16)

where TP represents the number of samples of the target class predicted correctly, and FN the number of samples of the target class wrongly classified as other classes.
(e) F-1 score: mainly reflects the overall performance of the classifier, calculated by the following formula:

F1 = 2 * Precision * Recall / (Precision + Recall) (17)

where Precision denotes the precision and Recall the recall described above. The two often pull in opposite directions, and the F-1 score reflects the classifier's overall performance.
Vehicle trajectory prediction is a regression problem and therefore also involves the choice of regression metrics, mainly the mean square error (MSE), the mean absolute error (MAE), and the R2 coefficient of determination.
(f) Mean square error (MSE): the ratio of the sum of squared differences between the predicted and true values to the number of samples. Because the differences are squared, the metric is susceptible to outliers.
(g) Mean absolute error (MAE): the ratio of the sum of absolute differences between the predicted and true values to the number of samples; it reflects the actual deviation between the predicted and true values.
(h) R2 coefficient of determination: reflects how well the regression model fits the true values; the closer R2 is to 1, the better the model's fit.
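The classification and regression metrics above, equations (13) through (17) plus MSE, MAE, and R2, can be implemented directly; the following numpy versions are a straightforward reading of the definitions (the function names are ours, not the patent's):

```python
import numpy as np

def accuracy(y_true, y_pred):
    # eq. (13): |TPre| / |D|
    return float(np.mean(np.asarray(y_true) == np.asarray(y_pred)))

def kappa(y_true, y_pred):
    # eqs. (14)-(15): (acc - pe) / (1 - pe), with chance agreement pe
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    classes = np.unique(np.concatenate([y_true, y_pred]))
    n = len(y_true)
    acc = np.mean(y_true == y_pred)
    pe = sum((y_true == c).sum() * (y_pred == c).sum() for c in classes) / n**2
    return float((acc - pe) / (1 - pe))

def hamming_distance(y_true, y_pred):
    # fraction of label positions that disagree, in [0, 1]
    return float(np.mean(np.asarray(y_true) != np.asarray(y_pred)))

def recall(y_true, y_pred, positive):
    # eq. (16): TP / (TP + FN) for the class of interest
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fn = np.sum((y_pred != positive) & (y_true == positive))
    return float(tp / (tp + fn))

def f1(y_true, y_pred, positive):
    # eq. (17): harmonic mean of precision and recall
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_pred == positive) & (y_true == positive))
    fp = np.sum((y_pred == positive) & (y_true != positive))
    prec = tp / (tp + fp)
    rec = recall(y_true, y_pred, positive)
    return float(2 * prec * rec / (prec + rec))

def mse(y_true, y_pred):
    return float(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2))

def mae(y_true, y_pred):
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

def r2(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    ss_res = np.sum((y_true - y_pred) ** 2)
    ss_tot = np.sum((y_true - y_true.mean()) ** 2)
    return float(1 - ss_res / ss_tot)
```

These are the same quantities plotted in figs. 7 through 13 below.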
(2) Simulation and test result analysis
1) Adjacent vehicle behavior prediction
First, the LSTM method is compared with other vehicle behavior prediction methods (Decision Tree, DT; LightGBM, LGBM; Logistic Regression, LR; Deep Neural Network, DNN) on model accuracy, the Kappa consistency coefficient, and the Hamming distance. Then the algorithms are compared on two different inputs: first, the raw vehicle data; second, the feature data obtained by modeling and extracting the association relations of the surrounding vehicles.
The accuracy of the DT, LGBM, LR, DNN, and LSTM methods on the two different inputs is shown in fig. 7. With raw data as input, the prediction accuracy of the LSTM-based vehicle behavior prediction method is 0.89, lower than the 0.97 and 0.96 of DT and LGBM. The main reason is that non-parametric models, represented by decision trees, fit the distribution of the whole data, whereas parametric models, represented by deep learning, must fit the final target from the feature level, so deep-learning models tend to underfit when few features are available. After the association features of surrounding vehicles and the vehicle behavior sequence data are added, the accuracy of all five methods improves to some extent, mainly because, compared with the raw data, the feature-extracted and processed data contain more dominant (privileged) features, such as historical behavior; the LSTM- and LR-based methods improve most markedly, mainly because the additional extracted features resolve the underfitting problem. With these features added, the accuracy of the LSTM-based method rises by 10 percentage points to 99%, higher than the other comparison methods, so this method predicts vehicle behavior more effectively and accurately than the other four.
The Kappa consistency coefficients of the DT, LGBM, LR, DNN, and LSTM methods on the two different inputs are shown in fig. 8. The Kappa coefficient measures the classification accuracy of different models; the closer it is to 1, the better the model. As fig. 8 shows, with raw data as input, the decision-tree models (DT, LGBM) and LSTM perform well, reaching 0.90, 0.88, and 0.88 respectively, indicating that the classification results of the three models are highly consistent and the classification effect is good. However, compared with the accuracy results in fig. 7, the Kappa of DT and LGBM is low, mainly because these two models are biased in behavior classification, i.e., they tend to put results into the class with the most labels. After the association features of surrounding vehicles and the vehicle behavior sequence data are extracted as input, the Kappa coefficients of all five methods in fig. 8 improve to some extent; the LR- and DNN-based methods improve most markedly, mainly because the added features strengthen the models' fitting capacity and allow a better fit of the mapping from input to labels. With these features added, the Kappa coefficient of the LSTM-based method reaches 0.99, higher than the other classification algorithms, so this method classifies more accurately than the other four.
Fig. 9 measures the performance of the DT, LGBM, LR, DNN, and LSTM methods on the two different inputs by the Hamming distance. As the figure shows, with raw data as input, the Hamming distance of LSTM is 0.11, higher than that of the decision-tree models (DT, LGBM). The main reason is that, when the distribution of the training samples reflects the distribution of the whole sample space, a non-parametric model represented by a decision tree can obtain higher confidence by fitting the distribution of the whole training data and thus performs better on the test set, whereas a parametric model requires sufficient features to fit the mapping from input to output; the parametric models therefore perform sub-optimally on the raw data. After the association features of surrounding vehicles and the vehicle behavior sequence data are added, the Hamming distances of all five methods decrease, mainly because, compared with the raw data, the feature-extracted and processed data contain more dominant (privileged) features, letting each model better fit the input-to-output mapping. With these features added, the Hamming distance of the LSTM model drops from 0.11 to 0.007, smaller than those of the parametric models LR and DNN and essentially level with the strongly classifying tree models DT and LGBM, so the method predicts vehicle behavior more effectively and accurately than the other methods.
From the analysis of the above experimental results, the raw perception data is indeed insufficient to characterize the environment surrounding the vehicle. Extracting, via the LK-DBSCAN algorithm, the set of potentially interacting vehicles and applying feature engineering to the raw data markedly improves each classification algorithm's fit to the training data. Moreover, the LSTM model can comprehensively consider the temporal characteristics of the input data, so on all three metrics it matches the leading classification algorithms.
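The LK-DBSCAN (Limit-K DBSCAN) algorithm itself is not spelled out in this section. Purely as a hypothetical illustration of the "Limit-K" idea — keeping at most the K nearest vehicles within a density radius eps when building the surrounding-vehicle set — one might write:

```python
import math

def limit_k_neighbours(vehicles, target, eps=10.0, k=4):
    """Return at most the k nearest vehicles within radius eps of `target`.

    `vehicles` maps vehicle ID -> (x, y) position. This is a hypothetical
    reading of the Limit-K neighbourhood used by LK-DBSCAN; the function
    name and the values of eps and k are illustrative, not from the patent.
    """
    tx, ty = vehicles[target]
    dists = [(math.hypot(x - tx, y - ty), vid)
             for vid, (x, y) in vehicles.items() if vid != target]
    # Sort by distance, keep those inside eps, and cap the count at k.
    return [vid for d, vid in sorted(dists) if d <= eps][:k]

fleet = {"ego": (0, 0), "a": (3, 4), "b": (6, 8), "c": (30, 0), "d": (1, 1)}
near = limit_k_neighbours(fleet, "ego", eps=10.0, k=2)  # -> ["d", "a"]
```

Capping the neighbourhood at K bounds the per-vehicle feature size, which is what allows the extracted surrounding-vehicle set to be fed to a fixed-width model input.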
During data acquisition the traversed road sections were mainly straight urban roads, and in real traffic frequent lane-changing behavior (reflected as leftward and rightward driving intentions) is relatively rare, so throughout the training and test sets the amount of data labeled as driving straight is large. The model is therefore not well characterized by accuracy, the Kappa coefficient, and the Hamming distance alone.
Fig. 10 compares LGBM and DNN with the LSTM model adopted in this section on several metrics after the association features of surrounding vehicles and the vehicle behavior sequence data are added as input.
As fig. 10 shows, the DNN model performs poorly under the F1-Score and Recall criteria, while LGBM remains essentially level with the LSTM model. Because the training data suffer from sample imbalance, in addition to the overall recall, the recall on the prediction data carrying only the left and right driving labels, i.e., Limit-call in the figure, is computed separately. Under the Limit-call metric the LSTM model performs well, roughly 0.5 percentage points higher than LGBM. The main reason is that, although the LGBM model has strong classification capability, the LSTM model has a unique advantage in processing time series and, by analyzing the temporal order of the input data, can predict accurately on samples whose behavior changes.
In conclusion, the experiments show that the inter-vehicle association features and the vehicle behavior sequence data positively influence vehicle behavior prediction, and that by exploiting temporal features the LSTM model outperforms the existing classification algorithms in behavior prediction.
2) Trajectory prediction for neighboring vehicles
Experiments compare the behavior-based trajectory prediction method B-LSTM with the traditional LSTM-based vehicle trajectory prediction method, evaluating their respective strengths in terms of mean square error, mean absolute error, the R2 coefficient of determination, and the offsets along and perpendicular to the driving direction.
The mean square errors of the LSTM-based trajectory prediction method and the B-LSTM method are shown in fig. 11, where LSTMx denotes the mean square error of the LSTM-based method in the direction X perpendicular to the vehicle driving direction, LSTMy that in the driving direction Y, and B-LSTMx and B-LSTMy the corresponding mean square errors of the B-LSTM method. As fig. 11 shows, at t0 and t1 the errors of the two methods are close, while at t2 and t3 the error of B-LSTM is smaller. At t2 the mean square error of B-LSTMx is 0.04, a reduction of 0.11 relative to LSTMx at the same instant, and that of B-LSTMy is 0.03, a reduction of 0.02 relative to LSTMy. At t3 the mean square error of B-LSTMx is 0.014, a reduction of 0.1 relative to LSTMx, and that of B-LSTMy is 0.013, a reduction of 0.016 relative to LSTMy. Overall, the B-LSTM method is more stable on the mean square error metric than the traditional LSTM trajectory prediction model, and the X-direction errors show that predicting vehicle behavior markedly reduces the model's error perpendicular to the driving direction, so B-LSTM predicts the vehicle trajectory more accurately than LSTM.
Fig. 12 shows the mean absolute errors of B-LSTM and the traditional LSTM from t0 to t4. The errors of both methods grow gradually over time, mainly because the LSTM model suffers information loss and error accumulation as the horizon lengthens. For the mean error in the vehicle driving direction Y: at t0, t1, and t2 the error of B-LSTMy is smaller than that of LSTMy, with the largest gap at t0, where B-LSTMy is 0.002 lower; at t3 and t4 the error of B-LSTMy is larger, with the largest gap at t4, where it is 0.002 higher. The relative fluctuation of the mean absolute errors of B-LSTM and LSTM in the Y direction is consistent, but B-LSTM predicts better at three of the five instants, so it is more effective in the Y direction. For the mean error in the direction X perpendicular to the driving direction: at t0 and t1 the error of B-LSTMx is higher than that of LSTMx, with the largest gap at t0, where it is 0.003 higher; at t2, t3, and t4 the error of B-LSTMx is smaller, with the largest gap at t3, where it is 0.0028 lower. The relative fluctuation in the X direction is likewise consistent, but B-LSTM again predicts better at three of the five instants, so it is also more effective in the X direction. In summary, B-LSTM predicts the vehicle trajectory more accurately than LSTM.
The R2 coefficients of determination of the LSTM and B-LSTM predictions are shown in fig. 13. The R2 coefficient is a metric for judging the goodness of fit of a regression model and takes values no greater than 1. At t0 and t1 the fit of the two methods is close: LSTM is better in the X direction, while B-LSTM is better in the Y direction. At t2 and t3 the R2 coefficient of B-LSTM is closer to 1 than that of LSTM in both the X and the Y direction. At t4 the two methods are close in the Y direction, while B-LSTM performs slightly worse in the X direction. Overall, the B-LSTM method is better and more stable on the R2 metric than the traditional LSTM trajectory prediction model, so it predicts the vehicle trajectory more accurately.
Figs. 14 and 15 show, for a single sample, the curves fitted by the two methods for the offsets along the driving direction and perpendicular to it, respectively. The curve fitted by B-LSTM is closer to the vehicle's real driving trajectory (Real). In particular, B-LSTMx is comprehensively superior to LSTMx, mainly because the B-LSTM method incorporates the predicted vehicle behavior, reducing the model's fitting error in the X direction.
Innovation point
Considering the complexity and diversity of road conditions, an unmanned vehicle that is to make safe and reasonable decisions must accurately predict the behavior trends of neighboring vehicles so as to avoid potential safety hazards and risks. The following problems currently remain: (1) raw data are insufficient to characterize the environment surrounding the unmanned vehicle; (2) interactions between vehicles are ignored, so model prediction accuracy is low in environments with many interfering factors; (3) the data source is single and lacks reliability. To address these problems, the invention first provides an experimental dataset analysis and neighboring-vehicle data preprocessing method based on video and point clouds; then, by analyzing the historical behavior data of neighboring vehicles and the association relations between vehicles, it gives prediction methods for the behavior and the trajectory of neighboring vehicles respectively. This provides a reliable guarantee for the accuracy and practicality of the unmanned vehicle's motion behavior decision method.
Attached table of the specification
TABLE 1
(Table 1 is provided as an image in the original document.)
TABLE 2
(Table 2 is provided as an image in the original document.)
TABLE 3
(Table 3 is provided as an image in the original document.)

Claims (2)

1. A method for predicting a trajectory of an unmanned neighboring vehicle, characterized in that: first, the set of vehicles and road-condition information around the unmanned neighboring vehicle are extracted from video data and point cloud data through the LK-DBSCAN (Limit-K DBSCAN) algorithm, and potentially influential features are constructed by feature engineering, enhancing the capacity of the data features to express complex road conditions; then, the real-time behavior of the vehicle is predicted using a long short-term memory network (LSTM); finally, the trajectory of the vehicle is predicted through B-LSTM (Behavior-based LSTM) by combining the vehicle behavior predicted by the LSTM with the vehicle's historical behavior data.
2. The unmanned neighboring vehicle trajectory prediction method of claim 1, characterized in that the method specifically comprises the following steps:
step1, relevant definition;
step2, preprocessing data of the unmanned adjacent vehicle;
step 2.1, filling and filtering data of adjacent vehicles;
step 2.2, extracting vehicle characteristics;
step 2.3, extracting a vehicle set around the adjacent vehicle;
step3, a prediction algorithm of the behavior of the unmanned adjacent vehicle;
step 4, a prediction algorithm for the track of the unmanned adjacent vehicle.
CN202110331661.2A 2021-03-29 2021-03-29 Unmanned adjacent vehicle track prediction method Active CN113033899B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110331661.2A CN113033899B (en) 2021-03-29 2021-03-29 Unmanned adjacent vehicle track prediction method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110331661.2A CN113033899B (en) 2021-03-29 2021-03-29 Unmanned adjacent vehicle track prediction method

Publications (2)

Publication Number Publication Date
CN113033899A true CN113033899A (en) 2021-06-25
CN113033899B CN113033899B (en) 2023-03-17

Family

ID=76472728

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110331661.2A Active CN113033899B (en) 2021-03-29 2021-03-29 Unmanned adjacent vehicle track prediction method

Country Status (1)

Country Link
CN (1) CN113033899B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113401143A (en) * 2021-07-19 2021-09-17 电子科技大学长三角研究院(衢州) Individualized self-adaptive trajectory prediction method based on driving style and intention
CN113740837A (en) * 2021-09-01 2021-12-03 广州文远知行科技有限公司 Obstacle tracking method, device, equipment and storage medium
CN114664051A (en) * 2022-02-25 2022-06-24 长安大学 Early warning method for temporary construction area of highway curve

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108986453A (en) * 2018-06-15 2018-12-11 华南师范大学 A kind of traffic movement prediction method based on contextual information, system and device
CN110555476A (en) * 2019-08-29 2019-12-10 华南理工大学 intelligent vehicle track change track prediction method suitable for man-machine hybrid driving environment
CN110738370A (en) * 2019-10-15 2020-01-31 南京航空航天大学 novel moving object destination prediction algorithm
CN111079590A (en) * 2019-12-04 2020-04-28 东北大学 Peripheral vehicle behavior pre-judging method of unmanned vehicle
CN112347993A (en) * 2020-11-30 2021-02-09 吉林大学 Expressway vehicle behavior and track prediction method based on vehicle-unmanned aerial vehicle cooperation


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113401143A (en) * 2021-07-19 2021-09-17 电子科技大学长三角研究院(衢州) Individualized self-adaptive trajectory prediction method based on driving style and intention
CN113401143B (en) * 2021-07-19 2022-04-12 电子科技大学长三角研究院(衢州) Individualized self-adaptive trajectory prediction method based on driving style and intention
CN113740837A (en) * 2021-09-01 2021-12-03 广州文远知行科技有限公司 Obstacle tracking method, device, equipment and storage medium
CN114664051A (en) * 2022-02-25 2022-06-24 长安大学 Early warning method for temporary construction area of highway curve
CN114664051B (en) * 2022-02-25 2023-09-29 长安大学 Early warning method for temporary construction area of expressway curve

Also Published As

Publication number Publication date
CN113033899B (en) 2023-03-17

Similar Documents

Publication Publication Date Title
CN107862864B (en) Driving condition intelligent prediction estimation method based on driving habits and traffic road conditions
CN113033899B (en) Unmanned adjacent vehicle track prediction method
Sun et al. A machine learning method for predicting driving range of battery electric vehicles
CN103605362B (en) Based on motor pattern study and the method for detecting abnormality of track of vehicle multiple features
Xing et al. Energy oriented driving behavior analysis and personalized prediction of vehicle states with joint time series modeling
CN112347993B (en) Expressway vehicle behavior and track prediction method based on vehicle-unmanned aerial vehicle cooperation
CN106095963B (en) Vehicle driving behavior analysis big data public service platform under internet + era
WO2023097971A1 (en) 4d millimeter wave radar data processing method
CN103921743A (en) Automobile running working condition judgment system and judgment method thereof
CN105654139A (en) Real-time online multi-target tracking method adopting temporal dynamic appearance model
CN107730889B (en) Target vehicle retrieval method based on traffic video
Wirthmüller et al. Predicting the time until a vehicle changes the lane using LSTM-based recurrent neural networks
CN112270355A (en) Active safety prediction method based on big data technology and SAE-GRU
CN113406955B (en) Complex network-based automatic driving automobile complex environment model, cognitive system and cognitive method
CN112734094B (en) Intelligent city intelligent rail vehicle fault gene prediction method and system
CN111695737A (en) Group target advancing trend prediction method based on LSTM neural network
CN114519302A (en) Road traffic situation simulation method based on digital twin
CN114566052B (en) Method for judging rotation of highway traffic flow monitoring equipment based on traffic flow direction
Chen et al. Advanced driver assistance strategies for a single-vehicle overtaking a platoon on the two-lane two-way road
Wang et al. A state dependent mandatory lane-changing model for urban arterials with hidden Markov model method
Shao et al. Failure detection for motion prediction of autonomous driving: An uncertainty perspective
CN115691140B (en) Analysis and prediction method for space-time distribution of automobile charging demand
CN116386020A (en) Method and system for predicting exit flow of highway toll station by multi-source data fusion
CN111371609B (en) Internet of vehicles communication prediction method based on deep learning
Deng et al. Research on operation characteristics and safety risk forecast of bus driven by multisource forewarning data

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant