CN109709956B - Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle - Google Patents


Info

Publication number
CN109709956B
CN109709956B (application CN201811600366.7A)
Authority
CN
China
Prior art keywords
data
model
headway
ttc
following
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811600366.7A
Other languages
Chinese (zh)
Other versions
CN109709956A (en)
Inventor
Xuesong Wang
Meixin Zhu
Ping Sun
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tongji University
Original Assignee
Tongji University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tongji University
Priority to CN201811600366.7A
Publication of CN109709956A
Application granted
Publication of CN109709956B

Landscapes

  • Traffic Control Systems (AREA)
  • Control Of Driving Devices And Active Controlling Of Vehicle (AREA)
  • Feedback Control In General (AREA)

Abstract

The invention develops a multi-objective optimized following algorithm for controlling the speed of an automatic driving vehicle. The algorithm provides a car-following speed control model based on deep reinforcement learning; rather than merely imitating human driving, the model directly optimizes driving safety, efficiency and comfort. A reward function reflecting driving safety, efficiency and comfort is constructed by combining time to collision, the empirical headway distribution and acceleration change. The model is trained with real driving data from the Next Generation Simulation (NGSIM) project, and the following behavior simulated by the model is compared with the behavior observed in the NGSIM empirical data. Through trial and error in a simulation environment, the reinforcement learning agent learns a safe, comfortable and efficient speed policy by maximizing cumulative reward. The results show that, compared with real-world human drivers, the proposed following speed control algorithm exhibits safer, more efficient and more comfortable driving.

Description

Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle
Technical Field
The invention relates to the field of automated-driving following control, and in particular to a multi-objective optimized following algorithm for controlling the speed of an automatically driven vehicle.
Background
Car-following control is an important component of intelligent decision-making in automated driving, comprising speed selection in free driving, gap keeping while following a vehicle, and braking in emergencies. Where automated and human driving coexist, an automated vehicle that makes following-control decisions similar to a human driver's (anthropomorphic driving for short) improves passenger comfort and trust, and also allows other traffic participants to better understand and predict its behavior, enabling safe interaction between automated and human-driven vehicles. However, traditional car-following models have many limitations when applied to automated following control: model flexibility and accuracy are limited, the models are difficult to generalize to driving scenarios and drivers beyond the calibration data, and when applied to automated driving they cannot reflect the driving style of the vehicle's actual driver or the driving scenario.
Deep reinforcement learning (DRL) is widely used in industrial manufacturing, simulation, robot control, optimization and scheduling, game playing and other fields. Its basic idea is that an intelligent agent learns the optimal policy for achieving a goal by maximizing the cumulative reward obtained from the environment. The DRL approach focuses on learning a strategy for solving the problem rather than fitting the data, so its generalization ability is stronger, providing a reference for automated vehicle following control.
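As a brief illustration in standard reinforcement-learning notation (not taken from the patent text), the agent seeks a policy $\pi^*$ that maximizes the expected discounted return:

$$\pi^* = \arg\max_{\pi} \mathbb{E}_{\pi}\left[\sum_{k=0}^{\infty} \gamma^k\, r_{t+k+1}\right], \qquad \gamma \in [0,1)$$

where $r_t$ is the per-step reward (defined in step 2 below) and $\gamma$ is the discount factor.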
Disclosure of Invention
The purpose of the invention is a multi-objective optimized following algorithm for controlling the speed of an automatically driven vehicle. The algorithm proposes a vehicle following speed control model that directly optimizes driving safety, efficiency and comfort. Combining time to collision (TTC), the empirical headway distribution and the acceleration change rate (jerk), a reward function reflecting driving safety, efficiency and comfort is constructed. The model is trained with real driving data from the Next Generation Simulation (NGSIM) project, the following behavior simulated by the model is compared with the behavior observed in the NGSIM empirical data, and through trial and error in a simulation environment the reinforcement learning agent learns a safe, comfortable and efficient speed policy by maximizing cumulative reward. The results show that, compared with real-world human drivers, the proposed following speed control algorithm exhibits safer, more efficient and more comfortable driving.
The technical scheme adopted by the invention is as follows:
A multi-objective optimized following algorithm for controlling the speed of an automatic driving vehicle comprises the following steps:
step 1: data is acquired. And (3) extracting the following events based on the criteria that the front vehicle and the rear vehicle stay on the same lane and the length of the vehicle following events is greater than 15 seconds by using the data in the NGSIM project, and taking one part as training data and the other part as test data based on the extracted following events.
Step 2: construct a reward function. Feature quantities reflecting the objectives of vehicle following control (safety, comfort, efficiency) are proposed.
Step 2.1: time To Collision (TTC) is used to reflect safety. TTC represents the amount of time remaining before a collision of two vehicles and is formulated as
Figure BDA0001922335250000021
Where Sn-1, n (t) is the inter-vehicle distance, Δ Vn-1, n (t) is the relative velocity. Determining the safety threshold value to be 7 seconds according to NGSIM empirical data, and performing TTC feature construction:
$$F_{\mathrm{TTC}} = \begin{cases} \log\left(\dfrac{\mathrm{TTC}}{7}\right), & 0 \le \mathrm{TTC} \le 7 \\ 0, & \text{otherwise} \end{cases}$$
If TTC is less than 7 seconds the TTC feature is negative, and as TTC approaches zero the feature approaches negative infinity, imposing the most severe penalty on near-collision situations.
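A minimal sketch of this feature in Python; the log(TTC/7) form below is an assumption consistent with the stated behavior (negative below 7 s, diverging to negative infinity at 0):

```python
import math

TTC_THRESHOLD_S = 7.0  # safety threshold from NGSIM empirical data

def ttc(spacing_m: float, v_lead_mps: float, v_follow_mps: float) -> float:
    """Time to collision; infinite when the gap is opening."""
    dv = v_lead_mps - v_follow_mps   # relative speed, lead minus follower
    return -spacing_m / dv if dv < 0 else math.inf

def f_ttc(ttc_s: float) -> float:
    """Safety feature: log(TTC/7) on [0, 7], 0 otherwise (assumed form)."""
    if 0.0 <= ttc_s <= TTC_THRESHOLD_S:
        return math.log(ttc_s / TTC_THRESHOLD_S) if ttc_s > 0 else -math.inf
    return 0.0
```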
Step 2.2: driving efficiency is measured by headway. Analysis shows that a lognormal distribution fits the headway distribution of the acquired training data, with probability density function

$$f_{\mathrm{lognormal}}(x;\mu,\sigma) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right), \qquad x > 0$$

From the extracted data, the log-mean $\mu$ and log-standard-deviation $\sigma$ are estimated as 0.4226 and 0.4365, respectively. The headway feature is constructed as the probability density of the estimated lognormal distribution evaluated at the current headway: $F_{\mathrm{headway}} = f_{\mathrm{lognormal}}(\mathrm{headway} \mid \mu = 0.4226, \sigma = 0.4365)$. Under this feature, a headway of about 1.3 seconds yields a high value, while headways that are too short or too long yield low values; the feature therefore rewards headway keeping consistent with high traffic throughput while penalizing unsafely short or inefficiently long headways.
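A sketch of this feature using SciPy's lognormal parameterization (shape s = σ, scale = e^μ); the function name is illustrative:

```python
import math
from scipy.stats import lognorm

MU, SIGMA = 0.4226, 0.4365  # estimated from the NGSIM training data

def f_headway(headway_s: float) -> float:
    """Efficiency feature: lognormal pdf evaluated at the current headway."""
    return lognorm.pdf(headway_s, s=SIGMA, scale=math.exp(MU))

# The density peaks at the mode exp(mu - sigma**2) ~ 1.26 s, matching the
# "about 1.3 seconds" headway that the text says earns a high feature value.
```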
Step 2.3: driving comfort is measured by the acceleration change rate (jerk); the feature is constructed as:
$$F_{\mathrm{jerk}} = -\frac{\mathrm{jerk}(t)^2}{3600}, \qquad \mathrm{jerk}(t) = \frac{\mathrm{d}a(t)}{\mathrm{d}t}$$
step 2.4: and establishing a comprehensive reward function. R is w1FTTC + w2Fheadway + w3fjerk, where w1, w2, w3 are coefficients of the features, all set to 1.
Step 3: train the model. In each training episode, the following events in the training data are simulated in sequence; training is repeated many times, and the model that obtains the maximum average reward on the test data is selected as the final model.
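A skeleton of one training episode under simple kinematic assumptions (0.1 s NGSIM timestep); the event fields and the agent's act/observe interface stand in for a DDPG implementation and are hypothetical:

```python
DT = 0.1  # NGSIM sampling interval, seconds

def simulate_event(event, agent, train: bool = True) -> float:
    """Roll out one following event: the leader replays its recorded speeds,
    the agent chooses the follower's acceleration at every step."""
    v = event.follow_speed[0]            # follower's initial speed, m/s
    gap = event.spacing[0]               # initial spacing to the leader, m
    prev_a, total_r = 0.0, 0.0
    for t in range(1, len(event.lead_speed)):
        state = (v, event.lead_speed[t - 1] - v, gap)
        a = agent.act(state)             # continuous acceleration action
        v = max(0.0, v + a * DT)         # follower kinematics
        gap += (event.lead_speed[t] - v) * DT
        r = reward(ttc(gap, event.lead_speed[t], v),
                   gap / max(v, 0.1),    # time headway = spacing / speed
                   (a - prev_a) / DT)    # jerk
        if train:
            agent.observe(state, a, r)   # replay buffer / gradient update hook
        prev_a, total_r = a, total_r + r
    return total_r
```

Across repeated passes over the training events, the checkpoint with the highest average per-event reward on the held-out test events would be kept as the final model.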
Step 4: evaluate the model. The following behavior observed in the NGSIM data and that simulated by the DDPG (deep deterministic policy gradient) model are compared using indices such as TTC, headway and jerk.
The invention has the advantages that:
1. The developed autonomous vehicle-following control logic is applicable to autonomous vehicle development;
2. The algorithm's model does not mimic human driving, but directly optimizes driving safety, efficiency, and comfort.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 compares driving safety between the NGSIM data and the DDPG model.
Fig. 3 compares driving comfort between the NGSIM data and the DDPG model.
Detailed Description
The algorithm proposes a car-following speed control model based on deep reinforcement learning; the model does not imitate human driving but directly optimizes driving safety, efficiency and comfort. Combining time to collision (TTC), the empirical headway distribution and the acceleration change rate (jerk), a reward function reflecting driving safety, efficiency and comfort is constructed. The model is trained with real driving data from the Next Generation Simulation (NGSIM) project, the following behavior simulated by the model is compared with the behavior observed in the NGSIM empirical data, and through trial and error in a simulation environment the reinforcement learning agent learns a safe, comfortable and efficient speed policy by maximizing cumulative reward. The results show that, compared with real-world human drivers, the proposed following speed control algorithm exhibits safer, more efficient and more comfortable driving.
The invention is described in detail below with reference to the figures and a specific example; the steps are as follows:
step 1: data is acquired. And (3) extracting a following event based on the criteria that a front vehicle and a rear vehicle stay on the same lane and the length of the vehicle following event is greater than 15 seconds and the like by using data in a Next Generation Simulation (NGSIM) project, and taking one part as training data and the other part as test data based on the extracted following event.
Step 2: construct a reward function. Feature quantities reflecting the objectives of vehicle following control (safety, comfort, efficiency) are proposed.
Step 2.1: time To Collision (TTC) is used to reflect safety. TTC represents the amount of time remaining before a collision of two vehicles and is formulated as
Figure BDA0001922335250000041
Where Sn-1, n (t) is the inter-vehicle distance, Δ Vn-1, n (t) is the relative velocity. Determining the safety threshold value to be 7 seconds according to NGSIM empirical data, and performing TTC feature construction:
$$F_{\mathrm{TTC}} = \begin{cases} \log\left(\dfrac{\mathrm{TTC}}{7}\right), & 0 \le \mathrm{TTC} \le 7 \\ 0, & \text{otherwise} \end{cases}$$
If TTC is less than 7 seconds the TTC feature is negative, and as TTC approaches zero the feature approaches negative infinity, imposing the most severe penalty on near-collision situations.
Step 2.2: driving efficiency is measured by headway. Analysis shows that a lognormal distribution fits the headway distribution of the acquired training data, with probability density function

$$f_{\mathrm{lognormal}}(x;\mu,\sigma) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right), \qquad x > 0$$

From the extracted data, the log-mean $\mu$ and log-standard-deviation $\sigma$ are estimated as 0.4226 and 0.4365, respectively. The headway feature is constructed as the probability density of the estimated lognormal distribution evaluated at the current headway: $F_{\mathrm{headway}} = f_{\mathrm{lognormal}}(\mathrm{headway} \mid \mu = 0.4226, \sigma = 0.4365)$. Under this feature, a headway of about 1.3 seconds yields a high value, while headways that are too short or too long yield low values; the feature therefore rewards headway keeping consistent with high traffic throughput while penalizing unsafely short or inefficiently long headways.
Step 2.3: driving comfort is measured by the acceleration change rate (jerk); the feature is constructed as:
$$F_{\mathrm{jerk}} = -\frac{\mathrm{jerk}(t)^2}{3600}, \qquad \mathrm{jerk}(t) = \frac{\mathrm{d}a(t)}{\mathrm{d}t}$$
step 2.4: and establishing a comprehensive reward function. Establishing r ═ w1FTTC + w2Fheadway + w3fjerk according to the above steps 2.1, 2.2, 2.3, where w1, w2, w3 are coefficients of the features, all set to 1.
Step 3: train the model. In each training episode, the following events in the training data are simulated in sequence; training is repeated many times, and the model that obtains the maximum average reward on the test data is selected as the final model.
Step 4: evaluate the model. The following behavior observed in the NGSIM data and that simulated by the DDPG model are compared using indices such as TTC, headway and jerk.
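A sketch of this comparison in NumPy, computing the three evaluation indices from a follower trajectory; the trajectory layout (per-frame follower speed and gap) is an assumption:

```python
import numpy as np

def indices(speed: np.ndarray, spacing: np.ndarray, dt: float = 0.1):
    """Per-frame TTC, time headway and jerk for one following trajectory."""
    v_lead = speed[1:] + np.diff(spacing) / dt   # leader speed from gap change
    dv = speed[1:] - v_lead                      # closing speed
    with np.errstate(divide="ignore", invalid="ignore"):
        ttc = np.where(dv > 0, spacing[1:] / dv, np.inf)
    headway = spacing[1:] / np.maximum(speed[1:], 0.1)
    accel = np.diff(speed) / dt
    jerk = np.diff(accel) / dt
    return ttc, headway, jerk

# Example comparison: share of risky frames (TTC < 7 s) for the NGSIM
# drivers versus the DDPG-simulated follower.
# risky_ngsim = np.mean(indices(v_obs, gap_obs)[0] < 7.0)
# risky_ddpg  = np.mean(indices(v_sim, gap_sim)[0] < 7.0)
```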
Examples
By comparing the empirical NGSIM data with the car-following behavior simulated by the DDPG model, it can be verified that the model follows the leading vehicle safely, efficiently and comfortably.
Data is acquired. Using the data in the NGSIM project, following events are extracted based on criteria such as the leading and following vehicles staying in the same lane and the event lasting longer than 15 seconds.
In terms of driving safety, a following event is randomly selected from the NGSIM data set. FIG. 2 shows the observed velocities, spacings, and accelerations, along with the corresponding index values generated by the DDPG model. The driver in the NGSIM data drives at a very small inter-vehicle distance after 10 seconds, while the DDPG model always maintains a following gap of about 10 meters.
In terms of driving comfort, a following event is randomly selected from the NGSIM dataset. FIG. 3 shows the observed speed, spacing, acceleration and jerk values, along with the corresponding values generated by the DDPG model. The driver in the NGSIM data produces frequent acceleration changes and large jerk values, while the DDPG model maintains near-constant acceleration and produces low jerk values.
Based on the above, compared with the human drivers in the NGSIM data, the proposed following speed control algorithm exhibits safer, more efficient and more comfortable driving.

Claims (1)

1. A multi-objective optimized following algorithm for controlling the speed of an automatically driven vehicle, characterized by comprising the following steps:
step 1: acquiring data;
using data in the NGSIM project, extracting following events based on the criteria that the leading and following vehicles stay in the same lane and that the vehicle-following event lasts longer than 15 seconds, and taking one part of the extracted events as training data and the other part as test data;
step 2: constructing a reward function;
providing feature quantities reflecting the objectives of vehicle following control, specifically safety, comfort and efficiency;
step 2.1: adopting TTC to reflect safety;
TTC is the time to collision, representing the time remaining before the two vehicles collide, formulated as
$$\mathrm{TTC}_n(t) = -\frac{S_{n-1,n}(t)}{\Delta V_{n-1,n}(t)}$$
wherein $S_{n-1,n}(t)$ is the inter-vehicle spacing and $\Delta V_{n-1,n}(t)$ is the relative speed; a safety threshold of 7 seconds is determined from NGSIM empirical data, and the TTC feature is constructed as:
$$F_{\mathrm{TTC}} = \begin{cases} \log\left(\dfrac{\mathrm{TTC}}{7}\right), & 0 \le \mathrm{TTC} \le 7 \\ 0, & \text{otherwise} \end{cases}$$
if TTC is less than 7 seconds the TTC feature is negative, and as TTC approaches zero the feature approaches negative infinity, imposing the most severe penalty on near-collision situations;
step 2.2: measuring driving efficiency by headway;
headway is the time headway; analysis shows that a lognormal distribution fits the headway distribution of the acquired training data, with probability density function
$$f_{\mathrm{lognormal}}(x;\mu,\sigma) = \frac{1}{x\sigma\sqrt{2\pi}}\exp\left(-\frac{(\ln x-\mu)^2}{2\sigma^2}\right), \qquad x > 0$$

from the extracted data, the log-mean $\mu$ and log-standard-deviation $\sigma$ are estimated as 0.4226 and 0.4365, respectively; the headway feature is constructed as the probability density of the estimated lognormal distribution evaluated at the current headway: $F_{\mathrm{headway}} = f_{\mathrm{lognormal}}(\mathrm{headway} \mid \mu = 0.4226, \sigma = 0.4365)$; under this feature a headway of about 1.3 seconds yields a high value while headways that are too short or too long yield low values, so the feature rewards headway keeping consistent with high traffic throughput while penalizing unsafely short or inefficiently long headways;
step 2.3: measuring driving comfort by the acceleration change rate jerk; the feature is constructed as:
$$F_{\mathrm{jerk}} = -\frac{\mathrm{jerk}(t)^2}{3600}, \qquad \mathrm{jerk}(t) = \frac{\mathrm{d}a(t)}{\mathrm{d}t}$$
step 2.4: establishing a comprehensive reward function;
establishing $r = w_1 F_{\mathrm{TTC}} + w_2 F_{\mathrm{headway}} + w_3 F_{\mathrm{jerk}}$ according to the above steps, wherein $w_1$, $w_2$, $w_3$ are the feature weights, all set to 1;
step 3: training a model;
in each training episode, sequentially simulating the following events in the data, repeating the training many times, and selecting the model that obtains the maximum average reward on the test data as the final model;
step 4: evaluating the model;
and comparing and evaluating the following behavior obtained from the NGSIM data and from the DDPG model simulation using the TTC, headway and jerk indices.
CN201811600366.7A 2018-12-26 2018-12-26 Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle Active CN109709956B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811600366.7A CN109709956B (en) 2018-12-26 2018-12-26 Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811600366.7A CN109709956B (en) 2018-12-26 2018-12-26 Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle

Publications (2)

Publication Number Publication Date
CN109709956A CN109709956A (en) 2019-05-03
CN109709956B 2021-06-08

Family

ID=66258357

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811600366.7A Active CN109709956B (en) 2018-12-26 2018-12-26 Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle

Country Status (1)

Country Link
CN (1) CN109709956B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321605A (en) * 2019-06-19 2019-10-11 中汽研(天津)汽车工程研究院有限公司 A kind of human-computer interaction coordination control strategy based on Multiple Velocity Model PREDICTIVE CONTROL
CN110347043B (en) * 2019-07-15 2023-03-10 武汉天喻信息产业股份有限公司 Intelligent driving control method and device
CN110488802B (en) * 2019-08-21 2020-05-12 清华大学 Decision-making method for dynamic behaviors of automatic driving vehicle in internet environment
CN110716562A (en) * 2019-09-25 2020-01-21 南京航空航天大学 Decision-making method for multi-lane driving of unmanned vehicle based on reinforcement learning
CN110992676B (en) * 2019-10-15 2021-06-04 同济大学 Road traffic capacity and internet automatic driving vehicle equivalent coefficient estimation method
JP6970156B2 (en) * 2019-10-18 2021-11-24 トヨタ自動車株式会社 Data generation method used for vehicle control, vehicle control device, vehicle control system, in-vehicle device and vehicle learning device
JP6744598B1 (en) * 2019-10-18 2020-08-19 トヨタ自動車株式会社 Vehicle control system, vehicle control device, and vehicle learning device
CN112698578B (en) * 2019-10-22 2023-11-14 北京车和家信息技术有限公司 Training method of automatic driving model and related equipment
CN110843746B (en) * 2019-11-28 2022-06-14 的卢技术有限公司 Anti-lock brake control method and system based on reinforcement learning
DE102020201931A1 (en) * 2020-02-17 2021-08-19 Psa Automobiles Sa Method for training at least one algorithm for a control unit of a motor vehicle, method for optimizing a traffic flow in a region, computer program product and motor vehicle
CN112201069B (en) * 2020-09-25 2021-10-29 厦门大学 Deep reinforcement learning-based method for constructing longitudinal following behavior model of driver
CN112614344B (en) * 2020-12-14 2022-03-29 中汽研汽车试验场股份有限公司 Hybrid traffic system efficiency evaluation method for automatic driving automobile participation
CN113353102B (en) * 2021-07-08 2022-11-25 重庆大学 Unprotected left-turn driving control method based on deep reinforcement learning
CN113954865B (en) * 2021-09-22 2023-11-10 吉林大学 Following control method for automatic driving vehicle in ice and snow environment
CN113901718A (en) * 2021-10-11 2022-01-07 长安大学 Deep reinforcement learning-based driving collision avoidance optimization method in following state
CN113954874B (en) * 2021-11-03 2023-04-28 同济大学 Automatic driving control method based on improved intelligent driver model
CN114056332B (en) * 2022-01-14 2022-04-12 清华大学 Intelligent automobile following decision and control method based on cognitive risk balance
CN115123159A (en) * 2022-06-27 2022-09-30 重庆邮电大学 AEB control method and system based on DDPG deep reinforcement learning


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100185369A1 (en) * 2009-01-19 2010-07-22 Jung-Woong Choi Automatic transmission

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101606794A (en) * 2009-07-17 2009-12-23 梁秀芬 A kind of dynamic cinema seat equipment
CN102955884A (en) * 2012-11-23 2013-03-06 同济大学 Safety distance calibration method in full-speed areas during following operation of high-speed train
CN103101559A (en) * 2013-02-16 2013-05-15 同济大学 Full-speed field train interval real-time control method based on car-following behavior quality evaluation
CN103248545A (en) * 2013-05-28 2013-08-14 北京和利时电机技术有限公司 Ethernetcommunication method and system for special effect broadcast system of dynamic cinema
CN105654779A (en) * 2016-02-03 2016-06-08 北京工业大学 Expressway construction area traffic flow coordination control method based on vehicle-road and vehicle-vehicle communication
CN106926844A (en) * 2017-03-27 2017-07-07 西南交通大学 A kind of dynamic auto driving lane-change method for planning track based on real time environment information
CN108313054A (en) * 2018-01-05 2018-07-24 北京智行者科技有限公司 The autonomous lane-change decision-making technique of automatic Pilot and device and automatic driving vehicle
CN108387242A (en) * 2018-02-07 2018-08-10 西南交通大学 Automatic Pilot lane-change prepares and executes integrated method for planning track
CN108492398A (en) * 2018-02-08 2018-09-04 同济大学 The method for early warning that drive automatically behavior based on accelerometer actively acquires
CN108932840A (en) * 2018-07-17 2018-12-04 北京理工大学 Automatic driving vehicle urban intersection passing method based on intensified learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Drivers' rear end collision avoidance behaviors under different levels of situational urgency; Xuesong Wang, et al.; Transportation Research Part C; 2016-12-23; Vol. 71; pp. 419-433 *
Modeling car-following behavior on urban expressways in Shanghai: A naturalistic driving study; Meixin Zhu, et al.; Transportation Research Part C; 2018-08-31; Vol. 93; pp. 425-445 *
Comparative study of road traffic accident data collection technologies in China and the United States; Xuesong Wang, et al.; China Safety Science Journal; 2012-10-31; Vol. 22, No. 10; pp. 79-87 *
Influence of collision warning on car-following behavior based on naturalistic driving data; Xuesong Wang, Meixin Zhu, Yilun Xing; Journal of Tongji University (Natural Science); 2016-07-31; Vol. 44, No. 7; pp. 1045-1051 *
Dimensionality reduction and multivariate analysis of variance of drivers' forward collision avoidance behavior characteristics; Xuesong Wang, Meixin Zhu, Ming Chen; Journal of Tongji University (Natural Science); 2016-12-31; Vol. 44, No. 12; pp. 1858-1866 *

Also Published As

Publication number Publication date
CN109709956A (en) 2019-05-03

Similar Documents

Publication Publication Date Title
CN109709956B (en) Multi-objective optimized following algorithm for controlling speed of automatic driving vehicle
CN109733415B (en) Anthropomorphic automatic driving and following model based on deep reinforcement learning
CN106874597B (en) highway overtaking behavior decision method applied to automatic driving vehicle
CN108919795B (en) Automatic driving automobile lane change decision method and device
CN107169567A (en) The generation method and device of a kind of decision networks model for Vehicular automatic driving
CN107168303A (en) A kind of automatic Pilot method and device of automobile
CN106407563A (en) A car following model generating method based on driving types and preceding vehicle acceleration speed information
CN110686906B (en) Automatic driving test method and device for vehicle
CN112703459A (en) Iterative generation of confrontational scenarios
CN113044064B (en) Vehicle self-adaptive automatic driving decision method and system based on meta reinforcement learning
Bolduc et al. Multimodel approach to personalized autonomous adaptive cruise control
CN112784485B (en) Automatic driving key scene generation method based on reinforcement learning
CN113010967A (en) Intelligent automobile in-loop simulation test method based on mixed traffic flow model
CN111824169B (en) Method for reducing exhaust emissions of a drive train of a vehicle having an internal combustion engine
Wang et al. High-level decision making for automated highway driving via behavior cloning
CN114511999A (en) Pedestrian behavior prediction method and device
CN111159832A (en) Construction method and device of traffic information flow
CN117242438A (en) Method for testing a driver assistance system of a vehicle
Koenig et al. Bridging the gap between open loop tests and statistical validation for highly automated driving
Su et al. A traffic simulation model with interactive drivers and high-fidelity car dynamics
CN114179830A (en) Autonomous overtaking method and system for automatic driving vehicle
Wei et al. A learning-based autonomous driver: emulate human driver's intelligence in low-speed car following
CN115176297A (en) Method for training at least one algorithm for a control unit of a motor vehicle, computer program product and motor vehicle
CN116629114A (en) Multi-agent model training method, system, computer equipment and storage medium
Cacciabue et al. Unified Driver Model simulation and its application to the automotive, rail and maritime domains

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant