CN114179835B - Automatic driving vehicle decision training method based on reinforcement learning in real scene - Google Patents


Info

Publication number
CN114179835B
CN114179835B (application CN202111653767.0A; published as CN114179835A)
Authority
CN
China
Prior art keywords
vehicle
reinforcement learning
automatic driving
action
real scene
Prior art date
Legal status
Active
Application number
CN202111653767.0A
Other languages
Chinese (zh)
Other versions
CN114179835A (en)
Inventor
Sun Hui (孙辉)
Dai Yifan (戴一凡)
Current Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Priority date
Filing date
Publication date
Application filed by Tsinghua University, Suzhou Automotive Research Institute of Tsinghua University filed Critical Tsinghua University
Priority to CN202111653767.0A
Publication of CN114179835A
Application granted
Publication of CN114179835B
Legal status: Active
Anticipated expiration


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces

Abstract

The invention discloses a reinforcement-learning-based decision training method for an automatic driving vehicle in a real scene. The automatic driving vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device and an automatic driving controller, and the method comprises the following steps: while the vehicle drives in a real scene along the track points of a preset driving path, intermittently execute exploration behaviors and record the input information of a reinforcement learning model, the input information comprising the input state, the action space and the return after each single-step execution; then train the reinforcement learning decision algorithm on the recorded input information. Building on basic hardware such as the drive-by-wire chassis, four lidars, an RTK positioning unit and a computing controller, and on key techniques such as the preset driving track, small-range sampled action exploration, reliable safety protection and automatic reset, the invention breaks through the limitation that reinforcement learning algorithms depend on virtual environments, and realizes online automatic data acquisition, training and verification of the reinforcement learning decision algorithm on a real automatic driving vehicle.

Description

Automatic driving vehicle decision training method based on reinforcement learning in real scene
Technical Field
The embodiment of the invention relates to the technical field of automatic driving, in particular to an automatic driving vehicle decision training method based on reinforcement learning in a real scene.
Background
Autonomous vehicles, also known as intelligent automobiles, are an important application of outdoor wheeled mobile robots in the traffic field. They sense the vehicle's surroundings using on-board sensors such as cameras, lidar, ultrasonic sensors, microwave radar, GPS, odometers and magnetic compasses, and control the steering and speed of the vehicle according to the road, vehicle-position and obstacle information obtained from perception, enabling the vehicle to travel safely and reliably on the road.
The intelligent automobile fundamentally changes the traditional human-in-the-loop control mode by removing the uncontrollable human driver from the closed-loop system, reducing human influence factors; precise control by the machine "driving brain" greatly improves the efficiency and safety of the traffic system.
Traditional prediction methods based on hand-crafted features or vehicle dynamics models cannot cope with the high dynamics, uncertainty and strong nonlinearity of the real road traffic environment, which affects and limits the industrialization of intelligent driving technology.
Deep reinforcement learning theory explores and addresses the stochastic uncertainty in intelligent driving by analyzing and computing on big data, laying scientific theoretical support for the further industrialization of intelligent driving vehicles. However, reinforcement learning algorithms mostly rely on virtual simulation environments for data acquisition and training, which greatly limits their application in real scenes.
Disclosure of Invention
The invention provides an automatic driving vehicle decision training method based on reinforcement learning in a real scene, which breaks through the limitation that reinforcement learning algorithms depend on virtual environments and realizes online automatic data acquisition, training and verification of the automatic driving vehicle's reinforcement learning decision algorithm.
The invention provides an automatic driving vehicle decision training method based on reinforcement learning in a real scene. The automatic driving vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device and an automatic driving controller; after start-up, the drive-by-wire chassis drives along the track points of a preset driving path; the positioning device is used to acquire the position information of the vehicle; the lidar device is used to acquire environmental data during vehicle driving; and the automatic driving controller is used to control the vehicle driving process according to a preset algorithm. The method comprises the following steps:
when a vehicle runs in a real scene according to a track point of a preset running path, intermittently executing exploration behaviors and recording input information of a reinforcement learning model, wherein the input information comprises an input state S, an action space A and a return R after single step execution;
training a reinforcement learning decision algorithm according to the input information.
Optionally, the input state S includes: the track points S1 of the preset driving path and the abstract information S2 of the surrounding environment acquired by the lidar device.
Optionally, the action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2;
wherein the lateral action space A1 is assumed to follow a Gaussian distribution and serves as the basis for subsequent random action sampling;
the longitudinal action space A2 is assigned a reference value.
Optionally, the return R after single-step execution is the evaluation obtained after executing action A for one step in input state S;
factors relevant to the single-step return R include: the offset between the vehicle's driving path and the preset driving path, the offset between the vehicle's driving speed and the expected driving speed, and evaluations of the vehicle's collision risk and lane departure.
Optionally, when the number of consecutive exploration actions reaches a set threshold, the vehicle is controlled to reset and resume driving along the preset track points.
Optionally, the reinforcement learning algorithm is an offline reinforcement learning algorithm.
The invention has the beneficial effects that:
1. The invention provides a reinforcement-learning-based decision training algorithm for automatic driving vehicles in real scenes. Through key techniques such as the preset driving track, small-range sampled action exploration, reliable safety protection and automatic reset, it breaks through the limitation that reinforcement learning algorithms depend on virtual environments, and offers reference and guidance for applying reinforcement learning in real environments.
2. During the entire sample-collection period, driving is fully automatic, which greatly reduces manual workload, improves sampling efficiency, and supports synchronous operation and sampling by multiple automatic driving vehicles.
3. The invention applies a Gaussian-distribution constraint in the lateral action space and sets a reference value in the longitudinal action space; these designs promote rapid convergence of the agent model.
Drawings
FIG. 1 is a flow chart of an autonomous vehicle decision training method based on reinforcement learning in a real scene in the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
The embodiment of the invention provides an automatic driving vehicle decision training method based on reinforcement learning in a real scene. The automatic driving vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device and an automatic driving controller; after start-up, the drive-by-wire chassis drives along the track points of a preset driving path; the positioning device acquires the position information of the vehicle; the lidar device acquires environmental data during driving; and the automatic driving controller controls the driving process according to a preset algorithm.
Preferably, the lidar device may be four 360-degree lidars mounted on the front, rear, left and right of the drive-by-wire chassis body; the positioning device may be a roof-mounted RTK high-precision positioning unit.
Referring to fig. 1, the method includes:
s110, intermittently executing exploration behaviors and recording input information of a reinforcement learning model when a vehicle runs in a real scene according to track points of a preset running path, wherein the input information comprises an input state S, an action space A and a return R after single step execution;
s120, training the reinforcement learning decision algorithm according to the input information.
The reinforcement learning model in this embodiment uses an off-policy reinforcement learning algorithm, such as the DDPG (Deep Deterministic Policy Gradient), TD3 (Twin Delayed Deep Deterministic Policy Gradient) or SAC (Soft Actor-Critic) algorithm in deep reinforcement learning. Off-policy algorithms can make full use of historical data collected under earlier behavior policies.
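As a minimal illustration of why an off-policy learner suits this setting, the sketch below shows a replay buffer that stores transitions collected on the real vehicle and serves random minibatches for training. The class name, capacity and interface are illustrative assumptions, not part of the invention.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal replay buffer: off-policy algorithms (DDPG, TD3, SAC)
    can reuse transitions collected by any earlier behavior policy."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)

    def add(self, state, action, reward, next_state, done):
        # One (S, A, R, S', done) transition recorded during driving.
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniform random minibatch, independent of collection order.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

    def __len__(self):
        return len(self.buffer)
```

Because sampling is independent of the order in which transitions were collected, data recorded online on the vehicle can be replayed many times during training.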
Further, a high-precision map of the target driving area and the preset driving track points are pre-stored in the automatic driving controller, and under normal conditions the vehicle drives along this track. The whole sampling process is fully automatic driving, which improves sampling efficiency and enables parallel sampling with multiple vehicles. To explore more of the action space, a random exploration and automatic reset method is provided so that the environment is fully explored.
Specifically, the input state S includes two parts, S1 and S2. S1 consists of the track points of the preset driving path and is global information; S2 is an abstraction of the surrounding environment perceived by the lidar, including dynamic and static obstacles and the drivable area around the vehicle.
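A hypothetical encoding of the two-part state might look as follows; the flattening scheme and the helper name `build_state` are assumptions for illustration only, since the patent does not specify the exact representation.

```python
def build_state(track_points, obstacles, drivable_area):
    """Concatenate S1 (preset track points, global information) and
    S2 (lidar abstraction: obstacles plus drivable-area encoding)
    into a single flat state vector. Purely illustrative."""
    s1 = [coord for pt in track_points for coord in pt]      # S1: track points
    s2 = [coord for ob in obstacles for coord in ob]         # S2: obstacles
    s2 += list(drivable_area)                                # S2: drivable area
    return s1 + s2
```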
The action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2;
wherein the lateral action space A1 is assumed to follow a Gaussian distribution, which serves as the basis for subsequent random action sampling; the longitudinal action space A2 is assigned a reference value. These designs enable rapid convergence of the agent model.
The return R after single-step execution is the evaluation obtained after executing action A for one step in input state S. R relates to three factors: 1) the offset from the preset path (the smaller the offset, the larger R); 2) the offset from the expected driving speed (the smaller the offset, the larger R); 3) an evaluation of collision risk and lane departure.
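The three factors above could be combined as in the following sketch. The linear form and the weight values are assumptions, since the patent does not specify how the factors are aggregated; the signs simply ensure that smaller offsets and safer behavior yield a larger R.

```python
def single_step_return(path_offset, speed_offset, collision_risk, lane_departed,
                       w_path=1.0, w_speed=0.5, w_risk=5.0, w_lane=2.0):
    """Hypothetical shaping of the single-step return R:
    smaller path and speed offsets give a larger (less negative) R,
    while collision risk and lane departure are penalized.
    All weights are illustrative assumptions."""
    return -(w_path * abs(path_offset)          # factor 1: path offset
             + w_speed * abs(speed_offset)      # factor 2: speed offset
             + w_risk * collision_risk          # factor 3a: collision risk
             + (w_lane if lane_departed else 0.0))  # factor 3b: lane departure
```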
In this embodiment, a high-precision map of the specific scene and a closed driving track τ for normal operation are preset in the automatic driving controller; under normal obstacle-free conditions the vehicle always drives along this track according to the positioning information received from the RTK positioning unit. However, driving along the track alone cannot fully explore the environment, so the lateral action is sampled from a policy conforming to a Gaussian distribution to obtain A1_, which replaces the standard preset action A1, and random noise is added to the speed command to form A2_, which replaces A2, thereby achieving full exploration of the environment.
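The replacement of A1 by a Gaussian sample A1_ and of A2 by a noised A2_ might be sketched as below. The standard deviation, the clipping range (the "small range" of exploration) and the uniform noise bound are illustrative values, not figures from the patent.

```python
import random


def explore_lateral(a1_preset, sigma=0.05, max_dev=0.3):
    """Sample the exploratory lateral action A1_ from a Gaussian centered
    on the preset action A1, clipped to a small range for safety."""
    a1 = random.gauss(a1_preset, sigma)
    return min(max(a1, a1_preset - max_dev), a1_preset + max_dev)


def explore_longitudinal(a2_ref, noise=0.2):
    """Form A2_ by adding bounded random noise to the longitudinal
    reference value A2."""
    return a2_ref + random.uniform(-noise, noise)
```

Clipping the Gaussian sample keeps the exploratory action close to the preset one, which is what makes on-vehicle exploration safe in the first place.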
Because an automatic exploration process is in place, vehicle motion is objectively subject to strong random uncertainty, and deviation from the track can occur. To solve this, when the number of consecutive exploration steps reaches a set threshold, the vehicle is controlled to reset and resume driving along the preset track points. In this embodiment, the constraints of the preset driving track, the RTK high-precision positioning unit and the four lidars ensure the safety of the vehicle during random exploration in the fully automatic driving state.
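The reset-after-threshold rule can be sketched as a small counter; the threshold value and the class name are assumptions for illustration.

```python
class ExplorationScheduler:
    """Trigger a reset to the preset trajectory once the number of
    consecutive exploration steps reaches a set threshold (assumed value)."""

    def __init__(self, max_consecutive=50):
        self.max_consecutive = max_consecutive
        self.count = 0

    def step(self, explored):
        """Record one driving step; return True when the vehicle
        should reset to the preset track points."""
        if explored:
            self.count += 1
        else:
            self.count = 0          # a normal tracking step clears the counter
        if self.count >= self.max_consecutive:
            self.count = 0          # reset performed; start counting anew
            return True
        return False
```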
Examples
The embodiment of the invention provides an application case of an automatic driving vehicle decision training method based on reinforcement learning in a real scene, which comprises the following steps:
1. Prepare a debugged drive-by-wire chassis; mount four 360-degree lidars on the front, rear, left and right of the chassis body, together forming full coverage of a 360-degree field of view. Mount the RTK high-precision positioning unit on the roof and fix the automatic driving controller to the vehicle body.
2. Download the high-precision map of the fixed campus and the preset driving track points into the automatic driving controller.
3. Download the written driving-control and exploration algorithm into the automatic driving controller.
4. Download the written safe collision-avoidance and behavior-evaluation algorithm into the automatic driving controller.
5. When the drive-by-wire chassis and the automatic driving controller are started, the chassis drives along the preset track and intermittently performs exploration, during which the S, A, R data are recorded. If the vehicle yaws off course, it automatically resets to a preset track point and exploration restarts.
6. If exploration has not yet reached the automatic reset condition (reset after a set number of exploration steps), exploration continues.
7. Training proceeds until the training requirement is met; an off-policy (offline) reinforcement learning algorithm such as DDPG or SAC is used for training.
8. Download the trained model into the automatic driving controller and evaluate its effect on the real vehicle.
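Steps 5 to 7 can be condensed into a toy collection loop. The one-dimensional dynamics, the exploration probability, the noise scale and the reset threshold are stand-ins for the real vehicle and controller; only the overall structure (track following, intermittent exploration, (S, A, R) recording, automatic reset) mirrors the procedure above.

```python
import random


def collect_samples(preset_track, steps=200, explore_prob=0.3, reset_after=20):
    """Toy sketch of the sampling loop: follow preset track points,
    intermittently explore with Gaussian noise, record (S, A, R, S'),
    and auto-reset after too many consecutive exploration steps.
    The 1-D 'vehicle' and all numeric values are illustrative."""
    samples, pos, consecutive = [], preset_track[0], 0
    for t in range(steps):
        target = preset_track[t % len(preset_track)]
        explored = random.random() < explore_prob
        # Preset action tracks the target; exploration perturbs it.
        action = (target - pos) + (random.gauss(0.0, 0.1) if explored else 0.0)
        consecutive = consecutive + 1 if explored else 0
        next_pos = pos + action
        reward = -abs(next_pos - target)      # path-offset term of R only
        samples.append((pos, action, reward, next_pos))
        if consecutive >= reset_after:        # automatic reset to the track
            next_pos, consecutive = target, 0
        pos = next_pos
    return samples
```

The recorded samples would then be fed to an off-policy learner (step 7); training itself is omitted here.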
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (3)

1. An automatic driving vehicle decision training method based on reinforcement learning in a real scene, characterized in that the automatic driving vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device and an automatic driving controller; after start-up, the drive-by-wire chassis drives along the track points of a preset driving path; the positioning device is used to acquire the position information of the vehicle; the lidar device is used to acquire environmental data during vehicle driving; and the automatic driving controller is used to control the vehicle driving process according to a preset algorithm; the method comprising:
when a vehicle runs in a real scene according to a track point of a preset running path, intermittently executing exploration behaviors and recording input information of a reinforcement learning model, wherein the input information comprises an input state S, an action space A and a return R after single step execution;
training a reinforcement learning decision algorithm according to the input information;
the input state S includes: the track points S1 of the preset driving path and the abstract information S2 of the surrounding environment acquired by the lidar device;
the action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2;
wherein the lateral action space A1 is assumed to follow a Gaussian distribution and serves as the basis for subsequent random action sampling;
a reference value is set for the longitudinal action space A2;
the return R after single-step execution is the evaluation obtained after executing action A for one step in input state S;
factors relevant to the single-step return R include: the offset between the vehicle's driving path and the preset driving path, the offset between the vehicle's driving speed and the expected driving speed, and evaluations of the vehicle's collision risk and lane departure.
2. The method according to claim 1, wherein when the number of consecutive exploration actions reaches the set threshold, the vehicle is controlled to reset and resume driving along the preset track points.
3. The method of claim 1, wherein the reinforcement learning algorithm is an offline reinforcement learning algorithm.
CN202111653767.0A 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene Active CN114179835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111653767.0A CN114179835B (en) 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene


Publications (2)

Publication Number Publication Date
CN114179835A (en) 2022-03-15
CN114179835B (en) 2024-01-05

Family

ID=80606422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111653767.0A Active CN114179835B (en) 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene

Country Status (1)

Country Link
CN (1) CN114179835B (en)

Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106154834A (en) * 2016-07-20 2016-11-23 百度在线网络技术(北京)有限公司 For the method and apparatus controlling automatic driving vehicle
CN106774291A (en) * 2016-12-26 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of electric-control system of automatic Pilot electric automobile
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN109597317A (en) * 2018-12-26 2019-04-09 广州小鹏汽车科技有限公司 A kind of Vehicular automatic driving method, system and electronic equipment based on self study
CN109649390A (en) * 2018-12-19 2019-04-19 清华大学苏州汽车研究院(吴江) A kind of autonomous follow the bus system and method for autonomous driving vehicle
CN110322017A (en) * 2019-08-13 2019-10-11 吉林大学 Automatic Pilot intelligent vehicle Trajectory Tracking Control strategy based on deeply study
WO2020079066A1 (en) * 2018-10-16 2020-04-23 Five AI Limited Autonomous vehicle planning and prediction
CN112406904A (en) * 2020-08-27 2021-02-26 腾讯科技(深圳)有限公司 Method and device for training automatic driving strategy, automatic driving method, equipment, vehicle and computer readable storage medium
CN112417756A (en) * 2020-11-13 2021-02-26 清华大学苏州汽车研究院(吴江) Interactive simulation test system of automatic driving algorithm
WO2021103834A1 (en) * 2019-11-27 2021-06-03 初速度(苏州)科技有限公司 Method for generating lane changing decision model, lane changing decision method for driverless vehicle, and device
CN113044064A (en) * 2021-04-01 2021-06-29 南京大学 Vehicle self-adaptive automatic driving decision method and system based on meta reinforcement learning
CN113104050A (en) * 2021-04-07 2021-07-13 天津理工大学 Unmanned end-to-end decision method based on deep reinforcement learning
CN113264059A (en) * 2021-05-17 2021-08-17 北京工业大学 Unmanned vehicle motion decision control method supporting multiple driving behaviors and based on deep reinforcement learning
CN113420368A (en) * 2021-05-24 2021-09-21 江苏大学 Intelligent vehicle neural network dynamics model, reinforcement learning network model and automatic driving training method thereof
WO2021212728A1 (en) * 2020-04-24 2021-10-28 广州大学 Unmanned vehicle lane changing decision-making method and system based on adversarial imitation learning
CN113561986A (en) * 2021-08-18 2021-10-29 武汉理工大学 Decision-making method and device for automatically driving automobile
CN113635909A (en) * 2021-08-19 2021-11-12 崔建勋 Automatic driving control method based on confrontation generation simulation learning
CN113682312A (en) * 2021-09-23 2021-11-23 中汽创智科技有限公司 Autonomous lane changing method and system integrating deep reinforcement learning
WO2021238303A1 (en) * 2020-05-29 2021-12-02 华为技术有限公司 Motion planning method and apparatus

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109765820B (en) * 2019-01-14 2019-08-09 南栖仙策(南京)科技有限公司 A kind of training system for automatic Pilot control strategy
US11465650B2 (en) * 2019-09-20 2022-10-11 Honda Motor Co., Ltd. Model-free reinforcement learning


Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Simulation research on driverless driving based on reinforcement learning; 孙嘉浩, 陈劲杰; Agricultural Equipment & Vehicle Engineering (06); full text *
End-to-end driverless decision-making based on deep reinforcement learning; 黄志清, 曲志伟, 张吉, 张严心, 田锐; Acta Electronica Sinica (09); full text *
Application of an improved DDPG algorithm in autonomous driving; 张斌, 何明, 陈希亮, 吴春晓, 刘斌, 周波; Computer Engineering and Applications (10); full text *
Key technologies for semantic parsing of driving behavior in intelligent-vehicle decision-making; 李国法, 陈耀昱, 吕辰, 陶达, 曹东璞, 成波; Journal of Automotive Safety and Energy (04); full text *
Development status and trends of intelligent and connected vehicle (ICV) technology; 李克强, 戴一凡, 李升波, 边明远; Journal of Automotive Safety and Energy (01); full text *
A deep reinforcement learning method for driverless driving incorporating human-like driving behavior; 吕迪, 徐坤, 李慧云, 潘仲鸣; Journal of Integration Technology (05); full text *

Also Published As

Publication number Publication date
CN114179835A (en) 2022-03-15

Similar Documents

Publication Publication Date Title
CN109263639B (en) Driving path planning method based on state grid method
CN110647056B (en) Intelligent networking automobile environment simulation system based on whole automobile hardware-in-loop
US10882522B2 (en) Systems and methods for agent tracking
US10800427B2 (en) Systems and methods for a vehicle controller robust to time delays
Llorca et al. Autonomous pedestrian collision avoidance using a fuzzy steering controller
CN102495631B (en) Intelligent control method of driverless vehicle tracking desired trajectory
CN110673602B (en) Reinforced learning model, vehicle automatic driving decision method and vehicle-mounted equipment
US11167770B2 (en) Autonomous vehicle actuation dynamics and latency identification
CN110834644A (en) Vehicle control method and device, vehicle to be controlled and storage medium
US20190018412A1 (en) Control Method for Autonomous Vehicles
CN107092256B (en) Steering control method for unmanned vehicle
CN113665593B (en) Longitudinal control method and system for intelligent driving of vehicle and storage medium
CN110824912B (en) Method and apparatus for training a control strategy model for generating an autonomous driving strategy
CN112099378B (en) Front vehicle lateral motion state real-time estimation method considering random measurement time lag
CN116182884A (en) Intelligent vehicle local path planning method based on transverse and longitudinal decoupling of frenet coordinate system
US10891951B2 (en) Vehicle language processing
Kanchwala et al. Development of an intelligent transport system for EV
CN114638103A (en) Automatic driving joint simulation method and device, computer equipment and storage medium
CN114179835B (en) Automatic driving vehicle decision training method based on reinforcement learning in real scene
CN112193318A (en) Vehicle path control method, device, equipment and computer readable storage medium
Gelbal et al. Smartshuttle: Model based design and evaluation of automated on-demand shuttles for solving the first-mile and last-mile problem in a smart city
DE102020122086A1 (en) MEASURING CONFIDENCE IN DEEP NEURAL NETWORKS
CN109492835A (en) Determination method, model training method and the relevant apparatus of vehicle control information
DE102022121602A1 (en) OBJECT MOTION PATH PREDICTION
Cremean et al. Alice: An information-rich autonomous vehicle for high-speed desert navigation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant