CN114179835A - Decision training method for automatic driving vehicle based on reinforcement learning in real scene - Google Patents

Decision training method for automatic driving vehicle based on reinforcement learning in real scene Download PDF

Info

Publication number
CN114179835A
CN114179835A (application CN202111653767.0A)
Authority
CN
China
Prior art keywords
vehicle
reinforcement learning
automatic driving
preset
real scene
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202111653767.0A
Other languages
Chinese (zh)
Other versions
CN114179835B (en
Inventor
孙辉
戴一凡
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Original Assignee
Tsinghua University
Suzhou Automotive Research Institute of Tsinghua University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tsinghua University, Suzhou Automotive Research Institute of Tsinghua University filed Critical Tsinghua University
Priority to CN202111653767.0A priority Critical patent/CN114179835B/en
Publication of CN114179835A publication Critical patent/CN114179835A/en
Application granted granted Critical
Publication of CN114179835B publication Critical patent/CN114179835B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001Planning or execution of driving tasks
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Automation & Control Theory (AREA)
  • Human Computer Interaction (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Traffic Control Systems (AREA)

Abstract

The invention discloses a reinforcement-learning-based decision training method for an autonomous vehicle in a real scene. The autonomous vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device, and an autonomous driving controller, and the method comprises: when the vehicle travels along the track points of a preset driving path in a real scene, intermittently executing exploration behaviors and recording the input information of a reinforcement learning model, the input information comprising an input state, an action space, and a reward after single-step execution; and training a reinforcement learning decision algorithm from the input information. Using basic hardware such as a drive-by-wire chassis, four lidars, an RTK positioning unit, and a computer controller, together with key techniques such as a preset driving trajectory, small-range sampled action exploration, reliable safety protection, and automatic reset, the invention breaks through the reinforcement learning algorithm's dependence on a virtual environment and realizes online automatic collection, training, and verification of the autonomous vehicle's reinforcement learning decision algorithm.

Description

Decision training method for automatic driving vehicle based on reinforcement learning in real scene
Technical Field
The embodiment of the invention relates to the technical field of automatic driving, in particular to a decision training method for an automatic driving vehicle based on reinforcement learning in a real scene.
Background
An autonomous vehicle, also called an intelligent automobile, is an important application of outdoor wheeled mobile robots in the traffic field. It senses the surrounding environment with vehicle-mounted sensors such as cameras, lidar, ultrasonic sensors, microwave radar, GPS, odometers, and magnetic compasses, and controls the vehicle's steering and speed according to the perceived road, vehicle-position, and obstacle information, so that the vehicle can travel safely and reliably on the road.
The intelligent automobile fundamentally changes the traditional driver-vehicle-road closed-loop control mode by removing the uncontrollable human driver from the closed loop, so that human influence factors are reduced and precise control is achieved by a machine driving "brain", thereby greatly improving the efficiency and safety of the traffic system.
Traditional prediction methods based on hand-crafted features or vehicle dynamics models cannot handle the high dynamics, uncertainty, and strong nonlinearity of the actual road traffic environment; these problems affect and restrict the industrialization of intelligent driving technology.
Deep reinforcement learning theory addresses the random uncertainty of intelligent driving by analyzing big data, laying a scientific theoretical foundation for the further industrialization of intelligent driving vehicles. However, most reinforcement learning algorithms depend on virtual simulation environments for data collection and training, which greatly limits their application in real scenes.
Disclosure of Invention
The invention provides a reinforcement-learning-based decision training method for autonomous vehicles in real scenes, which breaks through the reinforcement learning algorithm's dependence on a virtual environment and realizes online automatic collection, training, and verification of the autonomous vehicle's reinforcement learning decision algorithm.
The invention provides a reinforcement-learning-based decision training method for an autonomous vehicle in a real scene. The autonomous vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device, and an autonomous driving controller; after start-up, the drive-by-wire chassis drives along the track points of a preset driving path; the positioning device is used for acquiring position information of the vehicle; the lidar device is used for acquiring environment data while the vehicle is driving; and the autonomous driving controller is used for controlling the driving process according to a preset algorithm. The method comprises the following steps:
when the vehicle travels along the track points of the preset driving path in a real scene, intermittently executing exploration behaviors and recording the input information of a reinforcement learning model, the input information comprising an input state S, an action space A, and a reward R after single-step execution;
and training a reinforcement learning decision algorithm from the input information.
Optionally, the input state S includes: track points S1 of the preset driving path and abstract information S2 of the surrounding environment acquired by the lidar device.
Optionally, the action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2;
wherein the lateral action space A1 is assumed to follow a Gaussian distribution, which serves as the basis for subsequent random action sampling;
and a reference value is set for the longitudinal action space A2.
Optionally, the reward R after single-step execution is the evaluation obtained after action A is executed for a single step in the input state S;
factors affecting the reward R include: the offset between the vehicle's driving path and the preset driving path, the offset between the vehicle's driving speed and the expected driving speed, and evaluations of collision risk and lane departure.
Optionally, when the number of consecutively executed exploration behaviors reaches a set threshold, the vehicle is controlled to reset and resume driving along the preset track points.
Optionally, the reinforcement learning algorithm is an off-policy reinforcement learning algorithm.
The invention has the beneficial effects that:
1. The invention provides a reinforcement-learning-based decision training algorithm for autonomous vehicles in real scenes; through key techniques such as a preset driving trajectory, small-range sampled action exploration, reliable safety protection, and automatic reset, it breaks through the reinforcement learning algorithm's dependence on a virtual environment and provides reference guidance for popularizing reinforcement learning in real environments.
2. The entire sample-collection period is fully autonomous driving, which greatly reduces manual workload, improves sampling efficiency, and supports synchronous operation and sampling of multiple autonomous vehicles.
3. The invention applies a Gaussian-distribution constraint to the lateral action space and sets a reference value for the longitudinal action space; these designs favor rapid convergence of the agent model.
Drawings
FIG. 1 is a flow chart of an automated driving vehicle decision training method based on reinforcement learning in real scenes according to the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
The embodiment of the invention provides a reinforcement-learning-based decision training method for an autonomous vehicle in a real scene. The autonomous vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device, and an autonomous driving controller; after start-up, the drive-by-wire chassis drives along the track points of a preset driving path; the positioning device is used for acquiring position information of the vehicle; the lidar device is used for acquiring environment data while the vehicle is driving; and the autonomous driving controller is used for controlling the driving process according to a preset algorithm.
Preferably, the lidar device may comprise four 360-degree lidars mounted on the front, rear, left, and right of the drive-by-wire chassis body; the positioning device may be a roof-mounted RTK high-precision positioning unit.
Referring to fig. 1, the method includes:
S110, when the vehicle travels along the track points of the preset driving path in a real scene, intermittently execute exploration behaviors and record the input information of the reinforcement learning model, the input information comprising an input state S, an action space A, and a reward R after single-step execution;
S120, train the reinforcement learning decision algorithm from the input information.
The reinforcement learning model in this embodiment uses an off-policy reinforcement learning algorithm, such as the DDPG (Deep Deterministic Policy Gradient), TD3 (Twin Delayed Deep Deterministic Policy Gradient), or SAC (Soft Actor-Critic) algorithm from deep reinforcement learning. Off-policy algorithms can make full use of historical data.
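The off-policy property mentioned above can be illustrated with a minimal replay buffer that stores (S, A, R, S') transitions collected on the real vehicle, so that a learner such as DDPG, TD3, or SAC can reuse historical data. This is a sketch under stated assumptions; the class and parameter names are illustrative and not from the patent.

```python
import random
from collections import deque

class ReplayBuffer:
    """Stores transitions collected on the real vehicle so that an
    off-policy learner (e.g. DDPG, TD3, SAC) can reuse historical data."""

    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # old samples evicted when full

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Off-policy training: the sampled batch may contain transitions
        # generated by earlier exploration policies, not only the current one.
        return random.sample(self.buffer, min(batch_size, len(self.buffer)))

buf = ReplayBuffer()
for step in range(256):
    buf.add(state=[step], action=[0.0], reward=1.0,
            next_state=[step + 1], done=False)
batch = buf.sample(64)
```

Any of the named algorithms would then fit its actor and critic networks on such batches; the buffer itself is algorithm-agnostic.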
Further, a high-precision map of the target driving area and the preset driving track points are pre-stored in the autonomous driving controller, and the vehicle drives along this trajectory in the target environment. In addition, the whole sampling process is fully autonomous, which improves sampling efficiency and allows multi-vehicle parallel sampling. To explore more of the action space, a method of random exploration with automatic reset is provided to achieve full exploration of the environment.
Specifically, the input state S includes two parts, S1 and S2. S1 consists of the track points of the preset driving path and is global information; S2 is an abstraction of the surroundings perceived by the lidar, including dynamic and static obstacles and the drivable area around the vehicle.
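The two-part state described above can be sketched as follows. The field names, the choice of ten look-ahead waypoints, and the obstacle representation are illustrative assumptions, not details from the patent.

```python
def build_input_state(track_points, lidar_obstacles, drivable_area):
    """Assemble the input state S = (S1, S2).
    S1: upcoming track points of the preset driving path (global information).
    S2: abstraction of the lidar-perceived surroundings (obstacles,
    drivable area). All field names here are illustrative."""
    s1 = track_points[:10]                # next 10 waypoints as (x, y) pairs
    s2 = {
        "obstacles": lidar_obstacles,     # dynamic and static obstacles
        "drivable": drivable_area,        # drivable area around the vehicle
    }
    return {"S1": s1, "S2": s2}

state = build_input_state(
    track_points=[(float(i), 0.0) for i in range(50)],
    lidar_obstacles=[{"pos": (12.0, 1.5), "dynamic": True}],
    drivable_area=[(0.0, -2.0), (0.0, 2.0)],
)
```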
The action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2.
The lateral action space A1 is assumed to follow a Gaussian distribution, which serves as the basis for subsequent random action sampling; a reference value is set for the longitudinal action space A2. These designs promote rapid convergence of the agent model.
The reward R after single-step execution is the evaluation obtained after action A is executed for a single step in the input state S. R depends on three factors: 1) the offset from the preset path, where a smaller offset yields a larger R; 2) the offset from the expected driving speed, where a smaller offset yields a larger R; and 3) an evaluation of collision risk and lane departure.
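A minimal reward function matching the three factors above could look like this. The patent only states the monotonic relationships (smaller offsets give a larger R); the exponential shaping and the penalty magnitude are assumptions for illustration.

```python
import math

def single_step_reward(path_offset, speed_offset, collision_risk, lane_departure):
    """Illustrative single-step reward R combining the three factors:
    path offset, speed offset, and a safety evaluation. The functional
    forms and weights are assumptions, not taken from the patent."""
    r_path = math.exp(-abs(path_offset))    # smaller path offset -> larger R
    r_speed = math.exp(-abs(speed_offset))  # smaller speed offset -> larger R
    r_safety = -10.0 if (collision_risk or lane_departure) else 0.0
    return r_path + r_speed + r_safety

# A well-tracked step scores higher than a drifting, lane-departing one.
good = single_step_reward(0.1, 0.2, False, False)
bad = single_step_reward(1.5, 1.0, False, True)
```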
In this embodiment, a high-precision map of the specific scene and a closed driving trajectory τ of the vehicle under normal conditions are preset in the autonomous driving controller, and under normal obstacle-free conditions the vehicle is required to track this trajectory according to the positioning information received by the RTK positioning unit. However, since always driving along the trajectory cannot fully explore the environment, the standard preset lateral action a1 is replaced by an action a1' sampled from a Gaussian distribution, and the preset longitudinal action a2 is replaced by an action a2' formed by adding random noise to the speed command, so as to fully explore the environment.
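The replacement of the preset actions by exploratory ones can be sketched as below. The standard-deviation and noise magnitudes are illustrative assumptions; the patent specifies only that the lateral action is Gaussian-sampled and the longitudinal (speed) action gets added random noise.

```python
import random

def explore_actions(a1_preset, a2_preset,
                    sigma_lateral=0.1, noise_longitudinal=0.05):
    """Form exploratory actions from the preset ones:
    a1' is sampled from a Gaussian centred on the preset lateral action a1;
    a2' adds bounded random noise to the preset longitudinal action a2.
    sigma_lateral and noise_longitudinal are assumed magnitudes."""
    a1_explore = random.gauss(a1_preset, sigma_lateral)
    a2_explore = a2_preset + random.uniform(-noise_longitudinal,
                                            noise_longitudinal)
    return a1_explore, a2_explore

random.seed(0)
a1p, a2p = explore_actions(a1_preset=0.0, a2_preset=2.0)
```

In a real deployment these exploratory actions would be clipped to the chassis's actuation limits before being sent to the drive-by-wire interface.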
Because of this automatic exploration, the vehicle's driving is objectively subject to strong random uncertainty and may drift off the path. To address this, when the number of consecutively executed exploration behaviors reaches a set threshold, the vehicle is controlled to reset and resume driving along the preset track points. In this embodiment, the constraints of the preset driving trajectory, the RTK high-precision positioning unit, and the data from the four lidars guarantee the vehicle's safety during random exploration in the fully autonomous driving state.
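The reset trigger described above amounts to a counter over consecutive exploration steps. A minimal sketch follows; the threshold value and class name are illustrative assumptions.

```python
class ExplorationSupervisor:
    """Counts consecutive exploration steps and signals a reset to the
    preset trajectory once a set threshold is reached. The threshold
    value used here is an illustrative assumption."""

    def __init__(self, max_consecutive=20):
        self.max_consecutive = max_consecutive
        self.count = 0

    def step(self, explored: bool) -> bool:
        """Return True when the vehicle should reset to the preset track."""
        self.count = self.count + 1 if explored else 0  # non-explore resets run
        if self.count >= self.max_consecutive:
            self.count = 0
            return True
        return False

sup = ExplorationSupervisor(max_consecutive=3)
results = [sup.step(True) for _ in range(3)]  # third consecutive step triggers
```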
Examples
The embodiment of the invention provides an application case of the reinforcement-learning-based decision training method for autonomous vehicles in a real scene, comprising the following steps:
1. Prepare a debugged drive-by-wire chassis and mount four 360-degree lidars on the front, rear, left, and right of the chassis body, jointly forming full coverage of a 360-degree field of view. Mount the RTK high-precision positioning unit on the vehicle roof and fix the autonomous driving controller to the vehicle body.
2. Download the high-precision map of the fixed park and the preset driving track points into the autonomous driving controller.
3. Download the programmed driving-control and exploration algorithms into the autonomous driving controller.
4. Download the compiled safe collision-avoidance and behavior-evaluation algorithms into the autonomous driving controller.
5. When the drive-by-wire chassis and the autonomous driving controller are started, the chassis drives along the preset trajectory and intermittently executes exploration behaviors; at this point, recording of the S, A, R data begins. If the vehicle drifts off, it automatically resets to the preset track points and behavior exploration restarts.
6. If the exploration operations have not reached the automatic-reset condition (reset after a certain number of explorations), exploration continues.
7. Collect data until the training requirement is met; during training, adopt off-policy reinforcement learning algorithms such as DDPG and SAC.
8. Download the trained model into the autonomous driving controller and evaluate its effect on the real vehicle.
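Steps 5-7 above form a collection loop that can be sketched as follows, with all sensor and actuator calls stubbed out. The exploration probability, reset threshold, and placeholder S, A, R values are illustrative assumptions.

```python
import random

def collect_samples(num_steps, explore_prob=0.3, reset_threshold=20):
    """Sketch of the collection procedure: drive along the preset track,
    intermittently explore, record (S, A, R) tuples, and auto-reset after
    too many consecutive exploration steps. Parameters are assumptions."""
    dataset, consecutive, resets = [], 0, 0
    for t in range(num_steps):
        explored = random.random() < explore_prob   # intermittent exploration
        consecutive = consecutive + 1 if explored else 0
        s, a, r = [t], [0.0], 1.0                   # stand-ins for real S, A, R
        dataset.append((s, a, r))
        if consecutive >= reset_threshold:          # drift guard: reset to track
            consecutive = 0
            resets += 1
    return dataset, resets

random.seed(1)
data, n_resets = collect_samples(1000, explore_prob=0.9, reset_threshold=5)
```

Once enough tuples are gathered, they would be handed to an off-policy learner such as DDPG or SAC, as step 7 describes.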
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (6)

1. A reinforcement-learning-based decision training method for an autonomous vehicle in a real scene, characterized in that the autonomous vehicle is provided with a drive-by-wire chassis, a positioning device, a lidar device, and an autonomous driving controller; the drive-by-wire chassis, after start-up, drives along track points of a preset driving path; the positioning device is used for acquiring position information of the vehicle; the lidar device is used for acquiring environment data while the vehicle is driving; and the autonomous driving controller is used for controlling the driving process according to a preset algorithm; the method comprising:
when the vehicle travels along the track points of the preset driving path in a real scene, intermittently executing exploration behaviors and recording the input information of a reinforcement learning model, the input information comprising an input state S, an action space A, and a reward R after single-step execution;
and training a reinforcement learning decision algorithm from the input information.
2. The method of claim 1, wherein the input state S comprises: track points S1 of the preset driving path and abstract information S2 of the surrounding environment acquired by the lidar device.
3. The method of claim 1, wherein the action space A comprises two decomposed actions: a lateral action space A1 and a longitudinal action space A2;
wherein the lateral action space A1 is assumed to follow a Gaussian distribution, which serves as the basis for subsequent random action sampling;
and a reference value is set for the longitudinal action space A2.
4. The method of claim 1, wherein the reward R after single-step execution is the evaluation obtained after action A is executed for a single step in the input state S;
factors affecting the reward R include: the offset between the vehicle's driving path and the preset driving path, the offset between the vehicle's driving speed and the expected driving speed, and evaluations of collision risk and lane departure.
5. The method of claim 1, wherein when the number of consecutively executed exploration behaviors reaches a set threshold, the vehicle is controlled to reset and resume driving along the preset track points.
6. The method of claim 1, wherein the reinforcement learning algorithm is an off-policy reinforcement learning algorithm.
CN202111653767.0A 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene Active CN114179835B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111653767.0A CN114179835B (en) 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111653767.0A CN114179835B (en) 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene

Publications (2)

Publication Number Publication Date
CN114179835A true CN114179835A (en) 2022-03-15
CN114179835B CN114179835B (en) 2024-01-05

Family

ID=80606422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111653767.0A Active CN114179835B (en) 2021-12-30 2021-12-30 Automatic driving vehicle decision training method based on reinforcement learning in real scene

Country Status (1)

Country Link
CN (1) CN114179835B (en)

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106154834A (en) * 2016-07-20 2016-11-23 百度在线网络技术(北京)有限公司 For the method and apparatus controlling automatic driving vehicle
CN106774291A (en) * 2016-12-26 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of electric-control system of automatic Pilot electric automobile
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
CN109597317A (en) * 2018-12-26 2019-04-09 广州小鹏汽车科技有限公司 A kind of Vehicular automatic driving method, system and electronic equipment based on self study
CN109649390A (en) * 2018-12-19 2019-04-19 清华大学苏州汽车研究院(吴江) A kind of autonomous follow the bus system and method for autonomous driving vehicle
CN110322017A (en) * 2019-08-13 2019-10-11 吉林大学 Automatic Pilot intelligent vehicle Trajectory Tracking Control strategy based on deeply study
WO2020079066A1 (en) * 2018-10-16 2020-04-23 Five AI Limited Autonomous vehicle planning and prediction
US20200372822A1 (en) * 2019-01-14 2020-11-26 Polixir Technologies Limited Training system for autonomous driving control policy
CN112406904A (en) * 2020-08-27 2021-02-26 腾讯科技(深圳)有限公司 Method and device for training automatic driving strategy, automatic driving method, equipment, vehicle and computer readable storage medium
CN112417756A (en) * 2020-11-13 2021-02-26 清华大学苏州汽车研究院(吴江) Interactive simulation test system of automatic driving algorithm
US20210086798A1 (en) * 2019-09-20 2021-03-25 Honda Motor Co., Ltd. Model-free reinforcement learning
WO2021103834A1 (en) * 2019-11-27 2021-06-03 初速度(苏州)科技有限公司 Method for generating lane changing decision model, lane changing decision method for driverless vehicle, and device
CN113044064A (en) * 2021-04-01 2021-06-29 南京大学 Vehicle self-adaptive automatic driving decision method and system based on meta reinforcement learning
CN113104050A (en) * 2021-04-07 2021-07-13 天津理工大学 Unmanned end-to-end decision method based on deep reinforcement learning
CN113264059A (en) * 2021-05-17 2021-08-17 北京工业大学 Unmanned vehicle motion decision control method supporting multiple driving behaviors and based on deep reinforcement learning
CN113420368A (en) * 2021-05-24 2021-09-21 江苏大学 Intelligent vehicle neural network dynamics model, reinforcement learning network model and automatic driving training method thereof
WO2021212728A1 (en) * 2020-04-24 2021-10-28 广州大学 Unmanned vehicle lane changing decision-making method and system based on adversarial imitation learning
CN113561986A (en) * 2021-08-18 2021-10-29 武汉理工大学 Decision-making method and device for automatically driving automobile
CN113635909A (en) * 2021-08-19 2021-11-12 崔建勋 Automatic driving control method based on confrontation generation simulation learning
CN113682312A (en) * 2021-09-23 2021-11-23 中汽创智科技有限公司 Autonomous lane changing method and system integrating deep reinforcement learning
WO2021238303A1 (en) * 2020-05-29 2021-12-02 华为技术有限公司 Motion planning method and apparatus

Patent Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106154834A (en) * 2016-07-20 2016-11-23 百度在线网络技术(北京)有限公司 For the method and apparatus controlling automatic driving vehicle
CN106774291A (en) * 2016-12-26 2017-05-31 清华大学苏州汽车研究院(吴江) A kind of electric-control system of automatic Pilot electric automobile
CN108196535A (en) * 2017-12-12 2018-06-22 清华大学苏州汽车研究院(吴江) Automated driving system based on enhancing study and Multi-sensor Fusion
WO2020079066A1 (en) * 2018-10-16 2020-04-23 Five AI Limited Autonomous vehicle planning and prediction
CN109649390A (en) * 2018-12-19 2019-04-19 清华大学苏州汽车研究院(吴江) A kind of autonomous follow the bus system and method for autonomous driving vehicle
CN109597317A (en) * 2018-12-26 2019-04-09 广州小鹏汽车科技有限公司 A kind of Vehicular automatic driving method, system and electronic equipment based on self study
US20200372822A1 (en) * 2019-01-14 2020-11-26 Polixir Technologies Limited Training system for autonomous driving control policy
CN110322017A (en) * 2019-08-13 2019-10-11 吉林大学 Automatic Pilot intelligent vehicle Trajectory Tracking Control strategy based on deeply study
US20210086798A1 (en) * 2019-09-20 2021-03-25 Honda Motor Co., Ltd. Model-free reinforcement learning
WO2021103834A1 (en) * 2019-11-27 2021-06-03 初速度(苏州)科技有限公司 Method for generating lane changing decision model, lane changing decision method for driverless vehicle, and device
WO2021212728A1 (en) * 2020-04-24 2021-10-28 广州大学 Unmanned vehicle lane changing decision-making method and system based on adversarial imitation learning
WO2021238303A1 (en) * 2020-05-29 2021-12-02 华为技术有限公司 Motion planning method and apparatus
CN112406904A (en) * 2020-08-27 2021-02-26 腾讯科技(深圳)有限公司 Method and device for training automatic driving strategy, automatic driving method, equipment, vehicle and computer readable storage medium
CN112417756A (en) * 2020-11-13 2021-02-26 清华大学苏州汽车研究院(吴江) Interactive simulation test system of automatic driving algorithm
CN113044064A (en) * 2021-04-01 2021-06-29 南京大学 Vehicle self-adaptive automatic driving decision method and system based on meta reinforcement learning
CN113104050A (en) * 2021-04-07 2021-07-13 天津理工大学 Unmanned end-to-end decision method based on deep reinforcement learning
CN113264059A (en) * 2021-05-17 2021-08-17 北京工业大学 Unmanned vehicle motion decision control method supporting multiple driving behaviors and based on deep reinforcement learning
CN113420368A (en) * 2021-05-24 2021-09-21 江苏大学 Intelligent vehicle neural network dynamics model, reinforcement learning network model and automatic driving training method thereof
CN113561986A (en) * 2021-08-18 2021-10-29 武汉理工大学 Decision-making method and device for automatically driving automobile
CN113635909A (en) * 2021-08-19 2021-11-12 崔建勋 Automatic driving control method based on confrontation generation simulation learning
CN113682312A (en) * 2021-09-23 2021-11-23 中汽创智科技有限公司 Autonomous lane changing method and system integrating deep reinforcement learning

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
吕迪; 徐坤; 李慧云; 潘仲鸣: "A deep reinforcement learning method for driverless driving incorporating human-like driving behavior", Journal of Integration Technology, no. 05 *
孙嘉浩; 陈劲杰: "Simulation research on driverless driving based on reinforcement learning", Agricultural Equipment & Vehicle Engineering, no. 06 *
张斌; 何明; 陈希亮; 吴春晓; 刘斌; 周波: "Application of an improved DDPG algorithm in autonomous driving", Computer Engineering and Applications, no. 10 *
李克强; 戴一凡; 李升波; 边明远: "Development status and trends of intelligent and connected vehicle (ICV) technology", Journal of Automotive Safety and Energy, no. 01 *
李国法; 陈耀昱; 吕辰; 陶达; 曹东璞; 成波: "Key technologies for semantic parsing of driving behavior in intelligent vehicle decision-making", Journal of Automotive Safety and Energy, no. 04 *
黄志清; 曲志伟; 张吉; 张严心; 田锐: "End-to-end driverless decision-making based on deep reinforcement learning", Acta Electronica Sinica, no. 09 *

Also Published As

Publication number Publication date
CN114179835B (en) 2024-01-05

Similar Documents

Publication Publication Date Title
US10882522B2 (en) Systems and methods for agent tracking
CN109263639B (en) Driving path planning method based on state grid method
CN110834644B (en) Vehicle control method and device, vehicle to be controlled and storage medium
Llorca et al. Autonomous pedestrian collision avoidance using a fuzzy steering controller
CN111309600B (en) Virtual scene injection automatic driving test method and electronic equipment
US10800427B2 (en) Systems and methods for a vehicle controller robust to time delays
CN102495631B (en) Intelligent control method of driverless vehicle tracking desired trajectory
CN108860139B (en) A kind of automatic parking method for planning track based on depth enhancing study
US11167770B2 (en) Autonomous vehicle actuation dynamics and latency identification
JP2021049969A (en) Systems and methods for calibrating steering wheel neutral position
US20190018412A1 (en) Control Method for Autonomous Vehicles
CN108995538A (en) A kind of Unmanned Systems of electric car
CN112477849B (en) Parking control method and device for automatic driving truck and automatic driving truck
CN112829747A (en) Driving behavior decision method and device and storage medium
CN110569602A (en) Data acquisition method and system for unmanned vehicle
CN112193318A (en) Vehicle path control method, device, equipment and computer readable storage medium
CN114638103A (en) Automatic driving joint simulation method and device, computer equipment and storage medium
CN114179835B (en) Automatic driving vehicle decision training method based on reinforcement learning in real scene
DE102022121602A1 (en) OBJECT MOTION PATH PREDICTION
Spencer et al. Trajectory based autonomous vehicle following using a robotic driver
CN112477861B (en) Driving control method and device for automatic driving truck and automatic driving truck
Cremean et al. Alice: An information-rich autonomous vehicle for high-speed desert navigation
Team Stanford racing team’s entry in the 2005 DARPA grand challenge
CN110375751A (en) A kind of automatic Pilot real-time navigation system framework
CN117826825B (en) Unmanned mining card local path planning method and system based on artificial potential field algorithm

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant