WO2022105579A1 - Control method and device based on automatic driving, vehicle, and related equipment - Google Patents

Control method and device based on automatic driving, vehicle, and related equipment

Info

Publication number
WO2022105579A1
Authority
WO
WIPO (PCT)
Prior art keywords
lane
lane change
vehicle
target vehicle
change
Prior art date
Application number
PCT/CN2021/127867
Other languages
English (en)
French (fr)
Inventor
钱祥隽
Original Assignee
腾讯科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 腾讯科技(深圳)有限公司 filed Critical 腾讯科技(深圳)有限公司
Priority to EP21893727.4A priority Critical patent/EP4209853A4/en
Publication of WO2022105579A1 publication Critical patent/WO2022105579A1/zh
Priority to US17/972,426 priority patent/US20230037367A1/en

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B60 VEHICLES IN GENERAL
    • B60W CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08 Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09 Taking automatic action to avoid collision, e.g. braking and steering
    • B60W30/095 Predicting travel path or likelihood of collision
    • B60W30/0956 Predicting travel path or likelihood of collision, the prediction being responsive to traffic or environmental parameters
    • B60W30/10 Path keeping
    • B60W30/12 Lane keeping
    • B60W30/14 Adaptive cruise control
    • B60W30/143 Speed control
    • B60W30/16 Control of distance between vehicles, e.g. keeping a distance to preceding vehicle
    • B60W30/18 Propelling the vehicle
    • B60W30/18009 Propelling the vehicle related to particular drive situations
    • B60W30/18163 Lane change; Overtaking manoeuvres
    • B60W40/00 Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 Estimation or calculation of non-directly measurable driving parameters related to ambient conditions
    • B60W40/04 Traffic conditions
    • B60W40/10 Estimation or calculation of non-directly measurable driving parameters related to vehicle motion
    • B60W40/105 Speed
    • B60W60/00 Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 Planning or execution of driving tasks
    • B60W60/0011 Planning or execution of driving tasks involving control alternatives for a single driving scenario, e.g. planning several paths to avoid obstacles
    • B60W60/0015 Planning or execution of driving tasks specially adapted for safety
    • B60W2554/00 Input parameters relating to objects
    • B60W2554/40 Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 Characteristics
    • B60W2554/4041 Position
    • B60W2554/4049 Relationship among other objects, e.g. converging dynamic objects
    • B60W2554/80 Spatial relation or speed relative to objects
    • B60W2554/802 Longitudinal distance
    • B60W2556/00 Input parameters relating to data
    • B60W2556/40 High definition maps
    • B60W2556/45 External transmission of data to or from the vehicle
    • B60W2556/50 External transmission of data to or from the vehicle for navigation systems
    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas
    • G05D1/0221 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G05D1/0223 Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0287 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, e.g. fleet or convoy travelling
    • G05D1/0289 Control of position or course in two dimensions specially adapted to land vehicles involving a plurality of land vehicles, with means for avoiding collisions between vehicles

Definitions

  • the present application relates to the field of artificial intelligence, and in particular, to a control method, device, vehicle and related equipment based on automatic driving.
  • Automatic lane changing requires an autonomous vehicle to autonomously select a driving lane on the road, make appropriate lane-change decisions, and perform the lane-change operation. Doing so helps the vehicle complete driving tasks, avoid traffic congestion, improve traffic efficiency, avoid traffic accidents, and ensure road safety. Autonomous lane changing has therefore become a major challenge for current autonomous driving technology.
  • the embodiments of the present application provide a control method, device, vehicle, and related equipment based on automatic driving, which can make the autonomous lane change of the automatic driving vehicle more flexible, and improve the safety and traffic efficiency of lane change.
  • One aspect of the embodiments of the present application provides a control method based on automatic driving, including:
  • acquiring scene information of a target vehicle, and determining the current lane change scene type of the target vehicle according to the scene information;
  • if the current lane change scene type is the forced lane change scene type, identifying, according to the scene information, a first lane used to complete the navigation driving route, and, when it is detected that the first lane meets the lane change safety check conditions, controlling the target vehicle according to the first lane to execute lane change processing;
  • if the current lane change scene type is the free lane change scene type, identifying, according to the scene information, a second lane used to optimize the driving time, and, when it is detected that the second lane meets the lane change safety check conditions, controlling the target vehicle according to the second lane to execute lane change processing.
  • One aspect of the embodiments of the present application provides a control device based on automatic driving, including:
  • an information acquisition module for acquiring scene information of the target vehicle
  • a scene determination module used for determining the current lane change scene type of the target vehicle according to the scene information
  • a forced lane change module, configured to: if the current lane change scene type is the forced lane change scene type, identify the first lane used to complete the navigation driving route according to the scene information, and, when it is detected that the first lane meets the lane change safety check conditions, control the target vehicle according to the first lane to execute lane change processing;
  • a free lane change module, configured to: if the current lane change scene type is the free lane change scene type, identify the second lane used to optimize the driving time according to the scene information, and, when it is detected that the second lane meets the lane change safety check conditions, control the target vehicle according to the second lane to execute lane change processing.
  • An aspect of the embodiments of the present application provides a computer device, including: a processor, a memory, and a network interface;
  • The above-mentioned processor is connected to the above-mentioned memory and the above-mentioned network interface, where the network interface is used to provide a data communication function, the memory is used to store a computer program, and the processor is used to call the computer program to execute the method in the embodiments of the present application.
  • an embodiment of the present application provides a computer-readable storage medium, where the computer-readable storage medium stores a computer program, and the computer program includes program instructions.
  • An aspect of the embodiments of the present application provides a computer program product or computer program, where the computer program product or computer program includes computer instructions stored in a computer-readable storage medium. A processor of a computer device reads the computer instructions from the computer-readable storage medium and executes the computer instructions, so that the computer device executes the methods in the embodiments of the present application.
  • An aspect of an embodiment of the present application provides a vehicle, where the vehicle includes the automatic driving-based control device, or the computer device, or the computer-readable storage medium.
  • FIG. 1 is a network architecture diagram provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a scene of an autonomous lane change provided by an embodiment of the present application
  • FIG. 3 is a schematic flowchart of a control method based on automatic driving provided by an embodiment of the present application
  • FIG. 4 is a schematic flowchart of a forced lane change provided by an embodiment of the present application.
  • FIG. 5 is a schematic design diagram of a scene distribution decision tree provided by an embodiment of the present application.
  • FIGS. 6a-6c are schematic diagrams of a scene for identifying the first lane provided by an embodiment of the present application.
  • FIG. 7 is a schematic flowchart of a free lane change provided by an embodiment of the present application.
  • FIG. 8 is a schematic diagram of a training process of an offline lane evaluation model provided by an embodiment of the present application.
  • FIG. 9 is a schematic flowchart of preparation for a forced lane change provided by an embodiment of the present application.
  • FIGS. 10a-10b are schematic diagrams of a forced lane change preparation scenario provided by an embodiment of the present application.
  • FIG. 11 is a schematic flowchart of an autonomous lane change provided by an embodiment of the present application.
  • FIG. 12a is a schematic flowchart of a lane change execution method provided by an embodiment of the present application.
  • FIG. 12b is a schematic diagram of an expected lane change trajectory route provided by an embodiment of the present application.
  • FIG. 13 is a schematic structural diagram of a control device based on automatic driving provided by an embodiment of the present application.
  • FIG. 14 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • Artificial Intelligence is a theory, method, technology and application system that uses digital computers or machines controlled by digital computers to simulate, extend and expand human intelligence, perceive the environment, acquire knowledge and use knowledge to obtain the best results.
  • artificial intelligence is a comprehensive technique of computer science that attempts to understand the essence of intelligence and produce a new kind of intelligent machine that can respond in a similar way to human intelligence.
  • Artificial intelligence is to study the design principles and implementation methods of various intelligent machines, so that the machines have the functions of perception, reasoning and decision-making.
  • Artificial intelligence technology is a comprehensive discipline, involving a wide range of fields, including both hardware-level technology and software-level technology.
  • the basic technologies of artificial intelligence generally include technologies such as sensors, special artificial intelligence chips, cloud computing, distributed storage, big data processing technology, operation/interaction systems, and mechatronics.
  • Artificial intelligence software technology mainly includes computer vision technology, speech processing technology, natural language processing technology, and machine learning/deep learning.
  • Artificial intelligence technology has been researched and applied in many fields, such as smart homes, smart wearable devices, virtual assistants, smart speakers, smart marketing, unmanned driving, autonomous driving, and drones. It is believed that with the development of technology, artificial intelligence technology will be applied in more fields and play an increasingly important role.
  • the scheme provided by the embodiment of the present application relates to technologies such as automatic driving and machine learning of artificial intelligence, and is specifically described by the following examples:
  • A common autonomous lane change scheme is to plan the global path down to the lane level; that is, at the beginning of an automatic driving task, the vehicle has essentially already decided where to change lanes.
  • However, the lane-level global path planning scheme cannot cope well with rapidly changing, complex traffic flow. For example, when the place where the global plan requires a lane change is blocked by a static obstacle, the autonomous vehicle may not be able to change lanes normally; or when the vehicle ahead in the autonomous vehicle's current lane is moving slowly, the autonomous vehicle may be forced to drive slowly as well. It can be seen that this lane changing method is not flexible enough, which may bring safety hazards and low traffic efficiency.
  • the embodiment of the present application provides a control method based on automatic driving, which can make the autonomous lane change of the automatic driving vehicle more flexible, and improve the safety of lane change and the traffic efficiency.
  • FIG. 1 is a network architecture diagram provided by an embodiment of the present application.
  • The network architecture may include a service server 100 and a terminal device cluster, where the terminal device cluster may include multiple terminal devices; as shown in FIG. 1, it may specifically include a terminal device 10a, a terminal device 10b, ..., and a terminal device 10n.
  • The terminal device 10a, the terminal device 10b, ..., and the terminal device 10n can each be connected to the above-mentioned service server through a network, so that each terminal device can perform data interaction with the service server 100 through the network connection, and the service server 100 can receive service data from each terminal device.
  • each terminal device may be a vehicle terminal or a mobile phone terminal.
  • the terminal device is configured on the moving vehicle, and each vehicle can be equipped with a terminal device, so as to obtain the control command for automatic driving through the data interaction between the terminal device and the service server, and the terminal device controls the vehicle to perform automatic driving through the control command.
  • the service server 100 can receive service data from each terminal device, call source data related to automatic driving, and then perform logical operation processing to obtain control commands for controlling vehicle driving.
  • the service data may be scene information.
  • the scene information includes vehicle-related information, road information, environmental information, positioning information, end point information, map information, and the like.
  • the source data can be the parameter data required for the machine learning model and logic operation for automatic driving.
  • the terminal device corresponding to the vehicle can continuously initiate a service request for lane change detection to the service server 100.
  • After receiving a service request from a terminal device, the service server responds to the service data transmitted by the terminal device, issues a control command, and returns the control command to the terminal device.
  • After a terminal device receives the control command from the service server 100, it can control the corresponding vehicle to perform lane change processing according to the control command.
  • the autonomous vehicle 20a travels on the road, and the autonomous vehicle 20a is provided with the terminal device 10a.
  • the terminal device 10a sends various acquired information about the autonomous vehicle 20a to the service server 100 together as scene information.
  • After receiving the scene information from the terminal device 10a, the service server 100 calls the source data related to automatic driving and performs logical operation processing together with the scene information.
  • The logical operation processing includes first determining the current lane change scene type of the autonomous vehicle 20a, then identifying the target lane according to the current lane change scene type, and issuing a control command to make the autonomous vehicle 20a change lanes to the target lane when the lane change safety check conditions are met.
  • The target lane refers to the lane that the service server 100 determines, after the operation processing, to be suitable for the autonomous vehicle 20a to change to in the current scenario.
  • The method provided in the embodiments of the present application may be executed by a computer device, and the computer device may be the above-mentioned service server 100.
  • The service server 100 may be an independent physical server, a server cluster or distributed system composed of multiple physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud functions, cloud storage, network services, cloud communication, middleware services, domain name services, security services, CDN, and big data and artificial intelligence platforms.
  • the above data interaction process is only an example of the embodiment of the present application, and the logical operation processing is not limited to the service server 100, but may also be performed in a terminal device.
  • the terminal device initiates a service request for lane change safety inspection, and can obtain the source data related to automatic driving from the service server 100 after obtaining the service data, and then perform logical operation processing to obtain a control command for controlling the driving of the vehicle.
  • the source data related to automatic driving is not limited to being stored in the service server 100, but can also be stored in the terminal device. This application is not limited here.
  • the terminal device and the service server may be directly or indirectly connected through wired or wireless communication, which is not limited in this application.
  • FIG. 2 is a schematic diagram of an autonomous lane change scenario provided by an embodiment of the present application.
  • The service server shown in FIG. 2 may be the above service server 100, and the autonomous vehicle 2 shown in FIG. 2 may be the autonomous vehicle 20b shown in FIG. 1 above; the autonomous vehicle 2 is equipped with an in-vehicle terminal 21.
  • the in-vehicle terminal 21 may be the terminal device 10b shown in FIG. 1 above.
  • the automatic driving vehicle 2 is driving in the lane B.
  • the vehicle terminal 21 will transmit the current scene information for the automatic driving vehicle 2 to the service server.
  • the scene information may include positioning information, map information, environment information, destination information, vehicle-related information, and the like.
  • The vehicle-related information refers to information about the autonomous vehicle 2 and its adjacent vehicles; that is, the vehicle-related information may include the speed and acceleration of the autonomous vehicle 2, the speed and acceleration of the vehicles adjacent to the autonomous vehicle 2, and the positional relationship between the autonomous vehicle 2 and its neighboring vehicles.
  • The scene information collection devices can be installed on the autonomous vehicle 2, on the in-vehicle terminal 21, or on both, which is not limited here; for clarity of the following description, it is assumed by default that the collection devices are installed on the in-vehicle terminal 21.
  • The service server will determine the current lane change scene type according to the scene information, and then, based on the current lane change scene type and the scene information, determine the lane that is theoretically optimal for the autonomous vehicle 2 at this time as the target lane.
  • The current lane change scene types can be classified into two categories: the free lane change scene type (e.g., overtaking a slow preceding vehicle) and the forced lane change scene type (e.g., lane selection at an intersection, a static obstacle blocking the road, etc.).
  • The free lane change scene type means that the autonomous vehicle 2 actively chooses to change lanes in order to increase its travel speed and optimize the driving time; not changing lanes would not affect the mission goal.
  • The forced lane change scene type means that, in the current scene, if the vehicle does not change to the target lane, it cannot complete the mission goal.
  • The mission goal refers to a target end point that is set before the autonomous vehicle 2 starts and needs to be reached. The service server will then perform lane change processing on the autonomous vehicle 2 according to the target lane and the scene information.
  • the target lane refers to the optimal lane that can complete the current navigation driving route.
  • Lane change processing means that the service server first monitors, according to the scene information, whether the autonomous vehicle 2 meets the lane change conditions. If the lane change conditions are not met, the service server will adjust the position and speed of the autonomous vehicle 2 to create a safe lane change environment. During the lane change process, the service server issues control commands to the in-vehicle terminal 21, and after receiving a control command, the in-vehicle terminal 21 controls the autonomous vehicle 2 to perform the lane change operation according to the command. For example, the service server may determine from the scene information that lane C is the target lane, but the distance between the autonomous vehicle 2 and the preceding vehicle 1 in lane C is too small and there is a lane change risk, so the service server first adjusts the position and speed of the autonomous vehicle 2 to create a safe lane change environment.
  • the service server will also perform a lane change safety check on the entire process of autonomous lane change according to the scene information.
  • The lane change safety check is carried out continuously; that is, after receiving the lane change detection request from the in-vehicle terminal 21, the service server keeps performing the lane change safety check according to the transmitted scene information until the autonomous vehicle 2 completes the lane change.
  • If the safety check fails during the lane change, the service server will stop the process of the autonomous vehicle 2 changing lanes to the target lane, and control the autonomous vehicle 2 to return to its current driving lane, that is, lane B.
  • FIG. 3 is a schematic flowchart of a control method based on automatic driving provided by an embodiment of the present application.
  • the method may be executed by a service server (such as the service server 100 in the above embodiment corresponding to FIG. 1 ), or may be executed by a terminal device (such as the terminal device 10a in the above embodiment corresponding to FIG. 1 ).
  • This embodiment is described by taking as an example that the method is executed by a computer device (the computer device may be the service server 100, or may be the terminal device 10a).
  • the process may include:
  • S101 Acquire scene information of the target vehicle.
  • the scene information can reflect the comprehensive situation of the car driving behavior and the driving environment within a certain time and space.
  • the scene information includes vehicle-related information, road information, environmental information, positioning information, end point information, and map information.
  • the vehicle-related information includes the speed, acceleration, vehicle type, current state, etc. of the target vehicle and the surrounding vehicles of the target vehicle;
  • the road information includes the congestion of the current lane, the speed limit of the lane, the average speed of the lane, the distance from the end of the lane, etc.;
  • Environmental information includes obstacle detection information.
  • the collection of scene information can be realized by sensors, lidars, cameras, millimeter-wave radars, navigation systems, positioning systems, high-precision maps, etc.
  • a computer device (such as the terminal device 10a in the above-mentioned embodiment corresponding to FIG. 1 ) can collect the scene information.
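  • As a concrete illustration of how the collected scene information might be organized in software, the following is a minimal Python sketch. The class and field names (VehicleState, LaneInfo, SceneInfo and their members) and the units are assumptions chosen for illustration and are not specified by the embodiments of the present application.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleState:
    """State of the target vehicle or of a surrounding vehicle (assumed units: m, m/s, m/s^2)."""
    position_s: float        # longitudinal position along the road
    lane_id: str             # identifier of the lane the vehicle occupies
    speed: float
    acceleration: float
    vehicle_type: str = "car"

@dataclass
class LaneInfo:
    """Road information for one candidate lane."""
    lane_id: str
    speed_limit: float
    average_speed: float           # average speed of traffic in this lane over a past period
    distance_to_lane_end: float
    congestion_level: float = 0.0  # e.g. 0.0 (free-flowing) to 1.0 (jammed)

@dataclass
class SceneInfo:
    """Scene information gathered from sensors, lidar, cameras, millimeter-wave radar,
    the navigation/positioning system and the high-precision map."""
    ego: VehicleState
    surrounding_vehicles: List[VehicleState] = field(default_factory=list)
    lanes: List[LaneInfo] = field(default_factory=list)
    static_obstacle_ahead: bool = False          # obstacle detection information
    distance_to_end: float = float("inf")        # d2e, from HD map + positioning
    distance_to_junction: float = float("inf")   # d2j, from HD map + positioning
```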
  • S102 Determine the current lane change scene type of the target vehicle according to the scene information.
  • The current lane change scene type can be determined according to the current scene in which the target vehicle is located. For example, the free overtaking scene, the intersection scene, the main/auxiliary road and on/off ramp scene, the static obstacle scene, and the end-point parking scene correspond respectively to the free overtaking lane change scene type, the intersection lane change scene type, the exit lane change scene type, the static obstacle lane change scene type, and the end-point parking lane change scene type.
  • These lane change scene types can be classified into two major lane change scene categories: forced lane change scene types and free lane change scene types.
  • The forced lane change scene type means that, in this scene, if the determined optimal lane is not the current driving lane of the target vehicle, the target vehicle must change lanes; otherwise it cannot reach the task end point along the current navigation route.
  • The free lane change scene type means that, in this scene, if the determined optimal lane is not the current driving lane of the target vehicle, the target vehicle may also choose not to change lanes and can still reach the task end point along the current navigation route, but it may take more time.
  • The above free overtaking lane change scene type belongs to the free lane change scene types; the above intersection lane change scene type, exit lane change scene type, static obstacle lane change scene type, and end-point parking lane change scene type are all forced lane change scene types.
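  • The classification described above can be expressed as a simple lookup table, sketched below. The enum and constant names are illustrative assumptions rather than identifiers from the embodiments.

```python
from enum import Enum

class LaneChangeScene(Enum):
    FREE_OVERTAKING = "free_overtaking"
    INTERSECTION = "intersection"
    EXIT = "exit"                    # main/auxiliary road or on/off ramp
    STATIC_OBSTACLE = "static_obstacle"
    END_POINT_PARKING = "end_point_parking"

class SceneCategory(Enum):
    FREE = "free_lane_change"
    FORCED = "forced_lane_change"

# Mapping stated in the text: only the free overtaking scene is a free lane change;
# the remaining scene types force a lane change to complete the navigation route.
SCENE_CATEGORY = {
    LaneChangeScene.FREE_OVERTAKING: SceneCategory.FREE,
    LaneChangeScene.INTERSECTION: SceneCategory.FORCED,
    LaneChangeScene.EXIT: SceneCategory.FORCED,
    LaneChangeScene.STATIC_OBSTACLE: SceneCategory.FORCED,
    LaneChangeScene.END_POINT_PARKING: SceneCategory.FORCED,
}
```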
  • S103: If the current lane change scene type is the forced lane change scene type, identify a first lane used to complete the navigation driving route according to the scene information, and when it is detected that the first lane meets the lane change safety check conditions, control the target vehicle according to the first lane to perform lane change processing.
  • Specifically, the computer device will identify the optimal lane as the first lane according to the current lane change scene type, the navigation driving route, the vehicle speed, the parking position, and other scene information.
  • the optimal lane refers to the lane that is most suitable for driving in the forced lane change scenario type among the candidate lanes that can complete the navigation route.
  • After identifying the first lane, the computer device does not immediately control the target vehicle to perform the lane change operation, because there are many vehicles on the road and a careless lane change can easily cause a traffic accident. Only when the gap between the nearest vehicle ahead of the target vehicle in the first lane and the nearest vehicle behind the target vehicle in the first lane is large enough does the target vehicle have the opportunity to drive between the two vehicles and enter the first lane. Therefore, the computer device acquires the adjacent vehicle interval area of the first lane and controls the target vehicle to enter the lane change preparation position according to the adjacent vehicle interval area.
  • the adjacent vehicle interval area is the interval area between the first vehicle and the second vehicle in the first lane; the first vehicle is the vehicle in the first lane with the closest distance to the head of the target vehicle; the second vehicle is the vehicle in the first lane The vehicle closest to the rear of the target vehicle.
  • The lane change preparation position refers to a position where the lane change environment is relatively safe. After the target vehicle enters the lane change preparation position and before it starts to change lanes, the computer device needs to confirm that the lane change is safe: the computer device performs a lane change safety check on the first lane, and only after determining that the target vehicle meets the lane change safety check conditions does it control the target vehicle to change lanes into the first lane.
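  • The sketch below illustrates one way the adjacent vehicle interval area might be computed, building on the hypothetical VehicleState structure from the earlier sketch. The selection of the first and second vehicles follows the definition given above; the function name and signature are assumptions.

```python
from typing import List, Optional, Tuple

def adjacent_vehicle_interval(ego: VehicleState,
                              first_lane_vehicles: List[VehicleState]
                              ) -> Tuple[Optional[VehicleState], Optional[VehicleState], float]:
    """Return (first_vehicle, second_vehicle, interval_length) for the first lane.

    first_vehicle:   vehicle in the first lane closest ahead of the target vehicle
    second_vehicle:  vehicle in the first lane closest behind the target vehicle
    interval_length: longitudinal length of the interval area between the two vehicles
    """
    ahead = [v for v in first_lane_vehicles if v.position_s > ego.position_s]
    behind = [v for v in first_lane_vehicles if v.position_s <= ego.position_s]
    first_vehicle = min(ahead, key=lambda v: v.position_s - ego.position_s, default=None)
    second_vehicle = min(behind, key=lambda v: ego.position_s - v.position_s, default=None)

    front_s = first_vehicle.position_s if first_vehicle else float("inf")
    rear_s = second_vehicle.position_s if second_vehicle else float("-inf")
    return first_vehicle, second_vehicle, front_s - rear_s
```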
  • The lane change safety check may include safety guarantee rules.
  • the safety guarantee rule is used to ensure that the target vehicle can avoid the first vehicle by emergency braking when an emergency occurs during lane change, and at the same time, to ensure that if the target vehicle brakes suddenly when changing lanes, the second vehicle can have sufficient time to respond. That is, after the target vehicle enters the first lane, the distance between the target vehicle and the first vehicle cannot be smaller than the safety distance, and the distance between the target vehicle and the second vehicle cannot be smaller than the safety distance.
  • the safety distance may be a predetermined threshold, or may be a value calculated according to the current speed and position of the target vehicle, the first vehicle, and the second vehicle.
  • For example, if the computer device determines that the actual distance between the target vehicle and the first vehicle is only 1 m at this time, it considers that it is not safe to change lanes and abandons the lane change control.
  • If the target vehicle is already executing the lane change control command, the computer device will issue a new control command to stop the target vehicle from changing lanes and control the target vehicle to return to the current driving lane.
  • the calculation of the first safe distance threshold between the target vehicle and the first vehicle can be as formula (1):
  • d_l is the first safety distance threshold; v_ego, a_ego and t_delay are the current speed, current acceleration and reaction time of the target vehicle, respectively; v_l and a_l are the first speed and first acceleration of the first vehicle, respectively.
  • t_delay can be adjusted according to the actual situation, such as the model of the target vehicle, and the remaining quantities can be obtained from the scene information.
  • the first safe distance threshold is the minimum safe distance between the target vehicle and the first vehicle. If the actual distance between the target vehicle and the first vehicle is less than the first safe distance threshold, it is determined that the first lane does not meet the lane change safety inspection conditions.
  • the calculation of the second safe distance threshold between the target vehicle and the second vehicle can be as formula (2):
  • d_r is the second safe distance threshold; v_ego, a_ego and t_delay are the current speed, current acceleration and reaction time of the target vehicle, respectively; v_r and a_r are the second speed and second acceleration of the second vehicle, respectively.
  • t_delay can be adjusted according to the actual situation, such as the model of the target vehicle, and the remaining quantities can be obtained from the scene information.
  • the second safe distance threshold is the minimum safe distance between the target vehicle and the second vehicle. If the actual distance between the target vehicle and the second vehicle is less than the second safe distance threshold, it is determined that the first lane does not meet the safety inspection conditions.
  • When using the safety guarantee rules to perform the lane change safety check on the target vehicle, if the distance to the preceding (first) vehicle is not less than the first safe distance threshold and the distance to the following (second) vehicle is not less than the second safe distance threshold, it is determined that the first lane satisfies the lane change safety check conditions.
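  • Since the exact expressions of formulas (1) and (2) are not reproduced in this text, the sketch below uses a generic reaction-time-plus-braking stopping-distance estimate as a stand-in for the safe distance thresholds. The braking model, the default t_delay and deceleration values, and the function names are assumptions, not the embodiment's actual formulas; VehicleState comes from the earlier sketch.

```python
def safe_distance_threshold(v_follow: float, v_lead: float,
                            t_delay: float, a_brake: float) -> float:
    """Generic stand-in for formulas (1)/(2): distance the following vehicle needs so that,
    after a reaction delay and full braking, it still stops behind the braking lead vehicle.
    a_brake is a positive braking deceleration (m/s^2)."""
    follow_stop = v_follow * t_delay + v_follow ** 2 / (2.0 * a_brake)
    lead_stop = v_lead ** 2 / (2.0 * a_brake)
    return max(follow_stop - lead_stop, 0.0)

def passes_safety_guarantee_rule(d_front_actual: float, d_rear_actual: float,
                                 ego: VehicleState,
                                 first_vehicle: VehicleState,
                                 second_vehicle: VehicleState,
                                 t_delay: float = 0.5,
                                 a_brake: float = 4.0) -> bool:
    """Check both gaps against the first and second safe distance thresholds."""
    # After entering the first lane, the target vehicle follows the first vehicle ...
    d_l = safe_distance_threshold(ego.speed, first_vehicle.speed, t_delay, a_brake)
    # ... and the second vehicle follows the target vehicle.
    d_r = safe_distance_threshold(second_vehicle.speed, ego.speed, t_delay, a_brake)
    return d_front_actual >= d_l and d_rear_actual >= d_r
```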
  • The lane change safety check may also include the use of a data-driven TTC (Time-To-Collision) model.
  • the above safety guarantee rules are used to ensure the most basic lane change safety, and social acceptance can also be considered when determining lane change safety.
  • The lane change features are obtained from the scene information; that is, the lane change features are the above-mentioned extracted features.
  • the lane change feature is input into the collision time recognition model, and the expected collision time of the preceding vehicle and the expected collision time of the rear vehicle are output through the collision time recognition model.
  • the expected collision time of the preceding vehicle is the ideal collision time between the target vehicle and the first vehicle. If the actual collision time between the target vehicle and the first vehicle is less than the expected collision time of the preceding vehicle, it is considered that the lane change safety check condition is not satisfied.
  • The expected collision time of the rear vehicle is the collision time between the second vehicle and the target vehicle under ideal conditions. If the actual collision time between the target vehicle and the second vehicle is less than the expected collision time of the rear vehicle, it is considered that the lane change safety check conditions are not met.
  • The actual collision time can be obtained according to formula (3), where T is the actual collision time of vehicle A, d is the distance between vehicle A and vehicle B, and v is the speed of vehicle A; here vehicle A is behind vehicle B.
  • The service server obtains the lane change features from the scene information and inputs them into the collision time model to obtain the expected collision time of the preceding vehicle and the expected collision time of the rear vehicle. The service server then calculates, according to the above formula (3), the actual collision time of the target vehicle and the actual collision time of the vehicle behind.
  • If the actual collision time of the target vehicle is not less than the expected collision time of the preceding vehicle, and the actual collision time of the second vehicle is not less than the expected collision time of the rear vehicle, it is determined that the first lane meets the lane change safety check conditions; if the actual collision time of the target vehicle is less than the expected collision time of the preceding vehicle, or the actual collision time of the second vehicle is less than the expected collision time of the rear vehicle, it is determined that the first lane does not meet the lane change safety check conditions, and the target vehicle is controlled to stop changing lanes to the first lane.
  • When the computer device performs the lane change safety check on the first lane, it can use only the safety guarantee rules, use only the collision time model, or use both the safety guarantee rules and the collision time model. If both check methods are adopted to perform the lane change safety check on the first lane, both methods must pass before the lane change safety check can be considered passed. That is, if the first lane does not satisfy the safety guarantee rules, or the actual collision time is less than the expected collision time, it is determined that the first lane does not meet the lane change safety check conditions, and the target vehicle is controlled to stop changing lanes to the first lane.
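  • A rough sketch of the combined check is shown below. The collision time recognition model is represented by a placeholder object assumed to expose a predict method, and the actual collision time is computed as the gap divided by the rear vehicle's speed, following the description of formula (3) above; these are assumptions for illustration only.

```python
from typing import Sequence

def actual_collision_time(d: float, v_rear: float) -> float:
    """Per the description of formula (3): T = d / v, where d is the gap between the rear
    vehicle A and the front vehicle B, and v is the speed of the rear vehicle A."""
    return float("inf") if v_rear <= 0.0 else d / v_rear

def passes_ttc_check(ttc_model, lane_change_features: Sequence[float],
                     d_to_first: float, ego_speed: float,
                     d_to_second: float, second_speed: float) -> bool:
    """ttc_model stands in for the trained collision time recognition model; it is assumed
    to expose predict() returning (expected_front_ttc, expected_rear_ttc)."""
    expected_front_ttc, expected_rear_ttc = ttc_model.predict(lane_change_features)
    # The target vehicle is the rear vehicle relative to the first vehicle ahead of it.
    front_ttc = actual_collision_time(d_to_first, ego_speed)
    # The second vehicle is the rear vehicle relative to the target vehicle.
    rear_ttc = actual_collision_time(d_to_second, second_speed)
    return front_ttc >= expected_front_ttc and rear_ttc >= expected_rear_ttc

def passes_lane_change_safety_check(rule_ok: bool, ttc_ok: bool) -> bool:
    """When both check methods are used, both must pass for the check to be considered passed."""
    return rule_ok and ttc_ok
```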
  • S104: If the current lane change scene type is the free lane change scene type, identify a second lane for optimizing the driving time according to the scene information, and when it is detected that the second lane satisfies the lane change safety check conditions, control the target vehicle according to the second lane to perform lane change processing.
  • If the current lane change scene type is the free lane change scene type, the target vehicle can continue to drive in its current driving lane and still complete the navigation driving route.
  • However, continuing to drive in the current driving lane may take more time. For example, the vehicle ahead of the target vehicle in the current driving lane may be driving very slowly, while an adjacent lane that can also complete the navigation driving route has fewer vehicles and a higher average speed. If the target vehicle can change to the faster adjacent lane, it can still complete the navigation driving route while also optimizing the driving time. Therefore, when the target vehicle is in the free lane change scene type, the second lane used to optimize the driving time can be identified according to the scene information.
  • a machine learning method can be used to evaluate whether it is necessary to change lanes in the current driving state.
  • the computer equipment can extract the lane features and driving features of the candidate lanes from the scene information.
  • the candidate lanes refer to lanes that can complete the navigation route, such as the current driving lane, the left lane, and the right lane.
  • The lane features refer to features related to a candidate lane, including the average vehicle speed of the candidate lane in a past preset time period (such as the average vehicle speed of the lane in the past 30 s or in the past 1 minute), the speed limit of the lane, the distance from the end of the lane, and the number of lanes between the candidate lane and the exit lane.
  • The driving features refer to action features and task features of the target vehicle during driving, such as the last lane change time, the last lane change, the current speed, the duration for which the speed has been lower than the ideal speed, and the distance from the road exit.
  • The above features are only examples, and other features may also be used.
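  • The following sketch shows how the lane features and driving features listed above might be assembled and scored for candidate lanes, reusing the hypothetical LaneInfo structure from the earlier sketch. The offline lane evaluation model is a placeholder assumed to expose a predict method returning a benefit score per lane; the feature ordering and function names are assumptions.

```python
from typing import List, Sequence, Tuple

def lane_features(lane: LaneInfo, lanes_to_exit: int) -> List[float]:
    """Features related to one candidate lane, following the list above."""
    return [
        lane.average_speed,        # average vehicle speed over a past preset period
        lane.speed_limit,
        lane.distance_to_lane_end,
        float(lanes_to_exit),      # number of lanes between this lane and the exit lane
    ]

def driving_features(time_since_last_change: float, current_speed: float,
                     time_below_ideal_speed: float, distance_to_exit: float) -> List[float]:
    """Action and task features of the target vehicle during driving."""
    return [time_since_last_change, current_speed,
            time_below_ideal_speed, distance_to_exit]

def pick_second_lane(model, candidates: Sequence[Tuple[LaneInfo, int]],
                     ego_driving_features: List[float]) -> LaneInfo:
    """model stands in for the offline lane evaluation model; it is assumed to expose
    predict() returning a single benefit score for a feature vector. The highest-scoring
    candidate lane is taken as the second lane."""
    scored = [(model.predict(lane_features(lane, n) + ego_driving_features), lane)
              for lane, n in candidates]
    return max(scored, key=lambda item: item[0])[1]
```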
  • the computer device will control the target vehicle to perform lane change processing according to the target lane when detecting that the second lane satisfies the lane change safety check condition.
  • For the lane change safety check performed on the second lane, reference may be made to the description of the lane change safety check in the foregoing step S103, which will not be repeated here.
  • In this way, the current lane change scene type in which the target vehicle is located can be determined. If the current lane change scene type is the forced lane change scene type, the first lane used to complete the navigation driving route is identified, and when it is detected that the first lane meets the safety check conditions, the target vehicle is controlled according to the first lane to perform lane change processing; if the current lane change scene type is the free lane change scene type, a second lane for optimizing the driving time is identified according to the scene information, and when it is detected that the second lane satisfies the lane change safety check conditions, the target vehicle is controlled according to the second lane to perform lane change processing.
  • That is, the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and different lane change processing is then performed on the target vehicle according to the current lane change scene type, which enables self-driving cars to change lanes flexibly, better avoid traffic congestion, and increase driving speed.
  • FIG. 4 is a schematic flowchart of a forced lane change provided by an embodiment of the present application.
  • the method may be executed by a service server (such as the service server 100 in the above embodiment corresponding to FIG. 1 ), or may be executed by a terminal device (such as the terminal device 10a in the above embodiment corresponding to FIG. 1 ).
  • This embodiment is described by taking as an example that the method is executed by a computer device (the computer device may be the service server 100, or may be the terminal device 10a).
  • the process may include:
  • S201 Determine obstacle detection information, end point distance and intersection distance according to scene information.
  • the obstacle detection information refers to whether there is a static obstacle in front of the current driving lane of the target vehicle, which can be detected by means of radar or sensors, and the detection distance can be set according to the actual situation.
  • For example, if the specified detection distance is 200 meters and the computer device does not detect a static obstacle within 200 meters ahead in the current driving lane of the target vehicle, it is considered that there is no static obstacle in front of the target vehicle.
  • The end point distance (distance_to_end, d2e) refers to the distance from the target vehicle to the task end point, which can be calculated from the high-precision map and positioning information.
  • The intersection distance (distance_to_junction, d2j) refers to the distance from the target vehicle to the next intersection or exit at the current time, which can also be calculated from the high-precision map and positioning information.
  • S202 Input the obstacle detection information, the distance to the end point and the distance to the intersection into the scene distributor to determine the current scene of the target vehicle.
  • the lane change scene type of the target vehicle can be determined according to the current scene.
  • the current scene includes free overtaking scene, intersection scene, main and auxiliary road/up and down ramp scene, static obstacle scene and terminal parking scene.
  • the free overtaking scenario is one of the most common scenarios, which is more common on structured roads such as expressways or urban expressways.
  • In this scene, the target vehicle can choose a lane that is not congested to drive in, and even if the target vehicle does not change lanes, it can still reach the target destination.
  • the intersection scene is mainly suitable for the L4 urban automatic driving system.
  • the main and auxiliary road/up and down ramp scenarios are similar to the intersection scenarios, both of which can obtain candidate lanes from the map.
  • The static obstacle scene means that when there is a static obstacle (such as a traffic cone, construction facilities, etc.) in front of the target vehicle, it is necessary to change lanes to the left or right to avoid the obstacle.
  • The end-point parking scene is entered when the distance between the target vehicle and the end point is less than a certain value. It is generally applicable to L4-level automatic driving systems, that is, the vehicle needs to pull over to the side when it reaches the end point, so in this scene the rightmost lane is selected as the target lane.
  • FIG. 5 is a schematic design diagram of a scene distribution decision tree provided by an embodiment of the present application. As shown in Figure 5, the entire decision-making process can include:
  • S51: According to the obstacle detection information, first determine whether there is a static obstacle in front of the target vehicle.
  • Here, the front of the target vehicle may be the range within the detection threshold in front of the current driving lane of the target vehicle, and the value of the detection threshold may be set according to the actual situation; it may be the distance to the next intersection, or a directly set value. If the obstacle detection information indicates that there is an obstacle in front of the target vehicle, it is determined that the current lane change scene type of the target vehicle is the static obstacle lane change scene type; if the obstacle detection information indicates that there is no obstacle in front of the target vehicle, step S52 is executed to continue to determine the current lane change scene type.
  • S52 Determine whether the distance to the end point is less than the first distance threshold.
  • the end point distance can be compared with the set first distance threshold. If the end point distance is less than the first distance threshold, the current lane change scene type of the target vehicle is determined to be the end point parking lane change scene type; if the end point distance is not less than the first distance threshold, step S53 is performed to continue to determine the current lane change scene type.
  • S53 Determine whether the intersection distance is less than the second distance threshold.
  • the computer device when it is determined that the target vehicle is still far from the end point and is not in the scene of parking at the end point, the computer device will compare the distance of the intersection with the set second distance threshold. If the intersection distance is not less than the second distance threshold, it is determined that the current lane change scene type of the target vehicle is the free overtaking lane change scene type; if the intersection distance is less than the second distance threshold, step S54 is executed to continue to determine the current lane change scene type.
  • the intersection may include intersections, main and auxiliary roads, and ramps.
  • the intersection includes three situations: turning left at the intersection, turning right at the intersection and going straight at the intersection.
  • the main and auxiliary roads and up and down ramps can be counted as exits.
  • the corresponding lane change scene type can be defined as the exit lane change scene type. Therefore, if the intersection map information indicates that the intersection is an exit, the current lane change scene type of the target vehicle is determined as the exit lane change scene type, otherwise the current lane change scene type of the target vehicle is determined as the intersection lane change scene type.
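  • For ease of understanding, the decision logic of steps S51 to S54 can be summarized as the minimal sketch below (written in Python). The function name, the threshold values and the scene-type labels are illustrative assumptions introduced here, not part of the embodiment.

    # Minimal sketch of the scene-distribution decision tree (S51-S54).
    # Names and thresholds are illustrative assumptions, not the patented implementation.

    FIRST_DISTANCE_THRESHOLD = 100.0    # end-point parking threshold, meters (assumed)
    SECOND_DISTANCE_THRESHOLD = 300.0   # intersection threshold, meters (assumed)

    def dispatch_scene(has_static_obstacle_ahead: bool,
                       distance_to_end: float,
                       distance_to_junction: float,
                       junction_is_exit: bool) -> str:
        # S51: static obstacle in front of the current driving lane?
        if has_static_obstacle_ahead:
            return "static_obstacle_lane_change"
        # S52: close enough to the end point to prepare for parking?
        if distance_to_end < FIRST_DISTANCE_THRESHOLD:
            return "end_point_parking_lane_change"
        # S53: close enough to the next intersection or exit?
        if distance_to_junction < SECOND_DISTANCE_THRESHOLD:
            # S54: main/auxiliary roads and ramps are counted as exits
            return "exit_lane_change" if junction_is_exit else "intersection_lane_change"
        # otherwise the vehicle is in the free overtaking scene
        return "free_overtaking_lane_change"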
  • Among these, the intersection lane change scene type, the exit lane change scene type, the static obstacle lane change scene type, and the end point parking lane change scene type all belong to the forced lane change scene type. If it is determined that the current lane change scene type of the target vehicle is a forced lane change scene type, the computer device will identify the optimal lane for completing the navigation driving route as the first lane, according to scene information such as the current lane change scene type, the navigation driving route, the vehicle speed, and the parking position. For ease of understanding, please refer to FIG. 6a to FIG. 6c together, which are schematic diagrams of scenes for recognizing the first lane provided by an embodiment of the present application.
  • In the intersection lane change scene type, the computer device will first determine, according to the navigation route, whether the target vehicle should turn left, turn right or go straight when reaching the next intersection. As shown in FIG. 6a, the target vehicle 61 is driving in lane B, and from the navigation route in the map it can be determined that the target vehicle 61 should turn right when it reaches the intersection a. It can be known from the high-precision map information that only vehicles in lane C or lane D can turn right at intersection a. If the target vehicle 61 does not change lanes and continues to drive in lane B, it cannot turn right normally, and forcing the target vehicle 61 to turn right may cause a traffic accident.
  • Therefore, lane C and lane D are used as candidate lanes, and then the first lane can be selected according to the vehicle speed of each candidate lane and the difficulty of changing lanes to that candidate lane.
  • In the intersection scene, the distance from the parking position (stop line) to the intersection can also be taken into account when selecting the first lane.
  • In the exit lane change scene type, the current road may be a main or auxiliary road or an on/off ramp.
  • This scene is similar to the intersection scene in that the candidate lanes can be obtained from the map, and the remaining lanes should be treated as wrong lanes.
  • As shown in FIG. 6b, the target vehicle 61 is driving in lane B, and it can be known from the navigation route in the map that, when reaching the exit b, the target vehicle 61 should enter the side road. It can be known from the high-precision map information that only vehicles in lane C or lane D can enter the side road at exit b.
  • If the target vehicle 61 does not change lanes and continues to drive in lane B, forcibly controlling the target vehicle 61 to enter the side road may cause a traffic accident, so lane C and lane D are used as candidate lanes.
  • The difference between the exit lane change scene type and the intersection lane change scene type is that there are no traffic lights in this scene and the vehicle speed is relatively high in most cases. Therefore, the first lane is selected mainly according to the vehicle speed of each candidate lane and the difficulty of changing lanes to that candidate lane.
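  • As a rough illustration of the selection described above, the following sketch scores each candidate lane by combining its average speed with a lane-change difficulty term. The scoring weights, field names and the use of the lane index as a difficulty measure are assumptions made for this sketch only.

    # Illustrative selection of the first lane from map-derived candidate lanes.
    # The weights and the difficulty measure are assumptions for this sketch.

    def select_first_lane(candidates, current_lane_index,
                          speed_weight=1.0, difficulty_weight=5.0):
        """candidates: list of dicts like
           {"lane_id": "C", "lane_index": 2, "avg_speed_mps": 8.3}"""
        best, best_score = None, float("-inf")
        for lane in candidates:
            # difficulty grows with the number of lane changes needed to reach the candidate
            difficulty = abs(lane["lane_index"] - current_lane_index)
            score = speed_weight * lane["avg_speed_mps"] - difficulty_weight * difficulty
            if score > best_score:
                best, best_score = lane, score
        return best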
  • If the current lane change scene type is the static obstacle lane change scene type, that is, when there is a static obstacle (traffic cone, construction facility, etc.) in front of the target vehicle, the vehicle needs to change lanes to the left or right to avoid the obstacle.
  • As shown in FIG. 6c, the target vehicle 61 is traveling in lane B, and there is a static obstacle c in front of lane B.
  • At this time, the target vehicle 61 cannot continue to drive in lane B, otherwise a traffic accident will occur, and a lane change operation should be performed.
  • The computer device will first obtain the normally passable lanes and filter out the blocked lane B, that is, use lane A or lane C as a candidate lane.
  • Then the computer equipment will select the optimal lane from the candidate lanes as the first lane; at this time, the vehicle speed of each candidate lane and the distance the lane can travel can be comprehensively considered (if there is an intersection or a main/auxiliary road ramp ahead, priority is given to the lane leading to the final destination).
  • If the current lane change scene type is the end point parking lane change scene type, that is, when the distance to the end point is less than a certain value, a lane suitable for parking needs to be selected as the target lane.
  • This scene type is generally applicable to L4-level automatic driving systems, that is, the vehicle needs to pull over when it reaches the end point, so in this scene type the rightmost lane is selected as the first lane.
  • When it is detected that the first lane satisfies the lane change safety check conditions, the speed and driving direction of the target vehicle are adjusted to the lane-changing speed and the lane-changing driving direction, and the target vehicle is controlled to change lanes to the first lane according to the lane-changing speed and the lane-changing driving direction.
  • For the lane change safety check, reference may be made to the description of the lane change safety check in step S103 in the embodiment corresponding to FIG. 3, which will not be repeated here.
  • It can be seen that the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and then different lane change processing is performed on the target vehicle according to the different current lane change scene types, which can enable self-driving cars to change lanes flexibly, better avoid traffic congestion, and increase driving speed.
  • FIG. 7 is a schematic flowchart of a free lane change provided by an embodiment of the present application. As shown in Figure 7, the process can include:
  • S301 Determine the current lane change scene type of the target vehicle according to the scene information.
  • For the specific implementation of step S301, reference may be made to the description of steps S201 to S202 in the embodiment corresponding to FIG. 4, which will not be repeated here.
  • The free lane change scene type includes the free overtaking scene described above. If it is determined that the current lane change scene type of the target vehicle is the free lane change scene type, it means that the target vehicle is far away from the end point, far away from the next intersection, and there is no obstacle in front of the current driving lane. At this time, the target vehicle can reach the target end point along the navigation driving route even if it continues to drive in the current driving lane. However, a slow preceding vehicle may cause the target vehicle to travel slowly and increase the time needed to complete the task.
  • Therefore, the condition of the candidate lanes can be monitored in real time, and a lane with a faster average vehicle speed can be selected to drive in.
  • The computer equipment can use machine learning methods to assess whether a lane change is required in the current driving state (step S302). Therefore, the computer equipment will extract the lane features and driving features of the candidate lanes from the scene information, so as to infer the lane that is theoretically most suitable for the target vehicle at present, as the second lane.
  • Here, the candidate lanes refer to lanes in which the navigation route can still be completed.
  • S303 Process the lane feature and the driving feature through a lane evaluation model to obtain an evaluation parameter value of the candidate lane.
  • FIG. 8 is a schematic diagram of a training process of an offline lane evaluation model provided by an embodiment of the present application. As shown in Figure 8, the entire training process includes:
  • First, ordinary vehicles equipped with sensors and an information processing system, or automatic driving vehicles driven by humans, can be used to obtain the driving behavior of human drivers and the corresponding perception, positioning and map information.
  • Then, the data extraction module is used to select the scenes in which the human driver actively changes lanes, and the forced lane change data caused by various factors (such as needing to take an off-ramp or turning at an intersection) is eliminated.
  • Next, lane features and driving features are extracted from the active lane change data.
  • For each candidate lane, such as the current lane, the left lane and the right lane, lane-related features are extracted: the average speed of the lane in the past 30 s, the average speed of the lane in the past 1 minute, the speed limit of the lane, the distance to the end of the lane, and the number of lanes between the lane and the exit lane. Driving features are also extracted, such as the time of the last lane change, the target lane of the last lane change, the current vehicle speed, the duration for which the speed has been lower than the ideal speed, the distance from the road exit, and so on.
  • The above features are only examples, and other features may be selected in practical applications.
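  • The data extraction step in FIG. 8 can be pictured with the following minimal sketch, which keeps only the lane changes the human driver made voluntarily. The record field names are hypothetical and are used here purely for illustration.

    # Sketch of the data-extraction step: keep only active (voluntary) lane changes.
    # Field names are assumptions for illustration, not the embodiment's data schema.

    def extract_active_lane_changes(driving_records):
        samples = []
        for rec in driving_records:
            if not rec["lane_change_happened"]:
                continue
            # discard forced lane changes (e.g. leaving via a ramp, turning at an intersection)
            if rec["near_ramp_or_exit"] or rec["turn_signal_for_intersection"]:
                continue
            samples.append({
                "lane_features": rec["lane_features"],        # per-candidate-lane features
                "driving_features": rec["driving_features"],  # ego-vehicle features
                "label": rec["chosen_lane"],                   # lane the human driver moved to
            })
        return samples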
  • After the lane evaluation model is obtained, in the free lane change scene type the computer equipment processes the lane features and the driving features through the lane evaluation model to obtain the evaluation parameter values of the candidate lanes, and then determines the candidate lane with the highest evaluation parameter value as the second lane for optimizing the travel time.
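  • A minimal sketch of this selection step is given below, assuming the lane evaluation model is any trained scorer (for example a gradient-boosted tree or a small neural network) that maps a feature vector to a single evaluation parameter value; the function and field names are assumptions for illustration.

    # Sketch of selecting the second lane in the free lane change scene: every candidate
    # lane is scored by the lane evaluation model and the highest-scoring lane is chosen.

    def select_second_lane(candidate_lanes, driving_features, lane_evaluation_model):
        best_lane, best_value = None, float("-inf")
        for lane in candidate_lanes:
            feature_vector = lane["lane_features"] + driving_features  # concatenate features
            value = lane_evaluation_model(feature_vector)               # evaluation parameter value
            if value > best_value:
                best_lane, best_value = lane, value
        return best_lane  # second lane for optimizing travel time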
  • Certainly, before changing lanes, the lane change safety check mentioned in step S103 in the embodiment corresponding to FIG. 3 still needs to be performed on the second lane. Only when the second lane satisfies the lane change safety check conditions will the computer device control the target vehicle to change lanes to the second lane.
  • It can be seen that the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and then different lane change processing is performed on the target vehicle according to the different current lane change scene types, which can enable self-driving cars to change lanes flexibly, better avoid traffic congestion, and increase driving speed.
  • FIG. 9 is a schematic flowchart of a forced lane change preparation provided by an embodiment of the present application.
  • the forced lane change preparation may be a specific description of step S103 in the above-mentioned embodiment corresponding to FIG. 3 , mainly to describe the process of the target vehicle entering the lane change preparation position.
  • the forced lane change preparation process includes:
  • For the specific implementation of step S401, reference may be made to the description in step S103 in the above-mentioned embodiment corresponding to FIG. 3 about obtaining the adjacent-vehicle separation area of the first lane, which will not be repeated here.
  • S402 Detect whether there is a feasible lane change area in the adjacent vehicle separation area.
  • If the distance between the target vehicle and the first vehicle is smaller than the first safe distance threshold calculated by formula (1) given in step S103 in the embodiment corresponding to FIG. 3 above, then when the first vehicle has an emergency it is difficult for the target vehicle to avoid the first vehicle by emergency braking, which is prone to safety accidents, so the distance between the target vehicle and the first vehicle after entering the first lane should be greater than the first safe distance threshold. If the distance between the target vehicle and the second vehicle is smaller than the second safe distance threshold calculated by formula (2) given in step S103 in the above-mentioned embodiment corresponding to FIG. 3, then when the target vehicle brakes suddenly, the rear vehicle does not have sufficient reaction time, so the distance between the target vehicle and the second vehicle after entering the first lane should be greater than the second safe distance threshold.
  • In other words, the distance from any position in the feasible lane change area to the first vehicle is greater than the first safe distance threshold, and the distance to the second vehicle is greater than the second safe distance threshold.
  • Detecting whether there is a feasible lane change area therefore means calculating the first safe distance threshold and the second safe distance threshold through the above formulas (1) and (2) and the scene information, and then determining whether there is an area within the adjacent vehicle separation area whose distance from the first vehicle is greater than the first safe distance threshold and whose distance from the second vehicle is greater than the second safe distance threshold.
  • If a feasible lane change area exists, the computer device can directly obtain the lane change preparation area in the current driving lane, that is, the area determined by assuming that the feasible lane change area is shifted into the current driving lane, and control the target vehicle to drive into the lane change preparation area; the lane change preparation area is located in the current driving lane at a position corresponding to the feasible lane change area and has the same length as the feasible lane change area.
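  • A minimal sketch of deriving the feasible lane change area from the adjacent vehicle separation area is shown below, assuming one-dimensional station coordinates along the first lane and safe distance thresholds already computed from formulas (1) and (2); the names and the use of half the vehicle length are assumptions for this sketch.

    # Sketch of step S402: derive the feasible lane change area from the adjacent-vehicle
    # separation area. Positions are station coordinates along the first lane; the two
    # thresholds come from formulas (1) and (2) of step S103 (computed elsewhere).

    def feasible_lane_change_area(first_vehicle_rear, second_vehicle_front,
                                  first_safe_distance, second_safe_distance,
                                  target_vehicle_length):
        # region the target vehicle's center may occupy between the two vehicles
        start = second_vehicle_front + second_safe_distance + target_vehicle_length / 2.0
        end = first_vehicle_rear - first_safe_distance - target_vehicle_length / 2.0
        if end <= start:
            return None  # no feasible lane change area; fall back to lane change probing
        return (start, end)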
  • Please refer to FIG. 10a together; FIG. 10a is a schematic diagram of a forced lane change preparation scenario provided by an embodiment of the present application.
  • the target vehicle 101 (which may be the autonomous driving vehicle 2 in the embodiment corresponding to FIG. 2 above) is driving in lane B.
  • When the computer device determines that the current lane change scene type is the forced lane change scene type, it determines according to the forced lane change scene type that the first lane is lane A, and the target vehicle 101 needs to change lanes to lane A.
  • At this time, the computer device will use the area between the vehicle 102 and the vehicle 103 as the adjacent vehicle separation area, and then calculate the first safe distance threshold dl of the target vehicle 101 according to the scene information and the above formula (1); as shown in FIG. 10a, the area 1043 is the danger area of the target vehicle 101 relative to the vehicle 103.
  • Similarly, the computer device calculates the second safe distance threshold dr of the target vehicle 101 according to the scene information and the above formula (2); as shown in FIG. 10a, the area 1041 is the danger area of the target vehicle 101 relative to the vehicle 102.
  • the dangerous area is removed from the adjacent vehicle separation area, and the remaining part of the area is the feasible lane change area, that is, the area 1042 .
  • When the computer device determines that there is a feasible lane change area in the adjacent vehicle separation area, it will control the target vehicle 101 to enter the lane change preparation area.
  • the lane-change preparation area is an area determined assuming that the feasible lane-change area is shifted to the current driving lane, the lane-change preparation area is located in the current driving lane at a position corresponding to the feasible lane-change area, and has a the same length as the feasible lane change area.
  • Speed control can be used to control the target vehicle 101 to enter the lane change preparation area, that is, a uniform-acceleration-then-uniform-deceleration strategy. Assume that the magnitude of the acceleration used for the uniform acceleration and the uniform deceleration is a, the distance between the vehicle and the center of the feasible lane change area is d, and the acceleration time is t; the required speed profile can then be planned accordingly.
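  • The uniform-acceleration-then-uniform-deceleration strategy can be pictured with the sketch below. It assumes a symmetric profile, in which the relative displacement after accelerating for time t and then decelerating for time t is a·t², so the acceleration time is taken as t = sqrt(d / a); this relation and the helper names are assumptions of the sketch, not necessarily the embodiment's exact formula.

    # Sketch of the speed control used to reach the lane change preparation area:
    # accelerate uniformly for time t, then decelerate uniformly for time t, covering the
    # longitudinal offset d to the center of the feasible lane change area.
    # Symmetric profile assumption: d = a * t**2, hence t = sqrt(d / a).

    import math

    def acceleration_time(d: float, a: float) -> float:
        return math.sqrt(abs(d) / a)

    def speed_offset(elapsed: float, d: float, a: float, base_speed: float) -> float:
        """Target speed relative to the current cruise speed at time `elapsed`."""
        t = acceleration_time(d, a)
        sign = 1.0 if d >= 0 else -1.0          # gap center ahead of or behind the vehicle
        if elapsed <= t:                        # uniform acceleration phase
            return base_speed + sign * a * elapsed
        if elapsed <= 2 * t:                    # uniform deceleration phase
            return base_speed + sign * a * (2 * t - elapsed)
        return base_speed                       # preparation area reached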
  • If there is no feasible lane change area in the adjacent vehicle separation area, a lane change probing process is performed. The lane change probing process includes acquiring the lane change probing area of the target vehicle in the current driving lane.
  • Here, the feasible lane change area refers to the area that satisfies the lane change safety check conditions;
  • the lane change probing area is the area determined by assuming that the middle area of the adjacent vehicle separation area is shifted into the current driving lane; it is located in the current driving lane at a position corresponding to the middle area of the adjacent vehicle separation area and has the same length as that middle area. The target vehicle is then controlled to enter the lane change probing area.
  • After the target vehicle enters the lane change probing area, the lane change probing distance and the lane change probing time period are obtained, and the target vehicle is controlled to move the lane change probing distance toward the first lane.
  • If it is detected that the second vehicle is in a braking state within the lane change probing time period, the target vehicle is controlled to continue to move toward the first lane by the lane change probing distance; if it is detected, after the probing movement of the target vehicle, that there is a feasible lane change area in the adjacent vehicle separation area, the lane change preparation area of the target vehicle in the current driving lane is obtained,
  • and the target vehicle is controlled to enter the lane change preparation position according to the lane change preparation area;
  • the lane change preparation area is the area determined by assuming that the feasible lane change area is shifted into the current driving lane; the lane change preparation area is located in the current driving lane and has the same length as the feasible lane change area. The lane change preparation position in the lane change preparation area satisfies the lane change conditions.
  • the specific implementation manner of controlling the target vehicle to enter the lane change preparation position according to the lane change preparation area may refer to the foregoing step S402, which will not be repeated here.
  • FIG. 10b is a schematic diagram of another forced lane change preparation scenario provided by an embodiment of the present application.
  • As shown in FIG. 10b, the target vehicle 101 is driving in lane B; there is no feasible lane change area in the adjacent vehicle separation area in lane A, and the area 1041 and the area 1043 even partially overlap.
  • At this time, the target vehicle 101 cannot change lanes directly, so speed control of the target vehicle 101 can be performed first so that it reaches the position in lane B corresponding to the middle area of the adjacent vehicle separation area, that is, the lane change probing area.
  • The lane change probing time period is the time for which the target vehicle holds its position after each time it moves the lane change probing distance toward the first lane.
  • The lane change probing distance and the lane change probing time period can be obtained through the lane change probing model.
  • The lane-change probing driving features include the driving features of the target vehicle during lane change probing, and the lane-change probing lane features include the lane features of the candidate lanes during lane change probing.
  • If the vehicle 102 is detected to brake (that is, to yield) within the lane change probing period t, the target vehicle 101 is controlled to continue to deviate toward lane A by the lane change probing distance x until a feasible lane change area 1042 appears between the area 1041 and the area 1043, and the target vehicle is then controlled to enter the lane change preparation area 105. If the vehicle 102 is not detected to brake within the lane change probing period t, it is considered that the vehicle 102 is unwilling to let the target vehicle enter the first lane.
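  • A minimal sketch of the lane change probing loop is given below. The vehicle interface methods are hypothetical placeholders for the perception and control functions described above; they are not part of the embodiment.

    # Sketch of the probing loop (S403-S404): drift toward the first lane by the probe
    # distance x, hold for the probe period t, and continue only if the rear vehicle yields.
    import time

    def probe_lane_change(vehicle, probe_distance, probe_period, max_probes=5):
        for _ in range(max_probes):
            if vehicle.feasible_lane_change_area_exists():
                vehicle.enter_lane_change_preparation_area()
                return True
            vehicle.shift_toward_target_lane(probe_distance)   # move the probe distance x
            time.sleep(probe_period)                           # hold for the probe period t
            if not vehicle.rear_vehicle_braked_during(probe_period):
                # the rear vehicle is not yielding; stop probing and stay in the lane
                vehicle.return_to_lane_center()
                return False
        return False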
  • S405: Perform a lane change safety check on the first lane according to the lane change preparation position; if it is detected that the first lane satisfies the lane change safety check conditions, adjust the speed and driving direction of the target vehicle to the lane-changing speed and the lane-changing driving direction, and control the target vehicle to change lanes to the first lane according to the lane-changing speed and the lane-changing driving direction.
  • For the specific implementation of step S405, reference may be made to the description of the lane change safety check in step S103 in the above-mentioned embodiment corresponding to FIG. 3, which will not be repeated here.
  • It should be noted that the lane change safety check is not limited to being performed after the target vehicle enters the lane change preparation position. After the computer device determines the first lane according to the scene information, the lane change safety check can be performed continuously on the target vehicle until the target vehicle finishes changing lanes. During this period, if the computer equipment determines that the lane change safety check fails, it will issue a new control command to control the target vehicle to return to the current driving lane.
  • the computer device when the target vehicle changes lanes to the second lane in the free lane change scenario in the embodiment corresponding to FIG. 7 , the computer device will decide whether to change lanes according to the output result of the lane evaluation model.
  • the lane evaluation model may also include safety features.
  • the safety feature refers to safety information related to the lane-change environment of the second lane, such as the result of the lane-change safety inspection.
  • the lane evaluation model can comprehensively consider lane characteristics, driving characteristics and safety characteristics.
  • If the decision is to change lanes, the target vehicle is controlled to enter the lane change preparation area and then change lanes to the second lane.
  • the specific implementation manner of controlling the target vehicle to enter the lane change preparation area may be the same as that in step S402 above, which will not be repeated here.
  • It can be seen that the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and then different lane change processing is performed on the target vehicle according to the different current lane change scene types, which can enable self-driving cars to change lanes flexibly, better avoid traffic congestion, and increase driving speed. Moreover, through the lane change probing method, the success rate of lane changing can be improved while ensuring that the lane change is safe.
  • FIG. 11 is a schematic flowchart of an autonomous lane change provided by an embodiment of the present application.
  • the corresponding on-board terminal will collect scene information in real time, and the computer equipment will also analyze the scene where the target vehicle is located according to the obtained scene information.
  • the specific implementation manner of collecting scene information to analyze the scene where the target vehicle is located may refer to steps S201-S202 in the above-mentioned embodiment corresponding to FIG. 4 , which will not be repeated here.
  • This makes the lane changing of the target vehicle more flexible and efficient.
  • If the target vehicle is in a forced lane change scene, a target lane for completing the navigation route will be selected for the target vehicle according to the characteristics of the scene.
  • Then the target vehicle will first perform forced lane change preparation, that is, the target vehicle will enter the lane change preparation area by means of speed control and displacement control.
  • the specific implementation can refer to the description of steps S401-S404 in the embodiment corresponding to FIG. 9 above, and details are not repeated here.
  • If the target vehicle is in a free lane change scene, the computer equipment will call the lane evaluation model, use the lane evaluation model to perform inference at a fixed frequency on the scene information collected in real time, and decide whether to change lanes according to the inference results.
  • If a lane change is decided, the target lane is determined according to the inference result, and the target vehicle is then controlled to enter the lane change preparation area.
  • For the specific implementation, reference may be made to steps S302-S304 in the embodiment corresponding to FIG. 7, which will not be repeated here.
  • After the target vehicle enters the lane change preparation area, the computer equipment will obtain the scene information of the target vehicle at the lane change preparation position in the lane change preparation area, and then determine whether the target vehicle satisfies the lane change safety check conditions (that is, the lane change safety check conditions in step S103 in the embodiment corresponding to FIG. 3 above).
  • If it is confirmed that the target vehicle has passed the lane change safety check, the computer device will control the target vehicle to start changing lanes. As shown in FIG. 11, from the time when the target vehicle starts to change lanes to the end of the lane change execution, the computer equipment will continue to perform the lane change safety check on the target vehicle and the target lane. If the computer equipment determines during the lane change execution process that the lane change safety check does not pass, the target vehicle will immediately stop changing lanes and return to the original driving lane.
  • Lane change execution means that, after the target vehicle enters the lane change preparation position and the computer equipment confirms that the lane change safety check has passed, the computer equipment controls the target vehicle to change lanes from the current driving lane to the target lane (such as the first lane identified in the forced lane change scene type, or the second lane identified in the free lane change scene type).
  • FIG. 12a is a schematic flowchart of a lane change execution method provided by an embodiment of the present application, which specifically takes changing lanes to the first lane as an example for description.
  • As shown in FIG. 12a, the lane change execution process includes:
  • S501 Acquire the current driving state of the target vehicle.
  • the current driving state includes the current lateral offset between the target vehicle and the first lane, the current travel distance of the target vehicle, the current angular offset between the target vehicle and the first lane, and the current angular velocity of the target vehicle.
  • S502 Determine an expected lane change trajectory of the target vehicle according to the current driving state and the expected driving state.
  • The expected driving state refers to an ideal driving state; the expected lateral offset, the expected travel distance, the expected angular offset and the expected angular velocity included in the expected driving state may be set in advance.
  • FIG. 12b is a schematic diagram of an expected lane change trajectory provided by an embodiment of the present application.
  • the computer device can control the target vehicle to change lanes according to the expected lane change trajectory as shown in FIG. 12b.
  • The expected lane change trajectory route shown in FIG. 12b may be a quintic polynomial curve.
  • The quintic polynomial can be expressed as formula (4): l(s) = a0 + a1·s + a2·s^2 + a3·s^3 + a4·s^4 + a5·s^5,
  • where s represents the distance traveled along the road and l represents the lateral offset relative to the target lane.
  • In FIG. 12b, the target vehicle 121 is driving in lane B, and lane A is the target lane to which the lane change needs to be made;
  • s refers to the distance traveled along the road from the starting position of the target vehicle 121, that is, the perpendicular distance from the target vehicle 121 to the vertical line through its starting position,
  • and l refers to the perpendicular distance from the target vehicle 121 to the lane center line of lane A.
  • The angular offset refers to the included angle between the driving direction of the target vehicle 121 and the lane center line of the target lane, and the angular velocity refers to the angular velocity of the target vehicle 121; in both cases s still represents the distance traveled along the road.
  • the current driving state of the target vehicle 121 includes the current lateral offset of the target vehicle 121 and the target lane, the current travel distance of the target vehicle 121, the driving direction of the target vehicle 121 and the current angular offset of the target lane, the target vehicle The current angular velocity of 121.
  • the preset traveling state of the target vehicle 121 is an ideal state set in advance. For example, the traveling direction of the target vehicle 121 and the preset angular offset of the target lane can be set to 0, and the preset angular velocity of the target vehicle 121 can be set to 0.
  • the preset lateral offset of the target vehicle 121 and the target lane is 0, and the preset travel distance of the target vehicle 121 is the travel distance threshold.
  • By substituting the current driving state and the preset driving state as boundary conditions, the unknown parameters a0 to a5 can be obtained, and the quintic polynomial is determined,
  • that is, the expected lane change trajectory route that takes the position at the beginning of the lane change as its origin.
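  • One common way to obtain the coefficients is to solve a 6x6 linear system from the boundary conditions, as in the sketch below. Mapping the angular offset and the angular velocity to the first and second derivatives of l with respect to s, and setting the end-state values to zero, are assumptions made for this sketch only.

    # Sketch of solving the quintic trajectory l(s) = a0 + a1*s + ... + a5*s**5 from
    # boundary conditions: at s = 0 the current lateral offset and its first/second
    # derivatives; at s = S (the travel distance threshold) the preset values (zero here).

    import numpy as np

    def quintic_coefficients(l0, dl0, ddl0, S, lS=0.0, dlS=0.0, ddlS=0.0):
        A = np.array([
            [1, 0, 0,      0,       0,        0],        # l(0)
            [0, 1, 0,      0,       0,        0],        # l'(0)
            [0, 0, 2,      0,       0,        0],        # l''(0)
            [1, S, S**2,   S**3,    S**4,     S**5],     # l(S)
            [0, 1, 2*S,    3*S**2,  4*S**3,   5*S**4],   # l'(S)
            [0, 0, 2,      6*S,     12*S**2,  20*S**3],  # l''(S)
        ], dtype=float)
        b = np.array([l0, dl0, ddl0, lS, dlS, ddlS], dtype=float)
        return np.linalg.solve(A, b)   # coefficients a0 .. a5

    def lateral_offset(a, s):
        return sum(a[i] * s**i for i in range(6))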
  • S503 Control the target vehicle to change lanes to the first lane according to the expected lane change trajectory.
  • Specifically, the computer device determines the lane-changing speed and the lane-changing driving direction according to the expected lane change trajectory, adjusts the speed and driving direction of the target vehicle to the lane-changing speed and the lane-changing driving direction, and controls the target vehicle to change lanes to the first lane according to the lane-changing speed and the lane-changing driving direction.
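  • As an illustration only, the lane-changing driving direction can be derived from the slope of the expected trajectory, as in the sketch below; treating the heading offset as the arctangent of dl/ds is an assumption of this sketch, not the embodiment's exact controller.

    # Sketch of deriving the lane-changing driving direction from the expected trajectory:
    # the slope dl/ds of the quintic gives the desired angular offset from the lane
    # direction at each traveled distance s.

    import math

    def desired_angular_offset(a, s):
        dl_ds = sum(i * a[i] * s**(i - 1) for i in range(1, 6))
        return math.atan(dl_ds)   # radians, relative to the lane center line direction

    def lane_change_heading(lane_heading, a, s):
        return lane_heading + desired_angular_offset(a, s)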
  • It can be seen that the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and then different lane change processing is performed on the target vehicle according to the different current lane change scene types, which can enable self-driving cars to change lanes flexibly, better avoid traffic congestion, and increase driving speed.
  • FIG. 13 is a schematic structural diagram of an automatic driving-based control device provided by an embodiment of the present application.
  • the above-mentioned control apparatus may be a computer program (including program code) running in a computer device, for example, the control apparatus is an application software; the apparatus may be used to execute corresponding steps in the methods provided in the embodiments of the present application.
  • the control device 1 may include: an information acquisition module 11 , a scene determination module 12 , a forced lane change module 13 and a free lane change module 14 .
  • an information acquisition module 11 used for acquiring scene information of the target vehicle
  • the scene determination module 12 is used for determining the current lane change scene type of the target vehicle according to the scene information
  • the forced lane change module 13 is configured to: if the current lane change scene type is the forced lane change scene type, identify, according to the scene information, the first lane used to complete the navigation driving route, and when it is detected that the first lane satisfies the lane change safety check conditions, control the target vehicle to perform lane change processing according to the first lane;
  • the free lane change module 14 is configured to: if the current lane change scene type is the free lane change scene type, identify, according to the scene information, the second lane for optimizing the travel time, and when it is detected that the second lane satisfies the lane change safety check conditions, control the target vehicle to perform lane change processing according to the second lane.
  • In some embodiments, the forced lane change module 13 may include: an interval area acquisition unit 121, a lane change preparation unit 122, a safety inspection unit 123 and a lane change execution unit 124.
  • The interval area acquisition unit 121 is used to acquire the adjacent vehicle interval area of the first lane; the adjacent vehicle interval area is the interval area between the first vehicle and the second vehicle in the first lane; the first vehicle is the vehicle in the first lane that is closest to the front of the target vehicle; the second vehicle is the vehicle in the first lane that is closest to the rear of the target vehicle;
  • the lane change preparation unit 122 is configured to control the target vehicle to enter the lane change preparation position according to the adjacent vehicle separation area;
  • a safety inspection unit 123 configured to perform a lane change safety inspection on the first lane according to the lane change preparation position
  • the lane change execution unit 124 is configured to adjust the speed and driving direction of the target vehicle to the lane change speed and the lane change direction if it is detected that the first lane satisfies the lane change safety check condition, and control the target vehicle to follow the lane change speed and the lane change direction. Change lanes in the direction of travel to the first lane.
  • the space area acquisition unit 121 , the lane change preparation unit 122 , the safety check unit 123 and the lane change execution unit 124 can be referred to the description of step S103 in the embodiment corresponding to FIG. 3 , which will not be repeated here.
  • In some embodiments, the lane change preparation unit 122 may include: a first area acquisition subunit 1221, a first lane change preparation subunit 1222, a second area acquisition subunit 1223, a tentative preparation subunit 1224, a tentative acquisition subunit 1225, a tentative movement subunit 1226, and a second lane change preparation subunit 1227.
  • The first area acquisition subunit 1221 is used to acquire the lane change preparation area of the target vehicle in the current driving lane if there is a feasible lane change area in the adjacent vehicle interval area; the feasible lane change area is the area that satisfies the lane change safety check conditions; the lane change preparation area is the area determined when it is assumed that the feasible lane change area is shifted into the current driving lane;
  • the first lane change preparation subunit 1222 is used to control the target vehicle to enter the lane change preparation position according to the lane change preparation area;
  • the second area acquisition subunit 1223 is configured to acquire the lane-change detection area of the target vehicle in the current driving lane if there is no feasible lane-change area in the adjacent-vehicle interval area;
  • the lane change test area is the area determined by assuming that the middle area of the adjacent vehicle separation area is shifted into the current driving lane;
  • the probing preparation subunit 1224 is used to control the target vehicle to enter the lane change probing area;
  • the tentative acquisition subunit 1225 is used to determine the lane-change probe distance and the lane-change probe time period when the target vehicle enters the lane-change probe area;
  • the tentative movement subunit 1226 is configured to control the target vehicle to move the lane change probing distance toward the first lane;
  • the tentative movement subunit 1226 is further used to control the target vehicle to continue to move toward the first lane by the lane change probing distance if it is detected that the second vehicle is in the braking state within the lane change probing time period;
  • the second lane change preparation subunit 1227 is configured to, if it is detected that there is a feasible lane change area in the adjacent vehicle interval area after the probing movement of the target vehicle, obtain the lane change preparation area of the target vehicle in the current driving lane
  • and control the target vehicle to enter the lane change preparation position according to the lane change preparation area; the lane change preparation area is the area determined when it is assumed that the feasible lane change area is shifted into the current driving lane.
  • The tentative acquisition subunit 1225 is specifically used to extract the lane-change probing driving features and the lane-change probing lane features from the scene information, and to process these features through the lane change probing model to obtain the lane change probing distance and the lane change probing time period; the lane change probing model is trained based on driving behavior samples; the driving behavior samples refer to the lane feature samples and driving feature samples collected when the user probes for a lane change.
  • For the specific implementation of the first area acquisition subunit 1221, the first lane change preparation subunit 1222, the second area acquisition subunit 1223, the tentative preparation subunit 1224, the tentative acquisition subunit 1225, the tentative movement subunit 1226 and the second lane change preparation subunit 1227, reference may be made to the description of steps S402-S404 in the embodiment corresponding to FIG. 9, which will not be repeated here.
  • In some embodiments, the safety inspection unit 123 may include: a vehicle feature acquisition subunit 1231, a threshold value calculation subunit 1232, a first safety determination subunit 1233, a lane change feature acquisition subunit 1234, a collision parameter determination subunit 1235, a time calculation subunit 1236 and a second safety determination subunit 1237.
  • the vehicle feature acquisition subunit 1231 is used to acquire the reaction time of the target vehicle, and the current speed and current acceleration of the target vehicle at the lane change preparation position;
  • the vehicle feature acquisition subunit 1231 is further configured to acquire the first speed and the first acceleration of the first vehicle;
  • the vehicle feature acquisition subunit 1231 is further configured to acquire the second speed and the second acceleration of the second vehicle;
  • the threshold value calculation subunit 1232 is used to determine the first safe distance threshold according to the reaction time, the current speed, the current acceleration, the first speed and the first acceleration;
  • the threshold calculation subunit 1232 is further configured to determine the second safety distance threshold according to the reaction time, the current speed, the current acceleration, the second speed and the second acceleration;
  • the first safety determination subunit 1233 is configured to determine that the first lane satisfies the lane change safety inspection condition if the distance of the preceding vehicle is not less than the first safety distance threshold, and the distance of the following vehicle is not less than the second safety distance threshold;
  • the preceding vehicle distance is the distance between the target vehicle at the lane change preparation position and the first vehicle, and the rear vehicle distance is the distance between the target vehicle at the lane change preparation position and the second vehicle;
  • the first safety determination subunit 1233 is further configured to determine that the first lane does not meet the lane change safety check condition if the distance of the preceding vehicle is less than the first safety distance threshold, or the distance of the following vehicle is less than the second safety distance threshold, and control the target vehicle to stop Change lanes to the first lane.
  • the lane change feature acquisition subunit 1234 is used to obtain the scene update information when the target vehicle is at the lane change preparation position, and to extract the lane change features from the scene update information;
  • the collision parameter determination subunit 1235 is used to input the lane change feature into the collision time identification model, and output the expected collision time of the preceding vehicle and the expected collision time of the rear vehicle through the collision time identification model;
  • the time calculation subunit 1236 is used to determine the actual collision time of the target vehicle according to the preceding vehicle distance and the current speed;
  • the time calculation subunit 1236 is further configured to determine the actual collision time of the second vehicle according to the distance of the rear vehicle and the second speed;
  • the second safety determination subunit 1237 is configured to determine that the first lane satisfies the lane change safety if the actual collision time of the target vehicle is not less than the expected collision time of the preceding vehicle, and the actual collision time of the second vehicle is not less than the expected collision time of the rear vehicle. check conditions;
  • the second safety determination subunit 1237 is further configured to determine that the first lane does not satisfy the lane change safety check conditions if the actual collision time of the target vehicle is smaller than the expected collision time of the preceding vehicle, or the actual collision time of the second vehicle is smaller than the expected collision time of the rear vehicle, and to control the target vehicle to stop changing lanes to the first lane.
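  • The two checks performed by the safety inspection unit can be pictured with the following consolidated sketch, which computes the actual collision times in the simplified way described above (front gap divided by the current speed, rear gap divided by the second vehicle's speed); the parameter names are assumptions for illustration.

    # Sketch of the lane change safety checks: (a) front/rear gaps must not be smaller than
    # the safe distance thresholds from formulas (1) and (2), and (b) the actual times to
    # collision must not be smaller than the expected collision times output by the
    # collision time identification model.

    def lane_change_safety_check(front_gap, rear_gap,
                                 first_safe_distance, second_safe_distance,
                                 current_speed, second_vehicle_speed,
                                 expected_front_ttc, expected_rear_ttc):
        # (a) safe distance check
        if front_gap < first_safe_distance or rear_gap < second_safe_distance:
            return False
        # (b) time-to-collision check
        actual_front_ttc = front_gap / max(current_speed, 1e-6)
        actual_rear_ttc = rear_gap / max(second_vehicle_speed, 1e-6)
        return actual_front_ttc >= expected_front_ttc and actual_rear_ttc >= expected_rear_ttc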
  • the vehicle feature acquisition subunit 1231, the threshold calculation subunit 1232, the first safety determination subunit 1233, the lane change feature acquisition subunit 1234, the collision parameter determination subunit 1235, the time calculation subunit 1236 and the second safety determination subunit For the specific implementation of the 1237, reference may be made to the description of the lane change safety check in step S103 in the embodiment corresponding to FIG. 3 , which will not be repeated here.
  • the lane change execution unit 124 may include: a driving state acquisition subunit 1241 , a trajectory determination subunit 1242 , a driving data determination subunit 1243 , a driving adjustment subunit 1244 , and a lane change subunit 1245 .
  • The driving state acquisition subunit 1241 is used to acquire the current driving state of the target vehicle;
  • the current driving state includes the current lateral offset between the target vehicle and the first lane, the current travel distance of the target vehicle, the current angular offset between the target vehicle and the first lane, and the current angular velocity of the target vehicle.
  • The trajectory determination subunit 1242 is used to determine the expected lane change trajectory route of the target vehicle according to the current lateral offset, the current travel distance, the current angular offset, the current angular velocity and the expected driving state;
  • the driving data determination subunit 1243 is used to determine the lane-change speed and the lane-change travel direction according to the expected lane-change trajectory;
  • the driving adjustment subunit 1244 is used to adjust the speed and driving direction of the target vehicle to the lane-changing speed and the lane-changing driving direction;
  • the lane change subunit 1245 is used to control the target vehicle to change lanes to the first lane according to the lane change speed and the lane change travel direction.
  • For the specific implementation of the driving state acquisition subunit 1241, the trajectory determination subunit 1242, the driving data determination subunit 1243, the driving adjustment subunit 1244 and the lane change subunit 1245, please refer to the description of steps S501-S503 in FIG. 12a above, which will not be repeated here.
  • In some embodiments, the free lane change module 14 may include: a feature acquisition unit 131, an evaluation parameter determination unit 132, a second lane determination unit 133 and a second lane change unit 134.
  • the feature acquisition unit 131 is configured to extract the lane feature and driving feature of the candidate lane from the scene information if the current lane change scene type is the free lane change scene type;
  • the evaluation parameter determination unit 132 is used to process the lane features and driving features through the lane evaluation model to obtain the evaluation parameter values of the candidate lanes; the lane evaluation model is obtained by training according to driving behavior samples; the driving behavior samples refer to the lane feature samples and driving feature samples collected when the user actively changes lanes;
  • the second lane determination unit 133 is configured to determine the candidate lane with the highest evaluation parameter value as the second lane for optimizing the travel time;
  • the second lane changing unit 134 is configured to control the target vehicle according to the second lane to execute the lane changing process when it is detected that the second lane satisfies the lane changing safety check condition.
  • the specific implementation of the feature acquisition unit 131 , the evaluation parameter determination unit 132 , the second lane determination unit 133 and the second lane change unit 134 may refer to the description of steps S301 to S304 in the embodiment corresponding to FIG. 7 , here No further description will be given.
  • In some embodiments, the scene determination module 12 may include: a demand information determination unit 141, a free scene type determination unit 142 and a mandatory scene type determination unit 143.
  • The demand information determination unit 141 is configured to determine the obstacle detection information, the end point distance and the intersection distance according to the scene information;
  • the free scene type determination unit 142 is configured to determine that the current lane change scene type of the target vehicle is the free lane change scene type if the obstacle detection information indicates that there is no obstacle in front of the target vehicle, the distance to the end point is not less than the first distance threshold, and the distance to the intersection is not less than the second distance threshold;
  • the mandatory scene type determination unit 143 is used to determine that the current lane change scene type of the target vehicle is the forced lane change scene type if the obstacle detection information indicates that there is an obstacle in front of the target vehicle, or the end point distance is less than the first distance threshold, or the intersection distance is less than the second distance threshold.
  • the specific implementation of the demand information determination unit 141, the free scene type determination unit 142, and the mandatory scene type determination unit 143 can be referred to the description of steps S201-S202 in the embodiment corresponding to FIG. 4, which will not be repeated here.
  • the forced lane change scene types include the intersection lane change scene type, the exit lane change scene type, the static obstacle lane change scene type, and the end stop lane change scene type;
  • In some embodiments, the mandatory scene type determination unit 143 may include: a first scene determination subunit 1431, a second scene determination subunit 1432, an intersection information acquisition subunit 1433 and a third scene determination subunit 1434.
  • the first scene determination subunit 1431 is configured to determine that the current lane change scene type of the target vehicle is a static obstacle lane change scene type if the obstacle detection information indicates that there is a static obstacle in front of the target vehicle;
  • the second scene determination subunit 1432 is configured to determine that the current lane change scene type of the target vehicle is the end point parking lane change scene type if the end point distance is less than the first distance threshold;
  • the intersection information acquisition subunit 1433 is used to obtain the intersection map information of the intersection if the intersection distance is less than the second distance threshold;
  • the third scene determination subunit 1434 is configured to determine that the current lane change scene type of the target vehicle is the exit lane change scene type if the intersection map information indicates that the intersection is an exit, and otherwise determine that the current lane change scene type of the target vehicle is the intersection lane change scene type;
  • the intersection lane change scene type is used to determine the first lane.
  • In this embodiment of the present application, the current lane change scene type in which the target vehicle is located can be determined. If the current lane change scene type is the forced lane change scene type, the first lane for completing the navigation route is identified according to the scene information, and when it is detected that the first lane satisfies the lane change safety check conditions, the target vehicle is controlled according to the first lane to perform lane change processing; if the current lane change scene type is the free lane change scene type, a second lane for optimizing the travel time is identified according to the scene information, and when it is detected that the second lane satisfies the lane change safety check conditions, the target vehicle is controlled to perform lane change processing according to the second lane.
  • It can be seen that the current lane change scene type of the target vehicle can be determined according to the acquired scene information of the target vehicle, and then different lane change processing is performed on the target vehicle according to the different current lane change scene types, which can enable autonomous vehicles to change lanes flexibly, better avoid traffic congestion, and increase driving speed.
  • FIG. 14 is a schematic structural diagram of a computer device provided by an embodiment of the present application.
  • the apparatus 1 in the embodiment corresponding to FIG. 13 can be applied to the above-mentioned computer device 8000.
  • The above-mentioned computer device 8000 may include: a processor 8001, a network interface 8004 and a memory 8005.
  • In addition, the above-mentioned computer device 8000 may also include a user interface 8003 and at least one communication bus 8002.
  • the communication bus 8002 is used to realize the connection and communication between these components.
  • the network interface 8004 may include a standard wired interface, a wireless interface (eg, a WI-FI interface).
  • the memory 8005 may be high-speed RAM memory or non-volatile memory, such as at least one disk memory.
  • the memory 8005 may also be at least one storage device located remotely from the aforementioned processor 8001 .
  • the memory 8005 as a computer-readable storage medium may include an operating system, a network communication module, a user interface module, and a device control application program.
  • In the computer device 8000, the network interface 8004 can provide network communication functions, the user interface 8003 is mainly used to provide an input interface for the user, and the processor 8001 can be used to call the device control application program stored in the memory 8005
  • in order to: obtain the scene information of the target vehicle; determine the current lane change scene type of the target vehicle according to the scene information; if the current lane change scene type is the forced lane change scene type, identify, according to the scene information, the first lane for completing the navigation driving route, and when it is detected that the first lane satisfies the lane change safety check conditions, control the target vehicle according to the first lane to perform lane change processing; if the current lane change scene type is the free lane change scene type, identify, according to the scene information, the second lane for optimizing the travel time, and when it is detected that the second lane satisfies the lane change safety check conditions, control the target vehicle according to the second lane to perform lane change processing.
  • The computer device 8000 described in this embodiment of the present application can execute the description of the control method in the foregoing embodiments corresponding to FIG. 3 to FIG. 12b, and can also execute the description of the control device 1 in the foregoing embodiment corresponding to FIG. 13, which will not be repeated here.
  • the description of the beneficial effects of using the same method will not be repeated.
  • In addition, the embodiment of the present application further provides a computer-readable storage medium, and the computer program executed by the aforementioned data processing computer device 8000 is stored in the computer-readable storage medium;
  • the above computer program includes program instructions, and when the above processor executes the program instructions, it can execute the description of the above data processing method in the embodiments corresponding to FIG. 3 to FIG. 12b, which will not be repeated here.
  • the description of the beneficial effects of using the same method will not be repeated.
  • the above-mentioned computer-readable storage medium may be the data processing apparatus provided in any of the foregoing embodiments or an internal storage unit of the above-mentioned computer device, such as a hard disk or a memory of the computer device.
  • the computer-readable storage medium can also be an external storage device of the computer device, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card, or a flash card equipped on the computer device.
  • the computer-readable storage medium may also include both an internal storage unit of the computer device and an external storage device.
  • the computer-readable storage medium is used to store the computer program and other programs and data required by the computer device.
  • the computer-readable storage medium can also be used to temporarily store data that has been or will be output.
  • In addition, an embodiment of the present application also provides a vehicle, and the above-mentioned vehicle includes the control device 1 in the embodiment corresponding to FIG. 13 above, or includes the above-mentioned computer equipment, or includes the above-mentioned computer-readable storage medium.
  • the above-mentioned vehicle can execute the description of the above-mentioned data processing method in the above-mentioned embodiments corresponding to FIG. 3 to FIG. 12 b , and therefore will not be repeated here.
  • the description of the beneficial effects of using the same method will not be repeated.

Abstract

一种基于自动驾驶的控制方法、装置、车辆以及相关设备,自动驾驶的控制方法包括:获取目标车辆的场景信息(S101);根据场景信息确定目标车辆的当前变道场景类型(S102);若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足变道安全检查条件时,根据第一车道控制目标车辆执行变道处理(S103);若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理(S104)。控制方法能够让自动驾驶汽车的自主变道更加灵活,提高变道安全性以及通行效率。

Description

一种基于自动驾驶的控制方法、装置、车辆以及相关设备
本申请要求于2020年11月19日提交中国专利局、申请号为202011300553.0、名称为“一种基于自动驾驶的控制方法、装置、车辆以及相关设备”的中国专利申请的优先权,其全部内容通过引用结合在本申请中。
技术领域
本申请涉及人工智能领域,尤其涉及一种基于自动驾驶的控制方法、装置、车辆以及相关设备。
背景
随着自动驾驶技术的不断发展,自动驾驶车辆的自主变道方法也得到了更多的关注,自动变道要求自动驾驶汽车在道路上自主选择行驶车道并进行变道操作,适当的变道决策可以更好地完成驾驶任务,还可以避免交通拥堵,提高通行效率,避免交通事故,保证道路安全。因此,自主变道已成为当前自动驾驶技术面临的重大问题。
技术内容
本申请实施例提供一种基于自动驾驶的控制方法、装置、车辆以及相关设备,能够让自动驾驶汽车的自主变道更加灵活,提高变道安全性以及通行效率。
本申请实施例一方面提供了一种基于自动驾驶的控制方法,包括:
获取目标车辆的场景信息;
根据场景信息确定目标车辆的当前变道场景类型;
若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足变道安全检查条件时,根据第一车道控制目标车辆执行变道处理;
若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。
本申请实施例一方面提供了一种基于自动驾驶的控制装置,包括:
信息获取模块,用于获取目标车辆的场景信息;
场景确定模块,用于根据场景信息确定目标车辆的当前变道场景类型;
强制变道模块,用于若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足变道安全检查条件时,根据第一车道控制目标车辆执行变道处理;
自由变道模块,用于若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。
本申请实施例一方面提供了一种计算机设备,包括:处理器、存储器、网络接口;
上述处理器与上述存储器、上述网络接口相连,其中,上述网络接口用于提供数据通信功能,上述存储器用于存储计算机程序,上述处理器用于调用上述计算机程序,以执行本申请实施例中的方法。
本申请实施例一方面提供了一种计算机可读存储介质,上述计算机可读存储介质存储有计算机 程序,上述计算机程序包括程序指令,上述程序指令被处理器执行时,以执行本申请实施例中的方法。
本申请实施例一方面提供了一种计算机程序产品或计算机程序,该计算机程序产品或计算机程序包括计算机指令,该计算机指令存储在计算机可读存储介质中,计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行本申请实施例中的方法。
本申请实施例一方面提供了一种车辆,上述车辆包括上述基于自动驾驶的控制装置,或者,包括上述计算机设备,或者包括上述计算机可读存储介质。
附图说明
为了更清楚地说明本申请实施例或相关技术中的技术方案,下面将对实施例或相关技术描述中所需要使用的附图作简单地介绍,显而易见地,下面描述中的附图仅仅是本申请的一些实施例,对于本领域普通技术人员来讲,在不付出创造性劳动的前提下,还可以根据这些附图获得其他的附图。
图1是本申请实施例提供的一种网络架构图;
图2是本申请实施例提供的一种自主变道的场景示意图;
图3是本申请实施例提供的一种基于自动驾驶的控制方法的流程示意图;
图4是本申请实施例提供的一种强制变道的流程示意图;
图5是本申请实施例提供的一种场景分发决策树的设计示意图;
图6a-图6c是本申请实施例提供的一种识别第一车道的场景示意图;
图7是本申请实施例提供的一种自由变道的流程示意图;
图8是本申请实施例提供的一种离线车道评估模型训练过程示意图;
图9是本申请实施例提供的一种强制变道准备的流程示意图;
图10a-图10b是本申请实施例提供的一种强制变道准备场景示意图;
图11是本申请实施例提供的一种自主变道的流程示意图;
图12a是申请实施例提供的一种变道执行方法的流程示意图;
图12b是本申请实施例提供的一种预期变道轨迹路线示意图;
图13是本申请实施例提供的一种基于自动驾驶的控制装置的结构示意图;
图14是本申请实施例提供的一种计算机设备的结构示意图。
实施方式
下面将结合本申请实施例中的附图,对本申请实施例中的技术方案进行清楚、完整地描述,显然,所描述的实施例仅仅是本申请一部分实施例,而不是全部的实施例。基于本申请中的实施例,本领域普通技术人员在没有作出创造性劳动前提下所获得的所有其他实施例,都属于本申请保护的范围。
人工智能(Artificial Intelligence,AI)是利用数字计算机或者数字计算机控制的机器模拟、延伸和扩展人的智能,感知环境、获取知识并使用知识获得最佳结果的理论、方法、技术及应用系统。换句话说,人工智能是计算机科学的一个综合技术,它企图了解智能的实质,并生产出一种新的能以人类智能相似的方式做出反应的智能机器。人工智能也就是研究各种智能机器的设计原理与实现方法,使机器具有感知、推理与决策的功能。
人工智能技术是一门综合学科,涉及领域广泛,既有硬件层面的技术也有软件层面的技术。人工智能基础技术一般包括如传感器、专用人工智能芯片、云计算、分布式存储、大数据处理技术、操作/交互系统、机电一体化等技术。人工智能软件技术主要包括计算机视觉技术、语音处理技术、 自然语言处理技术以及机器学习/深度学习等几大方向。
随着人工智能技术研究和进步,人工智能技术在多个领域展开研究和应用,例如常见的智能家居、智能穿戴设备、虚拟助理、智能音箱、智能营销、无人驾驶、自动驾驶、无人机、机器人、智能医疗、智能客服等,相信随着技术的发展,人工智能技术将在更多的领域得到应用,并发挥越来越重要的价值。
本申请实施例提供的方案涉及人工智能的自动驾驶、机器学习等技术,具体通过如下实施例进行说明:
常见的自主变道方案,是把全局路径规划做成车道级别,即在一次自动驾驶任务开始的时候就已经基本决定了在哪个地方需要变道。但是车道级别全局路径规划方案不能很好的应对快速变化的复杂交通流,比如全局规划的需要变道的地方被静态的障碍物堵住时,就可能会导致自动驾驶车辆无法正常进行变道,或者自动驾驶车辆当前车道的前方车辆行驶缓慢时,就可能会导致自动驾驶车辆行驶速度慢等等。可见目前的自动驾驶汽车的变道方式不够灵活,可能会带来安全隐患,且通行效率低。
本申请实施例提供了一种基于自动驾驶的控制方法,可以使自动驾驶汽车的自主变道更加灵活,提高变道安全性以及通行效率。
图1是本申请实施例提供的一种网络架构图。如图1所示,该网络架构可以包括业务服务器100以及终端设备集群,其中,上述终端设备集群可以包括多个终端设备,如图1所示,具体可以包括终端设备10a、终端设备10b、……、终端设备10n。如图1所示,终端设备10a、终端设备10b、……、终端设备10n可以分别与上述业务服务器进行网络连接,以便于每个终端设备可以通过该网络连接与业务服务器100进行数据交互,以便于上述业务服务器100可以接收到来自于每个终端设备的业务数据。
本申请中，每个终端设备均可以是车载终端或手机终端。终端设备配置在行驶的车辆上，每辆车辆上都可以配置终端设备，以便通过终端设备与业务服务器的数据交互，得到用于自动驾驶的控制命令，终端设备通过该控制命令控制车辆进行自动驾驶。如图1所示，业务服务器100可以接收来自每个终端设备的业务数据，调用与自动驾驶有关的源数据，然后进行逻辑运算处理得到用于控制车辆驾驶的控制命令。其中，业务数据可以是场景信息。其中，场景信息包括车辆相关信息、道路信息、环境信息、定位信息、终点信息以及地图信息等等。其中，源数据可以是用于自动驾驶的机器学习模型和逻辑运算时所需的参数数据。本申请中，每辆车辆行驶时，该车辆对应的终端设备都可以向业务服务器100不断发起变道检测的业务请求，当业务服务器接收到终端设备的业务请求，会对终端设备传来的业务数据进行逻辑运算处理后下发控制命令返回到终端设备。每个终端设备在接收到业务服务器100传来的控制命令时，都可以根据该控制命令控制对应的车辆进行变道处理。
以终端设备10a、自动驾驶车辆20a和业务服务器100进行数据交互为例。自动驾驶车辆20a行驶在道路中,自动驾驶车辆20a配置有终端设备10a。终端设备10a会将获取到的有关自动驾驶车辆20a的各种信息作为场景信息,一起发送给业务服务器100。业务服务器100在接收到终端设备10a传来的场景信息后,会调用与自动驾驶有关的源数据与场景信息一起做逻辑运算处理。其中,逻辑运算处理包括先确定自动驾驶车辆20a的当前变道场景类型,然后根据不同的当前变道场景类型识别目标车道,满足安全检查条件时下发控制命令以使自动驾驶车辆20a变道至目标车道。其中,目标车道是指业务服务器100经过运算处理后得到的当前场景情况下适合自动驾驶车辆20a的变道车道。
可以理解的是，本申请实施例提供的方法可以由计算机设备执行，计算机设备可以为上述的业务服务器100。其中，业务服务器100可以是独立的物理服务器，也可以是多个物理服务器构成的服务器集群或者分布式系统，还可以是提供云服务、云数据库、云计算、云函数、云存储、网络服务、云通信、中间件服务、域名服务、安全服务、CDN、以及大数据和人工智能平台等基础云计算服务的云服务器。
可以理解的是,上述数据交互的过程只是本申请实施例的一个实例,逻辑运算处理并不限于在业务服务器100中,也可以在终端设备中。此时终端设备发起变道安全检查的业务请求,可以获取业务数据后从业务服务器100中获取与自动驾驶有关的源数据,然后执行逻辑运算处理,得到用于控制车辆驾驶的控制命令。同样的,与自动驾驶有关的源数据,也并不限于存在业务服务器100中,也可以存储在终端设备中。本申请在此不做限制。
其中,终端设备以及业务服务器可以通过有线或无线通信方式进行直接或间接地连接,本申请在此不做限制。
为便于理解,请参见图2,图2是本申请实施例提供的一种自主变道的场景示意图。其中,如图2所示的业务服务器可以为上述业务服务器100,且如图2所示自动驾驶车辆2可以为上述图1所示的自动驾驶车辆20b,该自动驾驶车辆2中安装有车载终端21,该车载终端21可以为上述图1所示的终端设备10b。如图2所示,自动驾驶车辆2行驶在车道B中,此时,车载终端21会将当前的针对自动驾驶车辆2的场景信息传给业务服务器。其中,场景信息可以包括定位信息、地图信息、环境信息、终点信息和车辆相关信息等等。其中,车辆相关信息指的是自动驾驶车辆2和与其邻近的车辆的相关信息,即车辆相关信息可以包括自动驾驶车辆2的速度、加速度、自动驾驶车辆2邻近的车辆的速度、加速度,车辆相关信息还可以包括自动驾驶车辆2和与其邻近的车辆的位置关系。其中,场景信息的采集装置可以安装在自动驾驶车辆2上,也可以安装在车载终端21上,也可以是分别在安装上自动驾驶车辆2上和车载终端21上,这里不做限制,不过为了下述阐述清楚,后续默认采集装置安装在车载终端21上。
如图2所示,业务服务器在获取到自动驾驶车辆2的场景信息后,会根据场景信息确定当前变道场景类型,然后根据当前变道场景类型和场景信息确定此时理论上适合自动驾驶车辆2的最优车道,作为目标车道。其中,当前变道场景类型可以归类为2个场景大类:自由变道场景类型(前车速度慢决定超车)和强制变道场景类型(十字路口车道选择、静态障碍物堵路等)。其中,自由变道场景类型是指自动驾驶车辆2为了提高通行速度从而优化行驶时间,选择主动变更车道,不变更车道也不会影响任务目标;强制变道场景类型是指当前场景下汽车不变道至目标车道,无法完成任务目标。其中,任务目标是指自动驾驶车辆2行驶前设定的需要到达的目标终点。然后,业务服务器会根据目标车道和场景信息,对自动驾驶汽车2做变道处理。其中,目标车道是指能够完成当前导航驾驶路线的最优车道。其中,变道处理是指业务服务器会先根据场景信息监测自动驾驶车辆2是否符合变道条件,如果不符合变道条件,业务服务器会调节自动驾驶车辆2的位置和速度去创造安全的变道环境。业务服务器在做变道处理过程中,会将控制命令下发到车载终端21。车载终端21在接收到控制命令后,会根据控制命令控制自动驾驶车辆2执行变道操作。比如说,业务服务器根据场景信息确定车道C为目标车道,但是自动驾驶车辆2同车道C中的前方车辆1距离太近,变道有风险,不符合变道安全检查条件,业务服务器便会下发降低行驶速度的控制命令,控制自动驾驶车辆2同车道C中的前方车辆1拉开距离,直到二者的距离达到变道安全检查条件,业务服务器才会下发让自动驾驶车辆2从车道B变道至车道C的控制命令。
如图2所示，业务服务器还会根据场景信息对整个自主变道的过程做变道安全检查。该变道安全检查是一直持续进行的，也就是说，在接收到车载终端21传来的变道检测的请求后，业务服务器会不断地根据传来的场景信息做变道安全检查，直到自动驾驶车辆2变道完成。在变道安全检查过程中，一旦检测到此时不满足变道安全检查条件时，业务服务器会中止对自动驾驶车辆2变道至目标车道的处理，控制该自动驾驶车辆2回到当前行驶车道，也就是车道B。
其中,变道安全检查、确定当前变道场景类型和目标车道、不同变道场景类型下的变道处理和执行变道的具体实现过程,可以参见下述图3-图12b所对应实施例中的描述。
请参见图3,图3是本申请实施例提供的一种基于自动驾驶的控制方法的流程示意图。该方法可以由业务服务器(如上述图1所对应实施例中的业务服务器100)执行,也可以由终端设备(如上述图1所对应实施例中的终端设备10a)执行。本实施例以该方法由计算机设备(该计算机设备 可以为业务服务器100,或者可以为终端设备10a)执行为例进行说明。如图3所示,该流程可以包括:
S101:获取目标车辆的场景信息。
具体的，场景信息能够反映一定时间和空间范围内汽车驾驶行为与行驶环境的综合情况，例如：场景信息包括车辆相关信息、道路信息、环境信息、定位信息、终点信息以及地图信息等等。其中，车辆相关信息包括目标车辆和目标车辆周边车辆的速度、加速度、车辆类型、当前状态等等；道路信息包括当前车道的拥堵情况、车道限速情况、车道平均车速、离车道终点距离等等；环境信息包括障碍物检测信息。场景信息的采集可以通过传感器、激光雷达、摄像头、毫米波雷达、导航系统、定位系统、高精度地图等来实现。计算机设备（如上述图1所对应实施例中的终端设备10a）可以采集这些场景信息。
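为便于理解场景信息的组成，下面给出一个极简的数据结构草图（仅为示意性假设，字段划分与命名为本文示例自拟，并非本申请限定的实现方式）：

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class VehicleInfo:            # 车辆相关信息（目标车辆及其周边车辆）
    speed: float              # 速度，m/s
    acceleration: float       # 加速度，m/s^2
    vehicle_type: str = "car" # 车辆类型

@dataclass
class SceneInfo:              # 场景信息：一次变道决策所需的综合输入
    ego: VehicleInfo                                                   # 目标车辆信息
    nearby_vehicles: List[VehicleInfo] = field(default_factory=list)   # 周边车辆信息
    lane_avg_speed: float = 0.0                # 当前车道平均车速（道路信息）
    lane_speed_limit: float = 0.0              # 车道限速（道路信息）
    obstacle_ahead: bool = False               # 障碍物检测信息（环境信息）
    distance_to_end: float = float("inf")      # 终点距离 d2e（定位/终点信息）
    distance_to_junction: float = float("inf") # 路口距离 d2j（地图信息）
```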
S102:根据所述场景信息确定所述目标车辆的当前变道场景类型。
具体的,根据目标车辆所处的当前场景可以确定出当前变道场景类型,比如自由超车场景、路口场景、主辅路/上下匝道场景、静态障碍物场景和终点停车场景分别对应自由超车变道场景类型、路口变道场景类型、出口变道场景类型、静态障碍物变道场景类型和终点停车变道场景类型。可以将这些变道场景类型归纳为两个变道场景大类:强制变道场景类型和自由变道场景类型。其中,强制变道场景类型是指在此场景下,如果确定的最优车道不是目标车辆的当前行驶车道,则目标车辆要进行变道,否则无法按照当前导航行驶路线到达任务终点;自由变道场景类型是指在此场景下,如果确定的最优车道不是目标车辆的当前行驶车道,目标车辆也可以选择不变道,依然可以按照当前导航行驶路线到达任务终点,只是可能花费的时间较长。因此,上述自由超车变道场景类型属于自由变道场景类型;上述路口变道场景类型、出口变道场景类型、静态障碍物变道场景类型和终点停车变道场景类型均为强制变道场景类型。
S103:若所述当前变道场景类型为强制变道场景类型,则根据所述场景信息识别用于完成导航行驶路线的第一车道,在检测到所述第一车道满足变道安全检查条件时,根据所述第一车道控制所述目标车辆执行变道处理。
具体的,如果确定目标车辆的当前变道场景类型为强制变道场景类型,计算机设备会根据当前变道场景类型、导航驾驶路线、车辆速度、停车位置等场景信息来识别最优车道,作为第一车道。其中,最优车道是指能够完成导航行驶路线的候选车道中在该强制变道场景类型下最适合行驶的车道。
具体的,在识别出第一车道后,计算机设备并不会立即控制目标车辆执行变道操作。因为公路上行驶的车辆多,一不小心很容易发生交通事故。当第一车道中位于目标车辆前方的第一辆车辆和第一车道中位于目标车辆后方的第一辆车辆之间的距离较大时,目标车辆才有机会行驶进两车之间,进入第一车道。因此,计算机设备会获取第一车道的邻车间隔区域,根据邻车间隔区域控制目标车辆进入变道准备位置。其中,邻车间隔区域为第一车道中第一车辆和第二车辆之间的间隔区域;第一车辆为第一车道中与目标车辆的车头距离最近的车辆;第二车辆为第一车道中与目标车辆的车尾距离最近的车辆。其中,变道准备位置是指变道环境较为安全的位置。目标车辆在进入变道准备位置开始变道之前,计算机设备需要确认目标车辆的变道的安全性。计算机设备会对第一车道进行变道安全检查,确定目标车辆满足变道安全检查条件,才会控制目标车辆变道至第一车道。
变道安全检查可以包括安全保障规则。安全保障规则用于保障在变道中发生紧急情况时目标车辆能够通过紧急刹车避让第一车辆,同时,保证变道时若目标车辆急刹车,第二车辆能够有充足的时间反应。也就是说,目标车辆进入第一车道后,目标车辆与第一车辆之间的距离不能小于安全距离,目标车辆与第二车辆之间的距离也不能小于安全距离。其中,安全距离可以是事先规定的阈值,也可以是根据目标车辆、第一车辆和第二车辆的现在的速度、位置等状态计算得出的值。比如说,获取到目标车辆与第一车辆的安全距离为2m,而根据场景信息,计算机设备确定此时目标车辆与第一车辆的实际距离为1m,则认为此时变道不安全,放弃控制目标车辆变道,如果目标车辆已经 在执行变道的控制命令,计算机设备会下发新的控制命令停止目标车辆的变道,并控制目标车辆回到当前行驶车道。
目标车辆与第一车辆的第一安全距离阈值的计算,可以如公式(1):
公式(1)（原公开文本中以图像形式给出，图像编号PCTCN2021127867-appb-000001，用于计算第一安全距离阈值dl）
其中,dl为第一安全距离阈值,vego、aego和tdelay分别为目标车辆的当前速度、当前加速度和反应时间,vl、al分别为第一车辆的第一速度和第一加速度。其中,tdelay可以根据目标车辆的车型等实际情况来调节,其余数据都可以从场景信息中获取。其中,第一安全距离阈值即目标车辆与第一车辆的最小安全距离,如果目标车辆与第一车辆的实际距离小于第一安全距离阈值,则确定第一车道不符合变道安全检查条件。
目标车辆与第二车辆的第二安全距离阈值的计算,可以如公式(2):
公式(2)（原公开文本中以图像形式给出，图像编号PCTCN2021127867-appb-000002，用于计算第二安全距离阈值dr）
其中,dr为第二安全距离阈值,vego、aego和tdelay分别为目标车辆的当前速度、当前加速度和反应时间,vr、ar分别为第二车辆的第二速度和第二加速度。其中,tdelay可以根据目标车辆的车型等实际情况来调节,其余数据都可以从场景信息中获取。其中,第二安全距离阈值即目标车辆与第二车辆的最小安全距离,如果目标车辆与第二车辆的实际距离小于第二安全距离阈值,则确定第一车道不符合安全检查条件。
因此,在使用安全保障规则对目标车辆做变道安全检查时,若前车距离不小于第一安全距离阈值,且后车距离不小于第二安全距离阈值,则确定第一车道满足变道安全检查条件,变道安全检查通过;若前车距离小于第一安全距离阈值,或者后车距离小于第二安全距离阈值,则确定第一车道不满足变道安全检查条件,控制目标车辆停止变道至第一车道。
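下面用一段Python草图勾勒上述安全保障规则的检查流程（示意性实现：第一安全距离阈值dl、第二安全距离阈值dr假定已按公式(1)、公式(2)计算得到，函数名与变量名为示例自拟）：

```python
def safety_rule_check(front_gap: float, rear_gap: float,
                      dl: float, dr: float) -> bool:
    """安全保障规则：前车距离不小于dl 且 后车距离不小于dr 才满足变道安全检查条件。

    front_gap: 目标车辆与第一车辆（前车）之间的实际距离
    rear_gap : 目标车辆与第二车辆（后车）之间的实际距离
    dl, dr   : 按公式(1)、公式(2)得到的第一、第二安全距离阈值
    """
    if front_gap < dl or rear_gap < dr:
        return False   # 不满足变道安全检查条件，停止变道并回到当前行驶车道
    return True        # 满足变道安全检查条件，可以继续变道流程
```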
变道安全检查还可以包括使用数据驱动的TTC(Time-To-Collision碰撞时间,也可称作碰撞时间距离)模型。上述安全保障规则用于保障最基本的变道安全,而在确定变道安全时,还可以考虑社会接受程度(Social acceptance),可以通过数据驱动的方法建立TTC模型:收集道路上变道的数据;抽取特征,使用自车速度、前车速度、交通拥堵情况、前车类型、车道等级(城市道路、高速、接近路口情况)、当前天气等;训练得到TTC的逻辑回归模型TTC=Logistic(Features),将其作为碰撞时间识别模型,在变道安全检查的过程中调用。在目标车辆的行驶过程中,从场景信息中获取变道特征。其中,变道特征即上述的抽取特征。将变道特征输入碰撞时间识别模型,通过碰撞时间识别模型输出预期前车碰撞时间和预期后车碰撞时间。其中,预期前车碰撞时间是理想状态下的目标车辆同第一车辆的碰撞时间,如果目标车辆同第一车辆的实际碰撞时间小于预期前车碰撞时间,则认为不满足变道安全检查条件。同理,预期后车碰撞时间是理想状态下的第二车辆同目标车辆的碰撞时间,如果目标车辆同第二车辆的实际碰撞时间小于预期后车碰撞时间,则认为不满足变道安全检查条件。
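下面给出碰撞时间识别模型的一种示意性写法（假设性草图：此处把“TTC=Logistic(Features)”理解为用逻辑斯蒂函数把特征的线性组合映射到一个预期碰撞时间，权重需要用收集到的变道数据离线训练得到，并非本申请限定的具体模型形式）：

```python
import numpy as np

def expected_ttc(features: np.ndarray, w: np.ndarray, b: float,
                 ttc_max: float = 6.0) -> float:
    """根据变道特征输出预期碰撞时间（秒）。

    features: 抽取的变道特征向量（自车速度、前车速度、交通拥堵情况、
              前车类型、车道等级、当前天气等的数值编码）
    w, b    : 离线训练得到的逻辑回归参数
    ttc_max : 预期碰撞时间的上限（示例取值）
    """
    logistic = 1.0 / (1.0 + np.exp(-(np.dot(w, features) + b)))
    return ttc_max * logistic   # 输出越大，表示对碰撞时间的要求越严格
```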
实际碰撞时间可以根据公式(3)得到:
T=d÷v
其中，T为A车辆的实际碰撞时间，d为A车辆离B车辆的距离，v为A车辆的速度。其中，A车辆处于B车辆的后方。根据公式(3)，可以计算得到目标车辆的实际碰撞时间和后方车辆的实际碰撞时间。在变道安全检查过程中，业务服务器会从场景信息中获取变道特征，然后输入碰撞时间模型求出预期前车碰撞时间和预期后车碰撞时间。然后，业务服务器会根据上述公式(3)求出目标车辆的实际碰撞时间和后方车辆的实际碰撞时间，若目标车辆的实际碰撞时间不小于预期前车碰撞时间，且第二车辆的实际碰撞时间不小于预期后车碰撞时间，则确定第一车道满足变道安全检查条件；若目标车辆的实际碰撞时间小于预期前车碰撞时间，或者第二车辆的实际碰撞时间小于预期后车碰撞时间，则确定第一车道不满足变道安全检查条件，控制目标车辆停止变道至第一车道。
可以理解的是,计算机设备对第一车道做变道安全检查时,可以只选用安全保障规则来做变道安全检查,也可以只选用碰撞时间模型来做变道安全检查,也可以同时对第一车道使用安全保障规则和碰撞时间模型来做变道安全检查。如果采取两种检查方式对第一车道做变道安全检查,则需要两种检查方式都通过,才能认为变道安全检查通过。也就是说,如果第一车道不符合安全保障规则,或者实际碰撞时间小于预期碰撞时间,均确认第一车道不满足变道安全检查条件,控制目标车辆停止变道至第一车道。
S104:若所述当前变道场景类型为自由变道场景类型,则根据所述场景信息识别用于优化行驶时间的第二车道,在检测到所述第二车道满足所述变道安全检查条件时,根据所述第二车道控制所述目标车辆执行变道处理。
具体的,如果当前变道场景类型为自由变道场景类型,说明此时目标车辆继续在当前行驶车道上行驶,也能够完成导航行驶路线。但是由于道路环境复杂多变,继续在当前行驶车道上行驶花费的时间可能更多,比如当前行驶车道上在目标车辆前方的车辆行驶速度很慢,旁边的车道同样可以完成导航行驶路线,而且行驶车辆少,平均车速也更快,如果目标车辆能够变道至旁边车速更快的车道,不仅能够完成导航行驶路线,还能优化行驶时间。因此,当目标车辆处于自由变道场景类型下时,可以根据场景信息识别用于优化行驶时间的第二车道。
具体的,可以用机器学习方法评估当前驾驶状态下是否需要变换车道。计算机设备可以从场景信息中提取出候选车道的车道特征和驾驶特征。其中,候选车道是指能够完成导航行驶路线的车道,比如当前行驶车道、左侧车道、右边车道。其中,车道特征指的是与候选车道相关的特征,包括候选车道在过去预设时间段内的平均车速,比如车道过去30s平均车速、车道过去1分钟平均车速、车道限速、离车道终点距离、该车道与出口车道相差车道数;驾驶特征指的是目标车辆在行驶过程中的一些动作特征和任务特征,比如上一次变道时间、上一次变道车道、当前车速、速度低于理想速度持续时间、离道路出口距离等。上述特征只作为例子,实际应用可以选择其他特征。然后,计算机设备通过车道评估模型对车道特征和驾驶特征进行处理,得到候选车道的评估参数值。其中,车道评估模型是根据驾驶行为样本训练得到的;驾驶行为样本是指用户主动变道时的车道特征样本和驾驶特征样本。然后将具有最高的评估参数值的候选车道确定为用于优化行驶时间的第二车道。如果第二车道不为目标车辆的当前行驶车道,计算机设备在检测到第二车道满足变道安全检查条件时,会根据目标车道控制目标车辆执行变道处理。其中,对第二车道做变道安全检查可以参见上述步骤S103中对变道安全检查的描述,这里不再进行赘述。
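自由变道场景下的第二车道识别可以概括为“对每条候选车道打分、取评估参数值最高者”。下面是一个示意性草图（假设车道评估模型已离线训练完毕，并以可调用对象的形式提供打分接口；特征拼接方式为示例自拟）：

```python
def select_second_lane(candidate_lanes, lane_features, driving_features, lane_model):
    """返回评估参数值最高的候选车道，作为用于优化行驶时间的第二车道。

    candidate_lanes : 候选车道列表，如 ["current", "left", "right"]
    lane_features   : {车道: 车道特征向量}，如过去30s/1分钟平均车速、车道限速、
                      离车道终点距离、与出口车道相差车道数等
    driving_features: 驾驶特征向量，如上一次变道时间、当前车速、
                      速度低于理想速度持续时间、离道路出口距离等
    lane_model      : 车道评估模型的打分接口（假定为可调用对象，
                      输入拼接后的特征，输出该候选车道的评估参数值）
    """
    scores = {}
    for lane in candidate_lanes:
        features = list(lane_features[lane]) + list(driving_features)  # 拼接车道特征与驾驶特征
        scores[lane] = lane_model(features)                            # 该候选车道的评估参数值
    return max(scores, key=scores.get)                                 # 评估参数值最高的候选车道
```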
在本申请实施例中,根据获取到的目标车辆的场景信息,可以确定目标车辆所处的当前变道场景类型,若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足安全检查条件时,根据第一车道控制目标车辆执行变道处理;若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。通过本申请实施例提供的方法,可以根据获取到的目标车辆的场景信息,确定目标车辆的当前变道场景类型,再根据不同的的当前变道场景类型对目标车辆做不一样的变道处理,能够让自动驾驶汽车拥有灵活变道的能力,更好的避免交通拥堵,提高行驶速度。
进一步地,请参见图4,图4是本申请实施例提供的一种强制变道的流程示意图。该方法可以由业务服务器(如上述图1所对应实施例中的业务服务器100)执行,也可以由终端设备(如上述图1所对应实施例中的终端设备10a)执行。本实施例以该方法由计算机设备(该计算机设备可以为业务服务器100,或者可以为终端设备10a)执行为例进行说明。如图4所示,该流程可以包括:
S201:根据场景信息确定障碍物检测信息、终点距离和路口距离。
具体的，障碍物检测信息是指目标车辆的当前行驶车道前方是否有静态障碍物，可以通过雷达或者传感器等方式来检测，检测的距离可以根据实际情况来设置。比如规定检测距离为200米，那么如果计算机设备在目标车辆的当前行驶车道前方200米内没有检测到静态障碍物，则认为目标车辆前方没有静态障碍物。终点距离（distance_to_end,d2e）是指目标车辆到达任务终点的距离，可以通过高精度地图和定位信息来计算得到。路口距离（distance_to_junction,d2j）是指当前时间目标车辆离下一个路口或者出口的距离，同样可以通过高精度地图和定位信息来计算得到。
S202:将障碍物检测信息、终点距离和路口距离输入场景分发器,确定目标车辆的当前场景。
具体的,根据当前场景可以确定目标车辆的变道场景类型。其中,当前场景包括自由超车场景、路口场景、主辅路/上下匝道场景、静态障碍物场景和终点停车场景。其中,自由超车场景是一种最常见的场景,多见于高速或者城区快速路等结构化道路,此时目标车辆到下一个路口的距离和到终点的距离都足够远,可认为所有车道都可以通向最终的目的地,此时目标车辆可以选择不拥堵的车道来行驶,但是如果目标车辆不变道是不会影响到达目标终点的。其中,路口场景主要适用于L4城区自动驾驶系统,在目标车辆通过路口左转/直行/右转时应该选择对应的车道,否则无法完成目标任务。其中,主辅路/上下匝道场景与路口场景类似,二者都可以从地图中获取到候选车道。其中,静态障碍物场景是指当目标前方有静态障碍物(比如锥桶,施工设施等)时,需要向左或者向右换道避障。其中,终点停车场景是指当目标车辆距离终点距离小于一定数值时,进入终点停车场景。一般适用于L4级别的自动驾驶系统,即到达终点时需要靠边停车,所以在此场景下选择最右车道为目标车道。
具体的,场景分发器的实现可以采用决策树设计,为便于理解,请一并参见图5,图5是本申请实施例提供的一种场景分发决策树的设计示意图。如图5所示,整个决策流程可以包括:
S51:判断目标车辆前方是否有静态障碍物。
具体的，根据障碍物检测信息，可以先确定目标车辆前方是否有静态障碍物。其中，目标车辆前方可以是目标车辆的当前行驶车道前方检测阈值范围内，检测阈值的值可以根据实际情况来设定，可以是到下一个路口的距离，也可以是直接设定的数值。如果所述障碍物检测信息指示所述目标车辆前方有障碍物，则确定所述目标车辆的当前变道场景类型为静态障碍物变道场景类型；如果所述障碍物检测信息指示所述目标车辆前方没有障碍物，则执行步骤S52，继续确定当前变道场景类型。
S52:判断终点距离是否小于第一距离阈值。
具体的,确定目标车辆前方是否有障碍物以后,应该先确定此时目标车辆是否快要到达终点。因此可以将终点距离同设定的第一距离阈值作比较。若终点距离小于第一距离阈值,则确定目标车辆的当前变道场景类型为终点停车变道场景类型;若终点距离不小于第一距离阈值,则执行步骤S53,继续确定当前变道场景类型。
S53:判断路口距离是否小于第二距离阈值。
具体的,确定目标车辆离终点还较远、未处于终点停车的场景下时,计算机设备会将路口距离同设定的第二距离阈值作比较。若路口距离不小于第二距离阈值,则确定目标车辆的当前变道场景类型为自由超车变道场景类型;若路口距离小于第二距离阈值,则执行步骤S54,继续确定当前变道场景类型。
S54:获取路口的路口地图信息,判断路口状况。
具体的,路口可以有十字路口、主辅路和上下匝道的情况。其中,十字路口包括路口左转、路口右转和路口直行三种情况。其中,主辅路和上下匝道可以算作出口,当前场景下没有交通信号灯,对应的变道场景类型可定义为出口变道场景类型。因此,若路口地图信息指示路口为出口,则确定目标车辆的当前变道场景类型为出口变道场景类型,否则确定目标车辆的当前变道场景类型为路口变道场景类型。
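上述S51-S54的场景分发决策树可以用如下Python草图表示（示意性实现：阈值取值与返回的类型名称仅为示例）：

```python
def dispatch_scene(obstacle_ahead: bool, d2e: float, d2j: float,
                   junction_is_exit: bool,
                   d2e_threshold: float = 100.0,    # 第一距离阈值（示例取值）
                   d2j_threshold: float = 200.0):   # 第二距离阈值（示例取值）
    """按照场景分发决策树确定目标车辆的当前变道场景类型。"""
    if obstacle_ahead:                 # S51：前方有静态障碍物
        return "静态障碍物变道场景类型"
    if d2e < d2e_threshold:            # S52：终点距离小于第一距离阈值
        return "终点停车变道场景类型"
    if d2j >= d2j_threshold:           # S53：离下一个路口足够远
        return "自由超车变道场景类型"
    # S54：根据路口地图信息判断路口状况
    return "出口变道场景类型" if junction_is_exit else "路口变道场景类型"
```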
S203:若所述目标车辆的当前变道场景类型为强制变道场景类型,根据所述当前变道场景类型决策目标车辆的第一车道。
具体的,路口变道场景类型、出口变道场景类型、静态障碍物变道场景类型和终点停车变道场 景类型均属于强制变道场景类型。如果确定目标车辆的当前变道场景类型为强制变道场景类型,计算机设备会根据当前变道场景类型、导航驾驶路线、车辆速度、停车位置等场景信息来识别用于完成导航行驶路线的最优车道,作为第一车道。为便于理解,请一并参见图6a-图6c,图6a-图6c是本申请实施例提供的一种识别第一车道的场景示意图。
若当前变道场景类型为路口变道场景类型,则计算机设备会先根据导航行驶路线确定在到达下一个路口时目标车辆应该左转还是右转还是直行。如图6a所示,目标车辆61正行驶在车道B中,由地图中的导航行驶路线,可以确定在到达路口a时,目标车辆61应该右转。通过高精度地图信息可以得知,只有车道C或者车道D中的车辆才能在路口a中右转。如果目标车辆61不变道继续行驶在车道B中,无法正常右转,强行控制目标车辆61右转可能会造成交通事故。因此将车道C和车道D作为候选车道,然后可以根据候选车道的车速和变道至候选车道的难易程度来选择第一车道,当路口为红灯等待状态时,还可以考虑停止位置离路口的距离。
若当前变道场景类型为出口变道场景类型,当前道路可能是主辅路或者上下匝道。此场景与路口场景相似之处为都可以从地图中获取候选车道,其余车道都应该被视为错误的车道。如图6b所示,目标车辆61行驶在车道B中,由地图中的导航行驶路线可知,到达出口b的时候,目标车辆61应该进入辅路。通过高精度地图信息可以得知,只有车道C或者车道D中的车辆才能在出口b中进入辅路。如果目标车辆61不变道继续行驶在车道B中,强行控制目标车辆61右转可能会造成交通事故,因此将车道C和车道D作为候选车道。出口变道场景类型同路口变道场景类型的不同之处在于此场景下没有交通信号灯,并且大部分情况下车速较快。因此主要根据候选车道的车速和变道至候选车道的难易程度来选择第一车道。
若当前变道场景类型为静态障碍物变道场景类型,也就是目标车辆前方有静态障碍物(锥桶,施工设施等)时,需要向左或者向右换道避障。如图6c所示,目标车辆61行驶在车道B中,而车道B的前方有静态障碍物c。目标车辆61不能继续行驶在车道B中,否则会产生交通事故,应该进行变道操作。此时,计算机设备会先获取可以正常通行的车道,然后过滤到错误的车道B,也就是将车道A或者车道C作为候选车道。然后计算机设备会从候选车道中选择最优车道作为第一车道,此时可以综合考虑候选车道的车速和该车道可以行使的距离(如果前方有路口或者主辅路匝道,优先选择通向最终的目的地的车道)。
若当前变道场景类型为终点停车变道场景类型,也就是当终点距离小于一定数值时,需要选择适合停车的车道作为目标车道。该场景类型一般适用于L4级别的自动驾驶系统,即到达终点时需要靠边停车,所以在此场景类型下选择最右车道为第一车道。
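结合上述四类强制变道场景，第一车道的决策可以抽象为“先按场景过滤出候选车道，再按车速与变道难易程度等因素打分”。下面是一个示意性草图（打分方式与权重为示例自拟，并非本申请限定的实现）：

```python
def decide_first_lane(scene_type, candidate_lanes, lane_avg_speed, change_difficulty):
    """从候选车道中决策用于完成导航行驶路线的第一车道。

    candidate_lanes  : 能够完成导航行驶路线的候选车道列表（路口/出口场景由高精度地图给出，
                       静态障碍物场景为过滤掉被堵车道后的可通行车道）
    lane_avg_speed   : {车道: 平均车速}
    change_difficulty: {车道: 变道难易程度，数值越大表示越难变入}
    """
    if scene_type == "终点停车变道场景类型":
        return candidate_lanes[-1]          # 约定列表最后一个元素为最右车道，便于靠边停车
    # 其余强制变道场景：综合车速与变道难易程度打分，权重仅为示例
    score = {lane: lane_avg_speed[lane] - 0.5 * change_difficulty[lane]
             for lane in candidate_lanes}
    return max(score, key=score.get)
```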
S204:在检测到所述第一车道满足安全检查条件时,根据所述第一车道控制所述目标车辆执行变道处理。
具体的,若检测到第一车道满足变道安全检查条件,则将目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制目标车辆按照变道速度和变道行驶方向变道至第一车道。其中,变道安全检查可以参见上述图3所对应实施例中步骤S103对变道安全检查的描述,这里不再进行赘述。
通过本申请实施例提供的方法，可以根据获取到的目标车辆的场景信息，确定目标车辆的当前变道场景类型，再根据不同的当前变道场景类型对目标车辆做不一样的变道处理，能够让自动驾驶汽车拥有灵活变道的能力，更好地避免交通拥堵，提高行驶速度。
进一步地,请参见图7,图7是本申请实施例提供的一种自由变道的流程示意图。如图7所示,该流程可以包括:
S301:根据场景信息确定目标车辆的当前变道场景类型。
其中,步骤S301的具体实现方式可以参见上述图4对应实施例中步骤S201-步骤S202的描述,这里不再进行赘述。
S302:若当前变道场景类型为自由变道场景类型,则获取候选车道的车道特征和驾驶特征。
具体的,自由变道场景类型包括上述自由超车变道场景类型。如果确定目标车辆的当前变道场 景类型为自由变道场景类型,说明此时目标车辆离终点远,离下一个路口的距离也远,且所处的当前行驶车道前方没有障碍物。此时,虽然继续在当前行驶车道行驶,可以根据导航行驶路线到达目标终点,不过为了防止前方车辆行驶速度慢导致目标车辆的行驶速度慢,从而使得目标车辆完成任务的时间增加。为了有机会更快地完成驾驶任务,可以实时监测候选车道的情况,从中选择平均车速较快的车道来行驶。计算机设备可以采用机器学习方法来评估当前驾驶状态下是否需要变换车道。因此,计算机设备会从场景信息中提取出候选车道的车道特征和驾驶特征,以此来推理出理论上当前最适合目标车辆行驶的车道,作为第二车道。其中,候选车道是指能够完成导航行驶路线的车道。
S303:通过车道评估模型对所述车道特征和所述驾驶特征进行处理,得到所述候选车道的评估参数值。
具体的,车道评估模型可以通过离线训练得到,为便于理解,请一并参见图8,图8是本申请实施例提供的一种离线车道评估模型训练过程示意图。如图8所示,整个训练过程包括:
S81:人类司机驾驶数据采集。
具体的,在离线部分,可以利用装有传感器和信息处理系统的普通车辆或者人类驾驶的自动驾驶车辆获取人类司机驾驶行为以及相应的感知、定位、地图信息。
S82:人类司机主动变道数据提取。
具体的，使用数据提取模块选取人类司机主动变道的场景，将因为各种因素（如必须要下匝道、十字路口必须转弯等）导致的强制变道数据剔除。
S83:特征抽取。
具体的,从主动变道数据中抽取车道特征和驾驶特征。抽取车道特征,可以对每个候选车道(例如当前车道、左侧车道、右侧车道),抽取车道相关特征:车道过去30s平均车速、车道过去1分钟平均车速、车道限速、离车道终点距离、该车道与出口车道相差车道数。抽取驾驶特征,例如上一次变道时间、上一次变道目标车道、当前车速、速度低于理想速度持续时间、离道路出口距离等。上述特征只作为例子,实际应用可以选择其他特征。
S84:模型训练。
具体的,抽取完特征后,加上之前提取出的变道意图组成训练样本,使用XGBoost、逻辑回归或者DNN训练车道评估模型,本方案不限制机器学习方法。
有了车道评估模型以后,在自由变道场景类型下,计算机设备通过车道评估模型对所述车道特征和所述驾驶特征进行处理,得到所述候选车道的评估参数值,然后将具有最高的评估参数值的候选车道确定为用于优化行驶时间的第二车道。
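上述S81-S84的离线训练流程可以用如下Python草图概括（示意性实现：此处以XGBoost为例，数据列名为示例自拟，实际也可以换成逻辑回归或DNN）：

```python
import pandas as pd
from xgboost import XGBClassifier

def train_lane_model(driving_log: pd.DataFrame) -> XGBClassifier:
    """用人类司机主动变道数据离线训练车道评估模型。

    driving_log: 已剔除强制变道的主动变道样本，每行对应一条候选车道，
                 包含车道特征、驾驶特征以及标签 label（该候选车道是否被司机选为变道目标）。
    """
    feature_cols = ["avg_speed_30s", "avg_speed_1min", "speed_limit",
                    "dist_to_lane_end", "lanes_to_exit",        # 车道特征
                    "last_change_time", "ego_speed",
                    "below_ideal_duration", "dist_to_exit"]     # 驾驶特征
    model = XGBClassifier(n_estimators=200, max_depth=4)
    model.fit(driving_log[feature_cols], driving_log["label"])
    return model
```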
S304:在检测到所述第二车道满足所述变道安全检查条件时,根据所述第二车道控制所述目标车辆执行变道处理。
具体的，自由变道场景类型下，依然需要对第二车道做上述图3所对应实施例中步骤S103中所说的变道安全检查。只有当第二车道满足变道安全检查条件时，计算机设备才会控制目标车辆变道至第二车道。
通过本申请实施例提供的方法，可以根据获取到的目标车辆的场景信息，确定目标车辆的当前变道场景类型，再根据不同的当前变道场景类型对目标车辆做不一样的变道处理，能够让自动驾驶汽车拥有灵活变道的能力，更好地避免交通拥堵，提高行驶速度。
进一步的,请参见图9,图9是本申请实施例提供的一种强制变道准备的流程示意图。强制变道准备可以是对上述图3所对应实施例中步骤S103的具体描述,主要在于描述目标车辆进入变道准备位置的过程。该强制变道准备流程包括:
S401:获取第一车道的邻车间隔区域。
具体的,步骤S401的具体实现方式可以参见上述图3所对应实施例中步骤S103关于获取第一车道的邻车间隔区域的描述,这里不再进行赘述。
S402:检测邻车间隔区域是否存在可行变道区域。
具体的,目标车辆在变道进入第一车道后,如果目标车辆和第一车辆的距离小于上述图3所对应实施例中步骤S103中给出的公式(1)计算得到的第一安全距离阈值,第一车辆有突发状况时,目标车辆难以通过紧急刹车来避让第一车辆,容易发生安全事故,所以目标车辆在进入第一车道后同第一车辆的距离应该大于第一安全距离阈值;如果目标车辆和第二车辆的距离小于上述图3所对应实施例中步骤S103中给出的公式(2)计算得到的第二安全距离阈值,目标车辆急刹车时,后车没有充足的反应时间,因此目标车辆在进入第一车道后同第二车辆的距离应该大于第二安全距离阈值。可行变道区域中的位置和第一车辆的距离大于第一安全距离阈值,且和第二车辆的距离大于第二安全距离阈值。
具体的,检测邻车间隔区域是否存在可行变道区域,就是通过上述公式(1)和公式(2)和场景信息,计算出第一安全距离阈值和第二安全距离阈值,然后判断邻车间隔区域中的是否存在区域满足同第一车辆的距离大于第一安全距离阈值,且同第二车辆的距离大于第二安全距离阈值。
S403:若邻车间隔区域中存在可行变道区域,则获取当前行驶车道上的变道准备区域,根据变道准备区域控制目标车辆进入变道准备位置。
具体的,若邻车间隔区域中存在可行变道区域,说明此时的变道环境较为安全,计算机设备可以直接获取当前行驶车道中的变道准备区域,即假设将可行变道区域平移到当前行驶车道上时而确定的区域,控制目标车辆驶入变道准备区域,该变道准备区域位于所述当前行驶车道中与所述可行变道区域对应的位置,且具有与所述可行变道区域相同的长度。为便于理解,请一并参见图10a,图10a是本申请实施例提供的一种强制变道准备场景示意图。如图10a所示,目标车辆101(可以为上述图2所对应实施例中的自动驾驶车辆2)行驶在车道B中,此时计算机设备确定当前变道场景类型为强制变道场景类型,根据强制变道场景类型确定第一车道为车道A,则目标车辆101需要变道至车道A。首先,计算机设备会将车辆102和车辆103之间的区域作为邻车间隔区域,然后会根据场景信息和上述公式(1)计算出目标车辆101的第一安全距离阈值dl,则如图10a所示的区域1043是目标车辆101相对于车辆103的危险区域,同理,计算机设备根据场景信息和上述公式(2)计算出目标车辆101的第二安全距离阈值dr,则如图10a所示的区域1041是目标车辆101相对于车辆102的危险区域。邻车间隔区域除去危险区域,剩下的部分区域便是可行变道区域,也就是区域1042。如果计算机设备确定邻车间隔区域存在可行变道区域,则会控制目标车辆101进入变道准备区域。其中,变道准备区域为假设将可行变道区域平移到当前行驶车道时而确定的区域,该变道准备区域位于所述当前行驶车道中与所述可行变道区域对应的位置,且具有与所述可行变道区域相同的长度。其中,控制目标车辆101进入变道准备区域可以采用速度控制,即先匀加速再匀减速策略,假设匀加速和匀减速加速度为a,车辆离可行变道区域中心距离为d,加速时间为t,那么可算出a与t之间的关系:a=d/(2t^2)。只需要选择一个相对舒适的a和t,控制目标车辆101往变道准备区域靠拢即可。
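按照上文给出的关系a=d/(2t^2)，进入变道准备区域的速度控制可以写成如下草图（示意性实现：舒适加速度上限等取值为示例自拟）：

```python
def plan_speed_adjustment(d: float, t: float, a_comfort: float = 1.0):
    """按“先匀加速再匀减速”策略向变道准备区域中心靠拢。

    d        : 目标车辆离可行变道区域（平移到当前行驶车道后）中心的距离，正值表示需前移
    t        : 选定的加速时间
    a_comfort: 可接受的舒适加速度上限（示例取值，m/s^2）
    """
    a = d / (2.0 * t ** 2)          # 文中给出的 a 与 t 之间的关系
    if abs(a) > a_comfort:          # 超出舒适范围则拉长时间后重新计算
        t = (abs(d) / (2.0 * a_comfort)) ** 0.5
        a = d / (2.0 * t ** 2)
    return a, t                     # 先以+a匀加速t，再以-a匀减速t
```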
S404:若邻车间隔区域中不存在可行变道区域,则对目标车辆做变道试探处理,若确认变道试探成功,则获取当前行驶方向上的变道准备区域,根据变道准备区域控制目标车辆进入变道准备位置;若确认变道试探失败,则放弃变道,回到当前行驶车道。
具体的,如果邻车间隔区域中不存在可行变道区域,目标车辆无法变道至第一车道,此时可以对目标车辆做变道试探处理,模仿用户在变道时的挤车行为,为变道创造安全的变道环境。变道试探处理包括获取目标车辆在当前行驶车道中的变道试探区域。其中,可行变道区域是指满足变道安全检查条件的区域;变道试探区域为假设将邻车间隔区域的中间区域平移到当前行驶车道中时而确定的区域,该变道试探区域位于所述当前行驶车道中与所述邻车间隔区域的中间区域对应的位置,且具有与邻车间隔区域的中间区域相同的长度。然后控制目标车辆进入变道试探区域。当目标车辆进入变道试探区域时,获取变道试探距离和变道试探时间周期,控制目标车辆向第一车道移动变道试探距离,在变道试探时间周期内,若检测到第二车辆处于刹车状态,则控制目标车辆继续向第一车道移动变道试探距离;若经过目标车辆移动试探后检测到邻车间隔区域中存在可行变道区域,则 获取目标车辆在当前行驶方向上的变道准备区域,根据变道准备区域控制目标车辆进入变道准备位置;变道准备区域为假设将可行变道区域平移到当前行驶方向上时而确定的区域,该变道准备区域位于所述当前行驶车道中与所述可行变道区域对应的位置,且具有与可行变道区域相同的长度;其中,处于变道准备区域内的变道准备位置满足变道条件。其中,根据变道准备区域控制目标车辆进入变道准备位置的具体实现方式可以参见上述步骤S402,这里不再进行赘述。
具体的,为便于理解,请一并参见图10b,图10b是本申请实施例提供的另一种强制变道准备场景示意图。具体的,如图10b所示,目标车辆101行驶在车道B中,车道A中的邻车间隔区域中不仅不存在可行变道区域,而且区域1041和区域1043还有部分重叠,此时目标车辆101不能进行变道,因此可以先对目标车辆101进行速度控制,让其到达车道B中相对于邻车间隔区域的中间区域。然后,获取变道试探距离x和变道试探周期t,所述变道试探距离为所述邻车间隔区域中不存在可行变道区域时所述目标车辆向所述第一车道试探性移动的距离,所述变道试探时间周期为所述目标车辆向所述第一车道每次移动所述变道试探距离后保持的时间。其中,变道试探距离和变道试探周期都可以通过变道试探模型得到。在目标车辆101开始行驶之前,可以离线收集驾驶行为样本来训练变道试探模型。其中,驾驶行为样本是指用户变道试探时的车道特征样本和驾驶特征样本。然后,在目标车辆101准备变道试探时,获取当前的变道试探驾驶特征和变道试探车道特征,根据变道试探模型对其进行处理,得到道试探距离x和变道试探周期t,所述变道试探驾驶特征包括所述目标车辆在进行变道试探时的驾驶特征,所述变道试探车道特征包括候选车道在变道试探时的车道特征。在变道试探周期t内,一直监测车辆102的速度,如果发现车辆102刹车,速度下降或者邻车间隔区域增大,如图10b所示,区域1041和区域1042的重叠区域减少,并逐渐拉开,说明车辆102愿意让目标车辆101变道至车道A。此时控制目标车辆继续偏移变道试探距离x,等到区域1041和区域1042中存在可行变道区域1042,然后控制目标车辆驶入变道准备区域105。若在变道试探周期t内没有监测到车辆102刹车,则认为车辆102不愿意目标车辆插队进入第一车道,计算机设备会确认变道试探失败,控制目标车辆101回到车道B。
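变道试探的控制逻辑可以概括为下面的Python草图（示意性伪实现：感知与控制接口以回调函数的形式假定存在，接口形式并非本申请限定）：

```python
import time

def lane_change_probe(move_toward_target, rear_vehicle_braking,
                      feasible_gap_exists, abort_to_current_lane,
                      probe_distance: float, probe_period: float,
                      max_rounds: int = 3) -> bool:
    """向第一车道试探性移动，为变道创造安全空间；试探失败则放弃变道。

    move_toward_target(x)   : 控制目标车辆向第一车道横向移动x米（假定存在的控制接口）
    rear_vehicle_braking()  : 检测第二车辆是否处于刹车/减速让行状态
    feasible_gap_exists()   : 检测邻车间隔区域中是否已出现可行变道区域
    abort_to_current_lane() : 放弃变道并回到当前行驶车道
    """
    for _ in range(max_rounds):
        move_toward_target(probe_distance)     # 向第一车道移动变道试探距离
        time.sleep(probe_period)               # 在变道试探时间周期内观察后车反应
        if feasible_gap_exists():
            return True                        # 已出现可行变道区域，可进入变道准备
        if not rear_vehicle_braking():
            break                              # 后车不让行，停止试探
    abort_to_current_lane()
    return False
```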
S405:根据变道准备位置对第一车道进行变道安全检查;若检测到第一车道满足变道安全检查条件,则将目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制目标车辆按照变道速度和变道行驶方向变道至第一车道。
具体的,步骤S405的具体实现方式,可以参数上述图3所对应实施例中步骤S103中关于变道安全检查的描述,这里不再进行赘述。
可以理解的是,变道安全检查不局限于在目标车辆进入变道准备位置后才执行,计算机设备根据场景信息确定了第一车道后,便可以对目标车辆进行变道安全检查,一直持续到目标车辆变道完成为止。在这期间,如果计算机设备确定变道安全检查不通过,会下发新的控制命令,控制目标车辆回到当前行驶车道。
在一些实施例中,对于上述图7所对应实施例中自由变道场景类型下目标车辆变道至第二车道时,计算机设备会根据车道评估模型的输出结果决定是否进行变道。其中,车道评估模型还可以包括安全特征。其中,安全特征是指与第二车道的变道环境相关的安全信息,比如变道安全检查的结果。车道评估模型可以综合考虑车道特征、驾驶特征和安全特征,当第二车道中存在可行变道区域时,控制目标车辆进入变道准备区域,然后变道至第二车道。其中,控制目标车辆进入变道准备区域的具体实现方式可以同上述步骤S402中一致,这里不再进行赘述。
通过本申请实施例提供的方法，可以根据获取到的目标车辆的场景信息，确定目标车辆的当前变道场景类型，再根据不同的当前变道场景类型对目标车辆做不一样的变道处理，能够让自动驾驶汽车拥有灵活变道的能力，更好地避免交通拥堵，提高行驶速度。而且通过变道试探的方式，可以在保证安全变道的情况下，提高变道的成功率。
进一步的,请参见图11,图11是本申请实施例提供的一种自主变道的流程示意图。如图11所示,在目标车辆开始自动驾驶任务时,对应的车载终端会实时采集场景信息,计算机设备也会根 据获取到场景信息一直对目标车辆所处的场景进行分析。其中,采集场景信息对目标车辆所处场景进行分析的具体实现方式可以参见上述图4所对应实施例中的步骤S201-S202,这里不再进行赘述。通过对场景的细分,采取不同的变道准备方式,使得目标车辆的变道更为灵活、高效。
若确定目标车辆处于强制变道场景类型,则会根据该场景下的特点来为目标车辆选择完成导航行驶路线的目标车道,具体实现方式可以参见上述图4所对应实施例中的步骤S203,这里不再进行赘述。为了增加强制变道场景类型下目标车辆变道的可行性和成功性,会先对目标车辆做强制变道准备工作,也就是通过速度控制和位移控制等手段,使得目标车辆进入变道准备区域,具体实现方式可以参见上述图9所对应实施例中步骤S401-S404的描述,这里不再进行赘述。
若确定目标车辆处于自由变道场景类型,则计算机设备会调用车道评估模型,按照固定频率使用该车道评估模型对实时采集到的场景信息进行推理,根据推理结果决定是否进行变道,如果决定变道,根据推理结果确定目标车道,然后控制目标车辆进行变道准备区域,具体实现方式可以参见上述图7所对应实施例中步骤S302-S304的描述,这里不再进行赘述。
等到目标车辆进入变道准备区域以后,计算机设备会获取目标车辆在变道准备区域中的变道准备位置时的场景信息,然后确定目标车辆是否满足变道安全检查条件(即上述图3所对应实施例中步骤S103中的变道安全检查条件),如果确认目标车辆变道安全检查通过,计算机设备会控制目标车辆开始变道。如图11所示,在目标车辆开始变道到变道执行结束,计算机设备会持续对目标车辆和目标车道做变道安全检查,如果在变道执行的过程中,计算机设备确定变道安全检查不通过,会立即停止变道,回到原来的行驶车道上面。
进一步的,请参见图12a,图12a是本申请实施例提供的一种变道执行方法的流程示意图。变道执行是指目标车辆进入变道准备位置后,计算机设备确认变道安全检查通过时控制目标车辆从当前行驶车道变道至目标车道(如在强制变道场景类型下识别出的第一车道,或在自由变道场景类型下识别出的第二车道)的过程。对于上述自由变道场景类型和强制变道场景类型,两者确认目标车道的方式有所不同,变道前所做的准备也有所不同,但是执行变道的方式可以相同,都可以如图12a所示的变道执行方法所示,图12a具体是以变道至第一车道为例进行描述。该变道执行方法流程包括:
S501:获取所述目标车辆的当前行驶状态。
具体的,当前行驶状态包括目标车辆与第一车道之间的当前侧向偏移、目标车辆的当前行进距离、目标车辆与第一车道之间的当前角度偏移以及目标车辆的当前角速度。
S502:根据所述当前行驶状态和预期行驶状态确定所述目标车辆的预期变道轨迹路线。
具体的,预期行驶状态是指理想中的行驶状态,预期行驶状态包括的预期侧向偏移、预期行进距离、预期角度偏移和预期角速度可以事先设定。
为便于理解,请一并参见图12b,图12b是本申请实施例提供的一种预期变道轨迹路线示意图。计算机设备可以按照如图12b所示的预期变道轨迹路线来控制目标车辆进行变道。如图12b所示的预期变道轨迹路线可以是一条五次多项式曲线。该五次多项式可以如公式(4)所示:
l = a0·s^5 + a1·s^4 + a2·s^3 + a3·s^2 + a4·s + a5      公式(4)
其中,s代表沿道路行进的距离,l代表相对于目标道路的侧向偏移。如图12b所示,目标车辆121行驶在车道B中,车道A为其需要变道过去的目标车道,则s指的是目标车辆121到目标车辆121起始位置垂直线的垂直距离,l指的是目标车辆121到车道A的车道中心线的垂直距离。为了更好的使用五次多项式进行变道,需要选择合适的五次多项式的参数a0-a5。对该五次多项式求一阶导,可以得到公式(5):
θ = 5a0·s^4 + 4a1·s^3 + 3a2·s^2 + 2a3·s + a4      公式(5)
其中,θ为角度偏移,指的是目标车辆121的行驶方向和目标车道的车道中心线的夹角,s依然代表沿道路行进的距离。
对该五次多项式求二阶导,可以得到公式(6)
ω = 20a0·s^3 + 12a1·s^2 + 6a2·s + 2a3      公式(6)
其中,ω为目标车辆121的角速度,s依然代表沿道路行进的距离。
对于五次多项式的计算,可以通过目标车辆121变道开始时的当前行驶状态和预设行驶状态求得。其中,目标车辆121的当前行驶状态包括此时目标车辆121和目标车道的当前侧向偏移、目标车辆121的当前行进距离、目标车辆121的行驶方向和目标车道的当前角度偏移、目标车辆121的当前角速度。其中,目标车辆121的预设行驶状态是事先设定好的理想状态,比如可以设置目标车辆121的行驶方向和目标车道的预设角度偏移为0,目标车辆121的预设角速度为0,目标车辆121和目标车道的预设侧向偏移为0,目标车辆121的预设行进距离为行进距离阈值。根据目标车辆121的当前行驶状态和预设行驶状态和上述公式(4)到公式(6),可以求出a0到a5五个未知参数,可以得到五次多项式以车辆变道开始时的位置为起点的预期变道轨迹路线。
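公式(4)-公式(6)中的参数a0-a5可以由起点与终点处的六个边界条件解一个6×6线性方程组得到。下面是一个基于numpy的示意性草图（把变道开始位置取为s=0、预设行进距离阈值取为S，终点处的侧向偏移、角度偏移和角速度均设为0，与上文的预设行驶状态一致）：

```python
import numpy as np

def solve_quintic(l0: float, theta0: float, omega0: float, S: float) -> np.ndarray:
    """求五次多项式 l(s)=a0*s^5+a1*s^4+a2*s^3+a3*s^2+a4*s+a5 的系数 a0..a5。

    l0, theta0, omega0: 变道开始时的当前侧向偏移、当前角度偏移、当前角速度（s=0处）
    S                 : 预设行进距离（行进距离阈值），s=S处偏移、角度偏移、角速度均为0
    """
    # 六个边界条件：l(0)=l0, l'(0)=theta0, l''(0)=omega0, l(S)=0, l'(S)=0, l''(S)=0
    A = np.array([
        [0, 0, 0, 0, 0, 1],                      # l(0)
        [0, 0, 0, 0, 1, 0],                      # l'(0)
        [0, 0, 0, 2, 0, 0],                      # l''(0)
        [S**5, S**4, S**3, S**2, S, 1],          # l(S)
        [5*S**4, 4*S**3, 3*S**2, 2*S, 1, 0],     # l'(S)
        [20*S**3, 12*S**2, 6*S, 2, 0, 0],        # l''(S)
    ], dtype=float)
    b = np.array([l0, theta0, omega0, 0.0, 0.0, 0.0])
    return np.linalg.solve(A, b)                 # 依次为 a0..a5

# 用法示例：当前侧向偏移3.5m、角度偏移与角速度近似为0、行进距离阈值取60m
coeffs = solve_quintic(3.5, 0.0, 0.0, 60.0)
```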
S503:根据所述预期变道轨迹路线控制所述目标车辆变道至所述第一车道。
具体的,计算机设备根据预期变道轨迹路线确定变道速度和变道行驶方向,将目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制目标车辆按照变道速度和变道行驶方向变道至第一车道。
通过本申请实施例提供的方法，可以根据获取到的目标车辆的场景信息，确定目标车辆的当前变道场景类型，再根据不同的当前变道场景类型对目标车辆做不一样的变道处理，能够让自动驾驶汽车拥有灵活变道的能力，更好地避免交通拥堵，提高行驶速度。
进一步地,请参见图13,图13是本申请实施例提供的一种基于自动驾驶的控制装置的结构示意图。上述控制装置可以是运行于计算机设备中的一个计算机程序(包括程序代码),例如该控制装置为一个应用软件;该装置可以用于执行本申请实施例提供的方法中的相应步骤。如图13所示,该控制装置1可以包括:信息获取模块11、场景确定模块12、强制变道模块13以及自由变道模块14。
信息获取模块11,用于获取目标车辆的场景信息;
场景确定模块12,用于根据场景信息确定目标车辆的当前变道场景类型;
强制变道模块13,用于若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足变道安全检查条件时,根据第一车道控制目标车辆执行变道处理;
自由变道模块14,用于若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。
其中,信息获取模块11、场景确定模块12、强制变道模块13以及自由变道模块14的具体实现方式,可以参见上述图3所对应实施例中步骤S101-S104的描述,这里将不再进行赘述。
请参见图13,强制变道模块12可以包括:间隔区域获取单元121、变道准备单元122、安全检查单元123以及变道执行单元124。
间隔区域获取单元121,用于获取目标车道的邻车间隔区域;邻车间隔区域为第一车道中第一车辆和第二车辆之间的间隔区域;第一车辆为第一车道中与目标车辆的车头距离最近的车辆;第二车辆为第一车道中与目标车辆的车尾距离最近的车辆;
变道准备单元122,用于根据邻车间隔区域控制目标车辆进入变道准备位置;
安全检查单元123,用于根据变道准备位置对第一车道进行变道安全检查;
变道执行单元124,用于若检测到第一车道满足变道安全检查条件,则将目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制目标车辆按照变道速度和变道行驶方向变道至第一车道。
其中,间隔区域获取单元121、变道准备单元122、安全检查单元123以及变道执行单元124,可以参见上述图3所对应实施例中步骤S103的描述,这里将不再进行赘述。
请参见图13,变道准备单元122可以包括:第一区域获取子单元1221、第一变道准备子单元1222、试探准备子单元1224、试探获取子单元1225、试探移动子单元1226以及第二变道准备子单元1227。
第一区域获取子单元1221,用于若邻车间隔区域中存在可行变道区域,则获取目标车辆在当前行驶车道中的变道准备区域;可行变道区域是指满足变道安全检查条件的区域;变道准备区域为假设将可行变道区域平移到当前行驶车道中时而确定的区域;
第一变道准备子单元1222,用于根据变道准备区域控制目标车辆进入变道准备位置;
第二区域获取子单元1223,用于若邻车间隔区域中不存在可行变道区域,则获取目标车辆在当前行驶车道中的变道试探区域;可行变道区域是指满足变道安全检查条件的区域;变道试探区域为假设将邻车间隔区域的中间区域平移到当前行驶车道中时而确定的区域;
试探准备子单元1224,用于控制目标车辆进入所述变道试探区域;
试探获取子单元1225,用于当目标车辆进入变道试探区域时,确定变道试探距离和变道试探时间周期;
试探移动子单元1226,用于控制目标车辆向第一车道移动变道试探距离;
检测移动子单元,用于在变道试探时间周期内,若检测到第二车辆处于刹车状态,则控制目标车辆继续向第一车道移动变道试探距离;
第二变道准备子单元1227,用于若经过目标车辆移动试探后检测到邻车间隔区域中存在可行变道区域,则获取目标车辆在当前行驶方向上的变道准备区域,根据变道准备区域控制目标车辆进入变道准备位置;变道准备区域为假设将可行变道区域平移到当前行驶方向上时而确定的区域。
其中,试探获取子单元,具体用于从场景信息中提取出变道试探驾驶特征和变道试探车道特征;通过变道试探模型对变道试探驾驶特征和变道试探车道特征进行处理,得到变道试探距离和变道试探周期;变道试探模型是根据驾驶行为样本训练得到的;驾驶行为样本是指用户变道试探时的车道特征样本和驾驶特征样本。
其中,第一区域获取子单元1221、第一变道准备子单元1222、试探准备子单元1224、试探获取子单元1225、试探移动子单元1226以及第二变道准备子单元1227的具体实现方式,可以参见上述图9所对应实施例中步骤S402-S404的描述,这里将不再进行赘述。
请参见图13,安全检查单元123可以包括:车辆特征获取子单元1231、阈值计算子单元1232、第一安全确定子单元1233、变道特征获取子单元1234、碰撞参数确定子单元1235、时间计算子单元1236以及第二安全确定子单元1237。
车辆特征获取子单元1231,用于获取目标车辆的反应时间、以及目标车辆在变道准备位置上的当前速度和当前加速度;
车辆特征获取子单元1231,还用于获取第一车辆的第一速度和第一加速度;
车辆特征获取子单元1231,还用于获取第二车辆的第二速度和第二加速度;
阈值计算子单元1232,用于根据反应时间、当前速度、当前加速度、第一速度和第一加速度确定第一安全距离阈值;
阈值计算子单元1232,还用于根据反应时间、当前速度、当前加速度、第二速度和第二加速度确定第二安全距离阈值;
第一安全确定子单元1233，用于若前车距离不小于第一安全距离阈值，且后车距离不小于第二安全距离阈值，则确定第一车道满足变道安全检查条件，其中，所述前车距离为所述变道准备位置上的所述目标车辆与所述第一车辆之间的距离，所述后车距离为所述变道准备位置上的所述目标车辆与所述第二车辆之间的距离；
第一安全确定子单元1233,还用于若前车距离小于第一安全距离阈值,或者后车距离小于第二安全距离阈值,则确定第一车道不满足变道安全检查条件,控制目标车辆停止变道至第一车道。
变道特征获取子单元1234,用于获取目标车辆变道准备位置时的场景更新信息,从场景更新 信息获取变道特征;
碰撞参数确定子单元1235,用于将变道特征输入碰撞时间识别模型,通过碰撞时间识别模型输出预期前车碰撞时间和预期后车碰撞时间;
时间计算子单元1236,用于根据前车距离和当前速度,确定目标车辆的实际碰撞时间;
时间计算子单元1236,还用于根据后车距离和第二速度,确定第二车辆的实际碰撞时间;
第二安全确定子单元1237,用于若目标车辆的实际碰撞时间不小于预期前车碰撞时间,且第二车辆的实际碰撞时间不小于预期后车碰撞时间,则确定第一车道满足变道安全检查条件;
第二安全确定子单元1237，还用于若目标车辆的实际碰撞时间小于预期前车碰撞时间，或者第二车辆的实际碰撞时间小于预期后车碰撞时间，则确定第一车道不满足变道安全检查条件，控制目标车辆停止变道至第一车道。
其中,车辆特征获取子单元1231、阈值计算子单元1232、第一安全确定子单元1233、变道特征获取子单元1234、碰撞参数确定子单元1235、时间计算子单元1236以及第二安全确定子单元1237的具体实现方式,可以参见上述图3所对应实施例中步骤S103关于变道安全检查的描述,这里将不再进行赘述。
请参见图13,变道执行单元124可以包括:行驶状态获取子单元1241、轨迹确定子单元1242、行驶数据确定子单元1243、行驶调整子单元1244以及变道子单元1245。
行驶状态获取子单元1241:用于获取目标车辆的当前行驶状态;当前行驶状态包括目标车辆与第一车道之间的当前侧向偏移、目标车辆的当前行进距离、目标车辆与第一车道之间的当前角度偏移以及目标车辆的当前角速度;
轨迹确定子单元1242:用于根据当前侧向偏移、当前行进距离、当前角度偏移以及当前角速度和预期行驶状态确定目标车辆的预期变道轨迹路线;
行驶数据确定子单元1243,用于根据预期变道轨迹路线,确定变道速度和变道行驶方向;
行驶调整子单元1244,用于将目标车辆的速度和行驶方向调整为变道速度和变道行驶方向;
变道子单元1245,用于控制目标车辆按照变道速度和变道行驶方向变道至第一车道。
其中,行驶状态获取子单元1241、轨迹确定子单元1242、行驶数据确定子单元1243、行驶调整子单元1244以及变道子单元1245的具体实现方式,可以参见上述图12a步骤S501-S503的描述,这里将不再进行赘述。
请参见图13,自由变道模块13可以包括:特征获取单元131、评估参数确定单元132、第二车道确定单元133以及第二车道变道单元134。
特征获取单元131,用于若当前变道场景类型为自由变道场景类型,则从场景信息中提取出候选车道的车道特征和驾驶特征;
评估参数确定单元132,用于通过车道评估模型对车道特征和驾驶特征进行处理,得到候选车道的评估参数值;车道评估模型是根据驾驶行为样本训练得到的;驾驶行为样本是指用户主动变道时的车道特征样本和驾驶特征样本;
第二车道确定单元133,用于将具有最高的评估参数值的候选车道确定为用于优化行驶时间的第二车道;
第二车道变道单元134,用于在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。
其中,特征获取单元131、评估参数确定单元132、第二车道确定单元133以及第二车道变道单元134的具体实现方式,可以参见上述图7所对应实施例中步骤S301-S304的描述,这里将不再进行赘述。
请参见图13,场景确定模块14可以包括:需求信息确定单元141、自由场景类型确定单元142以及强制场景类型确定单元143。
需求信息确定单元141,用于根据场景信息确定障碍物检测信息、终点距离和路口距离;
自由场景类型确定单元142,用于若障碍物检测信息指示目标车辆前方没有障碍物,且终点距离不小于第一距离阈值,且路口距离不小于第二距离阈值,则确定目标车辆的当前变道场景类型为自由变道场景类型;
强制场景类型确定单元143,用于若障碍物检测信息指示目标车辆前方有障碍物,或者终点距离小于第一距离阈值,或者路口距离小于第二距离阈值,则确定目标车辆的当前变道场景类型为强制变道场景类型。
其中,需求信息确定单元141、自由场景类型确定单元142以及强制场景类型确定单元143的具体实现方式,可以参见上述图4所对应实施例中步骤S201-S202的描述,这里将不再进行赘述。
其中,强制变道场景类型包括路口变道场景类型、出口变道场景类型、静态障碍物变道场景类型和终点停车变道场景类型;
请参见图13,强制变道类型确定单元143可以包括:第一场景确定子单元1431、第二场景确定子单元1432、路口信息获取子单元1433以及第三场景确定子单元1434。
第一场景确定子单元1431,用于若障碍物检测信息指示目标车辆前方有静态障碍物,则确定目标车辆的当前变道场景类型为静态障碍物变道场景类型;
第二场景确定子单元1432,用于若终点距离小于第一距离阈值,则确定目标车辆的当前变道场景类型为终点停车变道场景类型;
路口信息获取子单元1433,用于若路口距离小于第二距离阈值,获取路口的路口地图信息;
第三场景确定子单元1434,用于若路口地图信息指示路口为出口,则确定目标车辆的当前变道场景类型为出口变道场景类型,否则确定目标车辆的当前变道场景类型为路口变道场景类型;
其中,路口变道场景类型、出口变道场景类型、静态障碍物变道场景类型和终点停车变道场景类型用于决策所述第一车道。
其中,第一场景确定子单元1431、第二场景确定子单元1432、路口信息获取子单元1433以及第三场景确定子单元1434的具体实现方式,可以参见上述图5所对应实施例中步骤S51-S54的描述,这里将不再进行赘述。
在本申请实施例中,根据获取到的目标车辆的场景信息,可以确定目标车辆所处的当前变道场景类型,若当前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足安全检查条件时,根据第一车道控制目标车辆执行变道处理;若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。通过本申请实施例提供的方法,可以根据获取到的目标车辆的场景信息,确定目标车辆的当前变道场景类型,再根据不同的的当前变道场景类型对目标车辆做不一样的变道处理,能够让自动驾驶汽车拥有灵活变道的能力,更好的避免交通拥堵,提高行驶速度。
进一步地,请参见图14,图14是本申请实施例提供的一种计算机设备的结构示意图。如图14所示,上述图13所对应实施例中的装置1可以应用于上述计算机设备8000,上述计算机设备8000可以包括:处理器8001,网络接口8004和存储器8005,此外,上述计算机设备8000还包括:用户接口8003,和至少一个通信总线8002。其中,通信总线8002用于实现这些组件之间的连接通信。网络接口8004可以包括标准的有线接口、无线接口(如WI-FI接口)。存储器8005可以是高速RAM存储器,也可以是非不稳定的存储器(non-volatile memory),例如至少一个磁盘存储器。存储器8005还可以是至少一个位于远离前述处理器8001的存储装置。如图14所示,作为一种计算机可读存储介质的存储器8005中可以包括操作系统、网络通信模块、用户接口模块以及设备控制应用程序。
在图14所示的计算机设备8000中,网络接口8004可提供网络通讯功能;而用户接口8003主要用于为用户提供输入的接口;而处理器8001可以用于调用存储器8005中存储的设备控制应用程序,以实现:获取目标车辆的场景信息;根据场景信息确定目标车辆的当前变道场景类型;若当 前变道场景类型为强制变道场景类型,则根据场景信息识别用于完成导航行驶路线的第一车道,在检测到第一车道满足变道安全检查条件时,根据第一车道控制目标车辆执行变道处理;若当前变道场景类型为自由变道场景类型,则根据场景信息识别用于优化行驶时间的第二车道,在检测到第二车道满足变道安全检查条件时,根据第二车道控制目标车辆执行变道处理。
应当理解,本申请实施例中所描述的计算机设备8000可执行前文图3-图12a所对应实施例中对该控制方法的描述,也可执行前文图13所对应实施例中对该控制装置1的描述,在此不再赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。
此外,这里需要指出的是:本申请实施例还提供了一种计算机可读存储介质,且上述计算机可读存储介质中存储有前文提及的数据处理的计算机设备8000所执行的计算机程序,且上述计算机程序包括程序指令,当上述处理器执行上述程序指令时,能够执行前文图3-图12b所对应实施例中对上述数据处理方法的描述,因此,这里将不再进行赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。对于本申请所涉及的计算机可读存储介质实施例中未披露的技术细节,请参照本申请方法实施例的描述。
上述计算机可读存储介质可以是前述任一实施例提供的数据处理装置或者上述计算机设备的内部存储单元,例如计算机设备的硬盘或内存。该计算机可读存储介质也可以是该计算机设备的外部存储设备,例如该计算机设备上配备的插接式硬盘,智能存储卡(smart media card,SMC),安全数字(secure digital,SD)卡,闪存卡(flash card)等。进一步地,该计算机可读存储介质还可以既包括该计算机设备的内部存储单元也包括外部存储设备。该计算机可读存储介质用于存储该计算机程序以及该计算机设备所需的其他程序和数据。该计算机可读存储介质还可以用于暂时地存储已经输出或者将要输出的数据。
此外,这里需要指出的是:本申请实施例还提供了一种车辆,且上述车辆包括前文图13所对应实施例中的控制装置1,或者,包括上述计算机设备,或者包括上述计算机可读存储介质。上述车辆能够执行前文图3-图12b所对应实施例中对上述数据处理方法的描述,因此,这里将不再进行赘述。另外,对采用相同方法的有益效果描述,也不再进行赘述。对于本申请所涉及的车辆实施例中未披露的技术细节,请参照本申请方法实施例的描述。
以上所揭露的仅为本申请较佳实施例而已,当然不能以此来限定本申请之权利范围,因此依本申请权利要求所作的等同变化,仍属本申请所涵盖的范围。

Claims (16)

  1. 一种基于自动驾驶的控制方法,由计算机设备执行,包括:
    获取目标车辆的场景信息;
    根据所述场景信息确定所述目标车辆的当前变道类型;
    若所述当前变道类型为强制变道类型,则根据所述场景信息识别用于完成导航行驶路线的第一车道,在检测到所述第一车道满足变道安全检查条件时,根据所述第一车道控制所述目标车辆执行变道处理;
    若所述当前变道类型为自由变道类型,则根据所述场景信息识别用于优化行驶时间的第二车道,在检测到所述第二车道满足所述变道安全检查条件时,根据所述第二车道控制所述目标车辆执行变道处理。
  2. 根据权利要求1所述的方法,其中,所述在检测到所述第一车道满足变道安全检查条件时,根据所述第一车道控制所述目标车辆执行变道处理,包括:
    获取所述第一车道的邻车间隔区域;所述邻车间隔区域为所述第一车道中第一车辆和第二车辆之间的间隔区域;所述第一车辆为所述第一车道中与所述目标车辆的车头距离最近的车辆;所述第二车辆为所述第一车道中与所述目标车辆的车尾距离最近的车辆;
    根据所述邻车间隔区域控制所述目标车辆进入变道准备位置;
    根据所述变道准备位置对所述第一车道进行变道安全检查;
    若检测到所述第一车道满足变道安全检查条件,则将所述目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制所述目标车辆按照所述变道速度和所述变道行驶方向变道至所述第一车道。
  3. 根据权利要求2所述的方法,其中,所述根据所述邻车间隔区域控制所述目标车辆进入变道准备位置,包括:
    若所述邻车间隔区域中存在可行变道区域,则获取所述目标车辆在当前行驶车道中的变道准备区域,根据所述变道准备区域控制所述目标车辆进入变道准备位置;所述可行变道区域是指满足所述变道安全检查条件的区域;所述变道准备区域为假设将所述可行变道区域平移到所述当前行驶车道中时而确定的区域。
  4. 根据权利要求2所述的方法,其中,所述根据所述邻车间隔区域控制所述目标车辆进入变道准备位置,包括:
    若所述邻车间隔区域中不存在可行变道区域,则获取所述目标车辆在当前行驶车道中的变道试探区域;所述可行变道区域是指满足变道安全检查条件的区域;所述变道试探区域为假设将所述邻车间隔区域的中间区域平移到所述当前行驶车道中时而确定的区域;
    控制所述目标车辆进入所述变道试探区域,确定变道试探距离和变道试探时间周期,所述变道试探距离为所述邻车间隔区域中不存在可行变道区域时所述目标车辆向所述第一车道试探性移动的距离,所述变道试探时间周期为所述目标车辆向所述第一车道每次移动所述变道试探距离后保持的时间;
    控制所述目标车辆向所述第一车道移动变道试探距离;
    在所述变道试探时间周期内,若检测到所述第二车辆处于刹车状态,则控制所述目标车辆继续向所述第一车道移动所述变道试探距离;
    若经过所述目标车辆移动试探后检测到邻车间隔区域中存在可行变道区域,则获取所述目标车辆在当前行驶方向上的变道准备区域,根据所述变道准备区域控制所述目标车辆进入变道准备位置;所述变道准备区域为假设将所述可行变道区域平移到所述当前行驶方向上时而确定的区域。
  5. 根据权利要求4所述的方法,其中,所述获取变道试探距离和变道试探时间周期,包括:
    从所述场景信息中提取出变道试探驾驶特征和变道试探车道特征,所述变道试探驾驶特征包括 所述目标车辆在进行变道试探时的驾驶特征,所述变道试探车道特征包括候选车道在变道试探时的车道特征;
    通过变道试探模型对所述变道试探驾驶特征和所述变道试探车道特征进行处理,得到变道试探距离和变道试探周期;所述变道试探模型是根据驾驶行为样本训练得到的;所述驾驶行为样本是指用户变道试探时的车道特征样本和驾驶特征样本。
  6. 根据权利要求2所述的方法,其中,所述根据所述变道准备位置对第一车道进行变道安全检查,包括:
    获取所述目标车辆的反应时间、以及所述目标车辆在所述变道准备位置上的当前速度和当前加速度;
    获取所述第一车辆的第一速度和第一加速度;
    获取所述第二车辆的第二速度和第二加速度;
    根据所述反应时间、所述当前速度、所述当前加速度、所述第一速度和所述第一加速度确定第一安全距离阈值;
    根据所述反应时间、所述当前速度、所述当前加速度、所述第二速度和所述第二加速度确定第二安全距离阈值;
    若前车距离不小于所述第一安全距离阈值,且后车距离不小于所述第二安全距离阈值,则确定所述第一车道满足变道安全检查条件,其中,所述前车距离为所述变道准备位置上的所述目标车辆与所述第一车辆之间的距离,所述后车距离为所述变道准备位置上的所述目标车辆与所述第二车辆之间的距离;
    若所述前车距离小于所述第一安全距离阈值,或者所述后车距离小于所述第二安全距离阈值,则确定所述第一车道不满足变道安全检查条件,控制所述目标车辆停止变道至所述第一车道。
  7. 根据权利要求2所述的方法,其中,所述根据所述变道准备位置对第一车道进行变道安全检查,包括:
    获取所述目标车辆在所述变道准备位置时的场景更新信息,从所述场景更新信息获取变道特征;
    将所述变道特征输入碰撞时间识别模型,通过所述碰撞时间识别模型输出预期前车碰撞时间和预期后车碰撞时间;
    获取在所述变道准备位置上的所述目标车辆的当前速度和所述第二车辆的第二速度;
    根据前车距离和所述当前速度,确定所述目标车辆的实际碰撞时间;所述前车距离为所述变道准备位置上的所述目标车辆与所述第一车辆之间的距离;
    根据后车距离和所述第二速度,确定所述第二车辆的实际碰撞时间;所述后车距离为所述变道准备位置上的所述目标车辆与所述第二车辆之间的距离;
    若所述目标车辆的实际碰撞时间不小于所述预期前车碰撞时间,且所述第二车辆的实际碰撞时间不小于所述预期后车碰撞时间,则确定所述第一车道满足变道安全检查条件;
    若所述目标车辆的实际碰撞时间小于所述预期前车碰撞时间，或者所述第二车辆的实际碰撞时间小于所述预期后车碰撞时间，则确定所述第一车道不满足变道安全检查条件，控制所述目标车辆停止变道至所述第一车道。
  8. 根据权利要求2所述的方法,其中,所述将所述目标车辆的速度和行驶方向调整为变道速度和变道行驶方向,控制所述目标车辆按照所述变道速度和所述变道行驶方向变道至所述第一车道,包括:
    获取所述目标车辆的当前行驶状态;所述当前行驶状态包括所述目标车辆与所述第一车道之间的当前侧向偏移、所述目标车辆的当前行进距离、所述目标车辆与所述第一车道之间的当前角度偏移以及所述目标车辆的当前角速度;
    根据所述当前侧向偏移、所述当前行进距离、所述当前角度偏移以及所述当前角速度和预期行驶状态确定所述目标车辆的预期变道轨迹路线;
    根据所述预期变道轨迹路线,确定变道速度和变道行驶方向;
    将所述目标车辆的速度和行驶方向调整为所述变道速度和所述变道行驶方向;
    控制所述目标车辆按照所述变道速度和所述变道行驶方向变道至所述第一车道。
  9. 根据权利要求1所述的方法，其中，所述若所述当前变道类型为自由变道类型，则根据所述场景信息识别用于优化行驶时间的第二车道，在检测到所述第二车道满足所述变道安全检查条件时，根据所述第二车道控制所述目标车辆执行变道处理，包括：
    若所述当前变道类型为自由变道类型,则从所述场景信息中提取出候选车道的车道特征和驾驶特征;
    通过车道评估模型对所述车道特征和所述驾驶特征进行处理,得到所述候选车道的评估参数值;所述车道评估模型是根据驾驶行为样本训练得到的;所述驾驶行为样本是指用户主动变道时的车道特征样本和驾驶特征样本;
    将具有最高的评估参数值的候选车道确定为用于优化行驶时间的第二车道;
    在检测到所述第二车道满足所述变道安全检查条件时,根据所述第二车道控制所述目标车辆执行变道处理。
  10. 根据权利要求1所述的方法,其中,所述根据所述场景信息确定所述目标车辆的当前变道类型,包括:
    根据所述场景信息确定障碍物检测信息、终点距离和路口距离;
    若所述障碍物检测信息指示所述目标车辆前方没有障碍物,且所述终点距离不小于第一距离阈值,且所述路口距离不小于第二距离阈值,则确定所述目标车辆的当前变道类型为自由变道类型;
    若所述障碍物检测信息指示所述目标车辆前方有障碍物,或者所述终点距离小于所述第一距离阈值,或者所述路口距离小于所述第二距离阈值,则确定所述目标车辆的当前变道类型为强制变道类型。
  11. 根据权利要求10所述的方法，其中，所述强制变道类型包括路口变道类型、出口变道类型、静态障碍物变道类型和终点停车变道类型；
    所述若所述障碍物检测信息指示所述目标车辆前方有障碍物,或者所述终点距离小于所述第一距离阈值,或者所述路口距离小于所述第二距离阈值,则确定所述目标车辆的当前变道类型为强制变道类型,包括:
    若所述障碍物检测信息指示所述目标车辆前方有静态障碍物,则确定所述目标车辆的当前变道类型为静态障碍物变道类型;
    若所述终点距离小于所述第一距离阈值,则确定所述目标车辆的当前变道场景类型为终点停车变道类型;
    若所述路口距离小于所述第二距离阈值,则获取路口的路口地图信息;
    若所述路口地图信息指示所述路口为出口,则确定所述目标车辆的当前变道场景类型为出口变道类型,否则确定所述目标车辆的当前变道场景类型为路口变道类型;
    其中,所述路口变道类型、所述出口变道类型、所述静态障碍物变道类型和所述终点停车变道类型用于决策所述第一车道。
  12. 一种基于自动驾驶的控制装置,包括:
    信息获取模块,用于获取目标车辆的场景信息;
    场景确定模块,用于根据所述场景信息确定所述目标车辆的当前变道场景类型;
    强制变道模块,用于若所述当前变道场景类型为强制变道场景类型,则根据所述场景信息识别用于完成导航行驶路线的第一车道,在检测到所述第一车道满足变道安全检查条件时,根据所述第一车道控制所述目标车辆执行变道处理;
    自由变道模块,用于若所述当前变道场景类型为自由变道场景类型,则根据所述场景信息识别用于优化行驶时间的第二车道,在检测到所述第二车道满足变道安全检查条件时,根据所述第二车 道控制所述目标车辆执行变道处理。
  13. 一种计算机设备,包括:处理器、存储器以及网络接口;
    所述处理器与所述存储器、所述网络接口相连,其中,所述网络接口用于提供网络通信功能,所述存储器用于存储程序代码,所述处理器用于调用所述程序代码,以执行权利要求1-11任一项所述的方法。
  14. 一种计算机可读存储介质,所述计算机可读存储介质存储有计算机程序,所述计算机程序包括程序指令,所述程序指令当被处理器执行时,执行权利要求1-11任一项所述的方法。
  15. 一种车辆,包括权利要求12所述的基于自动驾驶的控制装置,或者,包括权利要求13所述的计算机设备,或者包括权利要求14所述的计算机可读存储介质。
  16. 一种计算机程序产品,该计算机程序产品包括计算机指令,该计算机指令存储在计算机可读存储介质中,计算机设备的处理器从计算机可读存储介质读取该计算机指令,处理器执行该计算机指令,使得该计算机设备执行权利要求1-11任一项所述的方法。
PCT/CN2021/127867 2020-11-19 2021-11-01 一种基于自动驾驶的控制方法、装置、车辆以及相关设备 WO2022105579A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
EP21893727.4A EP4209853A4 (en) 2020-11-19 2021-11-01 CONTROL METHOD AND APPARATUS BASED ON AUTONOMOUS DRIVING, AND VEHICLE AND ASSOCIATED DEVICE
US17/972,426 US20230037367A1 (en) 2020-11-19 2022-10-24 Autonomous-driving-based control method and apparatus, vehicle, and related device

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011300553.0A CN112416004B (zh) 2020-11-19 2020-11-19 一种基于自动驾驶的控制方法、装置、车辆以及相关设备
CN202011300553.0 2020-11-19

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/972,426 Continuation US20230037367A1 (en) 2020-11-19 2022-10-24 Autonomous-driving-based control method and apparatus, vehicle, and related device

Publications (1)

Publication Number Publication Date
WO2022105579A1 true WO2022105579A1 (zh) 2022-05-27

Family

ID=74774634

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/127867 WO2022105579A1 (zh) 2020-11-19 2021-11-01 一种基于自动驾驶的控制方法、装置、车辆以及相关设备

Country Status (4)

Country Link
US (1) US20230037367A1 (zh)
EP (1) EP4209853A4 (zh)
CN (1) CN112416004B (zh)
WO (1) WO2022105579A1 (zh)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2928677A1 (es) * 2022-07-06 2022-11-21 La Iglesia Nieto Javier De Sistema de conduccion ecoeficiente adaptado a la modelización tridimensional geoposicionada de la parametrización del trazado de cualquier infraestructura lineal particularizado al vehiculo
CN115547035A (zh) * 2022-08-31 2022-12-30 交通运输部公路科学研究所 超视距避撞行驶控制方法、装置及信息物理系统
CN115588131A (zh) * 2022-09-30 2023-01-10 北京瑞莱智慧科技有限公司 模型鲁棒性检测方法、相关装置及存储介质
CN115830886A (zh) * 2023-02-09 2023-03-21 西南交通大学 智能网联车辆协同换道时序计算方法、装置、设备及介质
CN116819964A (zh) * 2023-06-20 2023-09-29 小米汽车科技有限公司 模型优化方法、模型优化装置、电子设备、车辆和介质

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11915115B2 (en) * 2019-12-31 2024-02-27 Google Llc Lane selection using machine learning
US11485360B2 (en) * 2020-04-03 2022-11-01 Baidu Usa Llc Dynamic speed limit adjustment system based on perception results
CN112416004B (zh) * 2020-11-19 2021-12-14 腾讯科技(深圳)有限公司 一种基于自动驾驶的控制方法、装置、车辆以及相关设备
CN112896166A (zh) * 2021-03-01 2021-06-04 苏州挚途科技有限公司 车辆换道方法、装置和电子设备
CN114996323A (zh) * 2021-03-01 2022-09-02 海信集团控股股份有限公司 电子设备及车道判断方法
CN113008261B (zh) * 2021-03-30 2023-02-28 上海商汤临港智能科技有限公司 一种导航方法、装置、电子设备及存储介质
CN113631452B (zh) * 2021-03-31 2022-08-26 华为技术有限公司 一种变道区域获取方法以及装置
CN113071493B (zh) * 2021-04-16 2023-10-31 阿波罗智联(北京)科技有限公司 车辆变道控制的方法、设备、存储介质和程序产品
CN113178081B (zh) * 2021-05-17 2022-05-03 中移智行网络科技有限公司 一种车辆汇入预警方法、装置及电子设备
CN113212454B (zh) * 2021-05-20 2023-05-12 中国第一汽车股份有限公司 车辆行驶状态的调整方法、装置、计算机设备和存储介质
CN114003026A (zh) * 2021-06-22 2022-02-01 的卢技术有限公司 一种基于Apollo框架改进的变道机制
US20230009173A1 (en) * 2021-07-12 2023-01-12 GM Global Technology Operations LLC Lane change negotiation methods and systems
CN113257027B (zh) * 2021-07-16 2021-11-12 深圳知帮办信息技术开发有限公司 针对连续变道行为的导航控制系统
CN113581180B (zh) * 2021-07-30 2023-06-30 东风汽车有限公司东风日产乘用车公司 拥堵路况变道决策方法、存储介质及电子设备
CN113375689B (zh) * 2021-08-16 2021-11-05 腾讯科技(深圳)有限公司 导航方法、装置、终端和存储介质
CN113715821B (zh) * 2021-08-31 2023-07-14 北京百度网讯科技有限公司 控制车辆的方法、装置、电子设备和介质
US20230102929A1 (en) * 2021-09-24 2023-03-30 Embark Trucks, Inc. Autonomous vehicle automated scenario characterization
CN113899378A (zh) * 2021-09-29 2022-01-07 中国第一汽车股份有限公司 一种变道处理方法、装置、存储介质及电子设备
CN114547403B (zh) * 2021-12-30 2023-05-23 广州文远知行科技有限公司 变道场景采集方法、装置、设备及存储介质
CN114526752A (zh) * 2022-03-07 2022-05-24 阿波罗智能技术(北京)有限公司 一种路径规划方法、装置、电子设备及存储介质
CN114435405A (zh) * 2022-03-21 2022-05-06 北京主线科技有限公司 一种车辆换道方法、装置、设备和存储介质
CN114743385B (zh) * 2022-04-12 2023-04-25 腾讯科技(深圳)有限公司 车辆处理方法、装置及计算机设备
CN115544870B (zh) * 2022-09-26 2023-04-18 北京邮电大学 一种道路网络临近检测方法、装置及存储介质
CN115923781B (zh) * 2023-03-08 2023-07-04 江铃汽车股份有限公司 一种智能网联乘用车自动避障方法及系统
CN115973158B (zh) * 2023-03-20 2023-06-20 北京集度科技有限公司 换道轨迹的规划方法、车辆、电子设备及计算机程序产品
CN116381946B (zh) * 2023-04-14 2024-02-09 江苏泽景汽车电子股份有限公司 行驶图像显示方法、存储介质和电子设备
CN116161111B (zh) * 2023-04-24 2023-07-18 小米汽车科技有限公司 车辆控制方法、装置、车辆及存储介质
CN116639152B (zh) * 2023-07-27 2023-10-31 安徽中科星驰自动驾驶技术有限公司 一种自动驾驶车辆的人工引导识别方法及系统
CN116653965B (zh) * 2023-07-31 2023-10-13 福思(杭州)智能科技有限公司 车辆变道重规划触发方法、装置及域控制器
CN117125057B (zh) * 2023-10-25 2024-01-30 吉咖智能机器人有限公司 一种基于车辆变道的碰撞检测方法、装置、设备及存储介质

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100228419A1 (en) * 2009-03-09 2010-09-09 Gm Global Technology Operations, Inc. method to assess risk associated with operating an autonomic vehicle control system
CN107433946A (zh) * 2016-05-27 2017-12-05 现代自动车株式会社 考虑优先级的用于控制变道的装置和方法
CN108305477A (zh) * 2017-04-20 2018-07-20 腾讯科技(深圳)有限公司 一种车道选择方法及终端
CN108983771A (zh) * 2018-07-03 2018-12-11 天津英创汇智汽车技术有限公司 车辆换道决策方法及装置
US20200114921A1 (en) * 2018-10-11 2020-04-16 Ford Global Technologies, Llc Sensor-limited lane changing
CN111413973A (zh) * 2020-03-26 2020-07-14 北京汽车集团有限公司 车辆的换道决策方法及装置、电子设备、存储介质
CN112416004A (zh) * 2020-11-19 2021-02-26 腾讯科技(深圳)有限公司 一种基于自动驾驶的控制方法、装置、车辆以及相关设备

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102383427B1 (ko) * 2016-12-16 2022-04-07 현대자동차주식회사 자율주행 제어 장치 및 방법
CN107901909B (zh) * 2017-10-31 2020-05-05 北京新能源汽车股份有限公司 一种车道自动更换的控制方法、装置及控制器
CN109948801A (zh) * 2019-02-15 2019-06-28 浙江工业大学 基于驾驶员换道心理分析的车辆换道概率输出模型建立方法

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100228419A1 (en) * 2009-03-09 2010-09-09 Gm Global Technology Operations, Inc. method to assess risk associated with operating an autonomic vehicle control system
CN107433946A (zh) * 2016-05-27 2017-12-05 现代自动车株式会社 考虑优先级的用于控制变道的装置和方法
CN108305477A (zh) * 2017-04-20 2018-07-20 腾讯科技(深圳)有限公司 一种车道选择方法及终端
CN108983771A (zh) * 2018-07-03 2018-12-11 天津英创汇智汽车技术有限公司 车辆换道决策方法及装置
US20200114921A1 (en) * 2018-10-11 2020-04-16 Ford Global Technologies, Llc Sensor-limited lane changing
CN111413973A (zh) * 2020-03-26 2020-07-14 北京汽车集团有限公司 车辆的换道决策方法及装置、电子设备、存储介质
CN112416004A (zh) * 2020-11-19 2021-02-26 腾讯科技(深圳)有限公司 一种基于自动驾驶的控制方法、装置、车辆以及相关设备

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP4209853A4

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2928677A1 (es) * 2022-07-06 2022-11-21 La Iglesia Nieto Javier De Sistema de conduccion ecoeficiente adaptado a la modelización tridimensional geoposicionada de la parametrización del trazado de cualquier infraestructura lineal particularizado al vehiculo
CN115547035A (zh) * 2022-08-31 2022-12-30 交通运输部公路科学研究所 超视距避撞行驶控制方法、装置及信息物理系统
CN115547035B (zh) * 2022-08-31 2023-08-29 交通运输部公路科学研究所 超视距避撞行驶控制方法、装置及信息物理系统
CN115588131A (zh) * 2022-09-30 2023-01-10 北京瑞莱智慧科技有限公司 模型鲁棒性检测方法、相关装置及存储介质
CN115588131B (zh) * 2022-09-30 2024-02-06 北京瑞莱智慧科技有限公司 模型鲁棒性检测方法、相关装置及存储介质
CN115830886A (zh) * 2023-02-09 2023-03-21 西南交通大学 智能网联车辆协同换道时序计算方法、装置、设备及介质
CN116819964A (zh) * 2023-06-20 2023-09-29 小米汽车科技有限公司 模型优化方法、模型优化装置、电子设备、车辆和介质
CN116819964B (zh) * 2023-06-20 2024-02-06 小米汽车科技有限公司 模型优化方法、模型优化装置、电子设备、车辆和介质

Also Published As

Publication number Publication date
US20230037367A1 (en) 2023-02-09
EP4209853A1 (en) 2023-07-12
CN112416004B (zh) 2021-12-14
CN112416004A (zh) 2021-02-26
EP4209853A4 (en) 2024-03-06

Similar Documents

Publication Publication Date Title
WO2022105579A1 (zh) 一种基于自动驾驶的控制方法、装置、车辆以及相关设备
CN111775961B (zh) 自动驾驶车辆规划方法、装置、电子设备及存储介质
JP6619436B2 (ja) 譲歩シナリオを検出して応答する自律車両
US10186150B2 (en) Scene determination device, travel assistance apparatus, and scene determination method
US10509408B2 (en) Drive planning device, travel assistance apparatus, and drive planning method
US10112614B2 (en) Drive planning device, travel assistance apparatus, and drive planning method
CN110562258B (zh) 一种车辆自动换道决策的方法、车载设备和存储介质
US11130492B2 (en) Vehicle control device, vehicle control method, and storage medium
US10366608B2 (en) Scene determination device, travel assistance apparatus, and scene determination method
US10796574B2 (en) Driving assistance method and device
EP3696789B1 (en) Driving control method and driving control apparatus
EP3696788A1 (en) Driving control method and driving control apparatus
JP6575612B2 (ja) 運転支援方法及び装置
EP3667638A1 (en) Traffic lane information management method, running control method, and traffic lane information management device
CN113895456A (zh) 自动驾驶车辆的交叉路口行驶方法、装置、车辆及介质
CN115688552A (zh) 行人意图让行
US10074275B2 (en) Scene determination device, travel assistance apparatus, and scene determination method
US20220363291A1 (en) Autonomous driving system, autonomous driving control method, and nontransitory storage medium
EP4353560A1 (en) Vehicle control method and apparatus
CN113899378A (zh) 一种变道处理方法、装置、存储介质及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893727

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021893727

Country of ref document: EP

Effective date: 20230405

NENP Non-entry into the national phase

Ref country code: DE