CN110758388A - Autonomous vehicle, and autonomous driving control method and apparatus - Google Patents


Info

Publication number
CN110758388A
CN110758388A
Authority
CN
China
Prior art keywords
obstacle
interaction
interactive
automatic driving
request
Prior art date
Legal status
Granted
Application number
CN201911171558.5A
Other languages
Chinese (zh)
Other versions
CN110758388B (en)
Inventor
李柏
Current Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Original Assignee
Beijing Jingdong Qianshi Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Jingdong Qianshi Technology Co Ltd
Priority to CN201911171558.5A
Publication of CN110758388A
Application granted
Publication of CN110758388B
Legal status: Active

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/09Taking automatic action to avoid collision, e.g. braking and steering
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60QARRANGEMENT OF SIGNALLING OR LIGHTING DEVICES, THE MOUNTING OR SUPPORTING THEREOF OR CIRCUITS THEREFOR, FOR VEHICLES IN GENERAL
    • B60Q9/00Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling
    • B60Q9/008Arrangement or adaptation of signal devices not provided for in one of main groups B60Q1/00 - B60Q7/00, e.g. haptic signalling for anti-collision purposes
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units, or advanced driver assistance systems for ensuring comfort, stability and safety or drive control systems for propelling or retarding the vehicle
    • B60W30/08Active safety systems predicting or avoiding probable or impending collision or attempting to minimise its consequences
    • B60W30/095Predicting travel path or likelihood of collision
    • B60W30/0956Predicting travel path or likelihood of collision the prediction being responsive to traffic or environmental parameters
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B60VEHICLES IN GENERAL
    • B60WCONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W50/00Details of control systems for road vehicle drive control not related to the control of a particular sub-unit, e.g. process diagnostic or vehicle driver interfaces
    • B60W2050/0001Details of the control system
    • B60W2050/0019Control system elements or transfer functions

Abstract

The disclosure provides an autonomous vehicle and an autonomous driving control method and apparatus, relating to the field of autonomous driving. An interactive autonomous driving control scheme is realized by sensing obstacles in a scene that are capable of making an interactive response, such as pedestrians and vehicles, and sending an interaction request to such an obstacle so that it responds to the request. In this way, the autonomous vehicle forms information interaction with specific surrounding objects, such as pedestrians and vehicles, during autonomous driving.

Description

Autonomous vehicle, and autonomous driving control method and apparatus
Technical Field
The present disclosure relates to the field of autonomous driving, and in particular, to an interactive autonomous driving vehicle and an autonomous driving control method and apparatus.
Background
Autonomous driving, also called unmanned driving, is an advanced technology in which a computer system controls the driving of a vehicle.
In some related technologies, an autonomous vehicle senses an obstacle in a scene, for example, a curb, a vehicle, a pedestrian, etc., through a sensor, plans a driving path according to a path planning strategy for avoiding the obstacle, and automatically drives according to the planned driving path.
Disclosure of Invention
The related autonomous driving technology plans a driving path by treating surrounding objects as obstacles to be avoided. However, the inventors found that some of these obstacles can respond to a prompt such as a vehicle whistle, leave their current position, and thus cease to be an obstacle.
Based on this discovery, the present disclosure realizes an interactive autonomous driving control scheme: an obstacle in the scene capable of making an interactive response, such as a pedestrian or a vehicle, is sensed, and an interaction request is sent to it so that it responds to the request. In this way, during autonomous driving, the autonomous vehicle forms information interaction with specific surrounding objects such as pedestrians and vehicles.
In addition, based on this discovery, an obstacle capable of making an interactive response is sensed, and the driving path is planned as if the scene contained no such obstacle. The interaction request is sent in advance, before the autonomous vehicle would meet the obstacle, prompting it to leave. If the obstacle is detected to have left the current driving path before the meeting time, the autonomous vehicle is controlled to continue autonomous driving along the current path without avoiding the obstacle. Driving operations with a higher risk factor, such as lane-borrowing overtakes, can thus be avoided, making autonomous driving safer.
Some embodiments of the present disclosure provide an automatic driving control method, including:
sensing an obstacle in a scene that is capable of making an interactive response;
sending an interaction request to the obstacle to cause the obstacle to interactively respond to the interaction request.
In some embodiments, further comprising:
planning a first driving path based on the scene;
judging whether the smoothness of the first driving path meets a preset condition;
in a case where the failure to meet the preset condition is caused by the obstacle, performing the step of sending the interaction request.
In some embodiments, further comprising:
planning a second driving path based on the scene without the obstacle;
determining a time at which the autonomous vehicle meets the obstacle while traveling along the second driving path;
sending the interaction request in advance of the meeting time, so that the obstacle makes an interactive response to the interaction request and leaves the second driving path;
and if the obstacle is detected to leave the second driving path before the meeting time, controlling the automatic driving vehicle to continue to automatically drive according to the planned second driving path.
In some embodiments, further comprising:
determining a type and/or a behavior pattern of the obstacle;
and executing the step of sending the interaction request according to the interaction mode corresponding to the type and/or the behavior mode of the obstacle.
In some embodiments, the interaction manner includes one or more of: interaction type, interaction start time, interaction end time, and interaction frequency.
In some embodiments, the method for determining the interaction manner includes:
constructing an autonomous driving objective function based on the interaction-manner parameter variables, setting a constraint that the obstacle leaves the second driving path before the meeting time, and determining a set of interaction-manner parameters that, subject to the constraint, maximizes the value of the objective function.
In some embodiments, the objective function is:

cost = w1 · t_start − w2 · (t_end − t_start) · duty(frequency)
where cost represents the value of the objective function, t_danger represents the meeting time, f_interact represents the time difference between sending the interaction request and the obstacle leaving the second driving path, t_start represents the start time of sending the interaction request, t_end represents the end time of sending the interaction request, frequency represents the sending frequency of the interaction request, duty() represents a function whose output is positively correlated with its input, and w1 and w2 represent weights, with w1 > 0 and w2 ≥ 0.
The constraint includes: t_danger − f_interact − t_start ≥ 0.
In some embodiments, f_interact is positively correlated with distance/(frequency · param), where distance represents the distance between the autonomous vehicle and the obstacle, and param represents the sensitivity with which the obstacle makes an interactive response, set according to the type and/or behavior pattern of the obstacle.
In some embodiments, the relationship between f_interact and distance/(frequency · param) is corrected according to the actual response time of the obstacle.
In some embodiments, the interactively responsive obstacle includes a pedestrian, a pedestrian riding a bicycle, a pedestrian pushing a cart, a manned vehicle, or an unmanned vehicle.
In some embodiments, the interaction types include a lighting mode and/or a whistling mode.
Some embodiments of the present disclosure provide an automatic driving control apparatus, including:
a memory; and
a processor coupled to the memory, the processor configured to execute the autopilot control method of any of the embodiments based on instructions stored in the memory.
Some embodiments of the present disclosure provide an autonomous vehicle, comprising:
a sensor configured to sense an obstacle in a scene that is capable of interactive response;
an automatic driving control device; and
an interaction device configured to issue a corresponding interaction request based on control of the automatic driving control device.
Some embodiments of the disclosure propose a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the autopilot control method of any one of the embodiments.
Drawings
The drawings used in the description of the embodiments or the related art are briefly described below. The present disclosure will be more clearly understood from the following detailed description taken with reference to the accompanying drawings.
It is to be understood that the drawings described below are merely examples of the disclosure, and that one of ordinary skill in the art may derive other drawings from them without inventive effort.
FIG. 1 is a schematic illustration of some embodiments of an autonomous vehicle of the present disclosure.
FIG. 2 is a flow diagram of some embodiments of an interactive autopilot control method of the present disclosure.
FIG. 3 is a flow diagram illustrating alternative embodiments of an interactive autopilot control method of the present disclosure.
Fig. 4A-4B are schematic diagrams of an autonomous driving scenario of the present disclosure.
FIG. 5 is a schematic diagram of some embodiments of the disclosed autonomous driving control apparatus.
Detailed Description
The technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the drawings in the embodiments of the present disclosure.
FIG. 1 is a schematic illustration of some embodiments of an autonomous vehicle of the present disclosure.
As shown in fig. 1, the autonomous vehicle 10 of the embodiment is equipped with various sensors 11, an autonomous control device 12, and an interaction device 13.
The various sensors 11 can sense the driving scene and the obstacles in it. The obstacles include first obstacles, which can make an interactive response, and second obstacles, which cannot. The terms "first" and "second" distinguish different types of objects and do not indicate quantity, size, or timing. First obstacles include, for example, people (individual pedestrians, pedestrians riding bicycles, pedestrians pushing carts) and moving vehicles (manned or unmanned), and other obstacles that can respond to a light or whistle prompt from the vehicle. Second obstacles include, for example, fixed road fixtures such as curbs and guardrails, and other obstacles that cannot respond to a light or whistle prompt. The various sensors 11 include, for example, radar and vision sensors, but are not limited to these examples; perception of the scene and obstacles may follow the prior art.
The automatic driving control device 12 performs driving planning and driving control based on the information sensed by the various sensors 11. It may determine information such as the type and behavior pattern of an obstacle from the perception information. Obstacle types include first obstacles, which may be further subdivided into people or vehicles (people further into individual pedestrians, pedestrians riding bicycles, pedestrians pushing carts, etc.), and second obstacles, which may be further subdivided into curbs, guardrails, etc. The behavior pattern in this disclosure mainly refers to that of a first obstacle and includes information such as the traveling direction (e.g., traveling in the same direction as, or toward, the autonomous vehicle) and the traveling speed.
A method of determining information such as the type and behavior pattern of an obstacle (particularly a first obstacle) is described below. For example, radar scanning obtains profile information of the scene and the obstacles in it, and the various types of obstacles are found from the scanned profiles using prior knowledge of their typical profiles: prior knowledge of the profile of a person or vehicle locates person- or vehicle-related obstacles; prior knowledge of the profiles of individual pedestrians, cyclists, and pedestrians pushing carts locates those sub-types; and prior knowledge of the profiles of curbs and guardrails locates curbs and guardrails. As another example, a vision sensor obtains images, and a pre-trained machine learning model, such as a classification model implemented with a convolutional neural network, identifies the various types of obstacles in the images (people or vehicles; individual pedestrians, cycling pedestrians, cart-pushing pedestrians; or curbs, guardrails, and the like). The behavior pattern of a first obstacle may be determined by tracking it: by tracking the same obstacle across multiple radar scans or multiple image frames, its traveling direction and traveling speed can be determined.
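The tracking step above can be sketched in a few lines of Python. This is a minimal, hypothetical illustration (function names, the two-point velocity estimate, and all numbers are illustrative, not from the patent) of deriving a traveling speed and direction from one tracked obstacle's positions:

```python
import math

def behavior_pattern(track, dt):
    """track: list of (x, y) positions of one tracked obstacle across scans/frames;
    dt: seconds between consecutive observations. Returns (speed, heading)."""
    (x0, y0), (x1, y1) = track[0], track[-1]
    elapsed = dt * (len(track) - 1)
    dx, dy = x1 - x0, y1 - y0
    speed = math.hypot(dx, dy) / elapsed   # meters per second
    heading = math.atan2(dy, dx)           # radians in the world frame
    return speed, heading

# A pedestrian observed in 3 consecutive scans 0.5 s apart, moving 1 m per scan along +x:
speed, heading = behavior_pattern([(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)], dt=0.5)
print(speed, heading)  # 2.0 0.0
```

A production system would smooth over many observations rather than use only the endpoints, but the output (direction and speed) is exactly the behavior pattern the text describes.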
The interaction device 13 is a device for issuing an interaction request, such as a car light that can issue a light prompt or a horn that can issue a whistle prompt. In an autonomous vehicle, the interaction device 13 typically issues the corresponding interaction request under the control of the automatic driving control device 12.
FIG. 2 is a schematic diagram of some embodiments of an interactive autopilot control method of the present disclosure.
As shown in fig. 2, the method of this embodiment includes steps 21 and 23, and optionally step 22.
At step 21, the sensor senses the scene and obstacles therein. The obstacles may include a first obstacle that is capable of interactive response and a second obstacle that is not capable of interactive response.
In step 22, the automatic driving control device plans a first driving path based on the sensed scene and the obstacles therein and judges whether the smoothness of the first driving path meets a preset condition. If the smoothness fails to meet the preset condition and this is caused by the first obstacle, the subsequent step 23 is executed. If the smoothness meets the preset condition, or if it fails because of the second obstacle, step 23 is not executed; in the latter case an obstacle avoidance operation is performed instead.
The first driving route may be determined according to a route planning strategy for avoiding obstacles in the prior art or other existing automatic driving route planning strategies, which is not limited by the present disclosure.
Smoothness is one simple criterion for judging whether a driving path is comfortable. For example, the smoothness of the driving path is judged from sudden changes in speed or in front-wheel steering angle along the path: if the speed and the front-wheel steering angle both change gradually, the path is considered relatively smooth. If the first obstacle causes a sudden change in speed or in front-wheel steering angle in the driving path, the first obstacle is considered to have caused the smoothness of the first driving path to fail the preset condition.
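The smoothness criterion described above can be sketched as a simple threshold check on consecutive profile samples. A hypothetical Python sketch (the threshold values and function name are illustrative assumptions, not the patent's preset condition):

```python
def is_smooth(speeds, steer_angles, dv_max=2.0, dsteer_max=0.2):
    """Judge a planned path 'smooth' when every consecutive change in speed (m/s)
    and in front-wheel steering angle (rad) stays below the given thresholds."""
    ok_speed = all(abs(b - a) <= dv_max for a, b in zip(speeds, speeds[1:]))
    ok_steer = all(abs(b - a) <= dsteer_max for a, b in zip(steer_angles, steer_angles[1:]))
    return ok_speed and ok_steer

print(is_smooth([10.0, 10.5, 11.0], [0.0, 0.05, 0.1]))  # gradual changes: smooth
print(is_smooth([10.0, 4.0, 11.0], [0.0, 0.05, 0.1]))   # sudden braking: not smooth
```

If the sample where the check fails coincides with the avoidance maneuver planned around a first obstacle, that obstacle is the cause in the sense of step 22.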
It should be noted that step 22 is optional, i.e. step 23 may be performed directly after step 21.
In step 23, the automatic driving control device controls the interaction device to send an interaction request to the first obstacle, so that the first obstacle interactively responds to the interaction request.
For example, the automatic driving control device controls the interaction device to whistle or flash its lights at a pedestrian or vehicle. After the pedestrian, the driver, or the sensor of an unmanned vehicle "hears" the whistle or "sees" the light, the pedestrian leaves, or the driver or unmanned vehicle drives off, the current driving path of the autonomous vehicle, so that it is no longer an obstacle on that path.
The above embodiment senses an obstacle in the scene that is capable of making an interactive response, such as a pedestrian or a vehicle, and sends an interaction request to it so that it responds. During autonomous driving, the autonomous vehicle thereby forms information interaction with specific surrounding objects such as pedestrians and vehicles, realizing an interactive autonomous driving control scheme and filling a notable gap in the current autonomous driving field.
FIG. 3 is a flow diagram illustrating alternative embodiments of an interactive autopilot control method of the present disclosure. As shown in fig. 3, the method of this embodiment includes steps 31 and 33-36, and optionally step 32.
At step 31, the sensor senses the scene and obstacles therein. The obstacles may include a first obstacle that is capable of interactive response and a second obstacle that is not capable of interactive response.
In step 32, the automatic driving control device plans a first driving path based on the sensed scene and the obstacles therein and judges whether the smoothness of the first driving path meets a preset condition. The subsequent steps are executed if the failure to meet the preset condition is caused by the first obstacle. If the smoothness meets the preset condition, or if the failure is caused by the second obstacle, the subsequent steps are not executed; in the latter case an obstacle avoidance operation is required.
For planning the first driving path and judging its smoothness, refer to step 22; details are not repeated here.
It should be noted that step 32 is optional, i.e. step 33 and its subsequent steps can be performed directly after step 31.
In step 33, the automatic driving control device plans a second driving path based on the scene without the first obstacle. That is, the first obstacle is erased from the scene, and the second driving path is planned based on the scene and any second obstacles in it.
The specific path planning method may still follow the prior art; the disclosure merely erases certain special obstacles from the scene before planning.
In step 34, the autonomous driving control apparatus determines a timing at which the autonomous driving vehicle meets the first obstacle while traveling along the second traveling path.
The meeting time is an estimate of when the autonomous vehicle would encounter the first obstacle if it traveled along the planned driving path at the planned speed, computed from the current plan and the sensed type and behavior pattern of the first obstacle.
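A minimal sketch of such a meeting-time estimate, assuming the ego vehicle follows time-sampled waypoints and the first obstacle moves at constant velocity (the function name, sampling step, and proximity radius are illustrative assumptions):

```python
import math

def meeting_time(ego_path, obstacle_pos, obstacle_vel, dt=0.1, radius=2.0):
    """ego_path: (x, y) positions the vehicle occupies at t = 0, dt, 2*dt, ...
    The obstacle starts at obstacle_pos and moves with constant obstacle_vel.
    Returns the first time the two are within `radius` meters, or None."""
    ox, oy = obstacle_pos
    vx, vy = obstacle_vel
    for i, (ex, ey) in enumerate(ego_path):
        t = i * dt
        if math.hypot(ex - (ox + vx * t), ey - (oy + vy * t)) <= radius:
            return t
    return None  # no encounter within the planned horizon

# Ego driving along +x at 10 m/s (1 m per 0.1 s sample); a stationary obstacle at x = 5 m:
ego_path = [(i * 1.0, 0.0) for i in range(20)]
t_meet = meeting_time(ego_path, obstacle_pos=(5.0, 0.0), obstacle_vel=(0.0, 0.0))
print(t_meet)
```

The sensed behavior pattern enters through `obstacle_vel`; a richer model would replace the constant-velocity assumption with a per-type motion prediction.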
In step 35, before the meeting time, the automatic driving control device controls the interaction device to send the interaction request in advance, so that the first obstacle leaves the second driving path in response to the interaction request.
In step 36, if it is detected that the first obstacle leaves the second driving path before the meeting time, the automatic driving control device controls the automatic driving vehicle to continue to automatically drive according to the planned second driving path without avoiding the first obstacle.
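The decision logic of steps 33-36 can be condensed into one illustrative function (hypothetical names and return labels; a sketch of the control flow, not the patent's implementation):

```python
def control_step(now, t_meet, t_start, obstacle_on_path):
    """One decision tick: cruise until t_start, send the interaction request
    from t_start onward, keep the second driving path if the obstacle has
    left it, and fall back to avoidance if it is still there at t_meet."""
    if not obstacle_on_path:
        return "keep_path"      # obstacle left the second driving path: continue as planned
    if now >= t_meet:
        return "avoid"          # obstacle still present at the meeting time: avoid it
    if now >= t_start:
        return "send_request"   # prompt the obstacle in advance of the meeting time
    return "cruise"
```

Calling this each control cycle reproduces the sequence in the text: early cruising, an advance prompt, then either continuing on the unchanged path or replanning.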
In this interactive automatic driving control scheme, an obstacle in the scene capable of making an interactive response is sensed, the driving path is planned as if the obstacle were not in the scene, and the interaction request is sent in advance, before the autonomous vehicle would meet the obstacle, prompting it to leave. If the obstacle is detected to have left the current driving path before the meeting time, the autonomous vehicle is controlled to continue driving along the current path without avoiding the obstacle. Driving operations with a higher risk factor, such as lane-borrowing overtakes, can thus be avoided, making autonomous driving safer.
For example, as shown in fig. 4A, a vehicle 2 traveling at low speed ahead of the autonomous vehicle 1 moves forward in the current lane of the autonomous vehicle 1. Instead of immediately overtaking by borrowing the adjacent lane, the autonomous vehicle 1 may first apply a lighting operation (at night) or a whistling operation (in the daytime) to see whether vehicle 2 will yield the lane. If vehicle 2 yields, as shown in fig. 4B, the autonomous vehicle 1 can continue forward in the current lane without a lane-borrowing overtake, avoiding a driving operation with a higher risk factor.
As described above, the automatic driving control device can determine the type and/or behavior pattern of the first obstacle. Therefore, in steps 23 and 35, it may control the interaction device to send the interaction request according to an interaction manner corresponding to that type and/or behavior pattern, including the interaction type (e.g., light or whistle), the interaction start time, the interaction end time, and the interaction frequency (e.g., the light-flashing or whistling frequency), so that the first obstacle responds to the request.
The interaction start time is determined according to the type and/or behavior pattern of the first obstacle, so that the interaction request is sent sufficiently in advance of the meeting time for the first obstacle to respond. Different types of first obstacles respond with different sensitivities and therefore need different interaction start times, i.e., different amounts of advance. A first obstacle with low sensitivity needs a larger advance than one with high sensitivity so that it has enough time to leave the driving path. For example, a cyclist is generally more sensitive than a pedestrian pushing a cart, so the cart-pushing pedestrian needs a larger advance; a moving vehicle is generally more sensitive than a pedestrian, so the pedestrian needs a larger advance relative to the moving vehicle. Behavior patterns also influence the interaction start time or advance: at the same distance, an autonomous vehicle and a first obstacle traveling in opposite directions meet earlier than ones traveling in the same direction, so the interaction start time for an oncoming first obstacle is earlier than for one traveling in the same direction.
The interaction type is determined according to the type and/or behavior pattern of the first obstacle, so that the first obstacle can more easily notice the interaction request. For example, a pedestrian traveling in the same direction as the autonomous vehicle cannot see behind himself and is unlikely to notice a light from behind, so the request can be made by whistling. A vehicle traveling in the same direction may have rear-view mirrors, so it can be prompted with lights or with whistling. Of course, the request can also be issued by lights and whistling simultaneously as needed.
The interaction frequency is determined based on the type and/or behavior pattern of the first obstacle so that the first obstacle will respond as quickly as possible to the urgency of the event. Generally, for emergency situations, the frequency of interaction may be greater so that the first obstacle experiences the urgency of the event and responds as quickly as possible. For example, a first obstacle with a low sensitivity may set a larger interaction frequency than a first obstacle with a high sensitivity. The first obstacle traveling in the opposite direction to the autonomous vehicle may be set to have a greater interaction frequency than the first obstacle traveling in the same direction. Of course, the same interaction frequency may also be set for all first obstacles.
The interaction end time is determined according to the type and/or behavior pattern of the first obstacle, so that prompting stops once the first obstacle has left, reducing the influence on the environment. For example, a first obstacle with high sensitivity can be given a shorter interaction duration and an earlier end time than one with low sensitivity. A first obstacle traveling toward the autonomous vehicle can be given a shorter interaction duration and an earlier end time than one traveling in the same direction. Of course, the same interaction duration may also be set for all first obstacles.
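The qualitative rules above for start time, type, and frequency can be illustrated with a small table-driven sketch. All sensitivity values, lead times, and frequencies here are invented for illustration; only the ordering (less sensitive obstacles get a larger advance and higher frequency, oncoming obstacles an earlier start, pedestrians a whistle rather than a light) follows the text:

```python
# Illustrative sensitivity per obstacle type (higher = responds more readily).
SENSITIVITY = {"vehicle": 1.0, "cyclist": 0.8, "pedestrian": 0.6, "cart_pedestrian": 0.4}

def interaction_mode(obstacle_type, oncoming, t_meet, base_lead=3.0):
    """Pick interaction parameters from the obstacle's type and travel direction."""
    lead = base_lead / SENSITIVITY[obstacle_type]       # less sensitive -> larger advance
    if oncoming:
        lead *= 1.5                                     # oncoming obstacles meet earlier -> start earlier
    return {
        "type": "light" if obstacle_type == "vehicle" else "whistle",
        "t_start": max(0.0, t_meet - lead),             # send the request `lead` s before meeting
        "frequency": 1.0 / SENSITIVITY[obstacle_type],  # Hz; less sensitive -> higher frequency
    }
```

For example, a same-direction cart-pushing pedestrian with a meeting time of 20 s gets a whistle starting 7.5 s in advance at 2.5 Hz under these invented numbers.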
Regarding the interaction manner, one method determines it statistically from response data of first obstacles of various types and behavior patterns; this is simple and easy to implement. Another method determines it by nonlinear programming, searching under certain constraints for an optimal set of parameters that meets the relevant objective; this is more objective and accurate. The latter method is described in detail below.
The following describes determining the interaction manner with a nonlinear programming method. The method mainly includes: constructing an autonomous driving objective function based on the interaction-manner parameter variables, setting the constraint that the obstacle leaves the second driving path before the meeting time, and determining, subject to the constraint, the set of interaction-manner parameters that maximizes the value of the objective function; the parameters at the maximum are taken as the final interaction manner. This is described in detail below with the formulas.
First, a first obstacle is removed from the scene, and a driving path (set as X) is re-planned based on the scene and a second obstacle (if any) therein.
Then, assuming the autonomous vehicle travels along path X, the course of movement corresponds to the time interval [0, T]. If no prompt operation such as a light or whistle is taken, the earliest meeting time of the autonomous vehicle and the first obstacle is determined by simulation and denoted t_danger; obviously, t_danger ∈ (0, T].
Subsequently, numerical calculation is performed on the interaction manner parameters, with the purpose of solving for some or all (this embodiment assumes all) of the following three interaction manner parameters (i.e., decision variables): the start time t_start of the light/horn operation, the end time t_end, and the operating frequency.
An interaction function f_interact(distance, frequency, param) is defined, whose inputs are distance, frequency, and param, and whose output is the time difference between sending the interaction request and the first obstacle leaving the driving path, i.e., the time from when the obstacle receives the light/whistle signal and begins to move away from the autonomous vehicle until the evacuation is completed. In the inputs of this function, distance represents the distance between the autonomous vehicle and the first obstacle (the greater the distance, the worse the effect of the obstacle's response); frequency represents the sending frequency of the interaction request, i.e., the operating frequency of the whistle or flashing light described above (the higher the frequency, the better the obstacle's response, although a high frequency may be annoying or may disturb the surroundings); and param represents the sensitivity of the current type of first obstacle in making an interactive response, which can be set according to the type and/or behavior pattern of the first obstacle, a higher value representing greater sensitivity (for example, a cycling pedestrian is more sensitive than a pedestrian pushing a cart, and makes a better interactive response).
Thus, f_interact is positively correlated with distance/(frequency · param), that is:

f_interact(distance, frequency, param) ∝ distance/(frequency · param). (1)
For example, one exemplary form of equation (1) is:

f_interact = β · distance/(frequency · param), (2)

where β > 0 is a parameter that needs to be determined through actual step-by-step debugging.
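As a purely illustrative sketch of this exemplary form, assuming the linear relationship f_interact = β · distance/(frequency · param) with a hypothetical β value:

```python
# Illustrative sketch only: the exemplary interaction function. beta is a
# hypothetical tuning value; the text only states beta > 0, determined by
# step-by-step debugging.
def f_interact(distance, frequency, param, beta=0.5):
    """Time (seconds) from sending the interaction request until the first
    obstacle has left the driving path. The response worsens with distance
    and improves with signal frequency and obstacle sensitivity (param)."""
    assert distance > 0 and frequency > 0 and param > 0 and beta > 0
    return beta * distance / (frequency * param)

# A nearby, sensitive obstacle (e.g. a cycling pedestrian) clears the path
# faster than a distant, less sensitive one (e.g. a pedestrian with a cart).
fast = f_interact(distance=10.0, frequency=2.0, param=1.5)
slow = f_interact(distance=30.0, frequency=1.0, param=0.5)
```

The shape of the function, not its exact constants, is what the optimization below relies on.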
The relationship between f_interact and distance/(frequency · param) is corrected based on the actual response time of the first obstacle, e.g., by correcting the parameter β therein.
A nonlinear programming problem is established, in which the cost function maximizes the driving benefit while a series of constraint conditions are satisfied. Specifically, the driving benefit comprises the reward for the obstacle moving away early because it noticed our reminder, minus the penalty for the discomfort caused to the surrounding environment or the obstacle by our high-frequency whistling or flashing:
cost = w1 · (t_danger − f_interact − t_start) − w2 · punish((t_end − t_start) · frequency), (3)

where cost represents the value of the objective function; t_danger represents the meeting time; f_interact represents the time difference between sending the interaction request and the first obstacle leaving the driving path X; t_start represents the start time of sending the interaction request; t_end represents the end time of sending the interaction request; frequency represents the sending frequency of the interaction request; punish() represents a function whose output is positively correlated with its input (i.e., a penalty function reflecting the direct proportional relationship between the number N of whistles or flashes and the degree of environmental discomfort); and w1 and w2 are weights, w1 representing the importance of the reward part and w2 the importance of the penalty part, with w1 > 0 and w2 ≥ 0. w2 may be 0, indicating that the cost function has no penalty part.
An exemplary form of the punish(N) function is:

punish(N) = α · N, (4)

where α > 0 is a parameter that needs to be determined through actual step-by-step debugging.
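A minimal sketch of this objective, assuming the reconstructed forms punish(N) = α · N and N = (t_end − t_start) · frequency; all weights and numeric values below are illustrative, not taken from the text:

```python
# Hypothetical sketch of the driving-benefit objective of equation (3).
def punish(n, alpha=0.1):
    # Penalty grows in direct proportion to the number N of whistles/flashes.
    return alpha * n

def cost(t_start, t_end, frequency, t_danger, f_interact_value,
         w1=1.0, w2=0.2):
    # Reward: safety margin left after the obstacle responds and evacuates.
    reward = t_danger - f_interact_value - t_start
    # Penalty: environmental discomfort from repeated signalling.
    n_signals = (t_end - t_start) * frequency
    return w1 * reward - w2 * punish(n_signals)

value = cost(t_start=1.0, t_end=3.0, frequency=2.0,
             t_danger=8.0, f_interact_value=2.0)  # 1*(8-2-1) - 0.2*0.1*4 = 4.92
```

Setting w2 = 0 removes the penalty part, as the text allows.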
The constraints of the nonlinear programming problem include (5a)-(5e):
t_start ∈ [0, T], (5a)
t_end ∈ [0, T], (5b)
t_start < t_end, (5c)
frequency > 0, (5d)
t_danger − f_interact − t_start ≥ 0. (5e)
Here, constraints (5a) and (5b) ensure that the light or horn operation starts and ends within the effective time range; (5c) ensures the natural causal order of the operation; (5d) ensures that the operation is actually carried out; and (5e) ensures that, through the above operation, the obstacle can respond in time and evacuate our driving path before a danger occurs.
To summarize, the nonlinear programming problem is: maximize equation (3) subject to constraints (5a)-(5e).
The nonlinear programming problem is solved with a common numerical optimization method, such as IPOPT (Interior Point Optimizer, a tool set for solving nonlinear programming optimization problems) or SQP (Sequential Quadratic Programming), yielding the set of sending-mode parameters t_start, t_end, and frequency that maximizes cost. Finally, the interaction request is sent in advance according to the calculated interaction manner parameters.
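An illustrative solve of this program with SciPy's SLSQP, a sequential quadratic programming method standing in for the IPOPT/SQP solvers named in the text. The model forms and every numeric value here are assumptions made for the sketch.

```python
# Maximize equation (3) subject to (5a)-(5e); decision variables are
# x = [t_start, t_end, frequency]. Hypothetical constants throughout.
from scipy.optimize import minimize

T, t_danger = 20.0, 8.0              # horizon and simulated encounter time
distance, param = 15.0, 1.0          # obstacle distance and sensitivity
beta, alpha, w1, w2 = 0.5, 0.1, 1.0, 0.2

def f_interact(freq):
    return beta * distance / (freq * param)

def neg_cost(x):                     # minimize the negative => maximize cost
    t_start, t_end, freq = x
    reward = t_danger - f_interact(freq) - t_start
    penalty = alpha * (t_end - t_start) * freq
    return -(w1 * reward - w2 * penalty)

constraints = [
    {"type": "ineq", "fun": lambda x: x[1] - x[0] - 1e-3},                  # (5c)
    {"type": "ineq", "fun": lambda x: t_danger - f_interact(x[2]) - x[0]},  # (5e)
]
bounds = [(0.0, T), (0.0, T), (1e-2, 10.0)]  # (5a), (5b), (5d)

res = minimize(neg_cost, x0=[0.5, 2.0, 3.0], bounds=bounds,
               constraints=constraints, method="SLSQP")
t_start_opt, t_end_opt, freq_opt = res.x
```

With these constants the solver pushes t_start toward 0 and the frequency toward its upper bound, which is the expected trade-off: warn as early as possible, but cap the signalling rate via the penalty and bounds.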
As the autonomous vehicle continues to advance, the actual interactive response of the obstacle is not necessarily identical to that expected from equation (1); the deviation between the real situation and the estimate is continuously corrected over time, making the estimate more reliable. For example, the relationship between f_interact and distance/(frequency · param) can be corrected based on the actual response time of the first obstacle (e.g., by adjusting the parameter β).
FIG. 5 is a schematic diagram of some embodiments of the disclosed autonomous driving control apparatus.
As shown in fig. 5, the automatic driving control device 50 of the embodiment includes: a memory 51 and a processor 52 coupled to the memory 51, the processor 52 being configured to execute the automatic driving control method in any of the above embodiments based on instructions stored in the memory 51.
The memory 51 may include, for example, a system memory, a fixed nonvolatile storage medium, and the like. The system memory stores, for example, an operating system, an application program, a Boot Loader (Boot Loader), and other programs.
The apparatus 50 may further include an input-output interface 53, a network interface 54, a storage interface 55, and the like. These interfaces 53, 54, 55, as well as the memory 51 and the processor 52, may be connected, for example, via a bus 56. The input/output interface 53 provides a connection interface for input/output devices such as a display, a mouse, a keyboard, and a touch screen. The network interface 54 provides a connection interface for various networking devices. The storage interface 55 provides a connection interface for external storage devices such as an SD card and a USB flash drive.
The present disclosure also proposes a non-transitory computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the automatic driving control method in any of the above embodiments.
The present disclosure is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the disclosure. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The above is only a preferred embodiment of the present disclosure and is not intended to limit the disclosure; any modification, equivalent replacement, or improvement made within the spirit and principles of the disclosure shall be included in the protection scope of the disclosure.

Claims (14)

1. An automatic driving control method characterized by comprising:
sensing an obstacle in a scene that is capable of making an interactive response;
sending an interaction request to the obstacle to cause the obstacle to interactively respond to the interaction request.
2. The method of claim 1, further comprising:
planning a first driving path based on the scene;
judging whether the smoothness degree of the first driving path meets a preset condition or not;
and in a case where non-compliance with the preset condition is caused by the obstacle, performing the step of sending an interaction request.
3. The method of claim 1, further comprising:
planning a second driving path based on the scene without the obstacle;
determining a time at which the autonomous vehicle meets the obstacle while traveling along the second driving path;
sending the interaction request in advance before the meeting time so that the obstacle makes an interactive response to the interaction request and leaves the second driving path;
and if the obstacle is detected to leave the second driving path before the meeting time, controlling the automatic driving vehicle to continue to automatically drive according to the planned second driving path.
4. The method of claim 1 or 3, further comprising:
determining a type and/or a behavior pattern of the obstacle;
and executing the step of sending the interaction request according to the interaction mode corresponding to the type and/or the behavior mode of the obstacle.
5. The method of claim 4,
the interaction mode comprises one or more items of interaction type, interaction starting time, interaction ending time and interaction frequency.
6. The method according to claim 4, wherein the method for determining the interaction mode comprises:
and constructing an automatic driving objective function based on the interaction mode parameter variables, setting a constraint that the obstacle leaves the second driving path before the meeting time, and determining a set of interaction mode parameters satisfying the constraint such that the value of the objective function is maximized.
7. The method of claim 6, wherein the objective function is:
cost = w1 · (t_danger − f_interact − t_start) − w2 · punish((t_end − t_start) · frequency),

where cost represents the value of the objective function, t_danger represents the meeting time, f_interact represents the time difference between sending the interaction request and the obstacle leaving the second driving path, t_start represents the start time of sending the interaction request, t_end represents the end time of sending the interaction request, frequency represents the sending frequency of the interaction request, punish() represents a function whose output is positively correlated with its input, and w1 and w2 represent weights, w1 > 0, w2 ≥ 0;

the constraint includes: t_danger − f_interact − t_start ≥ 0.
8. The method of claim 7,
f_interact is positively correlated with distance/(frequency · param), where distance represents the distance between the autonomous vehicle and the obstacle, and param represents the sensitivity of the obstacle in making an interactive response, the sensitivity being set according to the type and/or behavior pattern of the obstacle.
9. The method of claim 8,
the relationship between f_interact and distance/(frequency · param) is corrected according to the actual response time of the obstacle.
10. The method of claim 1, wherein the obstacle capable of making an interactive response comprises a pedestrian, a cycling pedestrian, a pedestrian pushing a cart, a manned vehicle, or an unmanned vehicle.
11. The method of claim 5, wherein the interaction types comprise a lighting mode and/or a whistling mode.
12. An automatic driving control apparatus, characterized by comprising:
a memory; and
a processor coupled to the memory, the processor configured to execute the autopilot control method of any of claims 1-11 based on instructions stored in the memory.
13. An autonomous vehicle, comprising:
a sensor configured to sense an obstacle in a scene that is capable of interactive response;
the automatic driving control device of claim 12; and
an interaction device configured to issue a corresponding interaction request based on control of the automatic driving control device.
14. A non-transitory computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the steps of the autopilot control method of any one of claims 1-11.
CN201911171558.5A 2019-11-26 2019-11-26 Autonomous vehicle, and autonomous driving control method and apparatus Active CN110758388B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911171558.5A CN110758388B (en) 2019-11-26 2019-11-26 Autonomous vehicle, and autonomous driving control method and apparatus

Publications (2)

Publication Number Publication Date
CN110758388A true CN110758388A (en) 2020-02-07
CN110758388B CN110758388B (en) 2021-03-30

Family

ID=69339216

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911171558.5A Active CN110758388B (en) 2019-11-26 2019-11-26 Autonomous vehicle, and autonomous driving control method and apparatus

Country Status (1)

Country Link
CN (1) CN110758388B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111459160A (en) * 2020-04-02 2020-07-28 中国矿业大学 Large-scale track smoothing method for unmanned washing and sweeping vehicle on open road
CN113485384A (en) * 2021-09-06 2021-10-08 中哲国际工程设计有限公司 Barrier-free guidance system based on Internet of things

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005263012A (en) * 2004-03-18 2005-09-29 Advics:Kk Vehicular warning device, vehicular warning control device and warning method
CN104933893A (en) * 2013-08-02 2015-09-23 本田技研工业株式会社 Vehicle To Pedestrian Communication System And Method
CN108292472A (en) * 2015-11-04 2018-07-17 祖克斯有限公司 For carrying out the method that robotic vehicle is communicated with external environment by acoustics Wave beam forming
CN109508019A (en) * 2018-12-28 2019-03-22 北京猎户星空科技有限公司 A kind of motion planning and robot control method, apparatus and storage medium
CN109789880A (en) * 2016-09-21 2019-05-21 苹果公司 The PERCOM peripheral communication of vehicle
CN109828591A (en) * 2019-03-21 2019-05-31 上海赛摩物流科技有限公司 A kind of automatic guided vehicle and its Autonomous Seam Locating Method and device with storage function
CN110320919A (en) * 2019-07-31 2019-10-11 河海大学常州校区 A kind of circulating robot method for optimizing route in unknown geographical environment


Also Published As

Publication number Publication date
CN110758388B (en) 2021-03-30

Similar Documents

Publication Publication Date Title
JP7367255B2 (en) Systems and methods for navigating a vehicle
CN112477884B (en) Automatic driving control method and device and vehicle
US20210405636A1 (en) Vehicular driving assist system that learns different driving styles
JP7235247B2 (en) System and method for navigating while sensing uncertainty
KR102426734B1 (en) Navigational system with imposed liability constraints
CN105539586B (en) Vehicle for autonomous driving hides the unified motion planning of moving obstacle
CN112292719B (en) Adapting the trajectory of an ego-vehicle to a moving foreign object
US9669830B2 (en) Method for assisting a driver in driving a vehicle, a driver assistance system, a computer software program product and vehicle
CN113291308B (en) Vehicle self-learning lane-changing decision-making system and method considering driving behavior characteristics
EP2615598B1 (en) Vehicle with computing means for monitoring and predicting traffic participant objects
IL293713B2 (en) Navigation based on vehicle activity
KR20200110702A (en) Default preview area and gaze-based driver distraction detection
US20220169214A1 (en) Emergency braking for autonomous vehicles
JPWO2018216177A1 (en) Vehicle control device
JP7216766B2 (en) vehicle controller
CN110758388B (en) Autonomous vehicle, and autonomous driving control method and apparatus
WO2020027241A1 (en) Vehicle travel control device
Tomar et al. Safety of Lane Change Maneuver Through A Priori Prediction of Trajectory Using Neural Networks.
CN112567439A (en) Method and device for determining traffic flow information, electronic equipment and storage medium
EP3798912A1 (en) Training method for a convolutional neural network for predicting a driving maneuver of a traffic participant
Weisswange et al. Intelligent traffic flow assist: Optimized highway driving using conditional behavior prediction
US11429843B2 (en) Vehicle operation labeling
Li et al. V2X assisted co‐design of motion planning and control for connected automated vehicle
CN115257813A (en) Intelligent driving control method through construction obstacle and vehicle
CN117227714A (en) Control method and system for turning avoidance of automatic driving vehicle

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant