WO2021008605A1 - Method and device for determining vehicle speed - Google Patents

Method and device for determining vehicle speed

Info

Publication number
WO2021008605A1
WO2021008605A1 · PCT/CN2020/102644 · CN2020102644W
Authority
WO
WIPO (PCT)
Prior art keywords
vehicle
different action
surrounding objects
intentions
probability
Prior art date
Application number
PCT/CN2020/102644
Other languages
English (en)
French (fr)
Inventor
杜明博 (Du Mingbo)
陶永祥 (Tao Yongxiang)
Original Assignee
Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority date
Filing date
Publication date
Application filed by Huawei Technologies Co., Ltd. (华为技术有限公司)
Priority to JP2021523762A (publication JP7200371B2)
Priority to EP20840576.1A (publication EP3882095A4)
Priority to MX2021005934A (publication MX2021005934A)
Publication of WO2021008605A1
Priority to US17/322,388 (publication US11273838B2)

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/10 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to vehicle motion
    • B60W40/105 - Speed
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • B60W60/0027 - Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W60/00276 - Planning or execution of driving tasks using trajectory prediction for other traffic participants for two or more other traffic participants
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/14 - Adaptive cruise control
    • B60W30/143 - Speed control
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W40/00 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models
    • B60W40/02 - Estimation or calculation of non-directly measurable driving parameters for road vehicle drive control systems not related to the control of a particular sub unit, e.g. by using mathematical models related to ambient conditions
    • B60W40/04 - Traffic conditions
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W60/00 - Drive control systems specially adapted for autonomous road vehicles
    • B60W60/001 - Planning or execution of driving tasks
    • B60W60/0027 - Planning or execution of driving tasks using trajectory prediction for other traffic participants
    • B60W60/00274 - Planning or execution of driving tasks using trajectory prediction for other traffic participants considering possible movement changes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/29 - Graphical models, e.g. Bayesian networks
    • G06F18/295 - Markov models or related models, e.g. semi-Markov models; Markov random fields; Networks embedding Markov models
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/84 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using probabilistic graphical models from image or video features, e.g. Markov models or Bayesian networks
    • G06V10/85 - Markov-related models; Markov random fields
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/50 - Context or environment of the image
    • G06V20/56 - Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 - Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08G - TRAFFIC CONTROL SYSTEMS
    • G08G1/00 - Traffic control systems for road vehicles
    • G08G1/16 - Anti-collision systems
    • G08G1/166 - Anti-collision systems for active traffic, e.g. moving vehicles, pedestrians, bikes
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2552/00 - Input parameters relating to infrastructure
    • B60W2552/53 - Road markings, e.g. lane marker or crosswalk
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 - Input parameters relating to objects
    • B60W2554/40 - Dynamic objects, e.g. animals, windblown objects
    • B60W2554/402 - Type
    • B60W2554/4029 - Pedestrians
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 - Input parameters relating to objects
    • B60W2554/40 - Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 - Characteristics
    • B60W2554/4041 - Position
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2554/00 - Input parameters relating to objects
    • B60W2554/40 - Dynamic objects, e.g. animals, windblown objects
    • B60W2554/404 - Characteristics
    • B60W2554/4045 - Intention, e.g. lane change or imminent movement
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W2720/00 - Output or target parameters relating to overall vehicle dynamics
    • B60W2720/10 - Longitudinal speed

Definitions

  • This application relates to the field of vehicle technology, in particular to a method and device for determining vehicle speed.
  • the speed of the vehicle needs to be determined according to the motion state of surrounding objects.
  • the vehicle can predict the possible action intentions of the surrounding objects, thereby determining the vehicle speed based on the motion status of the surrounding objects under the predicted action intention.
  • however, the collision risk between the surrounding objects and the vehicle is often ignored, so the determined vehicle speed may be unsuitable and the vehicle may face safety risks while driving.
  • the embodiments of the present application provide a method and device for determining vehicle speed, so that the vehicle can jointly determine its driving speed from the redistributed probabilities of the action intentions of the surrounding objects, the motion state changes of the surrounding objects under different action intentions, and the motion state changes of the vehicle under different driving speed control actions. This avoids ignoring action intentions that carry a high collision risk with the vehicle but a low probability of occurrence, makes the determined driving speed more appropriate, and reduces potential safety hazards while the vehicle is driving.
  • an embodiment of the present application provides a method for determining a vehicle speed.
  • the method may specifically include: first, obtaining observation information of the surrounding objects of the vehicle by observing them; then, calculating the probability distribution of different action intentions of the surrounding objects according to the observation information; next, redistributing the probability distribution according to the travel time of the vehicle from its current position to the risk area under each action intention, to obtain the probability redistribution of the different action intentions, where the risk area under an action intention is the area in the lane of the vehicle that a surrounding object passes through when acting on that intention; then, predicting the motion state changes of the surrounding objects under the different action intentions according to those travel times; and finally, determining the driving speed of the vehicle according to the probability redistribution of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle under different driving speed control actions.
  • in this way, the probability distribution of each action intention can be calculated from the observation information of the surrounding objects, and the probability redistribution of the different action intentions can be calculated from the travel time of the vehicle from its current position to the risk area under each action intention. Then, according to those travel times, the motion state changes of the surrounding objects under the different action intentions are predicted, so that the driving speed of the vehicle can be determined from the probability redistribution of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle under different driving speed control actions.
  • for calculating the probability distribution of different action intentions of the surrounding objects according to their observation information, the specific implementation may include: establishing, from the observation information of the surrounding objects and with the road on which the vehicle travels as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; and calculating the probability distribution of the different action intentions of the surrounding objects from that relative position relationship and relative motion relationship.
  • in this way, the probability distribution of different action intentions of the surrounding objects can be calculated more conveniently and accurately, which provides an accurate data basis for subsequently determining a reasonable vehicle speed.
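As an illustration only (not part of the patent disclosure), the intention-probability calculation from relative position and relative motion could be sketched as below; the intention set, the two features, and all weights are hypothetical assumptions of this sketch:

```python
import math

# Hypothetical sketch: score each candidate intention of a pedestrian from its
# lateral offset to the lane and its lateral velocity toward the lane, then
# normalize the scores into a probability distribution with a softmax.

INTENTIONS = ["stop", "cross_fast", "cross_diagonal", "walk_along_road"]

def intention_distribution(lateral_offset_m, lateral_speed_mps):
    # Higher score = more consistent with the observed relative motion.
    scores = {
        "stop": -abs(lateral_speed_mps) * 2.0,
        "cross_fast": lateral_speed_mps * 2.0 - 0.1 * lateral_offset_m,
        "cross_diagonal": lateral_speed_mps * 1.0 - 0.05 * lateral_offset_m,
        "walk_along_road": -abs(lateral_speed_mps) * 0.5,
    }
    z = sum(math.exp(s) for s in scores.values())
    return {k: math.exp(s) / z for k, s in scores.items()}

# A pedestrian 3 m from the lane, moving toward it at 1.2 m/s
dist = intention_distribution(lateral_offset_m=3.0, lateral_speed_mps=1.2)
assert abs(sum(dist.values()) - 1.0) < 1e-9  # a valid probability distribution
```

The softmax is only one convenient normalization; the patent does not specify the functional form.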
  • the method may further include: obtaining observation information of the vehicle; establishing, from the observation information of the vehicle and of the surrounding objects and with the road on which the vehicle travels as the coordinate system, the relative position relationship and relative motion state between the vehicle and the road and between the surrounding objects and the road; determining the risk area under each action intention from the relative position relationship and relative motion state between the surrounding objects and the road; and calculating, from the relative position relationship and relative motion state between the vehicle and the road and the risk areas under the different action intentions, the travel time of the vehicle from its current position to the risk area under each action intention.
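Under a constant-speed assumption of our own (the patent does not fix a motion model), the travel-time calculation described above reduces, in road coordinates, to a longitudinal distance divided by a speed; the function and parameter names below are hypothetical:

```python
# Hypothetical sketch: with the road as the coordinate system, the travel time
# of the vehicle to the risk area under one action intention is the longitudinal
# distance to the near edge of that area divided by the vehicle's current speed.

def travel_time_to_risk_area(vehicle_s_m, risk_area_start_s_m, vehicle_speed_mps):
    distance = risk_area_start_s_m - vehicle_s_m
    if distance <= 0.0:
        return 0.0  # the vehicle is already at or inside the risk area
    if vehicle_speed_mps <= 0.0:
        return float("inf")  # a stopped vehicle never reaches the area
    return distance / vehicle_speed_mps

# 30 m ahead at 10 m/s -> 3 s
assert travel_time_to_risk_area(0.0, 30.0, 10.0) == 3.0
```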
  • for redistributing the probability distribution according to the travel time of the vehicle to the risk area under each action intention to obtain the probability redistribution of the different action intentions, the specific implementation may include: converting the probability distribution into particles, where the number of particles corresponding to each action intention represents the probability of that action intention; and adjusting the weights of the particles corresponding to each action intention according to the travel time of the vehicle to the risk area under that action intention, to obtain the probability redistribution of the different action intentions.
  • in this way, the concept of particles is introduced: through particle processing and weight calculation based on the travel time of the vehicle to the risk area under each action intention, the degree of risk of each action intention is determined, that is, the probability redistribution of the different action intentions is realized. This provides an essential data foundation for subsequently determining the vehicle speed accurately and for improving the safety and reliability of vehicles that use intelligent driving technologies such as automatic driving.
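The particle processing and weight adjustment described above can be sketched as follows; the weighting function (a shorter travel time yields a larger weight) is an assumption of this sketch, not a formula given in the patent:

```python
# Hypothetical sketch of the particle step: the intention distribution is turned
# into particles (counts proportional to probabilities), each particle's weight
# is boosted when the vehicle's travel time to that intention's risk area is
# short (a risky intention, even if unlikely), and the weighted particles are
# renormalized into the probability redistribution.

def redistribute(intent_probs, travel_times, n_particles=1000):
    # 1) particle processing: particle count per intention is proportional
    #    to its probability
    particles = []
    for intent, p in intent_probs.items():
        particles += [intent] * round(p * n_particles)
    # 2) weight adjustment: shorter travel time -> larger weight (assumed form)
    weights = {i: 1.0 / (1.0 + travel_times[i]) for i in intent_probs}
    mass = {i: 0.0 for i in intent_probs}
    for particle in particles:
        mass[particle] += weights[particle]
    total = sum(mass.values())
    return {i: m / total for i, m in mass.items()}

probs = {"stop": 0.7, "cross_fast": 0.3}
times = {"stop": 10.0, "cross_fast": 1.0}  # the crossing path is reached soon
redistributed = redistribute(probs, times)
# the risky low-probability intention gains probability mass
assert redistributed["cross_fast"] > probs["cross_fast"]
```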
  • for predicting the motion state changes of the surrounding objects under different action intentions, the specific implementation may include: determining, from the travel time of the vehicle to the risk area under each action intention, the probability that a surrounding object changes its action intention under that intention; and predicting the motion state changes of the surrounding objects under the different action intentions from those intention-change probabilities and a random probability.
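The intention-change prediction described above can be sketched as a per-step random draw; the mapping from travel time to change probability is a hypothetical choice of this sketch:

```python
import random

# Hypothetical sketch: at each prediction step a surrounding object keeps or
# changes its action intention. The change probability is assumed to shrink as
# the vehicle's travel time to the intention's risk area shrinks (less time
# left for the situation to change); a random draw decides the outcome.

def maybe_change_intention(current, candidates, travel_time_s, rng):
    p_change = min(0.5, travel_time_s / 20.0)  # assumed mapping
    if rng.random() < p_change:
        others = [c for c in candidates if c != current]
        return rng.choice(others)
    return current

rng = random.Random(0)
intent = "cross_fast"
for _ in range(5):  # roll the intention forward over five prediction steps
    intent = maybe_change_intention(intent, ["stop", "cross_fast"], 8.0, rng)
assert intent in ("stop", "cross_fast")
```

The sampled intention sequence would then drive a kinematic model of the object to produce the predicted motion state changes.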
  • for determining the driving speed of the vehicle according to the probability redistribution of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle under different driving speed control actions, the specific implementation may include: estimating the driving effect of the vehicle under each driving speed control action from the probability redistribution and the two kinds of motion state changes; selecting a target driving speed control action from the different driving speed control actions according to the estimated driving effects; and determining the driving speed of the vehicle according to the target driving speed control action.
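The estimation and selection steps above can be sketched as a probability-weighted cost over candidate speed control actions; the cost terms, the per-action arrival times, the occupancy windows, and all numbers are illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: each candidate action is summarized by the vehicle's
# arrival time at the risk area under that action (e.g. from a motion model).
# The "driving effect" is a probability-weighted collision cost over intentions
# plus a mild penalty for arriving late (driving slowly); the action with the
# lowest expected cost is selected.

def expected_cost(vehicle_arrival_s, intent_probs, object_window_s):
    cost = 0.0
    for intent, p in intent_probs.items():
        enter, leave = object_window_s[intent]
        if enter <= vehicle_arrival_s <= leave:
            cost += p * 100.0  # collision cost weighted by intention probability
    cost += 0.5 * vehicle_arrival_s  # mild penalty for driving slowly
    return cost

# assumed arrival time at the risk area under each speed control action
arrivals = {"brake": 7.0, "hold": 3.0, "accelerate": 1.5}
# time window during which the object occupies the risk area, per intention
windows = {"stop": (float("inf"), float("inf")), "cross_fast": (1.0, 5.0)}
probs = {"stop": 0.3, "cross_fast": 0.7}

best = min(arrivals, key=lambda a: expected_cost(arrivals[a], probs, windows))
assert best == "brake"  # braking avoids the object's occupancy window
```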
  • an embodiment of the present application also provides a device for determining a vehicle speed, the device including: a first acquisition unit, a first calculation unit, a second calculation unit, a prediction unit, and a first determination unit.
  • the first obtaining unit is used to obtain observation information of the surrounding objects of the vehicle;
  • the first calculation unit is used to calculate the probability distribution of different action intentions of the surrounding objects according to the observation information of the surrounding objects;
  • the second calculation unit is used to redistribute the probability distribution according to the travel time of the vehicle from its current position to the risk area under each action intention, to obtain the probability redistribution of the different action intentions; here, the risk area under an action intention is the area in the lane of the vehicle that a surrounding object passes through when acting on that intention;
  • the prediction unit is used to predict the motion state changes of the surrounding objects under the different action intentions according to the travel time of the vehicle to the risk area under each action intention;
  • the first determination unit is used to determine the driving speed of the vehicle according to the probability redistribution of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle under different driving speed control actions.
  • the first calculation unit may include: an establishment subunit and a calculation subunit.
  • the establishment subunit is used to establish the relative position relationship and relative motion relationship between the surrounding objects and the road, based on the observation information of the surrounding objects and with the road on which the vehicle travels as the coordinate system; the calculation subunit is used to calculate the probability distribution of the different action intentions of the surrounding objects from that relative position relationship and relative motion relationship.
  • the device may further include: a second acquiring unit, an establishing unit, a second determining unit, and a third calculating unit.
  • the second acquisition unit is used to acquire observation information of the vehicle
  • the establishment unit is used to establish, based on the observation information of the vehicle and of the surrounding objects and with the road on which the vehicle travels as the coordinate system, the relative position relationship and relative motion state between the vehicle and the road and between the surrounding objects and the road;
  • the second determining unit is used to determine the risk area under each action intention according to the relative position relationship and relative motion state between the surrounding objects and the road;
  • the third calculation unit is used to calculate the travel time of the vehicle from its current position to the risk area under each action intention, according to the relative position relationship and relative motion state between the vehicle and the road and the risk areas under the different action intentions.
  • the second calculation unit may include: a processing subunit and an adjustment subunit.
  • the processing subunit is used to convert the probability distribution into particles, where the number of particles corresponding to each action intention represents the probability of that action intention;
  • the adjustment subunit is used to adjust the weights of the particles corresponding to each action intention according to the travel time of the vehicle to the risk area under that action intention, to obtain the probability redistribution of the different action intentions.
  • the prediction unit may include: a first determination subunit and a prediction subunit.
  • the first determination subunit is used to determine, from the travel time of the vehicle to the risk area under each action intention, the probability that a surrounding object changes its action intention under that intention;
  • the prediction subunit is used to predict the motion state changes of the surrounding objects under the different action intentions from those intention-change probabilities and a random probability.
  • the first determination unit may include: an estimation subunit, a selection subunit, and a second determination subunit.
  • the estimation subunit is used to estimate the driving effect of the vehicle under each driving speed control action, according to the probability redistribution of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle under the different driving speed control actions;
  • the selection subunit is used to select a target driving speed control action from the different driving speed control actions according to the estimated driving effects; the second determination subunit is used to determine the driving speed of the vehicle according to the target driving speed control action.
  • since the device provided in the second aspect corresponds to the method provided in the first aspect, for the implementations of the second aspect and the technical effects achieved, reference may be made to the related descriptions of the implementations of the first aspect.
  • an embodiment of the present application also provides a vehicle.
  • the vehicle includes a sensor, a processor, and a vehicle speed controller.
  • the sensor is used to obtain observation information of the surrounding objects of the vehicle and send it to the processor;
  • the processor is used to determine the driving speed of the vehicle according to the method described in any one of the implementations of the first aspect and send it to the vehicle speed controller;
  • the vehicle speed controller is used to control the vehicle to travel at the determined driving speed.
  • an embodiment of the present application also provides a vehicle.
  • the vehicle includes a processor and a memory.
  • the memory stores instructions; when the processor executes the instructions, the vehicle can implement the method described in any one of the implementations of the foregoing first aspect.
  • the embodiments of the present application also provide a computer program product, which when running on a computer, causes the computer to execute the method described in any one of the implementations of the first aspect.
  • the embodiments of the present application also provide a computer-readable storage medium storing instructions which, when run on a computer or processor, cause the computer or processor to execute the method described in any one of the implementations of the foregoing first aspect.
  • FIG. 1 is a schematic diagram of a road traffic scene involved in an application scenario in an embodiment of this application;
  • FIG. 2 is a schematic diagram of the hardware architecture of a vehicle using technologies such as automatic driving in an embodiment of this application;
  • FIG. 3 is a schematic diagram of the system architecture of a vehicle using technologies such as automatic driving in an embodiment of this application;
  • FIG. 4 is a schematic structural diagram of a vehicle using technologies such as automatic driving in an embodiment of this application;
  • FIG. 5 is a schematic flowchart of a method for determining vehicle speed in an embodiment of this application;
  • FIG. 6 is a schematic diagram of pedestrian intentions in an embodiment of this application;
  • FIG. 7 is a schematic diagram of a vehicle-surrounding object-road model in an embodiment of this application;
  • FIG. 8 is a schematic diagram of a particle representation in an embodiment of this application;
  • FIG. 9 is a schematic diagram of determining a risk area and travel time in an embodiment of this application;
  • FIG. 10 is a schematic diagram of an example of an interactive motion model of surrounding objects in an embodiment of this application;
  • FIG. 11 is a schematic structural diagram of a device for determining vehicle speed in an embodiment of this application;
  • FIG. 12 is a schematic structural diagram of a vehicle in an embodiment of this application;
  • FIG. 13 is a schematic structural diagram of a vehicle in an embodiment of this application.
  • the determination of vehicle speed needs to consider surrounding pedestrians, animals, and other surrounding objects, so as to avoid collisions and other traffic accidents between the vehicle and its surrounding objects, thereby ensuring the safety of the vehicle and of the objects around it.
  • the vehicle can predict the target action intentions of the surrounding objects from their behavior characteristics, for example: determine the occurrence probability of each action intention of a surrounding object from its behavior characteristics, and then use a probability threshold to filter out the action intentions with a higher occurrence probability as the target action intentions;
  • the vehicle speed is then determined from the motion state of the surrounding objects under the target action intentions.
  • the surrounding objects may also have other action intentions, and under other action intentions, the surrounding objects and the vehicle may have a greater risk of collision.
  • the vehicle 101 is driven by an automatic driving technology.
  • the surrounding objects of the vehicle 101 include: pedestrian 102 and pedestrian 103.
  • the target action intention of the pedestrian 102 is predicted to be: crossing quickly or crossing diagonally forward, and the target action intention of the pedestrian 103 is predicted to be: stopping; on this basis, the speed of the vehicle 101 is determined to be 60 kilometers per hour.
  • because the vehicle 101 does not consider, when predicting action intentions, the other action intentions of the pedestrian 102 and the pedestrian 103 that might lead to a collision with the vehicle 101, it may predict that the target action intentions of the pedestrian 103, who is closer to the vehicle 101, do not include the low-probability intentions of crossing quickly or crossing diagonally forward, and may therefore determine a relatively high vehicle speed.
  • in fact, the pedestrian 103 may still cross diagonally forward or cross quickly.
  • the prediction by the vehicle 101 of the target action intention of the pedestrian 103 is thus likely to be inaccurate, ignoring the action intentions with a greater collision risk; if the vehicle 101 then drives at the relatively high speed of 60 kilometers per hour, it is very likely to hit the pedestrian 103, causing a traffic accident between the vehicle 101 and the pedestrian 103.
  • a method for determining the appropriate vehicle speed is provided.
  • Driving at the appropriate speed can ensure the safety of the vehicle and its surrounding objects.
  • the specific process of determining the vehicle speed may include: calculating the probability distribution of each action intention from the observation information of the surrounding objects, and redistributing those probabilities according to the travel time of the vehicle from its current position to the risk area under each action intention; then, according to those travel times, predicting the motion state changes of the surrounding objects under the different action intentions; and finally, determining the speed of the vehicle from the redistributed probabilities of the different action intentions, the motion state changes of the surrounding objects under the different action intentions, and the motion state changes of the vehicle at different accelerations.
  • in this way, when determining the speed of the vehicle, not only the possibility of each action intention of the surrounding objects is considered, but also the risk of collision with the surrounding objects when the vehicle drives at different accelerations under each action intention. This avoids ignoring situations where the collision risk between a surrounding object and the vehicle is high but the probability of occurrence is low, so that the determined driving speed better suits the current driving environment and the potential safety hazards while the vehicle is driving are reduced.
  • the vehicle 200 includes: a front-view camera 201, a radar 202, a global positioning system (English: Global Positioning System, referred to as GPS) 203, an image processor 204, a central processing unit (English: Central Processing Unit, referred to as CPU) 205, and Controller 206.
  • the front-view camera 201 can be used for image collection of road scenes;
  • the radar 202 can be used for data collection of dynamic and static surrounding objects;
  • the image processor 204 can be used for the detection of lane lines, road edges, other vehicles and surrounding objects.
  • the CPU 205 can be used to control the entire vehicle 200: it obtains image data and status data of surrounding objects from the front-view camera 201 and the radar 202, respectively, calls the image processor 204 and the internal calculation modules of the CPU 205 to perform target recognition, fusion, and other calculations to determine an appropriate target vehicle speed, generates a decision control command based on the target speed, and sends the decision control command to the controller 206; the controller 206 can be used to control the vehicle to drive in the current lane at the target speed according to the received decision control command.
  • the vehicle 200 includes a vehicle-mounted sensor system 210, a vehicle-mounted computer system 220, and a vehicle-mounted control execution system 230.
  • the vehicle-mounted sensor system 210 can be used to obtain data collected by the front-view camera 201, data collected by the radar 202, and data located by the GPS 203.
  • the on-board computer system 220 is roughly divided into two modules: a perception data processing module 221 and a decision planning module 222.
  • The perception data processing module 221 can be used to detect the surrounding objects of the vehicle 200 (especially the surrounding pedestrians) and output the location and motion information of the surrounding objects; the decision planning module 222 can be used to predict and update the action intention distribution of the surrounding objects according to the current location and motion information of the surrounding objects, and then make decisions on and plan the speed of the vehicle 200 based on the action intention distribution.
  • the vehicle control execution system 230 may be used to obtain the decision control instruction output by the decision planning module 222, and control the vehicle 200 to travel according to the vehicle speed in the decision control instruction. It should be noted that the method for determining the vehicle speed provided in the embodiment of the present application is mainly executed in the decision planning module 222 of the on-board computer system 220, and the specific implementation manner can refer to the related description in the embodiment shown in FIG. 4.
  • A schematic structural diagram of the vehicle 200 corresponding to the embodiment of the present application is shown in FIG. 4.
  • the vehicle 200 includes: a sensor layer 410, a perception layer 420, a decision planning layer 430, and a vehicle control layer 440.
  • the data stream passes through the above four layers in sequence and is processed by the above four layers in sequence.
  • The sensor layer 410 can be used to load the data collected by the monocular/binocular camera 201, the data collected by the lidar/millimeter-wave radar 202, and the positioning data of the GPS 203. The perception layer 420 can be used to load the data of six modules: a vehicle/surrounding object detection module 421, a lane line detection module 422, a traffic sign detection module 423, a self-vehicle positioning module 424, a dynamic/static object detection module 425, and a perception fusion module 426. The decision planning layer 430 can be used to load the data of a pedestrian intention distribution prediction update module 431, a speed decision planning module 432, and a path planning module 433. The vehicle control layer 440 can be used to perform horizontal and vertical control of the vehicle 200 according to the data sent by the decision planning layer 430.
  • modules corresponding to the gray boxes in Figure 4 are the modules involved in implementing the method for determining vehicle speed provided in the embodiments of this application.
  • Whether the vehicle speed determined by implementing the embodiments of this application is safe and reliable depends mainly on the decision planning layer.
  • FIG. 5 shows a schematic flowchart of a method for determining a vehicle speed in an embodiment of the present application.
  • the method may specifically include the following steps 501 to 505:
  • Step 501 Obtain observation information of surrounding objects of the vehicle.
  • Step 502 Calculate the probability distribution of different action intentions of the surrounding objects based on the observation information of the surrounding objects.
  • the surrounding objects of the vehicle may include: pedestrians, animals, etc. appearing around the vehicle that may participate in traffic.
  • the surrounding objects of the vehicle can be understood and explained by taking pedestrians around the vehicle as an example.
  • Observation information of surrounding objects refers to the information that can reflect the state of surrounding objects, and can be used to predict the probability of the surrounding objects performing each kind of action intention.
  • action intention refers to the intention of surrounding objects relative to the current lane.
  • diagram (a) represents the action intention g1: moving forward along the lane;
  • diagram (b) represents the action intention g2: moving backward along the lane;
  • diagram (c) represents the action intention g3: crossing vertically;
  • diagram (d) represents the action intention g4: crossing diagonally forward;
  • diagram (e) represents the action intention g5: moving away;
  • diagram (f) represents the action intention g6: crossing diagonally backward;
  • diagram (g) represents the action intention g7: stopping.
  • An action intention of the surrounding objects refers to an action intention of a surrounding object. In one case, if the surrounding object of the vehicle is only one pedestrian, then suppose the pedestrian has two possible action intentions: waiting and crossing; the action intentions of the surrounding objects then include these two action intentions, namely waiting and crossing. In another case, if there are at least two surrounding objects around the vehicle, then an action intention of the surrounding objects represents a combination of the action intentions corresponding to each of the surrounding objects.
  • The occurrence probability of each kind of action intention of the surrounding objects can be calculated, so as to obtain the probability distribution of the different action intentions of the surrounding objects.
  • Step 502 may specifically include: establishing, based on the observation information of the surrounding objects and using the road on which the vehicle is traveling as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; and calculating the probability distribution of the different action intentions of the surrounding objects according to the relative position relationship and relative motion relationship between the surrounding objects and the road.
  • The road coordinate system (i.e., the SL coordinate system) takes the starting point of the road path as the origin; the direction along which the vehicle will travel on the road is denoted as the positive direction of the S axis, and the direction perpendicular to the positive direction of the S axis and pointing to its left is the positive direction of the L axis; see Figure 7 for details.
  • Steps 501 to 502 may specifically predict the occurrence probability of each type of action intention of a surrounding object based on the observation information of the surrounding object at the previous moment and at the current moment.
  • The specific implementation can include: S11, obtain observation information of the surrounding objects of the vehicle; S12, determine whether each surrounding object is a new surrounding object, and if so, perform S13, otherwise perform S14; S13, initialize the occurrence probability of each action intention of the surrounding object; S14, update the occurrence probability of each action intention of the surrounding object based on the observation information. It should be noted that, by determining the occurrence probability of each type of action intention of a surrounding object, the probability distribution of the different action intentions of the surrounding object can be determined.
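  • For illustration only, the S12 to S14 flow can be sketched as follows in Python; the dictionary layout, the uniform prior for new objects, and the likelihood function update_fn are assumptions for this sketch, not the embodiment's exact update rule:

```python
def update_intention_distribution(tracked, observations, intentions, update_fn):
    """S12-S14: initialize new surrounding objects with a uniform intention
    prior; update existing ones from the latest observation and renormalize."""
    for obj_id, obs in observations.items():
        if obj_id not in tracked:                       # S12/S13: new surrounding object
            tracked[obj_id] = {g: 1.0 / len(intentions) for g in intentions}
        else:                                           # S14: likelihood-weighted update
            probs = {g: tracked[obj_id][g] * update_fn(g, obs) for g in intentions}
            total = sum(probs.values()) or 1.0
            tracked[obj_id] = {g: p / total for g, p in probs.items()}
    return tracked
```

A newly observed pedestrian starts with equal probabilities over the intentions; later observations reweight them through update_fn.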
  • The acquisition of the observation information of the surrounding objects of the vehicle in step 501 can specifically be realized as follows: the front-view camera 201, the radar 202 and the GPS 203 in Figure 2, Figure 3 or Figure 4 collect data and send it to the vehicle/surrounding object detection module 421, the self-vehicle positioning module 424 and the dynamic/static object detection module 425 of the perception layer 420 in Figure 4 for separate processing, and the three processing results are then sent to the perception fusion module 426 for data association, fusion and tracking processing.
  • Step 502 (i.e., S12 to S14) can specifically be executed by the CPU 205 in FIG. 2, the decision planning module 222 of the vehicle-mounted computer system 220 in FIG. 3, or the pedestrian intention distribution prediction update module 431 in the decision planning layer 430 in FIG. 4.
  • S11 may specifically include: obtaining the first observation information of surrounding objects in the Cartesian coordinate system by collecting environmental data around the vehicle, filtering, multi-sensor data association fusion, and tracking the environmental data.
  • the first observation information of the surrounding object may include: the position of the surrounding object, the moving speed and the moving direction of the surrounding object. It should be noted that for each surrounding object participating in traffic, it is necessary to obtain its first observation information; in order to provide a data basis for subsequent calculations, it is also necessary to obtain the first observation information of its own vehicle, including: vehicle position, vehicle speed, Vehicle acceleration and vehicle heading.
  • The position in the first observation information needs to be converted from the rectangular (Cartesian) coordinate system to the SL coordinate system, and the position obtained in the SL coordinate system is taken as the position in the second observation information.
  • the specific transformation may include: vertically mapping the original position point in the Cartesian coordinate system to the mapping point in the direction of the road where the vehicle will travel; reading the distance from the starting point of the road to the mapping point as the value of the S-axis direction; The distance between the location point and the mapping point is taken as the value of the L axis direction.
  • On this basis, the vehicle-surrounding object-road model can be constructed. Since the vehicle-surrounding object-road model uses the SL coordinate system as the reference coordinate system, it is used to describe the relative position relationship and relative motion relationship between the vehicle, the surrounding objects and the road; therefore, the vehicle-surrounding object-road model can be used to calculate the second observation information. For example, for the pedestrian and vehicle shown in Fig. 7, assume that the first observation information of the pedestrian includes the position (x pedestrian , y pedestrian ) and the first observation information of the vehicle includes the position (x vehicle , y vehicle ); then, referring to Fig. 7, the position of the pedestrian transformed to the SL coordinate system is (s pedestrian , l pedestrian ), and the position of the vehicle transformed to the SL coordinate system is (s vehicle , l vehicle ).
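  • For illustration only, the coordinate transformation described above can be sketched as follows in Python; the locally straight road and the function name to_sl are simplifications for this sketch, not part of the embodiment:

```python
def to_sl(point, road_start, road_dir):
    """Project a Cartesian point into the SL coordinate system.

    road_dir must be a unit vector along the direction the vehicle will
    travel (the positive S axis); L is positive to the left of that
    direction, as in Figure 7. Assumes a locally straight road.
    """
    dx = point[0] - road_start[0]
    dy = point[1] - road_start[1]
    s = dx * road_dir[0] + dy * road_dir[1]    # distance along the road (S axis)
    l = -dx * road_dir[1] + dy * road_dir[0]   # signed lateral offset, left positive (L axis)
    return s, l
```

For a road running east from the origin, a pedestrian at (3, 2) maps to s = 3 (ahead along the road) and l = 2 (to the left of the direction of travel).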
  • S12 can be executed to determine whether each surrounding object is a new surrounding object. For a new surrounding object, initialize the surrounding object according to S13. For the existing surrounding objects, update the appearance probability of each intent of the surrounding object based on the observation information according to S14. It is understandable that both S13 and S14 are calculated based on the relative position relationship and relative motion relationship between the surrounding objects and the road.
  • Whether a surrounding object is a new surrounding object can be determined according to whether the surrounding object observed at the current moment has been observed before the current moment. If the surrounding object observed at the current moment has not been observed before the current moment, it means that the surrounding object is a newly appearing object around the vehicle, and the surrounding object can be determined to be a new surrounding object; otherwise, if the surrounding object observed at the current moment was also observed before the current moment, it means that the surrounding object already existed before the current moment, and it can be determined that the surrounding object is not a new surrounding object.
  • In S14, the occurrence probability of each action intention of the surrounding object is updated based on the observation information. Specifically, the update can be determined based on the occurrence probability of each action intention at the moment closest to the current moment, the positions at the current moment and at the moment closest to the current moment, and the mean values and corresponding variances of the action intention update model in the S direction and the L direction of the SL coordinate system. The update model may be a Gaussian motion model.
  • If pedestrian 102 is determined through S12 to be a new surrounding object, the occurrence probability distribution of its state and action intentions needs to be initialized.
  • the observation information of pedestrian 102 and pedestrian 103 obtained in the rectangular coordinate system are respectively
  • If pedestrian 102 is determined to be an existing surrounding object, the occurrence probability of each action intention of pedestrian 102 is updated based on the observation information.
  • In the Gaussian motion model, the mean values describe pedestrian 102 in the S direction and L direction under the g1 action intention in the SL coordinate system; σ s and σ l are, in the Gaussian motion model, the standard deviations of pedestrian 102 in the S direction and L direction under the g1 action intention in the SL coordinate system.
  • the concept of particles expresses the probability of occurrence of the action intention by the number of particles included in each action intention, specifically: if the action intention g1 includes more particles, it means that the action intention has a greater probability of occurrence; otherwise, If the action intention g1 includes a small number of particles, it means that the occurrence probability of the action intention is small.
  • Each action intention corresponds to a particle set containing the same number of identical particles; for example, for 700 particles in total, each of the seven action intentions corresponds to a set of 100 identical particles, and each particle has a weight of 1/700.
  • The state of each particle can be expressed with a weight w i, where w i represents the weight of the i-th particle, and the weight is used to represent the risk degree of the particle corresponding to the action intention.
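  • For illustration only, the particle representation can be sketched as follows; the dictionary-based particle state is a simplification of the particle state described above, not the embodiment's data structure:

```python
def init_particles(intentions, n_total=700):
    """Each action intention starts with the same number of identical,
    equally weighted particles (e.g., 7 intentions x 100 particles,
    each with weight 1/700)."""
    per_intent = n_total // len(intentions)
    return [{"g": g, "w": 1.0 / n_total}
            for g in intentions for _ in range(per_intent)]

def intention_probability(particles, g):
    """The occurrence probability of an intention is the normalized total
    weight of the particles carrying that intention."""
    total = sum(p["w"] for p in particles)
    return sum(p["w"] for p in particles if p["g"] == g) / total
```

Initially every intention carries the same total weight, so each of the seven intentions has an occurrence probability of 1/7.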
  • steps 501 to 502 may specifically output the probability distribution of each type of action intention of the surrounding objects by using a trained machine learning model according to the observation information of the current surrounding objects.
  • the observation information of the surrounding objects may specifically be the observation information including the position, movement speed, and direction of the surrounding objects processed in the foregoing implementation manner, or may be the currently collected image including the surrounding objects.
  • In one case, if the observation information is the above-mentioned processed observation information including the position, movement speed and movement direction of the surrounding objects, a large number of pieces of historical observation information whose occurrence probability of each action intention is known, together with the corresponding known occurrence probabilities, can be used to train a pre-built first machine learning model to obtain the trained first machine learning model; then, the observation information of the surrounding objects obtained in step 501 can be input into the trained first machine learning model, which outputs the occurrence probability of each action intention of the surrounding objects.
  • the observation information is currently collected images that include surrounding objects, then it can be based on a large number of historical images with known occurrence probability of each action intention, and the corresponding known occurrence probability of each action intention , Train the pre-built second machine learning model to obtain the trained second machine learning model; then, the observation information of surrounding objects obtained in step 501 (that is, the currently collected images including surrounding objects) can be input To the second machine learning model that has been trained, the probability of occurrence of each action intention of the surrounding objects included in the image is output.
  • In this way, an indispensable data basis is provided for subsequently and accurately determining the vehicle speed and for improving the safety and reliability of vehicles using intelligent driving technologies such as autonomous driving.
  • Step 503: According to the driving time of the vehicle from the current position of the vehicle to the risk areas under the different action intentions, calculate a redistribution of the probability distribution to obtain the probability redistribution of the different action intentions; the risk areas under the different action intentions are the areas in the lane of the vehicle that the surrounding objects pass through when they act according to the different action intentions.
  • Step 504 Predict the movement state changes of surrounding objects under different action intentions according to the driving time of the vehicle from the risk area under different action intentions.
  • The speed of the vehicle is determined at least according to the probability redistribution of each action intention of the surrounding objects and the changes in the movement state of the surrounding objects under the different action intentions; both the probability redistribution of each action intention and the movement state changes need to be calculated based on the driving time of the vehicle to the risk areas when the surrounding objects have different action intentions.
  • the driving time is used to quantify the risk of each action intention, that is, the likelihood of a collision with the vehicle when the surrounding objects move according to the action intention.
  • The collision time of each action intention is the driving time of the vehicle from the risk area of that action intention.
  • For the pedestrian's g3 action intention (i.e., vertically crossing), the risk area is the area A that the pedestrian passes in the lane of the vehicle when the pedestrian intends to cross, and the corresponding collision time is the travel time ttc g3 of the vehicle from the current position to the area A; for the pedestrian's g4 action intention (i.e., forward diagonal crossing), the risk area is the area B that the pedestrian passes in the lane of the vehicle when the pedestrian intends to cross, and the corresponding collision time is the travel time ttc g4 of the vehicle from the current position to the area B.
  • The travel time of the vehicle from the current position of the vehicle to the risk areas under the different action intentions can be calculated through step 501 and the following S21 to S24: S21, obtain observation information of the vehicle; S22, based on the observation information of the vehicle and the observation information of the surrounding objects, and using the road on which the vehicle is traveling as the coordinate system, establish the relative position relationship and relative motion state between the vehicle and the road, and between the surrounding objects and the road; S23, determine the risk areas under the different action intentions according to the relative position relationship and relative motion state between the surrounding objects and the road; S24, calculate, according to the relative position relationship and relative motion state between the vehicle and the road and the risk areas under the different action intentions, the travel time of the vehicle from the current position of the vehicle to the risk areas under the different action intentions.
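  • For illustration only, S24 can be sketched in the SL coordinate system as dividing the remaining S-axis distance by the vehicle speed; the constant-speed assumption and the edge-case handling below are simplifications, not the embodiment's exact calculation:

```python
def time_to_risk_area(s_vehicle, v_vehicle, s_risk_start):
    """S24 sketch: travel time (collision time ttc) for the vehicle to reach
    the near edge of an intention's risk area along the S axis."""
    if s_risk_start <= s_vehicle:
        return 0.0                      # already at or past the risk area
    if v_vehicle <= 0.0:
        return float("inf")             # a stationary vehicle never reaches it
    return (s_risk_start - s_vehicle) / v_vehicle
```

A vehicle at s = 0 moving at 10 m/s toward a risk area starting at s = 20 has a collision time of 2 s.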
  • For S21, refer to the related description of obtaining the observation information of the surrounding objects in step 501; for S22, refer to the related description of the coordinate transformation in S11 above.
  • If the collision time of an action intention is long, it means that the risk corresponding to the action intention has a low possibility of occurrence, that is, the risk degree is low; on the contrary, if the collision time of an action intention is short, it means that the risk corresponding to the action intention has a high possibility of occurrence, that is, the risk degree is high.
  • For example, ttc g4 being greater than ttc g3 means that when the pedestrian crosses vertically (g3), the possibility of collision with the vehicle is high and the risk is high, while under the forward diagonal crossing intention (g4), compared with g3, the possibility of collision with the vehicle is reduced and the risk is reduced.
  • Step 503 can be specifically implemented through the following S31 to S32: S31, particleize the probability distribution, where the numbers of particles corresponding to the different action intentions represent the probability distribution of the different action intentions; S32, adjust the weights of the particles corresponding to the different action intentions according to the travel time of the vehicle from the risk areas under the different action intentions, so as to obtain the probability redistribution of the different action intentions.
  • Specifically, in S32 the weights of the particles corresponding to the different action intentions are adjusted according to the travel time (i.e., the collision time) of the vehicle from the risk areas under the different action intentions, where each particle is considered. The driving time indicates the degree of risk of a collision between the surrounding object and the vehicle under a certain intention: the shorter the driving time, the higher the risk. Therefore, in order to increase the importance of high-risk action intentions, the weights of particles whose action intentions have short driving times (high risk) can be increased according to the following formula (3), in which W represents the risk coefficient and a further constant represents an effective calculation constant:
  • In this way, the purpose of determining the risk degree of each action intention according to the driving time of the vehicle from the risk area of each action intention can be realized, that is, the probability redistribution of the different action intentions is obtained, which provides an indispensable data basis for subsequently and accurately determining the speed of the vehicle and improving the safety and reliability of vehicles using intelligent driving technologies such as autonomous driving.
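  • For illustration only, a possible particle reweighting for S32 is sketched below; the exponential form and the constants W (risk coefficient) and eps (effective calculation constant) are assumptions standing in for formula (3), which is not reproduced here:

```python
import math

def reweight_by_ttc(particles, ttc_by_intent, W=1.0, eps=1e-3):
    """S32 sketch: raise the weight of particles whose intention has a short
    collision time (high risk), then renormalize so that the weights form
    the probability redistribution of the different action intentions."""
    for p in particles:
        ttc = ttc_by_intent[p["g"]]
        p["w"] *= math.exp(W / (ttc + eps))   # shorter ttc -> larger boost (assumption)
    total = sum(p["w"] for p in particles)
    for p in particles:
        p["w"] /= total
    return particles
```

After reweighting, particles of an intention with ttc = 0.5 s dominate those of an intention with ttc = 5 s, reflecting its higher risk degree.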
  • Step 504 can specifically predict the movement state changes of the surrounding objects under the different action intentions through the following S41 to S42: S41, determine, according to the driving time of the vehicle from the risk areas under the different action intentions, the probability that the surrounding objects change their action intentions under the different action intentions; S42, predict, according to the probability that the surrounding objects change their action intentions under the different action intentions and a random probability, the movement state changes of the surrounding objects under the different action intentions.
  • S41 can further determine the interaction probability between the vehicle and its surrounding objects according to the collision time ttc of each action intention.
  • the interaction probability of a certain action intention of the surrounding objects is too large, the current action intention can be changed according to the interaction probability to adjust to the target intention.
  • the target intention is the adjusted action intention of the surrounding objects. For example, if the interaction probability corresponding to the collision time ttc of pedestrian 1 in the case of action intention g1 is very large, then the target intention g2 of pedestrian 1 can be determined according to the interaction probability. In another case, if the interaction probability of a certain action intention of the surrounding objects is small, it can be determined according to the interaction probability that the target intention is still the current action intention.
  • the target intention is consistent with the action intention of the surrounding objects before the adjustment. For example: if the interaction probability corresponding to the collision time ttc of pedestrian 1 in the case of action intention g1 is very small, then the target intention g1 of pedestrian 1 can be determined according to the interaction probability.
  • If the collision time between a surrounding object and the vehicle under a certain action intention is short, that is, the risk is high, the surrounding object will generally be more cautious at this time; on the contrary, if the collision time between the surrounding object and the vehicle under a certain action intention is long, that is, the risk is low, the surrounding object will generally relax. Based on this, introducing the interaction probability calculated from the collision time ttc can more realistically simulate a movement state in line with the pedestrian's psychology.
  • For example, given the collision time between pedestrian 1 and the vehicle and the collision time between pedestrian 2 and the vehicle, the interaction probability between pedestrian 1 and the vehicle and the interaction probability between pedestrian 2 and the vehicle can each be calculated from the corresponding collision time, where W interact is the interaction probability coefficient.
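  • For illustration only, one plausible form of the interaction probability is inversely proportional to the collision time; the clipping to [0, 1] and the default coefficient are assumptions for this sketch, not the embodiment's formula:

```python
def interaction_probability(ttc, w_interact=1.0):
    """Sketch: the shorter the collision time ttc, the more likely the
    pedestrian reacts to the vehicle. w_interact plays the role of the
    interaction probability coefficient W interact."""
    if ttc <= 0:
        return 1.0                      # imminent collision: certain interaction
    return min(1.0, w_interact / ttc)   # inverse-ttc form (assumption)
```

With this form, a pedestrian 0.5 s from conflict interacts with certainty, while one 4 s away interacts with probability 0.25.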
  • S42 can determine whether to use the interactive motion model of the surrounding objects or the linear motion model of the surrounding objects to determine the change of the motion state of the surrounding objects through the prediction model of the surrounding object state and the calculated interaction probability.
  • The random probability P random is introduced to determine which model should be used to calculate the initial expected value. The specific determination process includes: in the first step, determine whether the interaction probability P r is greater than the random probability P random; if so, perform the second step, otherwise perform the third step; in the second step, use the interactive motion model of the surrounding objects to predict the movement state changes of the surrounding objects under the different action intentions; in the third step, use the linear motion model of the surrounding objects to predict the movement state changes of the surrounding objects under the different action intentions.
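  • For illustration only, the three-step determination can be sketched as follows; the callable models and the rng parameter are hypothetical placeholders for the interactive and linear motion models of the embodiment:

```python
import random

def predict_motion(state, p_interact, interactive_model, linear_model, rng=random):
    """Compare the interaction probability P_r with a freshly drawn random
    probability P_random and dispatch to the matching motion model."""
    p_random = rng.random()
    if p_interact > p_random:
        return interactive_model(state)   # second step: pedestrian reacts to the vehicle
    return linear_model(state)            # third step: pedestrian keeps its linear motion
```

Since P_random is drawn uniformly from [0, 1), a high interaction probability selects the interactive model on most draws.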
  • the motion state of the surrounding objects can be set to conform to a Gaussian distribution with a large variance.
  • the linear motion model of surrounding objects is specifically defined as follows:
  • ⁇ t represents the predicted single-step step length, which is usually small.
  • f s (g pedestrian ) and f l (g pedestrian ) represent the movement direction components of the different action intentions in the S and L directions of the SL coordinate system, that is, the moving directions of the surrounding objects differ under different action intentions;
  • μ pedestrian s and μ pedestrian l represent the mean values of the moving distance of the linear motion model of the surrounding objects in the S and L directions, σ pedestrian s and σ pedestrian l represent the variances of the moving distance of the linear motion model of the surrounding objects in the S and L directions, and μ pedestrian v and σ pedestrian v respectively represent the mean value and variance of the moving speed of the linear motion model of the surrounding objects in the S and L directions.
  • F s (v pedestrian , ⁇ t, g pedestrian ) and F l (v pedestrian , ⁇ t, g pedestrian ) represent the movement changes in the S and L directions of the SL coordinate system when pedestrians interact with the vehicle based on different action intentions function.
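  • For illustration only, a single prediction step of the linear motion model can be sketched as follows; g_dir stands in for the direction components f s (g) and f l (g), and sigma is an assumed standard deviation (the embodiment uses a comparatively large variance for pedestrians):

```python
import random

def linear_motion_step(s, l, v, g_dir, dt, sigma=0.2, rng=random):
    """Move the pedestrian along the intention's S/L direction components
    for one step dt, with Gaussian noise on the new position."""
    f_s, f_l = g_dir
    s_next = rng.gauss(s + f_s * v * dt, sigma)   # mean: s + f_s * v * dt
    l_next = rng.gauss(l + f_l * v * dt, sigma)   # mean: l + f_l * v * dt
    return s_next, l_next
```

With sigma set to 0 the step reduces to the deterministic mean motion along the intention's direction.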
  • It should be noted that step 503 can be executed first and then step 504, or step 504 can be executed first and then step 503, or step 503 and step 504 can be executed simultaneously; the execution order is not limited in this embodiment.
  • Step 505 Determine the driving speed of the vehicle according to the probability redistribution of different action intentions, the movement state changes of surrounding objects under different action intentions, and the movement state changes of the vehicle under different driving speed control actions.
  • Based on the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving speed control actions, an appropriate vehicle speed can be determined. In one case, the acceleration of the vehicle can be determined from the above three factors, and the acceleration can be used to control the vehicle. In another case, the acceleration of the vehicle can also be determined from the above three factors, a target driving speed of the vehicle is then determined according to the acceleration and the current speed of the vehicle, and the vehicle is controlled to run at that speed.
  • Step 505 can be specifically implemented through the following S51 to S53: S51, estimate the driving effect of the vehicle under the different driving speed control actions according to the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under the different driving speed control actions; S52, select a target driving speed control action from the different driving speed control actions according to the driving effect of the vehicle under the different driving speed control actions; S53, determine the driving speed of the vehicle according to the target driving speed control action.
  • S51 can specifically establish a vehicle state prediction model, based on the vehicle state prediction model to predict the driving effect of the vehicle under different driving speed control, that is, the movement state of the vehicle when the vehicle is driving at different accelerations.
  • the vehicle state prediction model considering that the observation information error of the vehicle's position, speed and other state quantities is small, therefore, in the vehicle state prediction model, the vehicle's motion state can be set to conform to a Gaussian distribution with a small variance.
  • the vehicle state prediction model is specifically defined as follows:
  • μ vehicle s and μ vehicle l represent the mean values of the moving distance of the vehicle state prediction model in the S and L directions, and μ vehicle v and σ vehicle v respectively represent the mean value and variance of the speed of the vehicle state prediction model in the S and L directions.
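  • For illustration only, the mean of the vehicle state prediction model can be sketched with constant-acceleration kinematics; the sketch computes only the means (the embodiment places a small-variance Gaussian around them), and the non-negative speed clamp is an added assumption:

```python
def vehicle_step(s, v, a, dt):
    """Predict the vehicle's S position and speed after executing the
    speed-control action (acceleration) a for one step dt."""
    s_next = s + v * dt + 0.5 * a * dt * dt   # mean moving distance along S
    v_next = max(0.0, v + a * dt)             # assumed: the vehicle does not reverse
    return s_next, v_next
```

For example, a vehicle at 10 m/s accelerating at 2 m/s² for 1 s advances 11 m and reaches 12 m/s.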
  • a partially observable Markov Decision Process (English: Partially Observable Markov Decision Process, abbreviated as: POMDP) can be used for optimal speed decision planning .
  • A POMDP has partial observability, that is, after decisions and plans are made using a general mathematical model, the action intentions of the unobservable part under the uncertain environment are predicted.
  • the mathematical model may generally include a state set S, an action set A, a state transition function T, an observation set O, an observation function Z, and a reward function R.
  • Action space A refers to the set of acceleration actions that may be taken by an autonomous or unmanned vehicle.
  • State transition function T It is the core part of POMDP. This function T focuses on describing the transition process of state over time and provides a decision-making basis for the selection of optimal actions.
  • For the vehicle, the state transition function T may represent that, after the vehicle performs an acceleration action a, the state {s vehicle, l vehicle, v vehicle} of the vehicle transitions to the state {s' vehicle, l' vehicle, v' vehicle}; for pedestrian 1, the current state {s pedestrian 1, l pedestrian 1, v pedestrian 1, g pedestrian 1}, moving in accordance with the action intention g pedestrian 1, will transition to the state {s' pedestrian 1, l' pedestrian 1, v' pedestrian 1, g' pedestrian 1}.
  • The reward function is used to quantitatively evaluate the acceleration of the decision. It can be evaluated from the degree of collision; or according to the degree of collision combined with the degree of obstruction to traffic; or according to the degree of collision and the degree of discomfort in the vehicle; or according to the degree of collision, the degree of obstruction to traffic and the degree of discomfort in the vehicle. The degree of collision reflects safety, the degree of obstruction to traffic reflects traffic efficiency, and the degree of discomfort in the vehicle reflects comfort. It should be noted that the decided acceleration can also be evaluated according to other factors as needed.
  • When the decided acceleration is evaluated by the collision degree alone:
  • Reward = R_col
  • where R_col denotes the collision degree, R_move the obstruction degree, and R_action the discomfort degree.
  • R_col = w1 * v' * c
  • where w1 is a fixed coefficient, v' represents the vehicle speed after the current acceleration is executed, and c is a constant indicating whether a collision occurs after the current acceleration is executed.
  • If collisions are determined for three of the accelerations, the corresponding collision degrees R_col1, R_col2, and R_col3 can be calculated for those three accelerations, while the collision degree corresponding to the other five non-colliding accelerations is 0.
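As a hedged illustration of the collision-degree reward described above, a minimal sketch is given below. The assumed form R_col = w1 * v' * c, the coefficient value, and the time step are reconstructions consistent with the surrounding text, not the patent's exact definitions.

```python
# Sketch: collision-degree reward R_col for a set of candidate accelerations.
# Assumed form R_col = w1 * v' * c, where c = 1 if the acceleration leads to a
# predicted collision and 0 otherwise (names follow the surrounding text).

W1 = -10.0  # fixed penalty coefficient (assumed value)
DT = 1.0    # planning step in seconds (assumed)

def r_col(v_current, accel, collides):
    v_next = max(0.0, v_current + accel * DT)  # vehicle speed after the action
    c = 1.0 if collides else 0.0               # collision indicator
    return W1 * v_next * c

# Eight candidate accelerations; suppose three of them lead to collisions,
# so only those three get a non-zero collision degree.
accels   = [-3, -2, -1, 0, 0.5, 1, 2, 3]
collides = [False, False, False, False, False, True, True, True]
rewards = [r_col(10.0, a, hit) for a, hit in zip(accels, collides)]
```

Faster colliding accelerations reach a higher post-action speed and therefore receive a more negative collision degree, matching the idea that a higher-speed collision is penalized more.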
  • S52 can then be executed: "according to the driving effect of the vehicle under the different driving speed control actions, select the target driving speed control action from the different driving speed control actions."
  • Given the mapping b→P between the belief and the particle set, the target expected value can be determined from the number and weights of the particles carrying the various action intentions in the particle set.
  • The occurrence probability of each action intention is embodied in the accumulation of particles with the same action intention: the number of particles sharing an action intention reflects that intention's occurrence probability. The collision risk under each action intention is embodied in the weight w_k of each particle.
  • The collision time under each action intention can also be reflected in the interaction probability: the adjusted target intention is determined based on the interaction probability, and the value corresponding to the driving effect of the vehicle under the target acceleration is calculated based on the target intention. The target expected value over N steps can then be calculated accordingly.
  • In this implementation, the occurrence probability of each action intention is embodied in accumulating particles with the same action intention: the number of such particles reflects the intention's occurrence probability. The collision risk under each action intention is embodied in calculating the interaction probability from the collision time and determining the target intention corresponding to each action intention; Reward(particle_k, a) is then calculated based on the target intention.
  • For each action intention, the value corresponding to the driving effect of the vehicle under the target acceleration is determined. The weights of the particles representing the same action intention reflect the impact of risk on the vehicle speed, and that impact is reflected through the interaction probability.
  • The value corresponding to the driving effect over N steps can then be calculated accordingly.
  • In this implementation, the occurrence probability of each action intention is embodied in accumulating particles with the same action intention: the number of such particles reflects the intention's occurrence probability. The collision risk under each action intention is embodied in two ways: first, in the weight w_k of each particle corresponding to each action intention; second, in calculating the interaction probability from the collision time and determining the movement state change corresponding to each action intention, with Reward(particle_k, a) calculated based on that movement state change.
  • the corresponding driving effects can be expressed as: G(b 0 , -3), G(b 0 , -2), G(b 0 , -1), G(b 0 , 0), G(b 0 , 0.5), G(b 0 ,1), G(b 0 ,2) and G(b 0 ,3).
  • The driving effect represents the return-function value obtained after the target acceleration a is executed, based on the redistributed probabilities of the currently occurring action intentions. The smaller the value corresponding to the driving effect, the worse the safety; conversely, the larger the value, the better the safety.
  • S52 specifically selects the largest value among the values corresponding to the multiple driving effects, and determines the corresponding acceleration as the target driving speed control action, i.e., the target acceleration.
  • For example, if the maximum value is G(b_0, 2), the acceleration corresponding to G(b_0, 2) is selected.
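The selection in S52 is an argmax over the driving-effect values G(b0, a). A minimal sketch with purely illustrative values (the numbers below are assumptions, not from the patent):

```python
# Sketch: selecting the target acceleration as the argmax over driving-effect
# values G(b0, a), as in S52. The values are illustrative only.

g_values = {
    -3: -5.0, -2: -4.0, -1: -2.5, 0: -1.0,
    0.5: -0.5, 1: 0.2, 2: 1.3, 3: 0.9,
}

target_accel = max(g_values, key=g_values.get)  # acceleration with the largest G
```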
  • In one case, the target acceleration can be sent directly to the controller, which controls the vehicle to drive at the target acceleration; in another case, the vehicle's driving speed can be calculated from the target acceleration and the current speed, and the controller controls the vehicle to drive at that speed.
  • In the above implementation, the collision degree R_col is used to determine the value Reward corresponding to the driving effect; in other implementations, the value Reward may also be determined in combination with the traffic-obstruction degree R_move and/or the riding-discomfort degree R_action.
  • In one case, the obstruction degree R_move is determined from the vehicle speed reached when the vehicle adopts the target acceleration and the speed limit of the lane, and the initial expected value Reward is then also determined from R_move when the vehicle adopts the target acceleration; here, w2 is a fixed coefficient and vmax is the speed limit of the current lane.
  • In another case, the riding-discomfort degree R_action is determined from the target acceleration and the difference between the target acceleration and the previous acceleration, and the initial expected value Reward is also determined from R_action when the vehicle adopts the target acceleration.
  • Reward = R_col + R_action
  • R_action = w3 * f(action_current) + w4 * f(action_current − action_last)
  • where f(action_current) represents the comfort return generated by the current target acceleration, suppressing the discomfort caused by excessive acceleration, and f(action_current − action_last) represents the comfort return generated by the change in the target acceleration, suppressing the discomfort caused by excessive acceleration change.
  • The initial expected value Reward can also be determined from the collision degree R_col, the obstruction degree R_move, and the discomfort degree R_action together. For the implementation of the value Reward corresponding to the driving effect, refer to the above implementation that determines it based only on the collision degree R_col; details are not repeated here.
  • Steps 503 to 504 may specifically be executed by the CPU 205 in FIG. 2 (the decision planning module 222 of the on-board computer system 220 in FIG. 3, or the pedestrian intention distribution prediction and update module 431 in the decision planning layer 430 in FIG. 4).
  • Step 505 is specifically implemented by the CPU 205 in FIG. 2 (the speed decision planning unit in the decision planning module 222 of the vehicle-mounted computer system 220 in FIG. 3 or the speed decision planning module 432 in the decision planning layer 430 in FIG. 4).
  • With the method for determining vehicle speed provided by the embodiments of this application, on the one hand, the probability distribution of each action intention can be calculated from the observation information of the surrounding objects, and the probability redistribution of the different action intentions can be calculated from the driving time of the vehicle from its current position to the risk areas under the different action intentions; on the other hand, the movement states of the surrounding objects under the different action intentions can be predicted from the driving time of the vehicle to those risk areas. The driving speed of the vehicle can then be determined from the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving speed control actions.
  • The probability of each action intention can be predicted from the observation information of the surrounding objects, the risk of collision between the surrounding objects and the vehicle under each acceleration control can be predicted, and the vehicle speed is determined by combining the two.
  • The device 1100 includes: a first acquisition unit 1101, a first calculation unit 1102, a second calculation unit 1103, a prediction unit 1104, and a first determination unit 1105.
  • the first obtaining unit 1101 is configured to obtain observation information of surrounding objects of the vehicle;
  • the first calculation unit 1102 is configured to calculate the probability distribution of different action intentions of the surrounding objects according to the observation information of the surrounding objects;
  • The second calculation unit 1103 is configured to perform a redistribution calculation on the probability distribution according to the driving time of the vehicle from its current position to the risk areas under different action intentions, to obtain the probability redistribution of the different action intentions; the risk areas under different action intentions are the areas the surrounding objects pass through in the lane of the vehicle when holding the different action intentions;
  • the prediction unit 1104 is configured to predict the movement state changes of the surrounding objects under different action intentions according to the driving time of the vehicle to the risk areas under the different action intentions;
  • the first determining unit 1105 is used to determine the driving speed of the vehicle according to the probability redistribution of different action intentions, the movement state changes of surrounding objects under different action intentions, and the movement state changes of the vehicle under different driving speed control actions.
  • The first calculation unit 1102 may include: an establishment subunit and a calculation subunit. The establishment subunit is configured to establish, based on the observation information of the surrounding objects and using the road on which the vehicle is traveling as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; the calculation subunit is configured to calculate the probability distribution of the different action intentions of the surrounding objects from these relationships.
  • the device may further include: a second acquiring unit, an establishing unit, a second determining unit, and a third calculating unit.
  • the second acquisition unit is used to acquire observation information of the vehicle;
  • the establishment unit is configured to establish, based on the observation information of the vehicle and of the surrounding objects and using the road on which the vehicle is traveling as the coordinate system, the relative position relationships and relative motion states between the vehicle and the road and between the surrounding objects and the road;
  • the second determining unit is used to determine the risk area under different action intentions according to the relative position relationship and the relative motion state of the surrounding objects and the road;
  • the third calculation unit is configured to calculate the driving time of the vehicle from its current position to the risk areas under different action intentions, according to the relative position relationship and relative motion state between the vehicle and the road and the risk areas under the different action intentions.
  • the second calculation unit 1103 may include: a processing subunit and an adjustment subunit.
  • the processing subunit is used to perform particle processing on the probability distribution, where the number of particles corresponding to different action intentions represents the probability distribution of different action intentions;
  • the adjustment subunit is configured to adjust the weights of the particles corresponding to different action intentions according to the driving time of the vehicle to the risk areas under the different action intentions, to obtain the probability redistribution of the different action intentions.
  • the prediction unit 1104 may include: a first determination subunit and a prediction subunit.
  • The first determination subunit is configured to determine, according to the driving time of the vehicle to the risk areas under different action intentions, the probability of the surrounding objects changing their action intentions under the different action intentions;
  • the prediction subunit is configured to predict the movement state changes of the surrounding objects under different action intentions according to that change probability and a random probability.
  • the first determination unit 1105 may include: an estimation subunit, a selection subunit, and a second determination subunit.
  • The estimation subunit is configured to estimate the driving effect of the vehicle under different driving speed control actions according to the probability redistribution of different action intentions, the movement state changes of the surrounding objects under different action intentions, and the movement state changes of the vehicle under the different driving speed control actions;
  • the selection subunit is configured to select the target driving speed control action from the different driving speed control actions according to the driving effect of the vehicle under each; the second determination subunit is configured to determine the driving speed of the vehicle according to the target driving speed control action.
  • When the above device 1100 is used to perform the steps in the embodiment corresponding to FIG. 5, the first acquisition unit 1101 may specifically perform step 501; the first calculation unit 1102 may specifically perform step 502; the second calculation unit 1103 may specifically perform step 503; the prediction unit 1104 may specifically perform step 504; and the first determination unit 1105 may specifically perform step 505.
  • The device 1100 corresponds to the method for determining vehicle speed provided in the embodiments of the present application; therefore, for the various implementations of the device 1100 and the technical effects achieved, refer to the related descriptions of the implementations of the method in the embodiments of the present application.
  • the vehicle 1200 includes a sensor 1201, a processor 1202, and a vehicle speed controller 1203.
  • The sensor 1201 is configured to obtain observation information of the surrounding objects of the vehicle and send it to the processor; for example, it may be a radar, a camera, or the like.
  • the processor 1202 is configured to determine the driving speed of the vehicle according to the method described in any one of the implementation manners of the first aspect, and send it to the vehicle speed controller.
  • the vehicle speed controller 1203 is used to control the vehicle to run at the determined vehicle speed.
  • The vehicle 1200 executes the method for determining vehicle speed provided in the embodiments of the present application; therefore, for the various implementations of the vehicle 1200 and the technical effects achieved, refer to the related descriptions of the implementations of the method in the embodiments of the present application.
  • an embodiment of the present application also provides a vehicle.
  • the vehicle 1300 includes a processor 1301 and a memory 1302.
  • the memory 1302 stores an instruction.
  • When the processor 1301 executes the instructions, the vehicle 1300 performs the method described in any one of the foregoing implementations of the method for determining vehicle speed.
  • The vehicle 1300 executes the method for determining vehicle speed provided in the embodiments of the present application; therefore, for the various implementations of the vehicle 1300 and the technical effects achieved, refer to the related descriptions of the implementations of the method in the embodiments of the present application.
  • embodiments of the present application also provide a computer program product, which when running on a computer, causes the computer to execute the method described in any one of the foregoing methods for determining vehicle speed.
  • The embodiments of the present application also provide a computer-readable storage medium storing instructions which, when run on a computer or processor, cause the computer or processor to execute the method described in any one of the foregoing implementations of the method for determining vehicle speed.
  • The computer software product can be stored in a storage medium, such as a read-only memory (ROM)/RAM, a magnetic disk, or an optical disc, and includes a number of instructions that enable a computer device (which may be a personal computer, a server, or a network communication device such as a router) to execute the method described in each embodiment of this application or in some parts of an embodiment.
  • the various embodiments in this specification are described in a progressive manner, and the same or similar parts between the various embodiments can be referred to each other, and each embodiment focuses on the differences from other embodiments.
  • the description is relatively simple, and for related parts, please refer to the partial description of the method embodiment.
  • the device embodiments described above are merely illustrative.
  • The modules described as separate components may or may not be physically separated, and the components displayed as modules may or may not be physical modules; that is, they may be located in one place or distributed across multiple network units. Some or all of the modules may be selected according to actual needs to achieve the objectives of the solutions of the embodiments. Those of ordinary skill in the art can understand and implement them without creative work.


Abstract

A method and apparatus for determining vehicle speed. When determining the driving speed of a vehicle (101), the probability distribution of each action intention is calculated from the observation information of surrounding objects (102, 103) (502); the probability redistribution of the different action intentions is calculated from the driving time of the vehicle (101) from its current position to the risk areas under the different action intentions (503); the movement state changes of the surrounding objects (102, 103) under the different action intentions are predicted from the driving time of the vehicle (101) to the risk areas under the different action intentions (504); and the driving speed of the vehicle (101) is determined from the probability redistribution of the different action intentions, the movement state changes of the surrounding objects (102, 103) under the different action intentions, and the movement state changes of the vehicle (101) under different driving-speed control actions (505). The method avoids ignoring high-risk but low-probability situations between the surrounding objects (102, 103) and the vehicle (101), so that the determined driving speed better suits the current driving environment and reduces potential safety hazards when the vehicle (101) is driving.

Description

A method and apparatus for determining vehicle speed
This application claims priority to Chinese patent application No. 201910646083.4, filed with the China National Intellectual Property Administration on July 17, 2019 and entitled "Method and apparatus for determining vehicle speed", the entire contents of which are incorporated herein by reference.
Technical Field
This application relates to the field of vehicle technology, and in particular to a method and apparatus for determining vehicle speed.
Background
In autonomous driving and similar technologies, to avoid collisions between a vehicle and surrounding objects such as pedestrians, the vehicle speed must be determined according to the movement states of the surrounding objects. Since a surrounding object's movement state is influenced by its subjective action intention, the vehicle can predict the action intentions the surrounding object may exhibit and determine the speed based on the object's movement state under the predicted intentions. However, current predictions of surrounding objects' action intentions often ignore the influence of the collision risk between the surrounding objects and the vehicle, so the determined speed is not suitable enough and potential safety hazards may exist while the vehicle is driving.
Summary
Embodiments of this application provide a method and apparatus for determining vehicle speed, so that the vehicle can jointly determine its driving speed according to the probability redistribution of the action intentions of surrounding objects, the movement state changes of the surrounding objects under different action intentions, and the movement state changes of the vehicle under different driving-speed control actions. This avoids ignoring high-risk but low-probability situations between the surrounding objects and the vehicle, so that the determined driving speed is more suitable and potential safety hazards during driving are reduced.
In a first aspect, an embodiment of this application provides a method for determining vehicle speed, which may specifically include: first, obtaining observation information of the surrounding objects of a vehicle by observing them; then, calculating, from the observation information, the probability distribution of the surrounding objects exhibiting different action intentions; next, performing a redistribution calculation on the probability distribution according to the driving time of the vehicle from its current position to the risk areas under the different action intentions, to obtain the probability redistribution of the different action intentions, where the risk areas under the different action intentions are the areas the surrounding objects pass through in the lane of the vehicle when holding the different action intentions; then, predicting the movement state changes of the surrounding objects under the different action intentions according to the driving time of the vehicle to the risk areas under the different action intentions; and finally, determining the driving speed of the vehicle according to the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving-speed control actions.
It can be seen that, with the method provided by embodiments of this application, for the multiple action intentions the surrounding objects may exhibit while the vehicle is driving, the probability distribution of each action intention can be calculated from the observation information of the surrounding objects, and the probability redistribution of the different action intentions can be calculated from the driving time of the vehicle from its current position to the risk areas under the different action intentions; the movement state changes of the surrounding objects under the different action intentions can also be predicted from that driving time. The driving speed can then be determined from the probability redistribution, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving-speed control actions. In this way, when determining the driving speed, not only the likelihood of each action intention but also the collision risk between the surrounding objects and the vehicle under each action intention and each acceleration control of the vehicle is considered, which avoids ignoring high-risk but low-probability situations, makes the determined speed better suit the current driving environment, and reduces potential safety hazards.
With reference to a possible implementation of the first aspect, calculating the probability distribution of the surrounding objects exhibiting different action intentions from their observation information may include: establishing, from the observation information and using the road on which the vehicle is driving as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; and calculating the probability distribution of the different action intentions from these relationships. Through the coordinate-system conversion, the probability distribution can be calculated more conveniently and accurately, providing an accurate data basis for subsequently determining a reasonable vehicle speed.
With reference to another possible implementation of the first aspect, the method may further include: obtaining observation information of the vehicle; establishing, from the observation information of the vehicle and of the surrounding objects and using the road on which the vehicle is driving as the coordinate system, the relative position relationships and relative motion states between the vehicle and the road and between the surrounding objects and the road; determining the risk areas under the different action intentions from the relative position relationship and relative motion state between the surrounding objects and the road; and calculating the driving time of the vehicle from its current position to the risk areas under the different action intentions from the relative position relationship and relative motion state between the vehicle and the road and the risk areas. In this way, when determining the driving speed, both the likelihood of each action intention and the collision risk under each intention and each acceleration control are considered, avoiding ignoring high-risk but low-probability situations, making the determined speed better suit the current driving environment, and reducing potential safety hazards.
With reference to still another possible implementation of the first aspect, performing the redistribution calculation on the probability distribution according to the driving time of the vehicle to the risk areas under the different action intentions may include: performing particle processing on the probability distribution, where the numbers of particles corresponding to the different action intentions represent the probability distribution of the different action intentions; and adjusting the weights of the particles corresponding to the different action intentions according to the calculated driving time of the vehicle to the risk areas, to obtain the probability redistribution. To cover more surrounding objects and comprehensively compute the occurrence probability of every possible action intention of each surrounding object, the concept of particles can be introduced; through particle processing and calculation, the risk degree of each action intention is determined from the driving time of the vehicle to that intention's risk area, i.e., the probability redistribution of the different action intentions is achieved, providing an indispensable data basis for subsequently determining the vehicle speed accurately and improving the safety and reliability of vehicles using intelligent driving technologies such as autonomous driving.
With reference to yet another possible implementation of the first aspect, predicting the movement state changes of the surrounding objects under the different action intentions according to the driving time of the vehicle to the risk areas may include: determining, from that driving time, the probability that the surrounding objects change their action intentions under the different action intentions; and predicting the movement state changes of the surrounding objects under the different action intentions from that change probability and a random probability.
With reference to another possible implementation of the first aspect, determining the driving speed of the vehicle according to the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving-speed control actions may include: estimating the driving effect of the vehicle under the different driving-speed control actions from these three inputs; selecting a target driving-speed control action from the different driving-speed control actions according to the driving effects; and determining the driving speed of the vehicle according to the target driving-speed control action. In this way, both the likelihood of each action intention and the collision risk under each intention and each acceleration control are considered, which avoids ignoring high-risk but low-probability situations, makes the determined speed better suit the current driving environment, and reduces potential safety hazards.
In a second aspect, an embodiment of this application further provides an apparatus for determining vehicle speed, including: a first acquisition unit, a first calculation unit, a second calculation unit, a prediction unit, and a first determination unit. The first acquisition unit is configured to obtain observation information of the surrounding objects of a vehicle; the first calculation unit is configured to calculate, from the observation information, the probability distribution of the surrounding objects exhibiting different action intentions; the second calculation unit is configured to perform a redistribution calculation on the probability distribution according to the driving time of the vehicle from its current position to the risk areas under the different action intentions, to obtain the probability redistribution of the different action intentions, where the risk areas under the different action intentions are the areas the surrounding objects pass through in the lane of the vehicle when holding the different action intentions; the prediction unit is configured to predict the movement state changes of the surrounding objects under the different action intentions according to the driving time of the vehicle to the risk areas; and the first determination unit is configured to determine the driving speed of the vehicle according to the probability redistribution, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different driving-speed control actions.
With reference to a possible implementation of the second aspect, the first calculation unit may include an establishment subunit and a calculation subunit. The establishment subunit is configured to establish, from the observation information of the surrounding objects and using the road on which the vehicle is driving as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; the calculation subunit is configured to calculate the probability distribution of the different action intentions from these relationships.
With reference to another possible implementation of the second aspect, the apparatus may further include a second acquisition unit, an establishment unit, a second determination unit, and a third calculation unit. The second acquisition unit is configured to obtain observation information of the vehicle; the establishment unit is configured to establish, from the observation information of the vehicle and of the surrounding objects and using the road on which the vehicle is driving as the coordinate system, the relative position relationships and relative motion states between the vehicle and the road and between the surrounding objects and the road; the second determination unit is configured to determine the risk areas under the different action intentions from the relative position relationship and relative motion state between the surrounding objects and the road; the third calculation unit is configured to calculate the driving time of the vehicle from its current position to the risk areas from the relative position relationship and relative motion state between the vehicle and the road and the risk areas under the different action intentions.
With reference to still another possible implementation of the second aspect, the second calculation unit may include a processing subunit and an adjustment subunit. The processing subunit is configured to perform particle processing on the probability distribution, where the numbers of particles corresponding to the different action intentions represent the probability distribution; the adjustment subunit is configured to adjust the weights of the particles corresponding to the different action intentions according to the calculated driving time of the vehicle to the risk areas, to obtain the probability redistribution of the different action intentions.
With reference to yet another possible implementation of the second aspect, the prediction unit may include a first determination subunit and a prediction subunit. The first determination subunit is configured to determine, from the driving time of the vehicle to the risk areas under the different action intentions, the probability that the surrounding objects change their action intentions under the different action intentions; the prediction subunit is configured to predict the movement state changes of the surrounding objects under the different action intentions from that change probability and a random probability.
With reference to another possible implementation of the second aspect, the first determination unit may include an estimation subunit, a selection subunit, and a second determination subunit. The estimation subunit is configured to estimate the driving effect of the vehicle under the different driving-speed control actions from the probability redistribution of the different action intentions, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under the different driving-speed control actions; the selection subunit is configured to select a target driving-speed control action from the different driving-speed control actions according to the driving effects; the second determination subunit is configured to determine the driving speed of the vehicle according to the target driving-speed control action.
It can be understood that the apparatus provided in the second aspect corresponds to the method provided in the first aspect, so for the implementations of the second aspect and the technical effects achieved, refer to the descriptions of the implementations of the first aspect.
In a third aspect, an embodiment of this application further provides a vehicle including a sensor, a processor, and a vehicle-speed controller. The sensor is configured to obtain observation information of the surrounding objects of the vehicle and send it to the processor; the processor is configured to determine the driving speed of the vehicle according to the method of any implementation of the first aspect and send it to the vehicle-speed controller; the vehicle-speed controller is configured to control the vehicle to drive at the determined speed.
In a fourth aspect, an embodiment of this application further provides a vehicle including a processor and a memory, the memory storing instructions which, when executed by the processor, cause the vehicle to perform the method of any implementation of the first aspect.
In a fifth aspect, an embodiment of this application further provides a computer program product which, when run on a computer, causes the computer to perform the method of any implementation of the first aspect.
In a sixth aspect, an embodiment of this application further provides a computer-readable storage medium storing instructions which, when run on a computer or processor, cause the computer or processor to perform the method of any implementation of the first aspect.
Brief Description of the Drawings
To describe the technical solutions in the embodiments of this application more clearly, the following briefly introduces the accompanying drawings needed in the description of the embodiments. Obviously, the drawings described below are only some embodiments recorded in this application, and a person of ordinary skill in the art can derive other drawings from them.
FIG. 1 is a schematic diagram of a road traffic scenario involved in an application scenario in an embodiment of this application;
FIG. 2 is a schematic diagram of the hardware architecture of a vehicle using autonomous driving or similar technologies in an embodiment of this application;
FIG. 3 is a schematic diagram of the system architecture of a vehicle using autonomous driving or similar technologies in an embodiment of this application;
FIG. 4 is a schematic structural diagram of a vehicle using autonomous driving or similar technologies in an embodiment of this application;
FIG. 5 is a schematic flowchart of a method for determining vehicle speed in an embodiment of this application;
FIG. 6 is a schematic diagram of pedestrian intentions in an embodiment of this application;
FIG. 7 is a schematic diagram of a vehicle–surrounding object–road model in an embodiment of this application;
FIG. 8 is a schematic diagram of a particle representation in an embodiment of this application;
FIG. 9 is a schematic diagram of determining risk areas and driving times in an embodiment of this application;
FIG. 10 is a schematic diagram of an example of an interactive motion model of surrounding objects in an embodiment of this application;
FIG. 11 is a schematic structural diagram of an apparatus for determining vehicle speed in an embodiment of this application;
FIG. 12 is a schematic structural diagram of a vehicle in an embodiment of this application;
FIG. 13 is a schematic structural diagram of a vehicle in an embodiment of this application.
Detailed Description
When a vehicle drives on a road, determining the vehicle speed must take into account surrounding objects such as pedestrians and animals, to avoid traffic accidents such as collisions with them and thereby ensure the safety of the vehicle and of the objects around it.
At present, to better avoid pedestrians and other surrounding objects when determining the driving speed, and considering that a surrounding object's movement state is mainly influenced by its subjective action intention, the vehicle can predict a surrounding object's target action intention from its behavioral characteristics; for example, it determines the occurrence probabilities of the various action intentions from the behavioral characteristics, then filters out, via a probability threshold, the intentions with high occurrence probabilities as the target action intentions, and determines the speed based on the object's movement state under those target intentions. However, besides the target intentions, the surrounding object may also exhibit other action intentions, under which there may be a large collision risk with the vehicle. Therefore, if only the highly likely target intentions are considered and the speed is determined from a single inferred target intention, some other intentions that are unlikely but very prone to collision are ignored; once the surrounding object actually moves according to such an ignored high-collision-risk intention while the vehicle drives at the speed determined from the target intention, a collision with the surrounding object is very likely, creating a safety hazard.
For example, referring to the road traffic scenario shown in FIG. 1, vehicle 101 drives using autonomous driving technology, and its surrounding objects include pedestrian 102 and pedestrian 103. Suppose vehicle 101 predicts the target action intentions of pedestrians 102 and 103 from their behavioral characteristics: the target intentions of pedestrian 102 are fast crossing and forward diagonal crossing, and the target intention of pedestrian 103 is stopping, so the speed of vehicle 101 is determined to be 60 km/h. However, because vehicle 101 did not consider other action intentions of pedestrians 102 and 103 that could lead to collisions with it when predicting, the predicted target intentions of the nearer pedestrian 103 may exclude the very unlikely fast crossing and forward diagonal crossing, so a high speed is determined. Yet pedestrian 103 might still cross diagonally forward or cross quickly; vehicle 101, having ignored the high-collision-risk intention due to the inaccurate prediction of pedestrian 103's target intention, is then very likely to hit pedestrian 103 while driving at the high speed of 60 km/h, causing a traffic accident involving both vehicle 101 and pedestrian 103.
On this basis, to overcome the problem that the determined speed is unsuitable because the prediction of surrounding objects' action intentions is inaccurate and incomplete, embodiments of this application provide a method that can determine a suitable speed; driving at this speed ensures the safety of the vehicle and its surrounding objects. The process of determining the speed may include: calculating the probability distribution of each action intention from the observation information of the surrounding objects, and calculating the probability redistribution of the different action intentions from the driving time of the vehicle from its current position to the risk areas under the different action intentions; then predicting the movement state changes of the surrounding objects under the different action intentions from the driving time of the vehicle to those risk areas; and finally determining the driving speed from the probability redistribution, the movement state changes of the surrounding objects under the different action intentions, and the movement state changes of the vehicle under different accelerations. In this way, both the likelihood of each action intention and the collision risk with surrounding objects when the vehicle drives with different accelerations under each intention are considered, which avoids ignoring high-risk but low-probability situations, makes the determined speed better suit the current driving environment, and reduces potential safety hazards.
For example, still taking the scenario shown in FIG. 1, suppose vehicle 101 determines its speed with the method provided by embodiments of this application. The determination process may include: first, from the observation information of pedestrians 102 and 103, predicting the probability distributions b_102 and b_103 of the seven action intentions of each pedestrian (forward along the road, backward along the road, perpendicular crossing, forward diagonal crossing, backward diagonal crossing, moving away, and stopping), where b_102 contains the probability P_102i (i = 1, 2, ..., 7) of pedestrian 102 exhibiting each intention and b_103 contains the probability P_103i (i = 1, 2, ..., 7) of pedestrian 103 exhibiting each intention; next, for each intention, determining the areas that pedestrians 102 and 103 pass through in the lane of vehicle 101 when holding that intention, denoted the risk areas of pedestrians 102 and 103 under that intention, and calculating the driving times T_102i and T_103i of vehicle 101 from its current position to those risk areas. On the one hand, a redistribution calculation can be performed on b_102 and b_103 according to T_102i and T_103i to obtain the probability redistributions b'_102 and b'_103 of the intentions; on the other hand, the movement state changes of pedestrians 102 and 103 under the different intentions can be predicted from T_102i and T_103i. Finally, vehicle 101 determines its driving speed, 30 km/h, from b'_102 and b'_103, the movement state changes of pedestrians 102 and 103 under the different intentions, and the movement state changes of the vehicle under different accelerations. Because all possible action intentions are considered when predicting the intentions of pedestrians 102 and 103, combined with the possible outcomes of the vehicle's different response strategies under each intention, a more suitable speed is jointly determined; driving at 30 km/h in this lane is safer, effectively avoiding potential safety hazards and improving the reliability and safety of autonomous driving and similar technologies.
Before introducing the method for determining vehicle speed provided by embodiments of this application, the hardware architecture of the vehicle in the embodiments is described first.
Referring to FIG. 2, a schematic diagram of a system hardware architecture applied to a vehicle in an embodiment of this application is shown. The vehicle 200 includes: a front-view camera 201, a radar 202, a Global Positioning System (GPS) 203, an image processor 204, a central processing unit (CPU) 205, and a controller 206. The front-view camera 201 can collect images of the road scene; the radar 202 can collect data on dynamic and static surrounding objects; the image processor 204 can recognize lane lines, curbs, other vehicles, and surrounding objects (e.g., pedestrians, animals, trees); the CPU 205 performs overall control of the vehicle 200, obtains image data and surrounding-object state data from the front-view camera 201 and radar 202 respectively, calls the image processor 204 and the internal computing modules of the CPU 205 to perform target recognition, fusion, and other operations, determines a suitable target speed, generates a decision control instruction based on the target speed, and sends it to the controller 206; the controller 206 controls the vehicle to drive at the target speed in the current lane according to the received decision control instruction.
For the vehicle 200 with the hardware architecture shown in FIG. 2, the corresponding system architecture of this embodiment is shown in FIG. 3. At the system level, the vehicle 200 includes: an on-board sensor system 210, an on-board computer system 220, and an on-board control execution system 230. The on-board sensor system 210 obtains the data collected by the front-view camera 201 and the radar 202 and the positioning data of the GPS 203. The on-board computer system 220 is roughly divided into two modules: a perception data processing module 221 and a decision planning module 222. The perception data processing module 221 detects the surrounding objects of the vehicle 200 (especially nearby pedestrians) and outputs their positions and motion information; the decision planning module 222 predicts and updates the action-intention distributions of the surrounding objects from their current positions and motion information, and then decides and plans the speed of the vehicle 200 based on these distributions. The on-board control execution system 230 obtains the decision control instruction output by the decision planning module 222 and controls the vehicle 200 to drive at the speed in that instruction. Note that the method for determining vehicle speed provided by embodiments of this application is mainly executed in the decision planning module 222 of the on-board computer system 220; for the specific implementation, refer to the related description of the embodiment shown in FIG. 4.
As an example, at the product level, the structure of the vehicle 200 corresponding to this embodiment is shown in FIG. 4. The vehicle 200 includes: a sensor layer 410, a perception layer 420, a decision planning layer 430, and a vehicle control layer 440; the data flow passes through and is processed by these four layers in sequence. The sensor layer 410 loads the data collected by the monocular/binocular front-view camera 201, the lidar/millimeter-wave radar 202, and the GPS 203; the perception layer 420 loads the data of six modules: the vehicle/surrounding-object detection module 421, lane-line detection module 422, traffic-sign detection module 423, ego-vehicle localization module 424, dynamic/static object detection module 425, and perception fusion module 426; the decision planning layer 430 loads the data of the pedestrian intention distribution prediction and update module 431, the speed decision planning module 432, and the path planning module 433; the vehicle control layer 440 performs lateral and longitudinal control of the vehicle 200 according to the data sent by the decision planning layer 430. Note that the modules corresponding to the gray boxes in FIG. 4 are those involved in implementing the method for determining vehicle speed provided by embodiments of this application; whether the speed determined for the vehicle is safe and reliable mainly depends on the two gray modules 431 and 432 in the decision planning layer 430.
It can be understood that the above scenario is only an example provided by embodiments of this application, and the embodiments are not limited to this scenario.
The specific implementation of a method for determining vehicle speed in embodiments of this application is described in detail below through embodiments with reference to the accompanying drawings.
Referring to FIG. 5, a schematic flowchart of a method for determining vehicle speed in an embodiment of this application is shown. The method may specifically include the following steps 501 to 505:
Step 501: Obtain observation information of the surrounding objects of the vehicle.
Step 502: Calculate, from the observation information of the surrounding objects, the probability distribution of the surrounding objects exhibiting different action intentions.
It can be understood that, in the driving environment of the vehicle, the surrounding objects may include pedestrians, animals, and other objects around the vehicle that may participate in traffic. In the embodiments of this application, the surrounding objects can be understood and described taking pedestrians around the vehicle as an example. The observation information of a surrounding object refers to information that reflects the object's state and can be used to predict the probability of the object performing each action intention.
It can be understood that an action intention refers to the intention of a surrounding object relative to the current lane. For example, in the pedestrian intention diagrams shown in FIG. 6, diagram a represents intention g1: forward along the road; diagram b represents intention g2: backward along the road; diagram c represents intention g3: perpendicular crossing; diagram d represents intention g4: forward diagonal crossing; diagram e represents intention g5: moving away; diagram f represents intention g6: backward diagonal crossing; diagram g represents intention g7: stopping. In one case, if there is only one surrounding object, an action intention of the surrounding objects refers to one intention of that object; for example, if the only surrounding object is pedestrian A, and each pedestrian has 2 possible intentions (waiting, crossing), then the action intentions of the surrounding objects are the 2 intentions: A waits and A crosses. In another case, if there are at least two surrounding objects, an action intention of the surrounding objects denotes a combination of one intention per object; for example, if the surrounding objects are pedestrians A and B, each with 2 possible intentions (waiting, crossing), then the action intentions of the surrounding objects comprise 2×2 = 4 combinations: {A waits, B waits}, {A waits, B crosses}, {A crosses, B waits}, and {A crosses, B crosses}.
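The joint intention space described above is the Cartesian product of each surrounding object's individual intentions. A minimal sketch of the two-pedestrian example (names are illustrative):

```python
# Sketch: the joint action-intention space for multiple surrounding objects is
# the Cartesian product of each object's individual intentions.

from itertools import product

intentions = ["wait", "cross"]  # per-pedestrian intentions (illustrative)
pedestrians = ["A", "B"]

joint_intentions = list(product(intentions, repeat=len(pedestrians)))
# e.g. ('wait', 'cross') means pedestrian A waits while pedestrian B crosses
```

With 2 intentions per pedestrian and 2 pedestrians this yields 2×2 = 4 combinations, matching the example in the text; the space grows exponentially with the number of objects, which is one motivation for the particle representation used later.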
In specific implementation, the probability of the surrounding objects exhibiting each action intention can be calculated from the obtained observation information, as the occurrence probability of that intention, thereby obtaining the probability distribution of the surrounding objects exhibiting the different action intentions. The occurrence probability refers to how likely each action intention of each surrounding object is to occur.
In an example, step 502 may specifically include: establishing, from the observation information of the surrounding objects and using the road on which the vehicle is driving as the coordinate system, the relative position relationship and relative motion relationship between the surrounding objects and the road; and calculating the probability distribution of the surrounding objects exhibiting different action intentions from these relationships.
It can be understood that the road coordinate system (i.e., the S-L coordinate system) takes the starting point of the road path as the origin; the direction along the road the vehicle will drive is the positive S axis, and the direction perpendicular to the positive S axis and pointing left is the positive L axis; see FIG. 7 for details.
In some implementations, steps 501 to 502 may specifically predict the occurrence probability of each action intention of a surrounding object from its observation information at the previous and current moments. The specific implementation may include: S11, obtaining observation information of the surrounding objects of the vehicle; S12, judging whether each surrounding object is a new surrounding object; if so, executing S13, otherwise executing S14; S13, initializing the occurrence probability of each action intention of the surrounding object; S14, updating the occurrence probability of each action intention of the surrounding object based on the observation information. Note that once the occurrence probability of each action intention of a surrounding object is determined, the probability distribution of the object's different action intentions can be determined on that basis.
Note that the observation information of the surrounding objects obtained in step 501 (i.e., S11) may specifically be collected by the front-view camera 201, radar 202, and GPS 203 in FIG. 2, FIG. 3, or FIG. 4, and sent respectively to the vehicle/surrounding-object detection module 421, ego-vehicle localization module 424, and dynamic/static object detection module 425 of the perception layer 420 in FIG. 4 for separate processing; the three processing results are then sent to the perception fusion module 426 for data association, fusion, and tracking. Step 502 (i.e., S12 to S14) may specifically be executed by the CPU 205 in FIG. 2 (the decision planning module 222 of the on-board computer system 220 in FIG. 3, or the pedestrian intention distribution prediction and update module 431 in the decision planning layer 430 in FIG. 4).
As an example, S11 may specifically include: collecting environmental data around the vehicle and performing filtering, multi-sensor data association and fusion, tracking, and other processing on it to obtain first observation information of the surrounding objects in a Cartesian coordinate system. The first observation information of a surrounding object may include: the object's position, its movement speed, and its movement direction. Note that first observation information must be obtained for every surrounding object participating in traffic; to provide a data basis for subsequent calculations, first observation information of the ego vehicle is also needed, including: vehicle position, vehicle speed, vehicle acceleration, and vehicle heading.
For example, still taking the traffic scenario shown in FIG. 1, the first observation information of pedestrian 102 obtained through S111 can be expressed as: O_pedestrian102 = {position: (x_pedestrian102, y_pedestrian102), speed: V_pedestrian102, direction: θ_pedestrian102}; that of pedestrian 103 as: O_pedestrian103 = {position: (x_pedestrian103, y_pedestrian103), speed: V_pedestrian103, direction: θ_pedestrian103}; and that of vehicle 101 as: O_vehicle101 = {position: (x_vehicle101, y_vehicle101), speed: V_vehicle101, acceleration: a_vehicle101, heading: θ_vehicle101}.
In specific implementation, to consider from the vehicle's perspective whether a surrounding object may encroach on the road it is about to drive, the surrounding objects must be observed in the S-L coordinate system; the positions in the first observation information therefore need to be transformed from the Cartesian coordinate system to the S-L coordinate system, and the resulting positions serve as the positions in the second observation information. The transformation may include: vertically projecting the original position point in the Cartesian coordinate system onto the direction of the road the vehicle will drive, at a projection point; reading the distance from the start of the road to the projection point as the S-axis value; and calculating the distance from the original position point to the projection point as the L-axis value. Referring to FIG. 7, a vehicle–surrounding object–road model can be constructed; since this model uses the S-L coordinate system as the reference frame to describe the relative position and motion relationships among the vehicle, the surrounding objects, and the road, it can be used to calculate the second observation information. For example, for the pedestrian and vehicle shown in FIG. 7, suppose the pedestrian's first observation information includes position (x_pedestrian, y_pedestrian) and the vehicle's includes position (x_vehicle, y_vehicle); then, referring to FIG. 7, the pedestrian's position transformed into the S-L coordinate system is (s_pedestrian, l_pedestrian), and the vehicle's is (s_vehicle, l_vehicle).
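The Cartesian-to-S-L transformation described above can be sketched as follows. This is a simplified illustration under assumptions: the road is represented as a polyline of path points, the projection point is approximated by the nearest path vertex rather than a continuous projection onto the path segments, and the lateral offset is returned unsigned; function names are illustrative.

```python
# Sketch: converting a Cartesian position to the S-L (road) coordinate system
# by projecting onto a polyline path, as described above.

import math

def to_sl(point, path):
    # Cumulative arc length from the path start to each vertex.
    arc = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        arc.append(arc[-1] + math.hypot(x1 - x0, y1 - y0))
    # Nearest vertex plays the role of the projection point (simplification).
    i = min(range(len(path)),
            key=lambda k: math.hypot(point[0] - path[k][0],
                                     point[1] - path[k][1]))
    s = arc[i]  # distance along the road from its start to the projection point
    l = math.hypot(point[0] - path[i][0],
                   point[1] - path[i][1])  # unsigned lateral offset
    return s, l

path = [(0.0, 0.0), (10.0, 0.0), (20.0, 0.0)]  # straight road along the x axis
s, l = to_sl((10.0, 3.0), path)                # pedestrian 3 m off the road
```

For a straight road along x, a pedestrian at (10, 3) maps to s = 10 (arc length to the projection point) and l = 3 (distance from the original point to the projection point), matching the two-step construction in the text.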
For example, still taking the traffic scenario shown in FIG. 1, the second observation information of pedestrian 102 obtained through S112 can be expressed as: O'_pedestrian102 = {(s_pedestrian102, l_pedestrian102), V_pedestrian102, θ_pedestrian102}; that of pedestrian 103 as: O'_pedestrian103 = {(s_pedestrian103, l_pedestrian103), V_pedestrian103, θ_pedestrian103}; and that of vehicle 101 as: O'_vehicle101 = {(s_vehicle101, l_vehicle101), V_vehicle101, a_vehicle101, θ_vehicle101}.
需要说明的是,在根据上述实现方式获得车辆的周边对象的观测信息后,即可执行S12,判断每个周边对象是否为新的周边对象,对于新的周边对象,按照S13初始化该周边对象每种意图的出现概率;对于已有的周边对象,则按照S14基于观测信息更新该周边对象每种意图的出现概率。可以理解的是,S13和S14均基于周边对象与道路之间的相对位置关系及相对运动关系,计算得到。
对于S12,可以根据判断当前时刻观测到的周边对象,在当前时刻之前是否已经被观测到,来确定该周边对象是否为新的周边对象。如果在当前时刻观测到的周边对象,在当前时刻之前未被观测到过,说明该周边对象为新出现在该车辆周边的对象,即可确定该周边对象为新的周边对象;反之,如果在当前时刻观测到的周边对象,在当前时刻之前也被观测到了,说明该周边对象为在当前时刻之前已有的周边对象,即可确定该周边对象不是新的周边对象。
对于S13,初始化新的周边对象的出现概率,由于该周边对象是新观测到的,对其的行动意图没有其他的数据依据,因此,可以根据周边对象可能出现的行动意图的数量确定,即,将每种可能出现的行动意图的出现概率视作相等。例如:假设新的周边对象A有7种可能出现的行动意图,那么,预测出的7种行动意图中每种行动意图的出现概率均相等,为1/7。
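S13的等概率初始化可以用如下Python代码示意（函数名为假设）：

```python
def init_intention_distribution(intentions):
    """新周边对象的行动意图出现概率初始化：由于没有其他数据依据，
    将每种可能出现的行动意图的出现概率视作相等。"""
    p = 1.0 / len(intentions)
    return {g: p for g in intentions}
```

例如7种行动意图g1~g7时，每种意图的初始出现概率均为1/7。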
对于S14，基于观测信息更新该周边对象每种意图的出现概率，具体可以根据距离当前时刻最近一个时刻该行动意图的出现概率、当前时刻和距离当前时刻最近一个时刻的位置、以及该行动意图的更新模型在S-L坐标系S方向和L方向上的均值和对应的方差确定。例如，更新模型可以是高斯运动模型。
举例来说,假设在t=0时刻,只观测到行人102,在t=1时刻,观测到行人102和行人103。
在t=0时刻在直角坐标系下获得行人102的观测信息为
Figure PCTCN2020102644-appb-000001
行人102经过S12的判断,确定其为新的周边对象,则需要对其状态和行动意图的出现概率分布进行初始化,具体操作可以包括:以车辆将要行驶的路径path={(x 0,y 0),(x 1,y 1),(x 2,y 2),...,(x n,y n)}为参考坐标系(其中,n为参考坐标系中包括的点的数量),计算行人102的位置
Figure PCTCN2020102644-appb-000002
在path上的投影点(x i,y i),并计算从path的起点(x 0,y 0)沿着path到投影点(x i,y i)的距离
Figure PCTCN2020102644-appb-000003
与投影点(x i,y i)之间的距离为
Figure PCTCN2020102644-appb-000004
则，行人102初始化的状态可以为
Figure PCTCN2020102644-appb-000005
其中,
Figure PCTCN2020102644-appb-000006
是指行人102在t=0时刻下多种行动意图的出现概率的分布,由于行人102是新的周边对象,确定
Figure PCTCN2020102644-appb-000007
其中,行人102的行动意图包括:g1~g7共7种,每种行动意图的概率均相等(即为1/7)。可以理解的是,
Figure PCTCN2020102644-appb-000008
表示行人102在t=0时刻下出现g1行动意图的概率,
Figure PCTCN2020102644-appb-000009
表示行人102在t=0时刻下出现g2行动意图的概率，以此类推，不再赘述。
在t=1时刻,在直角坐标系下获得行人102和行人103的观测信息分别为
Figure PCTCN2020102644-appb-000010
经过S12的判断,确定行人103为新的周边对象,则需要对其状态和行动意图分布进行初始化,确定行人102为已有的周边对象,则基于观测信息更新该行人102每种行动意图的出现概率。其中,对行人103进行状态和意图分布初始化的过程,可以参见在t=0时刻对行人102的初始化过程的描述,这里不再赘述。对于行人102每种行动意图的更新过程,具体可以包括:第一步,根据行人102的观测信息
Figure PCTCN2020102644-appb-000011
通过垂直投影和距离计算得到行人102在t=1时刻S-L坐标系下的位置和速度
Figure PCTCN2020102644-appb-000012
第二步,按照高斯运动模型,对行人102在t=1时刻的行动意图分布
Figure PCTCN2020102644-appb-000013
进行更新，具体是对每种行动意图的出现概率进行更新。关于第二步，以t=1时刻g1行动意图的出现概率为例进行说明，具体可以通过下述公式(1)计算出t=1时刻g1行动意图更新后的出现概率：
Figure PCTCN2020102644-appb-000014
其中,
Figure PCTCN2020102644-appb-000015
Figure PCTCN2020102644-appb-000016
可以通过下述公式(2)计算得到:
Figure PCTCN2020102644-appb-000017
其中,
Figure PCTCN2020102644-appb-000018
Figure PCTCN2020102644-appb-000019
是采用高斯运动模型,在S-L坐标系中,行人102在g1行动意图S方向和L方向上的均值;σ s和σ l是采用高斯运动模型,在S-L坐标系中,行人102在g1行动意图S方向和L方向上的标准差。
可以理解的是，行人102在t=1时刻，g2~g7行动意图的出现概率的更新方式，与t=1时刻g1行动意图出现概率的更新方式类似，这里不再赘述。更新之后，在t=1时刻，行人102的行动意图的出现概率的分布可以表示为：
Figure PCTCN2020102644-appb-000020
可以理解的是,
Figure PCTCN2020102644-appb-000021
表示行人102在t=1时刻下出现g1行动意图的概率,
Figure PCTCN2020102644-appb-000022
表示行人102在t=1时刻下出现g2行动意图的概率，以此类推，不再赘述。
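上述基于高斯运动模型的意图概率更新，其总体思路是：用两个时刻间的实际位移在各意图高斯模型下的似然，对上一时刻的出现概率做贝叶斯式加权并归一化。由于公式(1)、(2)的原文为图像，下面的Python代码只是按这一思路给出的一种假设性示意，其中的模型参数形式均为假设：

```python
import math

def gauss_pdf(x, mu, sigma):
    """一维高斯概率密度。"""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

def update_intention_distribution(prev_dist, ds, dl, motion_models):
    """按高斯运动模型更新各行动意图的出现概率（贝叶斯式更新的一种示意）。

    prev_dist: {意图: 上一时刻的出现概率}
    ds, dl: 两个时刻间周边对象在S、L方向上的实际位移
    motion_models: {意图: (mu_s, sigma_s, mu_l, sigma_l)}，各意图模型的均值与标准差（假设）
    """
    unnorm = {}
    for g, p in prev_dist.items():
        mu_s, sigma_s, mu_l, sigma_l = motion_models[g]
        # 似然：实际位移在该意图的高斯运动模型下的概率密度
        lik = gauss_pdf(ds, mu_s, sigma_s) * gauss_pdf(dl, mu_l, sigma_l)
        unnorm[g] = p * lik
    z = sum(unnorm.values())
    return {g: v / z for g, v in unnorm.items()}
```

这样，实际位移与哪种意图的运动模型更吻合，该意图更新后的出现概率就越大。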
需要说明的是,在计算行人102的每种行动意图的出现概率分布的方式中,为了涵盖更多的周边对象,全面计算每个周边对象的每种可能出现的行动意图的出现概率,可以引入粒子的概念,通过每种行动意图包括的粒子个数来表示该行动意图的出现概率,具体为:如果行动意图g1包括的粒子个数多,则表示该行动意图的出现概率较大;反之,如果行动意图g1包括的粒子个数少,则表示该行动意图的出现概率较小。
例如:在上述第一种实现方式中对S13的相应举例中,t=0时刻初始化每种行动意图的出现概率分布为
Figure PCTCN2020102644-appb-000023
则，利用预设个数（即，所有行动意图的数量的整数倍，例如：700）的粒子表示，如图8所示，每种行动意图对应一个包含相同粒子的粒子集合。例如：对于700个粒子、7种行动意图、每种行动意图的出现概率均为1/7的情况，每种行动意图对应100个相同粒子的集合，每个粒子的权重为1/700。
对于700个粒子中的一个粒子,其状态可以表示为:particle={s 车辆,l 车辆,v 车辆,s 行人j,l 行人j,v 行人j,g 行人j,w},其中,j表示车辆的第j个周边对象,w表示该粒子的权重;粒子集可以表示为:{particle 1,particle 2,......,particle m},其中,m是粒子总数,即700。例如:对于车辆以及车辆的周边对象行人1和行人2,每个粒子的状态可以表示为:
Figure PCTCN2020102644-appb-000024
其中,wi表示第i个粒子的权重,该权重用于表示粒子对应行动意图的风险程度,具体说明参见下述步骤503~步骤505中的相关描述。
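上述粒子化表示可以用如下Python代码示意，其中粒子的状态字段做了简化，函数名为假设：

```python
def make_particle_set(intentions, n_per_intention, base_state):
    """以粒子表示行动意图的出现概率分布：每种意图对应一组粒子，初始权重相等。

    intentions: 行动意图列表（如g1~g7）
    n_per_intention: 每种意图对应的粒子个数（如100）
    base_state: 车辆与各周边对象的状态字段（示意）
    """
    particles = []
    total = len(intentions) * n_per_intention  # 粒子总数m
    for g in intentions:
        for _ in range(n_per_intention):
            p = dict(base_state)
            p["g"] = g             # 该粒子假设的行动意图
            p["w"] = 1.0 / total   # 初始权重，例如1/700
            particles.append(p)
    return particles
```

某意图包含的粒子个数占比即体现该意图的出现概率：粒子越多，出现概率越大。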
在另一些实现方式中,步骤501~步骤502具体可以根据当前周边对象的观测信息,利用已训练的机器学习模型,输出周边对象出现每种行动意图的概率分布。其中,周边对象的观测信息,具体可以是上述实现方式中处理后的包括周边对象的位置、运动速度和运动方向的观测信息,也可以是当前采集的包括周边对象的图像。
该实现方式中，一种情况下，若观测信息为上述处理后的包括周边对象的位置、运动速度和运动方向的观测信息，那么，可以根据大量的历史观测信息及其对应的已知的每种行动意图的出现概率，训练预先构建的第一机器学习模型，得到训练完成的第一机器学习模型；然后，可以将步骤501获取到的周边对象的观测信息，输入到训练完成的第一机器学习模型，输出该周边对象出现每种行动意图的出现概率。
另一种情况下，若观测信息为当前采集的包括周边对象的图像，那么，可以根据大量的历史图像及其对应的已知的每种行动意图的出现概率，训练预先构建的第二机器学习模型，得到训练完成的第二机器学习模型；然后，可以将步骤501获取到的周边对象的观测信息（即，当前采集到的包括周边对象的图像），输入到训练完成的第二机器学习模型，输出该图像中包括的周边对象出现每种行动意图的出现概率。
可以理解的是,通过上述两种实现方式,均可实现根据所述周边对象的观测信息,预测周边对象出现多种行动意图中每种行动意图的出现概率的目的,为后续准确的确定车辆的车速,提高采用自动驾驶等智能驾驶技术的车辆的安全性和可靠性,提供了必不可少的数据基础。
步骤503,根据车辆从车辆的当前位置到不同行动意图下的风险区域的行驶时间,对概率分布进行重分布计算,获得不同行动意图的概率重分布;其中,不同行动意图下的风险区域分别为周边对象在处于不同行动意图时在车辆行驶的车道上所经过的区域。
步骤504,根据车辆距离不同行动意图下的风险区域的行驶时间,预测周边对象在不同行动意图下的运动状态变化。
可以理解的是,为了确保确定的车速更加安全和可靠,需要至少依据周边对象出现每种行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化共同确定车辆的车速;而周边对象出现每种行动意图的概率重分布以及运动状态变化,均需要根据在周边对象出现不同行动意图情况下车辆到达风险区域的行驶时间计算得到。而该行驶时间,用于量化每种行动意图的风险度,即,周边对象按照该行动意图运动与车辆发生碰撞的可能性大小。
可以理解的是，每种行动意图的碰撞时间即为车辆距离周边对象的每种行动意图的风险区域的行驶时间。例如：如图9所示，对于行人的g3行动意图（即，横穿），风险区域为该行人在处于横穿意图时在车辆行驶的车道上所经过的区域A，那么，对应的碰撞时间为：车辆从当前位置行驶到区域A的行驶时间ttc g3；对于该行人的g4行动意图（即，前向斜穿），风险区域为该行人在处于前向斜穿意图时在车辆行驶的车道上所经过的区域B，那么，对应的碰撞时间为：车辆从当前位置行驶到区域B的行驶时间ttc g4。
在一些实现方式中,步骤503之前,本申请实施例中还可以通过步骤501以及下述S21~S24计算出车辆从车辆的当前位置到不同行动意图下的风险区域的行驶时间:S21,获取车辆的观测信息;S22,根据车辆的观测信息和周边对象的观测信息,以车辆所行驶的道路为坐标系,建立车辆与道路的相对位置关系和相对运动状态以及周边对象与道路的相对位置关系和相对运动状态;S23,根据周边对象与道路的相对位置关系和相对运动状态,确定不同行动意图下的风险区域;S24,根据车辆与道路的相对位置关系和相对运动状态以及不同行动意图下的风险区域,计算车辆从车辆的当前位置到不同行动意图下的风险区域的行驶时间。其中,S21的具体实现可以参见步骤501中获取周边对象的观测信息相关描述;S22的具体实现可以参见上述关于S11的坐标变换的相关描述。
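S24中车辆到风险区域的行驶时间，在S-L坐标系下可以近似为车辆当前S坐标与风险区域入口S坐标之差除以车速。下面的Python代码是这一计算在匀速假设下的简化示意（函数名与匀速假设均非原文给出）：

```python
def time_to_risk_region(s_vehicle, v_vehicle, s_region_start, eps=1e-6):
    """估算车辆从当前位置行驶到某行动意图风险区域的时间（简化示意）。

    s_vehicle: 车辆在S-L坐标系S方向上的当前位置
    v_vehicle: 车辆当前速度（近似匀速）
    s_region_start: 风险区域入口在S方向上的位置（假设已由S23确定）
    """
    gap = s_region_start - s_vehicle
    if gap <= 0:
        return 0.0                 # 车辆已处于或越过风险区域
    return gap / max(v_vehicle, eps)  # eps防止除零
```

例如车辆位于s=0、车速10m/s、风险区域入口在s=20时，行驶时间约为2秒。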
可以理解的是，对于某个行动意图，如果该行动意图的碰撞时间长，说明该行动意图对应出现风险的可能性小，即，风险度低；反之，如果该行动意图的碰撞时间短，说明该行动意图对应出现风险的可能性较大，即，风险度高。例如，对于图9中的行人，显然，ttc g4大于ttc g3，表示该行人横穿时，出现与车辆碰撞的可能性大，风险度高，而该行人前向斜穿时，与横穿相比，出现与车辆碰撞的可能性降低，风险度减小。
作为一个示例，在确定了碰撞时间后，步骤503具体可以通过下述S31~S32实现：S31，对概率分布进行粒子化处理，其中，不同行动意图对应的粒子的数量表示不同行动意图的概率分布；S32，根据车辆距离所述不同行动意图下的风险区域的行驶时间，对不同行动意图对应的粒子的权重进行调整，以获得不同行动意图的概率重分布。
对于S31,具体可以参见上述实施例中图8以及相关描述,在此不再赘述。
对于S32中计算车辆距离所述不同行动意图下的风险区域的行驶时间，可以参见上述关于S21~S24相关部分的描述，具体原理不再赘述。为了更加清楚，下面举例说明粒子化处理后计算行驶时间的示例性过程，例如：假设对于车辆以及车辆周边的行人1、行人2，对于粒子i：
Figure PCTCN2020102644-appb-000025
在同一种行动意图
Figure PCTCN2020102644-appb-000026
下,确定行人1风险区域,并计算车辆距离行人1的在该行动意图的风险区域的行驶时间
Figure PCTCN2020102644-appb-000027
确定行人2风险区域,并计算车辆距离行人2的在该行动意图的风险区域的行驶时间
Figure PCTCN2020102644-appb-000028
为了尽可能降低该行人1和行人2出现碰撞的可能性,选取各行人的行驶时间中较小的行驶时间,作为该粒子i的行驶时间ttc i,即,
Figure PCTCN2020102644-appb-000029
对于S32中根据车辆距离所述不同行动意图下的风险区域的行驶时间、对不同行动意图对应的粒子的权重进行调整以获得不同行动意图的概率重分布的操作，考虑到每个粒子的行驶时间（即，碰撞时间）表示了周边对象在某种意图下发生碰撞的风险程度，且行驶时间越短，风险度越高，故，为了提高对高风险度行动意图的重视，可以根据下述公式(3)，基于行驶时间增加风险度高的粒子的权重：
Figure PCTCN2020102644-appb-000030
其中，W表示风险系数，ε表示有效计算常数。这样，当行驶时间ttc i越小，计算出的表示该粒子风险度的粒子权重就越大，可以突出体现该粒子的风险度。另外，为了使计算能够收敛，还可以对权重
Figure PCTCN2020102644-appb-000031
进行归一化处理,具体可以依据下述公式(4)计算出该粒子i的权重
Figure PCTCN2020102644-appb-000032
Figure PCTCN2020102644-appb-000033
可以理解的是,通过上述描述,可以实现根据车辆距离每种行动意图的风险区域的行驶时间,确定每种行动意图的风险度的目的,即,实现了不同行动意图的概率重分布,为后续准确的确定车辆的车速,提高采用自动驾驶等智能驾驶技术的车辆的安全性和可靠性,提供了必不可少的数据基础。
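由于公式(3)、(4)的原文为图像，下面的Python代码只是按文字描述（行驶时间越短权重越大，再做归一化）给出的一种假设性示意，其中权重与行驶时间成反比的具体形式 w_i ∝ W/(ttc_i+ε) 为假设：

```python
def reweight_by_ttc(particles, ttcs, W=1.0, eps=1e-3):
    """按行驶时间（碰撞时间）调整粒子权重并归一化，得到概率重分布。

    particles: 粒子列表，每个粒子带权重字段"w"
    ttcs: 与粒子一一对应的行驶时间ttc_i
    W: 风险系数；eps: 有效计算常数（防止除零）
    """
    # 假设的公式(3)形式：ttc越短，风险度越高，权重越大
    raw = [W / (t + eps) for t in ttcs]
    z = sum(raw)
    for p, r in zip(particles, raw):
        p["w"] = r / z  # 对应公式(4)的归一化处理
    return particles
```

重分布后，高风险意图（ttc小）对应的粒子权重被放大，从而在后续速度决策中得到更多重视。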
作为一个示例,步骤504具体可以通过下述S41~S42预测周边对象在不同行动意图下的运动状态变化:S41,根据车辆距离不同行动意图下的风险区域的行驶时间,确定不同行动意图下所述周边对象改变行动意图的概率;S42,根据不同行动意图下所述周边对象改变行动意图的概率与随机概率,预测周边对象在不同行动意图下的运动状态变化。
可以理解的是，S41可以根据每种行动意图的碰撞时间ttc，进一步确定出车辆与其周边对象的交互概率。一种情况下，如果周边对象的某个行动意图的交互概率过大，则，可以根据该交互概率改变当前的行动意图，调整到目标意图。该目标意图即为周边对象调整后的行动意图。例如：若行人1在g1行动意图情况下的碰撞时间ttc对应的交互概率非常大，则，可以根据该交互概率确定行人1的目标意图g2。另一种情况下，如果周边对象的某个行动意图的交互概率较小，则，可以根据该交互概率确定目标意图仍然为当前的行动意图。该目标意图与周边对象调整前的行动意图一致。例如：若行人1在g1行动意图情况下的碰撞时间ttc对应的交互概率很小，则，可以根据该交互概率确定行人1的目标意图g1。
可以理解的是，若周边对象与车辆在某行动意图下的碰撞时间较短，即，风险较高，此时，周边对象一般会更加谨慎；反之，若周边对象与车辆在某行动意图下的碰撞时间较长，即，风险较低，此时，周边对象一般会比较放松。基于此，引入基于碰撞时间ttc计算得到的交互概率，可以比较真实地模拟出符合行人心理的运动状态。具体实现时，假设行人1与车辆的碰撞时间
Figure PCTCN2020102644-appb-000034
行人2与车辆的碰撞时间
Figure PCTCN2020102644-appb-000035
例如:行人1与车辆的交互概率可以是:
Figure PCTCN2020102644-appb-000036
同理,行人2与车辆的交互概率可以是:
Figure PCTCN2020102644-appb-000037
其中,W interact为交互概率系数。
具体实现时,S42可以通过周边对象状态预测模型和计算出的交互概率,确定采用周边对象交互运动模型还是周边对象线性运动模型,去确定周边对象的运动状态变化。其中,考虑到行人随意性大,其与车辆发生交互的概率也是随机的,因此,引入随机概率P random,来确定计算初始期望值应该采用的模型,具体确定过程包括:第一步,判断交互概率P r是否大于随机概率P random,如果是,则执行第二步,否则,执行第三步;第二步,利用周边对象交互运动模型预测周边对象在不同行动意图下的运动状态变化;第三步,利用周边对象线性运动模型预测周边对象在不同行动意图下的运动状态变化。
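上述"比较交互概率与随机概率、据此选择交互运动模型或线性运动模型"的三步判断，可以用如下Python代码示意（两种运动模型以可调用对象传入，均为假设的接口）：

```python
import random

def predict_pedestrian_step(state, p_interact, interact_model, linear_model, rng=random):
    """根据交互概率P_r与随机概率P_random选择运动模型，预测周边对象的下一步运动状态。

    state: 周边对象当前状态
    p_interact: 该行动意图下基于碰撞时间ttc计算出的交互概率P_r
    interact_model / linear_model: 周边对象交互运动模型 / 线性运动模型（假设的可调用对象）
    """
    if p_interact > rng.random():     # 第一步：P_r大于随机概率P_random
        return interact_model(state)  # 第二步：行人会与车辆交互（避让）
    return linear_model(state)        # 第三步：行人按线性运动模型正常行走
```

交互概率越大（碰撞时间越短），越可能走交互运动模型的分支，符合"风险高时行人更谨慎"的直觉。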
以图10中的场景为例进行说明，假设行人1的行动意图为：g3垂直横穿，如果不采用该周边对象交互运动模型，则行人1完全当做车辆不存在，正常运动至图10中的②号位置；如果采用该周边对象交互运动模型，则行人1很可能出于安全考虑，运动至①号位置避让车辆。类似的，由于行人2与车辆的碰撞时间较大，其与车辆的交互概率较小，那么，行人2通过周边对象线性运动模型运动至④号位置的可能性大于通过周边对象交互运动模型运动至③号位置的可能性。
关于周边对象线性运动模型,考虑到周边对象的位置和速度的观测信息误差较大,因此,该周边对象线性运动模型中,周边对象的运动状态可以设置为符合方差较大的高斯分布。该周边对象线性运动模型具体定义如下:
Figure PCTCN2020102644-appb-000038
其中,Δt表示预测的单步步长,通常较小,例如Δt可以取0.3秒。假设在Δt时间内行人的行动意图是不变的,即,g′ 行人=g 行人。另外,这里f s(g 行人)和f l(g 行人)代表不同行动意图在S-L坐标系的S、L方向上的运动方向分量,即,周边对象在不同行动意图下,其运动方向是不同的;μ 行人s、μ 行人l代表周边对象线性运动模型在S、L方向上运动距离的均值,
Figure PCTCN2020102644-appb-000039
代表周边对象线性运动模型在S、L方向上运动距离的方差,μ 行人v
Figure PCTCN2020102644-appb-000040
分别代表周边对象线性运动模型在S、L方向上运动速度的均值和方差。
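该线性运动模型的单步预测，可以用如下Python代码示意：按意图方向分量推进均值位置，再叠加方差较大的高斯噪声。由于模型公式原文为图像，具体的均值、方差取法为假设：

```python
import random

def linear_motion_step(s, l, v, g_dir, dt=0.3, sigma_pos=0.2, rng=random):
    """周边对象线性运动模型的一步预测（示意）：位移服从高斯分布。

    s, l, v: 周边对象当前S、L位置与速度
    g_dir: 该行动意图在S、L方向上的单位运动方向分量(f_s, f_l)（假设的输入）
    dt: 预测单步步长，通常较小，例如0.3秒；Δt内假设行动意图不变
    sigma_pos: 位置噪声标准差，取较大值以反映观测误差较大（假设）
    """
    f_s, f_l = g_dir
    mu_s = s + f_s * v * dt  # S方向运动距离的均值
    mu_l = l + f_l * v * dt  # L方向运动距离的均值
    return (rng.gauss(mu_s, sigma_pos), rng.gauss(mu_l, sigma_pos), v)
```

对交互运动模型，只需把这里的均值推进替换为与车辆交互时的运动变化函数F_s、F_l即可。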
关于周边对象交互运动模型,具体定义如下:
Figure PCTCN2020102644-appb-000041
其中,F s(v 行人,Δt,g 行人)和F l(v 行人,Δt,g 行人)代表行人基于不同行动意图下与车辆发生交互时,在S-L坐标系S、L方向上的运动变化函数。
需要说明的是,步骤503和步骤504在执行上没有先后顺序,可以先执行步骤503再执行步骤504,也可以先执行步骤504再执行步骤503,还可以同时执行步骤503和步骤504,具体方式不作限定。
步骤505,根据不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化,确定车辆的行驶速度。
可以理解的是,可以基于不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化这三方面的因素,确定出车辆合适的车速,一种情况下,可以通过上述三方面的因素,确定出车辆的加速度,以该加速度控制车辆行驶;另一种情况下,还可以通过上述三方面的因素,确定出车辆的加速度,从而根据该加速度和车辆当前的速度,确定车辆的待行驶速度,控制车辆以该待行驶速度行驶。
具体实现时,步骤505具体可以通过下述S51~S53实现:S51,根据不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化,估计车辆在所述不同行驶速度控制动作下的行驶效果;S52,根据车辆在不同行驶速度控制动作下的行驶效果,从不同行驶速度控制动作中选择目标行驶速度控制动作;S53,根据目标行驶速度控制动作,确定所述车辆的行驶速度。
其中,S51具体可以建立车辆状态预测模型,基于该车辆状态预测模型预测出车辆在不同行驶速度控制下的行驶效果,即,车辆以不同的加速度行驶时车辆的运动状态变化。
对于车辆状态预测模型，考虑到车辆的位置、速度等状态量的观测信息误差较小，因此，该车辆状态预测模型中，车辆的运动状态可以设置为符合方差较小的高斯分布。该车辆状态预测模型具体定义如下：
Figure PCTCN2020102644-appb-000042
其中,μ 车辆s、μ 车辆l代表车辆状态预测模型在S、L方向上运动距离的均值,
Figure PCTCN2020102644-appb-000043
代表车辆状态预测模型在S、L方向上运动距离的方差,μ 车辆v
Figure PCTCN2020102644-appb-000044
分别代表车辆状态预测模型在S、L方向上运动速度的均值和方差。
可以理解的是，考虑到周边对象的行动意图是一个不确定的因素，可以采用部分可观察马尔可夫决策过程（英文：Partially Observable Markov Decision Process，简称：POMDP），进行最优速度的决策规划。可以理解的是，该POMDP具有部分可观测性，即，通过一个通用的数学模型决策规划后，预测不确定性环境下不可观测的部分的行动意图。其中，该数学模型一般可以包括状态集合S、动作集合A、状态转移函数T、观测集合O、观测函数Z以及回报函数R。下面结合图1对应的场景，对该数学模型中包括的内容进行定义：
状态空间S：是指环境中动态、静态实体的所有可能状态的集合，即，车辆、行人1（即上文中行人102）和行人2（即上文中行人103），则S={s|s∈[s 车辆,l 车辆,v 车辆,s 行人1,l 行人1,v 行人1,g 行人1,s 行人2,l 行人2,v 行人2,g 行人2]}。
动作空间A:是指自动驾驶或无人驾驶的车辆所可能采取的加速度动作集合,为了描述方便,通常将提取的常用加速度范围进行离散化处理,也可以理解为对应的档位,例如:A={-3,-2,-1,0,0.5,1,2,3},即,该车辆可以采用8个不同的初始加速度行驶。
状态转移函数T:是POMDP的核心部分,该函数T重点描述状态随着时间的转移过程,为最优动作的选择提供决策基础。对于车辆来说,状态转移函数T可以表示车辆在状态{s 车辆,l 车辆,v 车辆}下执行A中的加速度a后将转移到状态{s′ 车辆,l′ 车辆,v′ 车辆};对于行人1来说,则在当前状态{s 行人1,l 行人1,v 行人1,g 行人1}下是按照行动意图g 行人1运动将会转移到状态{s′ 行人1,l′ 行人1,v′ 行人1,g′ 行人1}。
观测空间O:通常与状态空间相对应,表示车辆、行人1和行人2的观测信息集合,O={o|o∈[o 车辆,o 行人1,o 行人2]},其中,O 车辆={位置:(x 车辆,y 车辆),速度:V 车辆,加速度:a 车辆,航向:θ 车辆},O 行人1={位置:(x 行人1,y 行人1),运动速度:V 行人1,运动方向:θ 行人1},O 行人2={位置:(x 行人2,y 行人2),运动速度:V 行人2,运动方向:θ 行人2}。
观测函数Z:表示车辆、行人1和行人2在采取加速度a后转移到状态s’后得到观测z的概率,即,Z(z,s′,a)=P(z|s′,a)。假设车辆以及行人的位置和速度相对于真实的位置和速度来说均是符合高斯分布,那么,由于车辆位置和速度的观测信息误差较小,而行人位置和速度的观测信息误差较大,故,车辆和行人的两种高斯分布的方差不同,车辆的高斯运动模型的高斯分布方差较小,而行人的运动模型的高斯分布方差较大。
回报函数Reward:用于对所决策加速度进行定量评估,可以从碰撞程度进行评估,也可以根据碰撞程度结合通行阻碍程度进行评估,还可以根据碰撞程度结合乘车不适程度进行评估,也可以根据碰撞程度、通行阻碍程度和乘车不适程度进行评估。其中,碰撞程度用于体现安全性、通行阻碍程度用于体现通行效率、乘车不适程度可以体现舒适性。需要说明的是,还可以基于目的性评估所决策的加速度。
例如：假设仅通过碰撞程度R_col评估所决策的加速度，那么，Reward=R_col；又例如：假设通过碰撞程度R_col、通行阻碍程度R_move和乘车不适程度R_action评估所决策的加速度，那么，Reward=R_col+R_move+R_action。
在介绍完POMDP中的定义后,下面对步骤505中S51的示例性具体实现方式进行说明。
可以理解的是，可以遍历车辆的所有可能的加速度，分别通过车辆状态预测模型预测出动态变化的[s 车辆’,l 车辆’,v 车辆’]，将其和[s 行人’,l 行人’,v 行人’,g 行人’]进行比较，确定车辆是否与周边对象发生碰撞，如果不发生碰撞，则，确定该加速度对应的碰撞程度为0；如果发生碰撞，则，可以将车辆执行该加速度后发生碰撞时的速度确定为v 车辆’，并将碰撞程度R_col直接作为初始期望值Reward，例如：该行驶效果可以通过下述公式(5)计算得到：
Reward=R col=w 1*(v′ 车辆+c)              公式(5)
其中,w1为设置的固定系数,v′ 车辆表示执行当前的加速度后发生碰撞时车辆的速度,c为常数。
举例说明，假设包括车辆和一个行人，对于动作集合A中的8个初始加速度，假设判断出其中3个加速度会发生碰撞，那么，针对该3个加速度可以分别计算得到3个对应的碰撞程度R_col1、R_col2和R_col3，其余5个不发生碰撞的加速度对应的碰撞程度为0。
在根据上述方式得到每个不同行驶速度控制动作(即,不同加速度)下对应的行驶效果后,可以执行S52“根据所述车辆在所述不同行驶速度控制动作下的行驶效果,从所述不同行驶速度控制动作中选择目标行驶速度控制动作”的操作。可以理解的是,由于行动意图的出现概率分布b被粒子化处理,得到粒子集合P={particle 1,particle 2,......,particle m},表示出现概率分布b到带权重w i的粒子集合之间的映射关系:b→P,那么,可以通过粒子集合中包含各种行动意图的粒子个数和权重来确定目标期望值。例如:对于行人1的g1行动意图,其出现概率可以表示为:
Figure PCTCN2020102644-appb-000045
其中,k满足条件:第k个粒子particle k的行动意图
Figure PCTCN2020102644-appb-000046
w k表示第k个粒子particle k的权重值。
对于S51,作为一个示例,该操作过程可以包括:考虑到预测步长为Δt,基于当前初始的概率分布b 0→P={particle 1,particle 2,......,particle m};预测N步即T=NΔt,则对于加速度a来说,a∈A={-3,-2,-1,0,0.5,1,2,3},其N步的行驶效果为
Figure PCTCN2020102644-appb-000047
其中
Figure PCTCN2020102644-appb-000048
γ为折扣因子,一般取小于1的数值,随着预测步数N的增大,其对当前时刻决策的影响越小,相当于一个时序衰减因子。需要说明的是,出现每种行动意图的概率具体体现在累加相同行动意图的粒子,由相同行动意图的粒子个数多少体现该行动意图的出现概率的大小;每种行动意图发生碰撞的风险程度具体体现在每个粒子的权重w k上。
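上述N步推演的行驶效果G(b0, a)的计算结构（对每个带权粒子前向推进N步、按折扣因子γ累计回报），可以用如下Python代码示意。单步回报函数与状态推进函数以可调用对象传入，均为假设的接口：

```python
def n_step_value(particles, action, step_reward, n_steps, step_model, gamma=0.95):
    """对加速度action做N步前向推演，按粒子权重与折扣因子累计行驶效果（示意）。

    particles: 带权重"w"的粒子集合，表示当前的概率重分布b0
    step_reward(particle, action): 单步回报Reward（假设的接口）
    step_model(particle, action): 单步状态推进，返回推进后的粒子（假设的接口）
    gamma: 折扣因子，小于1，预测步数越远对当前决策影响越小
    """
    total = 0.0
    for p in particles:
        p = dict(p)  # 复制，避免修改原粒子
        for n in range(n_steps):
            total += (gamma ** n) * p["w"] * step_reward(p, action)
            p = step_model(p, action)
    return total
```

相同行动意图的粒子个数体现该意图的出现概率，粒子权重体现其风险度，二者都自然地进入了累加。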
作为另一个示例,还可以将每种行动意图的碰撞时间体现在交互概率中,基于交互概率确定出调整后的目标意图,基于目标意图来计算车辆采用目标加速度的行驶效果对应的值,故,计算N步的目标期望值具体可以通过:
Figure PCTCN2020102644-appb-000049
其中
Figure PCTCN2020102644-appb-000050
需要说明的是,每种行动意图的出现概率具体体现在:累加相同行动意图的粒子,由相同行动意图的粒子个数多少体现该行动意图的出现概率的大小;每种行动意图发生碰撞的风险程度具体体现在:根据碰撞时间计算交互概率,确定每种行动意图对应的目标意图,Reward(particle k,a)基于目标意图进行计算。
作为再一个示例，可以理解的是，为了更加突出地体现风险度对于确定车速的重要性，提高所确定车速的可靠性和安全性，还可以结合上述两种示例中的方式，多重融合地计算车辆采用目标加速度时每种行动意图对应的行驶效果的值。其中，既以表示相同行动意图的粒子的粒子权重体现风险度对车速的影响，又基于交互概率体现风险度对车速的影响。例如：计算N步的行驶效果对应的值具体可以通过：
Figure PCTCN2020102644-appb-000051
其中
Figure PCTCN2020102644-appb-000052
需要说明的是,每种行动意图的出现概率具体体现在:累加相同行动意图的粒子,由相同行动意图的粒子个数多少体现该行动意图的出现概率的大小;每种行动意图下发生碰撞的风险程度具体体现在:第一,每种行动意图对应的每个粒子的权重w k上;第二,根据碰撞时间计算交互概率,确定每种行动意图对应的运动状态变化,Reward(particle k,a)基于每种行动意图对应的运动状态变化进行计算。
可以理解的是,根据上述示例的计算方法,可以针对A中的8个初始加速度进行遍历,即,该每个加速度作为目标加速度,得出目标加速度对应的行驶效果,最终,可以计算出8个对应的行驶效果,分别可以表示为:G(b 0,-3)、G(b 0,-2)、G(b 0,-1)、G(b 0,0)、G(b 0,0.5)、G(b 0,1)、G(b 0,2)和G(b 0,3)。需要说明的是,行驶效果用于表示基于当前出现各种行动意图的概率重分布,执行目标加速度a后所获取的回报函数值,行驶效果对应的值越小,就代表安全性越差,反之,行驶效果对应的值越大,就代表安全性越好。
基于此,可以理解的是,根据行驶效果对应的值的含义可知,行驶效果对应的值越大,表示安全性越好,则,S52具体可以从多个行驶效果对应的值中选取最大的行驶效果对应的值,并将其对应的加速度确定为目标行驶速度控制动作,即,目标加速度。
例如:假设确定的8个目标期望值中:G(b 0,-3)、G(b 0,-2)、G(b 0,-1)、G(b 0,0)、G(b 0,0.5)、G(b 0,1)、G(b 0,2)和G(b 0,3),最大值为G(b 0,2),则选择该G(b 0,2)对应的初始加速度2为目标加速度,即,目标行驶速度控制动作a=2。
对于S53,一种情况下,可以将目标加速度直接发送给控制器,由控制器控制该车辆以该目标加速度行驶;另一种情况下,也可以根据该目标加速度和当前速度,计算出车辆的目标速度,例如:目标速度为:v=v 0+a*Δt,其中,a为目标加速度,v 0为当前速度,将该目标速度v发送给控制器,由控制器控制该车辆以该目标速度v行驶。
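S52的目标加速度选择与S53的待行驶速度计算，可以用如下Python代码示意（函数名为假设）：

```python
def choose_target_acceleration(values):
    """S52：从各加速度的行驶效果中选取值最大的作为目标加速度。

    values: {加速度a: 行驶效果G(b0, a)}，值越大代表安全性越好。
    """
    return max(values, key=values.get)

def target_speed(v0, a, dt):
    """S53：根据目标加速度与当前速度计算待行驶速度，即 v = v0 + a*Δt。"""
    return v0 + a * dt
```

例如8个加速度档位的行驶效果中G(b0,2)最大时，目标加速度取2，再结合当前速度得到待行驶速度。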
需要说明的是,上述实现方式中仅通过碰撞程度R_col确定行驶效果对应的值Reward,也可以结合通行阻碍程度R_move和/或乘车不适程度R_action确定行驶效果对应的值Reward。其中,通行阻碍程度R_move根据车辆采用目标加速度时达到的车速与车道的限速来确定,初始期望值Reward还根据车辆采用目标加速度时的通行阻碍程度R_move来确定,该情况下,
Figure PCTCN2020102644-appb-000053
其中，w2为设置的固定系数，v′ 车辆为车辆采用目标初始加速度时达到的车速，vmax为当前车道的限速。其中，乘坐不适程度R_action根据目标加速度以及目标加速度与上一时刻的目标加速度之差来确定，初始期望值Reward还根据车辆采用目标加速度时的乘坐不适程度R_action来确定，该情况下，Reward=R_col+R_action，R_action=w3*f(action_current)+w4*f(action_current-action_last)，其中，w3、w4为设置的固定系数，action_current表示当前所采取的目标加速度，action_last表示上一时刻所采取的目标加速度；f(action_current)表示当前目标加速度所产生的舒适度回报，用于抑制加速度过大带来的乘车不适；f(action_current-action_last)表示当前目标加速度变化所产生的舒适度回报，用于抑制加速度变化过大带来的乘车不适。需要说明的是，也可以根据碰撞程度R_col、通行阻碍程度R_move和乘车不适程度R_action共同确定初始期望值Reward，每种确定行驶效果对应的值Reward的方式均可以参见上述仅基于碰撞程度R_col确定行驶效果对应的值Reward的实现方式，这里不再赘述。
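多项回报的组合可以用如下Python代码示意。公式(5)只给出了R_col=w1*(v′+c)的形式，R_move与R_action的具体函数形式原文未完整给出，这里分别以"车速与限速之差"和"加速度及其变化量的二次惩罚"作为假设，系数取值也均为假设：

```python
def reward(v_collision, v_after, v_max, a_cur, a_last,
           w1=-1.0, w2=0.1, w3=-0.05, w4=-0.05, c=1.0):
    """回报函数的一种假设性组合示意：Reward = R_col + R_move + R_action。

    v_collision: 发生碰撞时的车速，未发生碰撞则为None
    v_after: 采用目标加速度后达到的车速；v_max: 当前车道限速
    a_cur, a_last: 当前与上一时刻的目标加速度
    w1取负，使碰撞速度越高惩罚越大（公式(5)系数符号为假设）
    """
    r_col = w1 * (v_collision + c) if v_collision is not None else 0.0  # 碰撞程度
    r_move = w2 * (v_after - v_max)                      # 通行阻碍：低于限速越多惩罚越大
    r_action = w3 * a_cur ** 2 + w4 * (a_cur - a_last) ** 2  # 乘车不适
    return r_col + r_move + r_action
```

这样，发生碰撞、车速远低于限速、加速度过大或变化过大的决策，都会拉低对应的行驶效果值。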
需要说明的是,在几种实现方式中,仅以基于碰撞程度R_col确定初始期望值Reward的实现方式为例进行说明,基于其他参数确定初始期望值的实现方式与之类似,这里不再赘述。
需要说明的是，步骤503~步骤504具体可以由图2中的CPU 205（图3车载计算机系统220的决策规划模块222或者图4决策规划层430中行人意图分布预测更新模块431）执行。步骤505具体可以由图2中的CPU 205（图3车载计算机系统220的决策规划模块222中的速度决策规划单元或者图4决策规划层430中速度决策规划模块432）执行。
可见,在自动驾驶等场景中,本申请实施例提供的确定车速的方法,一方面,可以根据周边对象的观测信息计算出现每种行动意图的概率分布,并且根据车辆从当前位置到不同行动意图下的风险区域的行驶时间,计算出不同行动意图的概率重分布;另一方面,还可以根据车辆距离不同行动意图下的风险区域的行驶时间,预测出周边对象在不同行动意图下的运动状态变化,如此,即可根据不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化,确定车辆的行驶速度。
这样，在车辆行驶过程中，对于车辆的周边对象可能出现的多种行动意图，可以根据周边对象的观测信息预测出现每种行动意图的概率，以及，可以在每种行动意图以及车辆的每种加速度控制下，预测周边对象和车辆之间发生碰撞的风险程度，从而结合两者来确定车速。这样，在确定车速时，不仅考虑了周边对象出现每种行动意图的可能性，也考虑了每种行动意图以及车辆的每种加速度控制下周边对象和车辆之间发生碰撞的风险程度，从而避免了忽视周边对象与车辆之间高风险但发生概率较小的情况，使得确定出的行驶速度更适合当前驾驶环境，减小了车辆行驶时可能存在的安全隐患。
此外,本申请实施例还提供了一种确定车速的装置,参见图11所示,该装置1100包括:第一获取单元1101、第一计算单元1102、第二计算单元1103、预测单元1104和第一确定单元1105。
其中,第一获取单元1101,用于获取车辆的周边对象的观测信息;
第一计算单元1102,用于根据周边对象的观测信息,计算周边对象出现不同行动意图的概率分布;
第二计算单元1103,用于根据车辆从车辆的当前位置到不同行动意图下的风险区域的行驶时间,对概率分布进行重分布计算,获得不同行动意图的概率重分布;其中,不同行动意图下的风险区域分别为周边对象在处于不同行动意图时在车辆行驶的车道上所经过的区域;
预测单元1104,用于根据车辆距离不同行动意图下的风险区域的行驶时间,预测周边对象在不同行动意图下的运动状态变化;
第一确定单元1105,用于根据不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化,确定车辆的行驶速度。
在一种可能的实现方式中,该第一计算单元1102,可以包括:建立子单元和计算子单元。其中,建立子单元,用于根据周边对象的观测信息,以车辆所行驶的道路为坐标系,建立周边对象与道路之间的相对位置关系及相对运动关系;计算子单元,用于根据周边对象与道路之间的相对位置关系及相对运动关系,计算周边对象出现不同行动意图的概率分布。
在另一种可能的实现方式中,该装置还可以包括:第二获取单元、建立单元、第二确定单元和第三计算单元。
其中,第二获取单元,用于获取车辆的观测信息;建立单元,用于根据车辆的观测信息和周边对象的观测信息,以车辆所行驶的道路为坐标系,建立车辆与道路的相对位置关系和相对运动状态以及周边对象与道路的相对位置关系和相对运动状态;第二确定单元,用于根据周边对象与道路的相对位置关系和相对运动状态,确定不同行动意图下的风险区域;第三计算单元,用于根据车辆与道路的相对位置关系和相对运动状态以及不同行动意图下的风险区域,计算车辆从车辆的当前位置到不同行动意图下的风险区域的行驶时间。
在再一种可能的实现方式中,该第二计算单元1103,可以包括:处理子单元和调整子单元。其中,处理子单元,用于对概率分布进行粒子化处理,其中,不同行动意图对应的粒子的数量表示不同行动意图的概率分布;调整子单元,用于根据计算车辆距离不同行动意图下的风险区域的行驶时间,对不同行动意图对应的粒子的权重进行调整,以获得不同行动意图的概率重分布。
在又一种可能的实现方式中,该预测单元1104,可以包括:第一确定子单元和预测子单元。其中,第一确定子单元,用于根据车辆距离不同行动意图下的风险区域的行驶时间,确定不同行动意图下周边对象改变行动意图的概率;预测子单元,用于根据不同行动意图下周边对象改变行动意图的概率与随机概率,预测周边对象在不同行动意图下的运动状态变化。
在另一种可能的实现方式中,该第一确定单元1105,可以包括:估计子单元、选择子单元和第二确定子单元。其中,估计子单元,用于根据不同行动意图的概率重分布、周边对象在不同行动意图下的运动状态变化以及车辆在不同行驶速度控制动作下的运动状态变化,估计车辆在不同行驶速度控制动作下的行驶效果;选择子单元,用于根据车辆在不同行驶速度控制动作下的行驶效果,从不同行驶速度控制动作中选择目标行驶速度控制动作;第二确定子单元,用于根据目标行驶速度控制动作,确定车辆的行驶速度。
需要说明的是，上述装置1100用于执行图5对应的实施例中的各个步骤，第一获取单元1101具体可以执行步骤501；第一计算单元1102具体可以执行步骤502；第二计算单元1103具体可以执行步骤503；预测单元1104具体可以执行步骤504；第一确定单元1105具体可以执行步骤505。
可以理解的是,该装置1100对应于本申请实施例提供的确定车速的方法,故该装置1100各实现方式以及达到的技术效果,可参见本申请实施例中关于确定车速的方法的各实现方式的相关描述。
另外,本申请实施例还提供了一种车辆,参见图12,该车辆1200包括:传感器1201、 处理器1202和车速控制器1203。
其中,传感器1201,用于获得车辆的周边对象的观测信息,并发送给处理器;例如可以是雷达、摄像头等。
处理器1202,用于根据前述第一方面任意一种实现方式所述的方法,确定车辆的行驶速度,并发送给车速控制器。
车速控制器1203,用于控制该车辆以所确定的车辆的行驶速度行驶。
可以理解的是,该车辆1200执行本申请实施例提供的确定车速的方法,故该车辆1200各实现方式以及达到的技术效果,可参见本申请实施例中关于确定车速的方法的各实现方式的相关描述。
此外，本申请实施例还提供了一种车辆，参见图13，该车辆1300包括处理器1301和存储器1302，所述存储器1302存储有指令，当处理器1301执行该指令时，使得该车辆1300执行前述确定车速的方法中任意一种实现方式所述的方法。
可以理解的是,该车辆1300执行本申请实施例提供的确定车速的方法,故该车辆1300各实现方式以及达到的技术效果,可参见本申请实施例中关于确定车速的方法的各实现方式的相关描述。
另外,本申请实施例还提供了一种计算机程序产品,当其在计算机上运行时,使得计算机执行前述确定车速的方法中任意一种实现方式所述的方法。
此外,本申请实施例还提供了一种计算机可读存储介质,该计算机可读存储介质中存储有指令,当其在计算机或处理器上运行时,使得该计算机或处理器执行前述确定车速的方法中任意一种实现方式所述的方法。
本申请实施例中提到的“第一风险度”等名称中的“第一”只是用来做名字标识,并不代表顺序上的第一。该规则同样适用于“第二”等。
通过以上的实施方式的描述可知,本领域的技术人员可以清楚地了解到上述实施例方法中的全部或部分步骤可借助软件加通用硬件平台的方式来实现。基于这样的理解,本申请的技术方案可以以软件产品的形式体现出来,该计算机软件产品可以存储在存储介质中,如只读存储器(英文:read-only memory,ROM)/RAM、磁碟、光盘等,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者诸如路由器等网络通信设备)执行本申请各个实施例或者实施例的某些部分所述的方法。
本说明书中的各个实施例均采用递进的方式描述,各个实施例之间相同相似的部分互相参见即可,每个实施例重点说明的都是与其他实施例的不同之处。尤其,对于装置实施例而言,由于其基本相似于方法实施例,所以描述得比较简单,相关之处参见方法实施例的部分说明即可。以上所描述的装置实施例仅仅是示意性的,其中作为分离部件说明的模块可以是或者也可以不是物理上分开的,作为模块显示的部件可以是或者也可以不是物理模块,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部模块来实现本实施例方案的目的。本领域普通技术人员在不付出创造性劳动的情况下,即可以理解并实施。
以上所述仅是本申请示例性的实施方式,并非用于限定本申请的保护范围。

Claims (12)

  1. 一种确定车速的方法,其特征在于,包括:
    获取车辆的周边对象的观测信息;
    根据所述周边对象的观测信息,计算所述周边对象出现不同行动意图的概率分布;
    根据所述车辆从所述车辆的当前位置到所述不同行动意图下的风险区域的行驶时间,对所述概率分布进行重分布计算,获得所述不同行动意图的概率重分布;其中,所述不同行动意图下的风险区域分别为所述周边对象在处于所述不同行动意图时在所述车辆行驶的车道上所经过的区域;
    根据所述车辆距离所述不同行动意图下的风险区域的行驶时间,预测所述周边对象在所述不同行动意图下的运动状态变化;
    根据所述不同行动意图的概率重分布、所述周边对象在所述不同行动意图下的运动状态变化以及所述车辆在不同行驶速度控制动作下的运动状态变化,确定所述车辆的行驶速度。
  2. 根据权利要求1所述的方法,其特征在于,所述根据所述周边对象的观测信息,计算所述周边对象出现不同行动意图的概率分布,包括:
    根据所述周边对象的观测信息,以所述车辆所行驶的道路为坐标系,建立所述周边对象与所述道路之间的相对位置关系及相对运动关系;
    根据所述周边对象与所述道路之间的相对位置关系及相对运动关系,计算所述周边对象出现不同行动意图的概率分布。
  3. 根据权利要求1所述的方法,其特征在于,还包括:
    获取所述车辆的观测信息;
    根据所述车辆的观测信息和所述周边对象的观测信息,以所述车辆所行驶的道路为坐标系,建立所述车辆与所述道路的相对位置关系和相对运动状态以及所述周边对象与所述道路的相对位置关系和相对运动状态;
    根据所述周边对象与所述道路的相对位置关系和相对运动状态,确定所述不同行动意图下的风险区域;
    根据所述车辆与所述道路的相对位置关系和相对运动状态以及所述不同行动意图下的风险区域,计算所述车辆从所述车辆的当前位置到所述不同行动意图下的风险区域的行驶时间。
  4. 根据权利要求1至3任意一项所述的方法,其特征在于,所述根据所述车辆距离所述不同行动意图下的风险区域的行驶时间,对所述概率分布进行重分布计算,获得所述不同行动意图的概率重分布,包括:
    对所述概率分布进行粒子化处理,其中,所述不同行动意图对应的粒子的数量表示所述不同行动意图的概率分布;
    根据计算所述车辆距离所述不同行动意图下的风险区域的行驶时间,对所述不同行动意图对应的粒子的权重进行调整,以获得所述不同行动意图的概率重分布。
  5. 根据权利要求1至4任意一项所述的方法，其特征在于，所述根据所述车辆距离所述不同行动意图下的风险区域的行驶时间，预测所述周边对象在所述不同行动意图下的运动状态变化，包括：
    根据所述车辆距离所述不同行动意图下的风险区域的行驶时间,确定所述不同行动意图下所述周边对象改变行动意图的概率;
    根据所述不同行动意图下所述周边对象改变行动意图的概率与随机概率,预测所述周边对象在所述不同行动意图下的运动状态变化。
  6. 根据权利要求1至5任意一项所述的方法,其特征在于,所述根据所述不同行动意图的概率重分布、所述周边对象在所述不同行动意图下的运动状态变化以及所述车辆在不同行驶速度控制动作下的运动状态变化,确定所述车辆的行驶速度,包括:
    根据所述不同行动意图的概率重分布、所述周边对象在所述不同行动意图下的运动状态变化以及所述车辆在不同行驶速度控制动作下的运动状态变化,估计所述车辆在所述不同行驶速度控制动作下的行驶效果;
    根据所述车辆在所述不同行驶速度控制动作下的行驶效果,从所述不同行驶速度控制动作中选择目标行驶速度控制动作;
    根据所述目标行驶速度控制动作,确定所述车辆的行驶速度。
  7. 一种确定车速的装置,其特征在于,包括:
    第一获取单元,用于获取车辆的周边对象的观测信息;
    第一计算单元,用于根据所述周边对象的观测信息,计算所述周边对象出现不同行动意图的概率分布;
    第二计算单元,用于根据所述车辆从所述车辆的当前位置到所述不同行动意图下的风险区域的行驶时间,对所述概率分布进行重分布计算,获得所述不同行动意图的概率重分布;其中,所述不同行动意图下的风险区域分别为所述周边对象在处于所述不同行动意图时在所述车辆行驶的车道上所经过的区域;
    预测单元,用于根据所述车辆距离所述不同行动意图下的风险区域的行驶时间,预测所述周边对象在所述不同行动意图下的运动状态变化;
    第一确定单元,用于根据所述不同行动意图的概率重分布、所述周边对象在所述不同行动意图下的运动状态变化以及所述车辆在不同行驶速度控制动作下的运动状态变化,确定所述车辆的行驶速度。
  8. 根据权利要求7所述的装置,其特征在于,所述第一计算单元,包括:
    建立子单元,用于根据所述周边对象的观测信息,以所述车辆所行驶的道路为坐标系,建立所述周边对象与所述道路之间的相对位置关系及相对运动关系;
    计算子单元,用于根据所述周边对象与所述道路之间的相对位置关系及相对运动关系,计算所述周边对象出现不同行动意图的概率分布。
  9. 根据权利要求7所述的装置,其特征在于,还包括:
    第二获取单元,用于获取所述车辆的观测信息;
    建立单元，用于根据所述车辆的观测信息和所述周边对象的观测信息，以所述车辆所行驶的道路为坐标系，建立所述车辆与所述道路的相对位置关系和相对运动状态以及所述周边对象与所述道路的相对位置关系和相对运动状态；
    第二确定单元,用于根据所述周边对象与所述道路的相对位置关系和相对运动状态,确定所述不同行动意图下的风险区域;
    第三计算单元,用于根据所述车辆与所述道路的相对位置关系和相对运动状态以及所述不同行动意图下的风险区域,计算所述车辆从所述车辆的当前位置到所述不同行动意图下的风险区域的行驶时间。
  10. 根据权利要求7至9任意一项所述的装置,其特征在于,所述第二计算单元,包括:
    处理子单元,用于对所述概率分布进行粒子化处理,其中,所述不同行动意图对应的粒子的数量表示所述不同行动意图的概率分布;
    调整子单元,用于根据计算所述车辆距离所述不同行动意图下的风险区域的行驶时间,对所述不同行动意图对应的粒子的权重进行调整,以获得所述不同行动意图的概率重分布。
  11. 根据权利要求7至10任意一项所述的装置,其特征在于,所述预测单元,包括:
    第一确定子单元,用于根据所述车辆距离所述不同行动意图下的风险区域的行驶时间,确定所述不同行动意图下所述周边对象改变行动意图的概率;
    预测子单元,用于根据所述不同行动意图下所述周边对象改变行动意图的概率与随机概率,预测所述周边对象在所述不同行动意图下的运动状态变化。
  12. 根据权利要求7至11任意一项所述的装置,其特征在于,所述第一确定单元,包括:
    估计子单元,用于根据所述不同行动意图的概率重分布、所述周边对象在所述不同行动意图下的运动状态变化以及所述车辆在不同行驶速度控制动作下的运动状态变化,估计所述车辆在所述不同行驶速度控制动作下的行驶效果;
    选择子单元,用于根据所述车辆在所述不同行驶速度控制动作下的行驶效果,从所述不同行驶速度控制动作中选择目标行驶速度控制动作;
    第二确定子单元,用于根据所述目标行驶速度控制动作,确定所述车辆的行驶速度。
PCT/CN2020/102644 2019-07-17 2020-07-17 一种确定车速的方法和装置 WO2021008605A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
JP2021523762A JP7200371B2 (ja) 2019-07-17 2020-07-17 車両速度を決定する方法及び装置
EP20840576.1A EP3882095A4 (en) 2019-07-17 2020-07-17 METHOD AND DEVICE FOR DETERMINING VEHICLE SPEED
MX2021005934A MX2021005934A (es) 2019-07-17 2020-07-17 Metodo y aparato para determinar la velocidad de vehiculo.
US17/322,388 US11273838B2 (en) 2019-07-17 2021-05-17 Method and apparatus for determining vehicle speed

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910646083.4A CN112242069B (zh) 2019-07-17 2019-07-17 一种确定车速的方法和装置
CN201910646083.4 2019-07-17

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/322,388 Continuation US11273838B2 (en) 2019-07-17 2021-05-17 Method and apparatus for determining vehicle speed

Publications (1)

Publication Number Publication Date
WO2021008605A1 true WO2021008605A1 (zh) 2021-01-21

Family

ID=74167575

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/102644 WO2021008605A1 (zh) 2019-07-17 2020-07-17 一种确定车速的方法和装置

Country Status (6)

Country Link
US (1) US11273838B2 (zh)
EP (1) EP3882095A4 (zh)
JP (1) JP7200371B2 (zh)
CN (1) CN112242069B (zh)
MX (1) MX2021005934A (zh)
WO (1) WO2021008605A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11273838B2 (en) * 2019-07-17 2022-03-15 Huawei Technologies Co., Ltd. Method and apparatus for determining vehicle speed

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11630461B2 (en) * 2019-01-31 2023-04-18 Honda Motor Co., Ltd. Systems and methods for utilizing interacting gaussian mixture models for crowd navigation
US11597088B2 (en) 2019-01-31 2023-03-07 Honda Motor Co., Ltd. Systems and methods for fully coupled models for crowd navigation
US11787053B2 (en) * 2019-11-19 2023-10-17 Honda Motor Co., Ltd. Systems and methods for utilizing interacting Gaussian mixture models for crowd navigation
US20210300423A1 (en) * 2020-03-31 2021-09-30 Toyota Motor North America, Inc. Identifying roadway concerns and taking preemptive actions
US11290856B2 (en) 2020-03-31 2022-03-29 Toyota Motor North America, Inc. Establishing connections in transports
JP2022037421A (ja) * 2020-08-25 2022-03-09 株式会社Subaru 車両の走行制御装置
US11741274B1 (en) 2020-11-20 2023-08-29 Zoox, Inc. Perception error model for fast simulation and estimation of perception system reliability and/or for control system tuning
CN113299059B (zh) * 2021-04-08 2023-03-17 四川国蓝中天环境科技集团有限公司 一种数据驱动的道路交通管控决策支持方法
CN114590248B (zh) * 2022-02-23 2023-08-25 阿波罗智能技术(北京)有限公司 行驶策略的确定方法、装置、电子设备和自动驾驶车辆
US11904889B1 (en) * 2022-11-04 2024-02-20 Ghost Autonomy Inc. Velocity adjustments based on roadway scene comprehension

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104340152A (zh) * 2013-08-06 2015-02-11 通用汽车环球科技运作有限责任公司 在避免碰撞任务中用于情形评估和决策的动态安全防护罩
CN106114503A (zh) * 2015-05-05 2016-11-16 沃尔沃汽车公司 用于确定安全车辆轨迹的方法和装置
CN106428000A (zh) * 2016-09-07 2017-02-22 清华大学 一种车辆速度控制装置和方法
CN108458745A (zh) * 2017-12-23 2018-08-28 天津国科嘉业医疗科技发展有限公司 一种基于智能检测设备的环境感知方法
WO2019063416A1 (de) * 2017-09-26 2019-04-04 Audi Ag Verfahren und einrichtung zum betreiben eines fahrerassistenzsystems sowie fahrerassistenzsystem und kraftfahrzeug
DE102018132813A1 (de) * 2018-12-19 2019-04-25 FEV Europe GmbH Fußgängersimulation
WO2019083978A1 (en) * 2017-10-24 2019-05-02 Waymo Llc PREDICTIONS OF PEDESTRIAN BEHAVIOR FOR AUTONOMOUS VEHICLES
CN109969172A (zh) * 2017-12-26 2019-07-05 华为技术有限公司 车辆控制方法、设备及计算机存储介质

Family Cites Families (45)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4318505B2 (ja) * 2003-08-06 2009-08-26 ダイハツ工業株式会社 衝突回避装置
US20070027597A1 (en) 2003-09-23 2007-02-01 Daimlerchrysler Ag Method and device for recognising lane changing operations for a motor vehicle
EP2085279B1 (en) * 2008-01-29 2011-05-25 Ford Global Technologies, LLC A system for collision course prediction
JP4853525B2 (ja) * 2009-02-09 2012-01-11 トヨタ自動車株式会社 移動領域予測装置
JP2012128739A (ja) * 2010-12-16 2012-07-05 Toyota Central R&D Labs Inc 衝突危険判定装置及びプログラム
FR2996512B1 (fr) * 2012-10-05 2014-11-21 Renault Sa Procede d'evaluation du risque de collision a une intersection
EP2916307B1 (en) 2012-10-30 2021-05-19 Toyota Jidosha Kabushiki Kaisha Vehicle safety apparatus
DE102013212092B4 (de) 2013-06-25 2024-01-25 Robert Bosch Gmbh Verfahren und Vorrichtung zum Betreiben einer Fußgängerschutzeinrichtung eines Fahrzeugs, Fußgängerschutzeinrichtung
US9336436B1 (en) 2013-09-30 2016-05-10 Google Inc. Methods and systems for pedestrian avoidance
CN103640532B (zh) * 2013-11-29 2015-08-26 大连理工大学 基于驾驶员制动与加速意图辨识的行人防碰撞预警方法
CN103996312B (zh) * 2014-05-23 2015-12-09 北京理工大学 具有社会行为交互的无人驾驶汽车控制系统
US9187088B1 (en) * 2014-08-15 2015-11-17 Google Inc. Distribution decision trees
JP6257482B2 (ja) * 2014-09-03 2018-01-10 株式会社デンソーアイティーラボラトリ 自動運転支援システム、自動運転支援方法及び自動運転装置
US9440647B1 (en) 2014-09-22 2016-09-13 Google Inc. Safely navigating crosswalks
JP6294247B2 (ja) * 2015-01-26 2018-03-14 株式会社日立製作所 車両走行制御装置
US11458970B2 (en) * 2015-06-29 2022-10-04 Hyundai Motor Company Cooperative adaptive cruise control system based on driving pattern of target vehicle
US9784592B2 (en) * 2015-07-17 2017-10-10 Honda Motor Co., Ltd. Turn predictions
US9934688B2 (en) * 2015-07-31 2018-04-03 Ford Global Technologies, Llc Vehicle trajectory determination
US10392009B2 (en) * 2015-08-12 2019-08-27 Hyundai Motor Company Automatic parking system and automatic parking method
US9604639B2 (en) 2015-08-28 2017-03-28 Delphi Technologies, Inc. Pedestrian-intent-detection for automated vehicles
US11066070B2 (en) * 2015-10-15 2021-07-20 Hyundai Motor Company Apparatus and method for controlling speed in cooperative adaptive cruise control system
KR102107726B1 (ko) * 2016-12-30 2020-05-07 현대자동차주식회사 협조 적응형 순항 제어 시스템의 속도 제어 장치 및 방법
US10152882B2 (en) * 2015-11-30 2018-12-11 Nissan North America, Inc. Host vehicle operation using remote vehicle intention prediction
CN106227204B (zh) 2016-07-08 2020-03-10 百度在线网络技术(北京)有限公司 车载装置及用于控制无人驾驶车辆的系统、方法和装置
CN106515725A (zh) * 2016-10-20 2017-03-22 深圳市元征科技股份有限公司 一种车辆防碰撞的方法及终端
US10699305B2 (en) * 2016-11-21 2020-06-30 Nio Usa, Inc. Smart refill assistant for electric vehicles
IL288191B2 (en) * 2016-12-23 2023-10-01 Mobileye Vision Technologies Ltd A navigation system with forced commitment constraints
US10324469B2 (en) * 2017-03-28 2019-06-18 Mitsubishi Electric Research Laboratories, Inc. System and method for controlling motion of vehicle in shared environment
US10994729B2 (en) * 2017-03-29 2021-05-04 Mitsubishi Electric Research Laboratories, Inc. System and method for controlling lateral motion of vehicle
JP6852632B2 (ja) * 2017-09-19 2021-03-31 トヨタ自動車株式会社 車両制御装置
US10668922B2 (en) * 2017-10-04 2020-06-02 Toyota Motor Engineering & Manufacturing North America, Inc. Travel lane identification without road curvature data
US10657811B2 (en) * 2017-10-04 2020-05-19 Toyota Motor Engineering & Manufacturing North America, Inc. Travel lane identification without road curvature data
US10562538B2 (en) * 2017-11-22 2020-02-18 Uatc, Llc Object interaction prediction systems and methods for autonomous vehicles
CN108230676B (zh) 2018-01-23 2020-11-27 同济大学 一种基于轨迹数据的交叉口行人过街风险评估方法
US11040729B2 (en) * 2018-05-31 2021-06-22 Nissan North America, Inc. Probabilistic object tracking and prediction framework
US10569773B2 (en) * 2018-05-31 2020-02-25 Nissan North America, Inc. Predicting behaviors of oncoming vehicles
US11091158B2 (en) * 2018-06-24 2021-08-17 Mitsubishi Electric Research Laboratories, Inc. System and method for controlling motion of vehicle with variable speed
US11260855B2 (en) * 2018-07-17 2022-03-01 Baidu Usa Llc Methods and systems to predict object movement for autonomous driving vehicles
US11001256B2 (en) * 2018-09-19 2021-05-11 Zoox, Inc. Collision prediction and avoidance for vehicles
GB2579020B (en) * 2018-11-14 2021-05-12 Jaguar Land Rover Ltd Vehicle control system and method
WO2020257366A1 (en) * 2019-06-17 2020-12-24 DeepMap Inc. Updating high definition maps based on lane closure and lane opening
CN112242069B (zh) * 2019-07-17 2021-10-01 华为技术有限公司 一种确定车速的方法和装置
US20210035442A1 (en) * 2019-07-31 2021-02-04 Nissan North America, Inc. Autonomous Vehicles and a Mobility Manager as a Traffic Monitor
CN110758382B (zh) * 2019-10-21 2021-04-20 南京航空航天大学 一种基于驾驶意图的周围车辆运动状态预测系统及方法
US20210197720A1 (en) * 2019-12-27 2021-07-01 Lyft, Inc. Systems and methods for incident detection using inference models

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104340152A (zh) * 2013-08-06 2015-02-11 通用汽车环球科技运作有限责任公司 在避免碰撞任务中用于情形评估和决策的动态安全防护罩
CN106114503A (zh) * 2015-05-05 2016-11-16 沃尔沃汽车公司 用于确定安全车辆轨迹的方法和装置
CN106428000A (zh) * 2016-09-07 2017-02-22 清华大学 一种车辆速度控制装置和方法
WO2019063416A1 (de) * 2017-09-26 2019-04-04 Audi Ag Verfahren und einrichtung zum betreiben eines fahrerassistenzsystems sowie fahrerassistenzsystem und kraftfahrzeug
WO2019083978A1 (en) * 2017-10-24 2019-05-02 Waymo Llc PREDICTIONS OF PEDESTRIAN BEHAVIOR FOR AUTONOMOUS VEHICLES
CN108458745A (zh) * 2017-12-23 2018-08-28 天津国科嘉业医疗科技发展有限公司 一种基于智能检测设备的环境感知方法
CN109969172A (zh) * 2017-12-26 2019-07-05 华为技术有限公司 车辆控制方法、设备及计算机存储介质
DE102018132813A1 (de) * 2018-12-19 2019-04-25 FEV Europe GmbH Fußgängersimulation

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3882095A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11273838B2 (en) * 2019-07-17 2022-03-15 Huawei Technologies Co., Ltd. Method and apparatus for determining vehicle speed

Also Published As

Publication number Publication date
US20210276572A1 (en) 2021-09-09
CN112242069A (zh) 2021-01-19
JP2022506404A (ja) 2022-01-17
US11273838B2 (en) 2022-03-15
JP7200371B2 (ja) 2023-01-06
EP3882095A4 (en) 2022-03-09
CN112242069B (zh) 2021-10-01
EP3882095A1 (en) 2021-09-22
MX2021005934A (es) 2021-06-30

Similar Documents

Publication Publication Date Title
WO2021008605A1 (zh) 一种确定车速的方法和装置
US11390300B2 (en) Method for using lateral motion to optimize trajectories for autonomous vehicles
RU2762786C1 (ru) Планирование траектории
CN111123933B (zh) 车辆轨迹规划的方法、装置、智能驾驶域控制器和智能车
CN112133089B (zh) 一种基于周围环境与行为意图的车辆轨迹预测方法、系统及装置
US20220048535A1 (en) Generating Goal States for Prioritizing Path Planning
WO2021217420A1 (zh) 车道线跟踪方法和装置
CN112034834A (zh) 使用强化学习来加速自动驾驶车辆的轨迹规划的离线代理
US11458991B2 (en) Systems and methods for optimizing trajectory planner based on human driving behaviors
CN108334077B (zh) 确定自动驾驶车辆的速度控制的单位增益的方法和系统
WO2020131399A1 (en) Operation of a vehicle using motion planning with machine learning
CN112034833A (zh) 规划用于自动驾驶车辆的开放空间轨迹的在线代理
US11042159B2 (en) Systems and methods for prioritizing data processing
US20220284619A1 (en) Offline optimization of sensor data for agent trajectories
CN111948938A (zh) 规划用于自动驾驶车辆的开放空间轨迹的松弛优化模型
KR102589587B1 (ko) 자율 주행 차량용 동적 모델 평가 패키지
US20220355825A1 (en) Predicting agent trajectories
WO2022142839A1 (zh) 一种图像处理方法、装置以及智能汽车
CN112534483A (zh) 预测车辆驶出口的方法和装置
WO2022178858A1 (zh) 一种车辆行驶意图预测方法、装置、终端及存储介质
US20230192077A1 (en) Adjustment of object trajectory uncertainty by an autonomous vehicle
US20240161398A1 (en) Late-to-early temporal fusion for point clouds
TWI796846B (zh) 基於物件互動關係之路徑預測方法及電子裝置
US20240025395A1 (en) Path generation based on predicted actions
US20240025444A1 (en) Path generation based on predicted actions

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20840576

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2021523762

Country of ref document: JP

Kind code of ref document: A

ENP Entry into the national phase

Ref document number: 2020840576

Country of ref document: EP

Effective date: 20210422

NENP Non-entry into the national phase

Ref country code: DE