WO2023166845A1 - System and method for parking an autonomous ego-vehicle in a dynamic environment of a parking area - Google Patents

System and method for parking an autonomous ego-vehicle in a dynamic environment of a parking area

Info

Publication number
WO2023166845A1
Authority
WO
WIPO (PCT)
Prior art keywords
obstacle
vehicle
motion
mode
path
Prior art date
Application number
PCT/JP2022/048715
Other languages
French (fr)
Inventor
Yebin Wang
Jessica LEU
Stefano Di Cairano
Original Assignee
Mitsubishi Electric Corporation
Priority date
Filing date
Publication date
Priority claimed from US 17/654,709 (US20230278593A1)
Application filed by Mitsubishi Electric Corporation
Publication of WO2023166845A1

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B62 - LAND VEHICLES FOR TRAVELLING OTHERWISE THAN ON RAILS
    • B62D - MOTOR VEHICLES; TRAILERS
    • B62D15/00 - Steering not otherwise provided for
    • B62D15/02 - Steering position indicators; Steering position determination; Steering aids
    • B62D15/027 - Parking aids, e.g. instruction means
    • B62D15/0285 - Parking performed automatically
    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B60 - VEHICLES IN GENERAL
    • B60W - CONJOINT CONTROL OF VEHICLE SUB-UNITS OF DIFFERENT TYPE OR DIFFERENT FUNCTION; CONTROL SYSTEMS SPECIALLY ADAPTED FOR HYBRID VEHICLES; ROAD VEHICLE DRIVE CONTROL SYSTEMS FOR PURPOSES NOT RELATED TO THE CONTROL OF A PARTICULAR SUB-UNIT
    • B60W30/00 - Purposes of road vehicle drive control systems not related to the control of a particular sub-unit, e.g. of systems using conjoint control of vehicle sub-units
    • B60W30/06 - Automatic manoeuvring for parking

Definitions

  • the present disclosure relates generally to motion planning for autonomous vehicle parking, and more specifically to a system and a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area.
  • a parking task can be formulated as determining a path or motion of the autonomous vehicle from an initial state to a target state defining the parking spot and then controlling actuators of the autonomous vehicle, e.g., vehicle’s gas pedal and steering wheel, to ensure that the autonomous vehicle follows the determined path or motion.
  • the parking area may include obstacles, such as moving obstacle vehicles or pedestrians, which the autonomous vehicle should not collide with during parking at the parking spot.
  • the autonomous parking may also involve motion planning in a narrow space.
  • vehicle motions in the parking area do not have a clear set of rules to follow and largely depend on the driver's intention or even skill level. These factors make the motion prediction of the obstacle vehicles and the motion planning of the autonomous vehicle for parking challenging.
  • the motion prediction, i.e., predicting future states of the obstacle vehicles, is crucial as it determines the safety constraints that the planning and control modules of the autonomous vehicle have to consider, and thus impacts the feasibility and smoothness of the motion planning or the path.
  • an accurate short-term motion prediction enables the autonomous vehicle to plan and react safely to the obstacle vehicles, whereas a long-term plan/mode prediction allows the autonomous vehicle to plan efficiently and smoothly.
  • Some approaches use an interacting multiple model (IMM) to predict short-term trajectories in parking; while others use classifiers to extract information from movements of the obstacle vehicles to generate short-term motion predictions.
  • data-driven methods are not desirable due to a lack of guarantees and inconsistent performance across different data sets, and data-driven methods may present a larger prediction error if the data set is chosen poorly.
  • It is an object of some embodiments to provide a system and a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area. Additionally, it is an object of some embodiments to provide a strategic motion planner that varies the control strategy of the autonomous ego-vehicle based on different situations in the parking area, for parking the autonomous vehicle.
  • the parking area includes parking spots for parking vehicles.
  • the parking area may further include stationary vehicles parked in the parking area and an obstacle vehicle moving in the parking area.
  • the autonomous ego-vehicle has an initial state and needs to be parked at a parking spot defined by a target state, without colliding with the stationary vehicles and the obstacle vehicle.
  • Each state, e.g., the initial state and the target state, defines at least a position (x, y) in a 2D plane and an orientation θ of the autonomous ego-vehicle.
  • the state of the obstacle vehicle may include a position and an orientation of the obstacle vehicle, and geometric information of the obstacle vehicle, for example, its wheelbase.
  • a path planner is executed to produce a trajectory for parking the autonomous ego-vehicle, based on the state of the dynamic environment in the parking area.
  • Some embodiments are based on the realization that the motion of the obstacle vehicle needs to be considered for achieving collision-free parking of the autonomous ego-vehicle in the parking area. To that end, some embodiments aim to predict a path of the obstacle vehicle. However, some embodiments are based on the realization that, considering the dynamic environment of the parking area, it is insufficient to predict only the path of the obstacle vehicle; it is also useful to determine a mode of the motion of the obstacle vehicle. Some embodiments are based on the observation that the obstacle vehicle runs in various modes in the parking area, such as a cruising mode and a maneuvering mode. In the cruising mode, the obstacle vehicle has small or steady steering angles.
  • the obstacle vehicle runs in the cruising mode when the obstacle vehicle enters the parking area and is approaching a parking spot or when the obstacle vehicle is exiting the parking area.
  • In the maneuvering mode, the obstacle vehicle changes the steering angle frequently and deviates from its path to park in or leave a narrow parking spot.
  • a safety constraint for the obstacle vehicle is determined based on the predicted path and the mode of motion of the obstacle vehicle. For example, if the predicted mode is the cruising mode, then the safety constraint is determined around the state of the obstacle vehicle along its path. The safety constraint determined around the state of the obstacle vehicle along its path is referred to as a safety margin.
  • the safety margin is determined as the safety constraint for the cruising mode because, in the cruising mode, the obstacle vehicle takes less aggressive steering actions, and its motion can be predicted with relatively high confidence. However, in the maneuvering mode, it is difficult to predict the motion of the obstacle vehicle and the obstacle vehicle tends to occupy a larger space to adjust its orientation. To that end, for the maneuvering mode, the safety constraint is determined as safety bounds in all driving directions around the state of the obstacle vehicle. Thus, if the predicted mode is the maneuvering mode, then the safety bounds are determined for the obstacle vehicle. The safety bounds define an area that the autonomous ego vehicle should avoid if the obstacle vehicle is in the maneuvering mode.
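The mode-dependent choice of safety constraint described above can be sketched as follows. This is a minimal illustration, not the exact construction of the embodiments: the disc representation, the radii, and the mode names are assumptions for the example.

```python
def safety_constraint(mode, state, predicted_path, margin=1.0, bound=3.0):
    """Return keep-out discs (x, y, radius) for one obstacle vehicle.

    Illustrative only: in the cruising (first) mode, a safety margin is
    placed around the predicted states along the path; in the maneuvering
    (second) mode, a larger bound covers all driving directions around the
    current state. The radii are assumed values.
    """
    if mode == "cruising":
        # Safety margin: discs centred on the predicted path states.
        return [(x, y, margin) for (x, y) in predicted_path]
    if mode == "maneuvering":
        # Safety bounds: one larger disc around the current state,
        # covering all driving directions the vehicle might take.
        x, y = state
        return [(x, y, bound)]
    raise ValueError("unknown mode: %s" % mode)
```

For a cruising obstacle the constraint follows its predicted path, while for a maneuvering obstacle it is a single all-direction bound, mirroring the margin/bounds distinction above.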
  • the autonomous ego-vehicle is parked by controlling the autonomous ego-vehicle based on the trajectory and the determined safety constraint.
  • Some embodiments are based on the realization that the motion of the autonomous ego-vehicle in the parking area may be constantly adapted in response to changes of the parking area environment. Hence, even after the trajectory for the autonomous ego-vehicle is determined, a need to adjust the trajectory may arise at any time. To that end, to control the parking of the autonomous ego-vehicle, some embodiments use a strategic motion planner that varies control strategy based on different situations in the parking area.
  • the strategic motion planner uses three different strategies based on different situations.
  • the strategic motion planner includes a model predictive control-based safety controller for trajectory tracking, a search-based retreat planner for finding an evasion path in an emergency, and an optimization-based repair planner for modifying the path of the trajectory.
  • the strategic motion planner strikes a balance between safety, plan feasibility, and smooth maneuvering by leveraging advantages of optimization-based and search-based approaches.
  • one embodiment discloses an integrated system for parking an autonomous ego-vehicle in a dynamic environment of a parking area.
  • the integrated system comprises a processor; and a memory having instructions stored thereon that, when executed by the processor, cause the integrated system to: collect measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; execute a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; execute an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determine a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and park the autonomous ego-vehicle by controlling the autonomous ego-vehicle based on the trajectory and the determined safety constraint.
  • another embodiment discloses a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area.
  • the method comprises collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle by controlling the autonomous ego-vehicle based on the trajectory and the determined safety constraint.
  • yet another embodiment discloses a non-transitory computer-readable storage medium embodied thereon a program executable by a processor for performing a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area.
  • the method comprises collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle by controlling the autonomous ego-vehicle based on the trajectory and the determined safety constraint.
  • FIG. 1A illustrates an example parking area, according to some embodiments of the present disclosure.
  • FIG. 1B shows a block diagram of a system for parking an autonomous ego-vehicle, according to an embodiment of the present disclosure.
  • FIG. 1C shows a block diagram of a method for parking the autonomous ego-vehicle in a dynamic environment of the parking area, according to an embodiment of the present disclosure.
  • FIG. 1D shows a schematic illustrating different paths along which an obstacle vehicle may traverse to exit the parking area, according to an embodiment of the present disclosure.
  • FIG. 1E shows a schematic illustrating a safety margin determined around the state of the obstacle vehicle along its path, according to an embodiment of the present disclosure.
  • FIG. 1F shows a schematic illustrating safety bounds, according to an embodiment of the present disclosure.
  • FIG. 2 shows a schematic of an environment predictor, according to an embodiment of the present disclosure.
  • FIG. 3A shows a schematic of a motion estimator based on a multi-stage estimation framework, according to an embodiment of the present disclosure.
  • FIG. 3B shows a schematic for motion estimation based on a dynamic model, according to an embodiment of the present disclosure.
  • FIG. 3C shows schematics of paths of an obstacle vehicle predicted using different models, according to an embodiment of the present disclosure.
  • FIG. 3D illustrates an example short-term motion prediction result using a bicycle model and an example short-term motion prediction result using a unicycle model, according to an embodiment of the present disclosure.
  • FIG. 4A shows a schematic of the long-term mode predictor for estimation of a probability of each mode, according to an embodiment of the present disclosure.
  • FIG. 4B shows a schematic for formulation of a cost map, according to an embodiment of the present disclosure.
  • FIG. 5 shows a block diagram for determining a safety constraint and a safety set, according to an embodiment of the present disclosure.
  • FIG. 6 shows a block diagram of a motion prediction and planning framework, according to an embodiment of the present disclosure.
  • FIG. 7A shows an example scenario where the safety constraint is violated and a retreat planner is triggered, according to an embodiment of the present disclosure.
  • FIG. 7B shows a block diagram of a method of the retreat planner for determining a path between a current location and a safe spot for retreating, according to an embodiment of the present disclosure.
  • FIG. 7C shows a schematic illustrating an evaluation of node cost, according to an embodiment of the present disclosure.
  • FIG. 7D shows a schematic illustrating a process of the retreat planner, according to an embodiment of the present disclosure.
  • FIG. 8A illustrates an example scenario, where the obstacle vehicle stops on a reference trajectory, according to an embodiment of the present disclosure.
  • FIG. 8B shows a block diagram for verifying feasibility of a repaired path, according to an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram illustrating a system that can be used for implementing systems and methods of the present disclosure.
  • FIG. 1A illustrates an example parking area 100, according to some embodiments of the present disclosure.
  • the parking area 100 includes parking spots, such as a spot 101, for parking vehicles.
  • the parking area 100 further includes one or multiple stationary vehicles, such as vehicles 103, 105, and 107, parked in the parking area 100, and one or multiple obstacle vehicles moving in the parking area 100, for example, an obstacle vehicle 109.
  • the obstacle vehicle 109 may be leaving the parking area 100 by following a path 111.
  • the parking area 100 includes static obstacles such as a wall 113 and/or columns of the parking area 100.
  • the autonomous ego-vehicle 115 has an initial state 117 and needs to be parked at a parking spot defined by a target state 119, without colliding with the stationary vehicles 103, 105, and 107, the obstacle vehicle 109, and the wall 113.
  • Each state, e.g., the initial state 117 and the target state 119, defines at least a position (x, y) in a 2D plane and an orientation θ of the autonomous ego-vehicle 115.
  • some embodiments provide a system for parking the autonomous ego-vehicle 115.
  • the system for parking the autonomous ego-vehicle 115 is explained in FIG. 1B.
  • FIG. 1B shows a block diagram of a system 123 for parking the autonomous ego-vehicle 115, according to an embodiment of the present disclosure.
  • the system 123 includes a processor 125 and a memory 127.
  • the processor 125 may be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations.
  • the memory 127 may include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. Additionally, in some embodiments, the memory 127 may be implemented using a hard drive, an optical drive, a thumb drive, an array of drives, or any combinations thereof.
  • the system 123 may be integrated into the autonomous ego-vehicle 115.
  • the processor 125 is configured to execute various functions to park the autonomous ego-vehicle 115, as explained in FIG. 1C.
  • FIG. 1C shows a block diagram of a method 129 for parking the autonomous ego-vehicle 115 in the dynamic environment of the parking area 100, according to an embodiment of the present disclosure.
  • the processor 125 collects measurements of a state of the dynamic environment in the parking area 100 indicative of available parking spots in the parking area 100, a state of each of the stationary vehicles 103, 105, and 107, and a state of the obstacle vehicle 109.
  • the state of the obstacle vehicle 109 may include a position and an orientation of the obstacle vehicle 109, and geometric information of the obstacle vehicle 109, for example, its wheelbase.
  • the processor 125 executes a path planner configured to produce a trajectory for parking the autonomous ego-vehicle 115 based on the state of the dynamic environment in the parking area 100.
  • the processor 125 executes the path planner to produce a trajectory 121, shown in FIG. 1A, for parking the autonomous ego-vehicle 115 at the target state 119.
  • the path planner may be stored in the memory 127.
  • FIG. ID shows a schematic illustrating different paths along which the obstacle vehicle 109 may traverse to exit the parking area 100, according to an embodiment of the present disclosure.
  • the obstacle vehicle 109 may, for example, traverse along a path 109a or a path 109b.
  • Some embodiments are based on the realization that, considering the dynamic environment of the parking area 100, it is insufficient to predict only the path of the obstacle vehicle 109; it is also useful to determine a mode of the motion of the obstacle vehicle 109. Some embodiments are based on the observation that the obstacle vehicle 109 runs in various modes in the parking area 100, such as a cruising mode and a maneuvering mode. In the cruising mode, the obstacle vehicle 109 has small or steady steering angles. The obstacle vehicle 109 runs in the cruising mode when it enters the parking area 100 and is approaching a parking spot, or when it is exiting the parking area 100. The cruising mode is also referred to as a first mode. In the maneuvering mode, the obstacle vehicle 109 changes the steering angle frequently and deviates from its path to park in or leave a narrow parking spot. The maneuvering mode is also referred to as a second mode.
  • the processor 125 executes an environment predictor configured to predict the path and the mode of motion of the obstacle vehicle 109.
  • the environment predictor may be stored in the memory 127.
  • the processor 125 determines a safety constraint for the obstacle vehicle 109 based on the predicted path and the mode of motion of the obstacle vehicle 109. For example, if the predicted path is the path 109b and the predicted mode is the cruising mode, then the safety constraint is determined around the state of the obstacle vehicle 109 along its path.
  • FIG. IE shows a schematic illustrating a safety margin 141 determined around the state of the obstacle vehicle 109 along its path 109b, according to an embodiment of the present disclosure.
  • the safety margin 141 is determined as the safety constraint for the cruising mode because, in the cruising mode, the obstacle vehicle 109 takes less aggressive steering actions, and its motion can be predicted with relatively high confidence. However, in the maneuvering mode, it is difficult to predict the motion of the obstacle vehicle 109 and the obstacle vehicle 109 tends to occupy a larger space to adjust its orientation. To that end, for the maneuvering mode, the safety constraint is determined as safety bounds in all driving directions around the state of the obstacle vehicle 109.
  • FIG. IF shows a schematic illustrating safety bounds 143a and 143b, according to an embodiment of the present disclosure.
  • the safety bounds 143a and 143b define an area 145 that the autonomous ego-vehicle 115 should avoid if the obstacle vehicle 109 is traversing the path 109b in the maneuvering mode.
  • the processor 125 parks the autonomous ego-vehicle 115 by controlling the autonomous ego-vehicle 115 based on the trajectory 121 and the determined safety constraint.
  • the trajectory 121 may be determined based on a kinematic model of the autonomous ego-vehicle 115 that describes a motion of the autonomous ego-vehicle 115 without consideration of a mass/inertia of the autonomous ego-vehicle 115 or forces/torques that caused the motion.
  • a kinematic model is given by ẋ = v cos(θ), ẏ = v sin(θ), θ̇ = (v / b) tan(δ), (1) where (x, y) is the position, θ is the orientation, v is the longitudinal velocity, δ is the steering angle, and b is the wheelbase of the autonomous ego-vehicle 115.
  • the kinematic model can include an additional state variable to ensure continuous curvature.
  • the kinematic model can include a longitudinal velocity, a steering angle, and a steering angle rate as additional state variables.
  • the motion of the autonomous ego-vehicle 115 is determined based on a dynamic model of the autonomous ego-vehicle 115.
  • the dynamic model of the autonomous ego-vehicle 115 accounts for time-dependent changes in the state of the autonomous egovehicle 115.
  • the dynamic models are represented by ordinary differential equations.
  • a path/motion is kinematically feasible if it is a solution of the kinematic model (1).
  • An autonomous ego-vehicle state X = (x, y, θ) is collision-free only if the autonomous ego-vehicle 115 located at X does not collide with any obstacle. All collision-free vehicle states constitute a collision-free configuration space C_free.
  • the autonomous ego-vehicle state belongs to a state space X: [0, L) × [0, H) × [0, 2π).
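A conservative membership test for the collision-free configuration space C_free described above can be sketched as follows; the disc over-approximation of the footprint and the radii are illustrative assumptions, not the embodiments' exact collision check.

```python
import math

def is_collision_free(state, obstacles, ego_radius=1.5):
    """Conservative test for membership in C_free: the ego footprint at
    state X = (x, y, theta) is over-approximated by a disc of radius
    ego_radius and checked against disc obstacles (cx, cy, r).
    A state passing this disc test is collision-free; failing it may
    still be safe for the true rectangular footprint (conservative)."""
    x, y, _theta = state
    return all(math.hypot(x - cx, y - cy) > r + ego_radius
               for (cx, cy, r) in obstacles)
```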
  • a path/motion is generated according to the dynamic model (2).
  • a dynamically feasible path/motion is a solution of the dynamic model (2).
  • the path of the obstacle vehicle 109 can be determined based on a position and a velocity of the obstacle vehicle 109.
  • some embodiments are based on the realization that the position and the velocity are insufficient to estimate the modes of the motion of the obstacle vehicle 109. This is because a vehicle at the same position and having the same velocity at any instance of time can be in different modes.
  • some embodiments are based on the realization that the steering angle can be more indicative of the modes of the motion because, for the obstacle vehicle 109 in the cruising mode, the steering angle is more aligned with the velocity and/or orientation of the obstacle vehicle 109 than it is in the maneuvering mode.
  • the environment predictor estimates the position, the velocity, and the steering angle of the obstacle vehicle 109 at different instances of time and estimates the path of the obstacle vehicle 109 based on the position and the velocity of the obstacle vehicle 109 at the different instances of time.
  • the environment predictor further estimates the mode of the motion of the obstacle vehicle 109 based on the steering angle of the obstacle vehicle 109 at the different instances of time.
  • the environment predictor determines the mode of the motion of the obstacle vehicle 109 based on a misalignment of the orientation of the obstacle vehicle 109 and the steering angle at the different instances of time.
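The steering-based mode distinction above can be sketched with a simple window statistic; the 0.2 rad threshold and the statistics used are illustrative assumptions, not the embodiments' classifier.

```python
def classify_mode(steering_angles, threshold=0.2):
    """Classify the mode of motion from a window of steering angles.

    Sketch of the idea above: in the cruising mode the steering angle
    stays small and steady (aligned with the orientation), while in
    the maneuvering mode it is large and changes frequently.
    """
    mean_abs = sum(abs(a) for a in steering_angles) / len(steering_angles)
    # The spread between samples captures "changes the steering angle
    # frequently"; the mean captures misalignment with the orientation.
    spread = max(steering_angles) - min(steering_angles)
    if mean_abs < threshold and spread < threshold:
        return "cruising"
    return "maneuvering"
```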
  • the environment predictor used to determine the path and the mode of motion of the obstacle vehicle 109 is explained in detail below in FIG. 2.
  • FIG. 2 shows a schematic 200 of the environment predictor, according to an embodiment of the present disclosure.
  • a motion estimator 201 estimates a current state 203 of the obstacle vehicle 109, and a motion predictor 205 predicts a path 207 over a finite time horizon.
  • the measurements include one or a combination of the target state 119, map information, the position, and the orientation of the obstacle vehicle 109, and the geometric information of the obstacle vehicle 109.
  • a long-term mode predictor 209 determines a probability 211 of each mode that the obstacle vehicle 109 possibly belongs to.
  • the processor 125 determines a safety constraint 213.
  • the motion estimator 201 can be realized by implementing an Extended Kalman Filter (EKF) based on discretization of the following continuous-time unicycle model ẋ = v cos(θ) + q_1, ẏ = v sin(θ) + q_2, θ̇ = ω + q_3, v̇ = q_4, ω̇ = q_5, (3) with measurements (x + r_1, y + r_2, θ + r_3), where ω represents a yaw rate of the obstacle vehicle 109, q_i, 1 ≤ i ≤ 5, are Gaussian processes, with zero mean and certain covariances, representing model uncertainties, and r_i, 1 ≤ i ≤ 3, are Gaussian processes, with zero mean and certain covariances, representing measurement noises.
  • An output of the EKF, e.g., the estimated current state 203, is denoted by (x, y, θ, v, ω).
  • performing motion estimation with the motion estimator 201 based on the model (3) implicitly restricts the velocity and yaw rate to be constant over the entire prediction time horizon to obtain the predicted path 207 of the obstacle vehicle 109, and thus the mean of the predicted path 207 is always a circle in the 2D plane.
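The circular-mean behavior noted above can be reproduced with a short Euler rollout of the unicycle mean under constant velocity and yaw rate; the noise terms are dropped, and the step size and horizon are illustrative assumptions.

```python
import math

def predict_unicycle(x, y, theta, v, omega, dt=0.01, steps=500):
    """Propagate the noise-free unicycle mean with constant v and omega.

    Because the velocity and yaw rate are held constant over the
    horizon, the returned mean path lies (up to discretization error)
    on a circle of radius v / omega in the 2D plane."""
    path = []
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        path.append((x, y))
    return path
```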
  • the motion estimator 201 can be realized based on discretization of the following continuous-time bicycle model ẋ = v cos(θ) + q_1, ẏ = v sin(θ) + q_2, θ̇ = (v / b) tan(δ) + q_3, v̇ = q_4, δ̇ = q_5, (4) where δ represents the steering angle of the obstacle vehicle 109, b is its wheelbase, q_i, 1 ≤ i ≤ 5, are Gaussian processes representing model uncertainties, and r_i, 1 ≤ i ≤ 3, are Gaussian processes representing measurement noises.
  • state estimators such as EKF or particle filter are applied for the state estimation.
  • Some embodiments are based on an observation that it is not straightforward to tune the EKF for accurate estimation of the steering angle. This is partially attributed to the term v tan(δ), which involves a multiplication of unmeasured states. Meanwhile, heavy computation also presents a hurdle for the adoption of the particle filter.
  • some embodiments provide the motion estimator 201 based on a multi-stage estimation framework. The motion estimator 201 based on the multi-stage estimation framework is explained below in FIG. 3A.
  • FIG. 3A shows a schematic of the motion estimator 201 based on the multi-stage estimation framework, according to an embodiment of the present disclosure.
  • An EKF 301 is configured to estimate (x, y, v) of the obstacle vehicle 109, based on the measurements and a partial dynamic model given by ẋ = v cos(θ) + q_1, ẏ = v sin(θ) + q_2, v̇ = q_3, (5) in which the orientation θ is taken from the measurements.
  • the EKF 301 integrates measurements of the position of the obstacle vehicle 109 to estimate the velocity of the obstacle vehicle 109.
  • the EKF 301 outputs an estimate 303 denoted as (x, y, v).
  • the estimates of the velocity at the different instances of time may be integrated forward in time to produce estimates of the steering angle at the different instances of time.
  • the estimate 303 drives an adaptive state estimator 305 to estimate the steering angle based on the following model θ̇ = (v / b) tan(δ). (6)
  • the adaptive state estimator's 305 output is denoted as (θ, δ).
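A crude finite-difference stand-in for this second estimation stage: given heading and speed estimates from the first stage, the steering angle follows from the kinematic relation θ̇ = (v / b) tan(δ). The wheelbase and sample time are assumed values, and the differencing replaces the adaptive estimator for illustration only.

```python
import math

def estimate_steering(headings, speeds, wheelbase=2.7, dt=0.1):
    """Recover steering angles from heading-rate and speed estimates
    via delta = atan(theta_dot * wheelbase / v); a simplified stand-in
    for the adaptive state estimator, for illustration only."""
    deltas = []
    for k in range(1, len(headings)):
        theta_dot = (headings[k] - headings[k - 1]) / dt
        v = max(speeds[k], 1e-6)  # guard against division by zero at standstill
        deltas.append(math.atan(theta_dot * wheelbase / v))
    return deltas
```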
  • a multi-stage framework similar to the multi-stage framework described in FIG. 3A, is employed to solve the motion estimation based on the dynamic model (7).
  • FIG. 3B shows a schematic for the motion estimation based on the dynamic model (7), according to an embodiment of the present disclosure.
  • the EKF 301 estimates (x, y, v, a) based on discretization of the dynamic model
  • the EKF 301 outputs an estimate 307 denoted as (x, y, v, a).
  • the adaptive state estimator 305, addressing the nonlinearity in the θ- and s-dynamics, performs estimation based on the remaining dynamic model.
  • the adaptive state estimator 305 produces an estimate (θ, s, v_s).
  • the estimate of the current state 203 of the obstacle vehicle 109 is a combination of the estimate 307 and the estimate (θ, s, v_s), i.e., (x, y, θ, v, a, s, v_s), where in some embodiments θ in the current state 203 is from the EKF 301, and in another embodiment θ in the current state 203 is from the adaptive state estimator 305.
  • the adaptive state estimator 305, adopting the multi-stage framework, first estimates s based on v, θ from the EKF 301, and then estimates v_s based on the dynamics of s, v_s, while treating s as a measurement.
  • with the estimated acceleration a, along with (x, y, θ, v, s), the path of the obstacle vehicle 109 can be predicted accurately.
  • FIG. 3C shows schematics of paths of the obstacle vehicle 109 predicted using different models, according to an embodiment of the present disclosure.
  • An example path predicted using the unicycle model (3) is represented by 309.
  • An example path predicted using the bicycle model (4) is represented by 311.
  • An example path predicted using the dynamic model (7) is represented by 313.
  • the EKF 301 and the adaptive state estimator 305 are used to estimate a state [x, y, θ, δ, v]^T of the obstacle vehicle 109, based on the bicycle model (4).
  • the adaptive state estimator 305 can be designed as follows: where M is an auxiliary signal, g, λ ∈ ℝ are tuning gains, θ_a is an estimated heading angle, and v is a velocity estimated by the EKF 301.
  • FIG. 3D illustrates an example short-term motion prediction result 315 based on the bicycle model and an example short-term motion prediction result 317 based on the unicycle model, according to an embodiment of the present disclosure.
  • Bounding box footprints are 319 and 321.
  • a prediction deviation from the ground truth 323 is smaller when using the constant steering angle provided by the bicycle model than when using the constant yaw rate provided by the unicycle model.
  • one embodiment uses a kinematic simulator that takes constant inputs and records the state-changes throughout a period of time.
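Such a constant-input kinematic simulator can be sketched as follows; the wheelbase, step size, and horizon are illustrative assumptions.

```python
import math

def kinematic_rollout(state, v, delta, wheelbase=2.7, dt=0.1, steps=30):
    """Hold the inputs (speed v, steering angle delta) constant and
    record the state at every step of the kinematic model:
    x' = v cos(theta), y' = v sin(theta), theta' = (v / b) tan(delta)."""
    x, y, theta = state
    trace = [(x, y, theta)]
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += v / wheelbase * math.tan(delta) * dt
        trace.append((x, y, theta))
    return trace
```

With zero steering the recorded states march straight ahead; a nonzero constant steering input traces an arc, which is the state-change record the simulator is meant to capture.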
  • a long-term motion of the obstacle vehicle 109 in the parking area 100 can be characterized by possible paths (including 201a and 201b as an example), possible modes (maneuvering or cruising for each path), and a probability of the obstacle vehicle 109 following a certain mode. Combining the path and mode information, the obstacle vehicle 109 that has N possible paths to follow may have 2N possible modes. According to an embodiment, prediction of the long-term motion, at the k-th time step, corresponds to estimation of the probability of each mode by the long-term mode predictor 209.
  • FIG. 4A shows a schematic of the long-term mode predictor 209 for the estimation of the probability of each mode, according to an embodiment of the present disclosure.
  • the long-term motion prediction corresponds to estimation of the probability of each mode m based on the current state 203 and the predicted path 207, a belief about the mode probability b(m_{k-1}) at the prior time step, a mode belief forward propagation matrix, and a conditional probability of the current state 203 and the predicted path 207 given mode m.
  • a Bayesian framework is employed to keep track of a belief of each mode, i.e., b(m_k).
  • the processor 125 performs mode belief forward propagation to determine the mode belief forward propagation matrix p(m_k | m_{k-1}).
  • the processor 125 performs a mode belief posterior update, based on a cost map 405 and the mode belief forward propagation matrix p(m_k | m_{k-1}), to determine a posterior probability distribution over the modes given the short-term prediction and the route information.
  • the cost map 405 is used to exploit the obstacle vehicle 109's relative movement against the parking area environment and to capture possible long-term movements of the obstacle vehicle 109.
  • the posterior probability distribution is proportional to the prior multiplied by the conditional probability of the current state 203 and the predicted path 207 given the mode.
  • the processor 125 normalizes the posterior probability distribution to obtain P(m_k) 409, based on which, at block 411, the processor 125 selects the value of m_k with the highest belief to be the mode m_k 413 of the obstacle vehicle 109. Further, in some embodiments, at block 415, P(m_k) 409 and the mode m_k 413 are merged as the mode probability 211 for determining a safety set. The determination of the safety set is explained in detail in FIG. 5. For example, the safe set is the safe area that the car can occupy, i.e., the area excluding the spaces enclosed by the safety margin and the safety bounds. In some embodiments, the safe set is represented as safety constraints.
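The forward-propagation, posterior-update, and normalization steps above can be sketched as a standard Bayesian filter over discrete modes. The transition probabilities and likelihood values used in the example are illustrative placeholders, not values from the disclosure.

```python
def update_mode_belief(prior, transition, likelihood):
    """One Bayesian mode-belief update: forward-propagate the prior through
    the mode transition matrix, multiply by the observation likelihood of
    each mode, and normalize (illustrative sketch).
    prior: dict mode -> probability; transition: dict (m_prev, m) -> prob;
    likelihood: dict mode -> p(observation | mode)."""
    modes = list(prior)
    # Forward propagation: b_pred(m) = sum_m' p(m | m') * b(m').
    predicted = {m: sum(transition[(mp, m)] * prior[mp] for mp in modes)
                 for m in modes}
    # Posterior is proportional to the predicted belief times the likelihood.
    posterior = {m: predicted[m] * likelihood[m] for m in modes}
    total = sum(posterior.values())
    posterior = {m: p / total for m, p in posterior.items()}
    best = max(posterior, key=posterior.get)  # mode with the highest belief
    return posterior, best
```

Selecting `best` mirrors block 411: the mode with the highest normalized belief is taken as the predicted mode of the obstacle vehicle.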
  • FIG. 4B shows a schematic for formulation of the cost map 405, according to an embodiment of the present disclosure.
  • the cost map 405 may be determined based on a vehicle state X_cc,k 405a, a short-term predicted trajectory X_H,k 405b, and a route planner 405c.
  • the cost map 405 represents a conditional probability of the obstacle vehicle 109 following the predicted path 207 given that the obstacle vehicle 109 is in the mode m.
  • a Boltzmann policy is adopted to represent this conditional probability.
  • Function M_route(m_k, X_cc,k, X_H,k) measures a distance between the predicted path 207 and the path 109b related to m_k, in which each i-th waypoint of the path in mode m_k is compared against the predicted trajectory under weighting matrices. A second function is proportional to a magnitude of the obstacle vehicle 109's steering angle and a deviation of the obstacle vehicle 109's heading angle θ_cc,k from a final heading angle of the path.
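The Boltzmann policy over modes can be sketched as a softmax of negated route-distance costs: modes whose associated path lies closer to the predicted trajectory receive exponentially higher conditional probability. The temperature parameter `beta` is an assumed tuning value.

```python
import math

def boltzmann_mode_probs(route_costs, beta=1.0):
    """Boltzmann (softmax) policy over modes: the conditional probability of
    observing the predicted path given mode m decreases exponentially with
    the route-distance cost of that mode (illustrative sketch)."""
    weights = {m: math.exp(-beta * c) for m, c in route_costs.items()}
    z = sum(weights.values())
    return {m: w / z for m, w in weights.items()}
```
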
  • FIG. 5 shows a block diagram for determining the safety constraint (such as safety margin 141 and the safety bounds 143a and 143b) and the safety set, according to an embodiment of the present disclosure.
  • the mode probability 211 is split into P(m_k) 409 and the mode 413.
  • the processor 125 determines if the obstacle vehicle 109 is in the cruising mode, based on 413. If the obstacle vehicle 109 is in the cruising mode, then, at block 505, the processor 125 determines safety margins (for example, the safety margin 141 shown in FIG. 1E).
  • the safety margin of the h-th future time step is an ellipsoid, and the lengths of its principal semi-axes s_{k+h} are proportional to a differential entropy of the mode belief and a covariance from the motion estimation (e.g., a term involving log(b(m_k))).
  • the processor 125 contemplates that the obstacle vehicle 109 is in the maneuvering mode and, at block 507, the processor 125 determines safety bounds (for example, the safety bounds 143a and 143b shown in FIG. 1F). Further, at block 509, the processor 125 determines a safety set based on the safety margin and/or the safety bounds. Additionally or alternatively, some embodiments use the differential entropy of the mode belief as the mode belief uncertainty metric.
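The growth of the ellipsoidal margin with uncertainty can be sketched as below; the `base` and `scale` constants, and the use of Shannon entropy of the discrete mode belief as the uncertainty metric, are assumptions for illustration rather than the disclosure's exact formula.

```python
import math

def margin_semi_axes(belief, cov_xx, cov_yy, base=0.5, scale=1.0):
    """Lengths of the ellipsoidal safety-margin semi-axes, grown with the
    entropy of the mode belief and the position covariance from the motion
    estimator (illustrative sketch; base/scale are assumed tuning values)."""
    # Entropy of the discrete mode belief as the uncertainty metric.
    entropy = -sum(p * math.log(p) for p in belief.values() if p > 0)
    a = base + scale * entropy + math.sqrt(cov_xx)
    b = base + scale * entropy + math.sqrt(cov_yy)
    return a, b
```

A uniform (uncertain) mode belief yields a larger margin than a peaked (confident) one, which is the qualitative behavior the disclosure describes.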
  • Some embodiments are based on the realization that the motion of the autonomous ego-vehicle 115 in the parking area 100 may be constantly adapted in response to changes of the parking area environment. Hence, even after the trajectory 121 for the autonomous ego-vehicle 115 is determined, a need to adjust the trajectory 121 may arise at any time. To that end, to control the parking of the autonomous ego-vehicle 115, some embodiments use a strategic motion planner that varies control strategy based on different situations in the parking area 100.
  • the strategic motion planner uses three different strategies based on different situations.
  • the strategic motion planner includes a model predictive control-based safety controller for trajectory tracking, a search-based retreat planner for finding an evasion path, and an optimization-based repair planner for planning a new trajectory/repair path when a reference trajectory is invalidated.
  • the strategic motion planner strikes a balance between safety, plan feasibility, and smooth maneuvering by leveraging advantages of optimization-based and search-based approaches.
  • some embodiments are based on the realization that the strategic motion planner can be integrated with the environment predictor to form an integrated motion prediction and planning framework. Such a framework is explained below in FIG. 6.
  • FIG. 6 shows a block diagram of a motion prediction and planning framework 600, according to an embodiment of the present disclosure.
  • the motion prediction and planning framework 600 includes a central controller 601, an environment predictor 603 and a strategic motion planner 605.
  • the environment predictor 603 is the same as the environment predictor described in FIG. 2.
  • the central controller 601, the environment predictor 603 and the strategic motion planner 605 are stored in the memory 127.
  • the central controller 601 first initializes 607 a path planner 611.
  • the path planner 611 generates a reference trajectory 613, based on a parking area map 609, without considering the moving obstacle vehicle 109.
  • a Bi-directional A-search Guide Tree (BIAGT) algorithm is used as the path planner 611, as it is guaranteed to plan a trajectory that brings the autonomous ego-vehicle 115 to a target state.
  • the environment predictor 603 monitors the parking area environment and produces a prediction 615 of movements of the obstacle vehicle 109. Based on the prediction 615, the strategic motion planner 605 checks 617 if a current state of the autonomous ego-vehicle 115 is violating the safety constraint. If there is a safety violation (i.e., the violation of the safety constraint), then a retreat planner 619 is triggered. The retreat planner 619 determines a retreat path to avoid collision with the obstacle vehicle 109 and updates the reference trajectory T.
  • the strategic motion planner 605 checks 621 if the reference trajectory T needs to be repaired. For example, the strategic motion planner 605 checks if it is possible to track the path of the reference trajectory 613 without stopping the autonomous ego-vehicle 115. If it is not possible to track the path of the reference trajectory 613 without stopping the autonomous ego-vehicle 115, then a repair planner 623 is activated. The repair planner 623 updates the reference trajectory T.
  • the strategic motion planner 605 determines if it is possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115. If it is determined that it is possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115, then a model predictive controller (MPC)-based safety controller 625 is executed to track the updated reference trajectory.
  • the MPC-based safety controller 625 submits control commands, e.g., steering angle and velocity, to the autonomous ego-vehicle 115 to track the updated reference trajectory.
  • If the repair planner 623 cannot succeed, for example, if it is determined that it is not possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115, then the repair planner 623 requests the central controller 601 to re-plan 627 by updating the parking area map 609 and regenerating a reference trajectory.
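The dispatch among retreat planner, repair planner, re-planning, and the MPC-based safety controller described above can be sketched as a simple decision function; the function and return labels are illustrative names, and the real framework updates the reference trajectory as a side effect rather than merely naming a branch.

```python
def strategic_step(safety_violated, trackable_without_stop, repair_succeeds):
    """Dispatch logic of the strategic motion planner: retreat on a safety
    violation, otherwise repair the trajectory if it cannot be tracked
    without stopping, otherwise track with the MPC-based safety controller.
    Returns the name of the activated component (illustrative sketch)."""
    if safety_violated:
        return "retreat_planner"
    if not trackable_without_stop:
        if repair_succeeds:
            return "repair_planner"
        return "replan_request"   # ask the central controller to re-plan
    return "mpc_safety_controller"
```
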
  • the MPC-based safety controller 625 computes tracking motions to follow the reference trajectory in an MPC framework. A segment of the reference trajectory to be tracked at time step k is selected and trimmed so that it neither violates the safety margin (in all modes) nor the safety bounds (in maneuvering mode).
  • the trajectory tracking problem is formulated as follows: given a planning horizon H, a model of the autonomous ego-vehicle 115, and the trimmed reference segment, solve an optimization problem (14) for a sequence of control actions u_k subject to constraints that maintain the autonomous ego-vehicle 115 clear of the safety margins and safety bounds.
  • the optimization problem (14) can be readily formulated as a non-convex optimization problem using various software tools and solved using nonlinear programming solvers, e.g., IPOPT.
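Problem (14) itself is a nonlinear program handed to a solver such as IPOPT. As a self-contained toy stand-in (not the disclosure's formulation), a receding-horizon tracker can enumerate a small discrete set of constant inputs over a short horizon under an assumed unicycle model and pick the minimum-tracking-cost action:

```python
import itertools, math

def toy_mpc_track(state, ref, dt=0.2, horizon=3):
    """Toy receding-horizon tracker: enumerate constant (accel, yaw-rate)
    pairs over a short horizon and pick the one minimizing the summed
    squared distance to the reference segment. A stand-in for the nonlinear
    program (14), which the disclosure solves with, e.g., IPOPT."""
    actions = list(itertools.product([-1.0, 0.0, 1.0],      # acceleration
                                     [-0.3, 0.0, 0.3]))     # yaw rate
    best_cost, best_action = float("inf"), None
    for a, w in actions:
        x, y, th, v = state
        cost = 0.0
        for i in range(horizon):
            v += a * dt
            th += w * dt
            x += v * math.cos(th) * dt
            y += v * math.sin(th) * dt
            rx, ry = ref[min(i, len(ref) - 1)]
            cost += (x - rx) ** 2 + (y - ry) ** 2
        if cost < best_cost:
            best_cost, best_action = cost, (a, w)
    return best_action
```

For a vehicle already aligned with a straight reference at matching speed, the zero-input action minimizes the cost, as expected.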
  • FIG. 7A shows an example scenario where the safety constraint is violated and the retreat planner 619 is triggered, according to an embodiment of the present disclosure.
  • Upper row 701a shows a prediction at time step k, where a short-term motion predicted trajectory 703 and a safety margin 705 are shown.
  • the processor 125 checks if the safety margin 705 at a future time step is in collision with the autonomous ego-vehicle's 115 current location. In other words, the processor 125 checks if the safety constraint, i.e., the safety margin 705, is violated.
  • the safety constraint violation normally does not happen unless the obstacle vehicle 109 changes its movement drastically.
  • FIG. 7B shows a block diagram of a method of the retreat planner 619 for determining the path between the current location and the safe spot for retreating (i.e., the retreat path), according to an embodiment of the present disclosure.
  • the retreat planner 619 constructs a tree composed of a node set V and an edge set E, where a node in the node set V represents a collision-free state that can be reached from the current location, and an edge in E represents a feasible path between two nodes. The collision-free state space, representing all possible states where the autonomous ego-vehicle 115 does not overlap any obstacles, is implicitly obtained by checking collisions with obstacles in the parking area map.
  • Let M denote a finite set of motion primitives pre-computed through available control actions.
  • V max denotes a maximum number of nodes allowed.
  • the processor 125 adds the current location as a root node to a priority queue.
  • the processor 125 selects a node with the lowest cost, pops it from the priority queue, and expands it by applying the precomputed motion primitives.
  • the processor 125 checks if all the precomputed motion primitives have been applied. If not all the precomputed motion primitives have been applied, then, at block 713, the processor 125 applies a new motion primitive to produce an edge set 715.
  • the processor 125 determines if the edge set 715 is collision free. If the edge set is not collision free, then again a new motion primitive is applied to produce a new edge set.
  • the processor 125 records a child node at which the edge set ends. Further, at block 721, the processor determines if the child node satisfies safe clearance, for example, if it is a certain distance away. If the child node does not satisfy the safe clearance, then, at block 723, the processor 125 evaluates the node cost. Further, at block 725, the processor 125 pushes the child node into the priority queue. If the child node satisfies the safe clearance, then, at block 727, the processor calculates a path from the current location to the child node. The path to the child node corresponds to the retreat path.
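The search loop above (pop the lowest-cost node, expand with motion primitives, keep collision-free children, stop at safe clearance) can be sketched with a priority queue; the primitive, collision, safety, and cost callbacks in the example are hypothetical stand-ins for the disclosure's precomputed primitives and cost map.

```python
import heapq

def retreat_search(start, primitives, collision_free, is_safe,
                   node_cost, max_nodes=10000):
    """Search-based retreat planner sketch: pop the lowest-cost node from a
    priority queue, expand it with precomputed motion primitives, keep
    collision-free children, and stop at the first child with safe
    clearance, returning the path from the start."""
    counter = 0                       # tie-breaker for the heap
    queue = [(node_cost(start), counter, start, [start])]
    seen = {start}
    while queue and len(seen) < max_nodes:
        _, _, node, path = heapq.heappop(queue)
        for prim in primitives:
            child = prim(node)
            if child in seen or not collision_free(node, child):
                continue
            seen.add(child)
            if is_safe(child):
                return path + [child]  # retreat path found
            counter += 1
            heapq.heappush(queue, (node_cost(child), counter,
                                   child, path + [child]))
    return None                        # no retreat path found
```

On a toy grid with one blocked cell, the search returns a collision-free path ending at the first node that satisfies the clearance test.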
  • FIG. 7C shows a schematic illustrating the evaluation of the node cost, according to an embodiment of the present disclosure.
  • a node cost function f(·) 735 sums up a heuristic value h(·) and an arrival cost g(·) 733.
  • the heuristic value is calculated based on a collision-cost map 729 measuring a risk of collision with the obstacle vehicle 109 if the autonomous ego- vehicle 115 stays at the node.
  • the heuristic value can alternatively be interpreted as a potential cost to a safe spot. Typically, the closer the node is to the safe spot, the smaller the heuristic value.
  • the arrival cost 733 of a node represents a distance that the autonomous ego-vehicle 115 has to travel from the root node (representing the current state) to the node.
  • the collision-cost map 729 is a weighted sum of Gaussian distributions centered at waypoints of both the predicted trajectory X_{H,k} and routes on the cost map X_{m,k}, where c is a weighting constant, and W_4, W_5 are weighting matrices.
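The weighted sum of Gaussians can be sketched as follows; the disclosure uses weighting matrices W_4, W_5, whereas an isotropic `sigma` is assumed here for simplicity.

```python
import math

def collision_cost(point, waypoints, weight=1.0, sigma=1.0):
    """Collision-cost map sketch: a weighted sum of isotropic Gaussians
    centered at the predicted waypoints of the obstacle vehicle. Cost is
    high near predicted waypoints and decays with distance."""
    px, py = point
    cost = 0.0
    for wx, wy in waypoints:
        d2 = (px - wx) ** 2 + (py - wy) ** 2
        cost += weight * math.exp(-d2 / (2.0 * sigma ** 2))
    return cost
```

Used as the heuristic in the retreat search, nodes near the obstacle vehicle's predicted waypoints are penalized, steering the tree toward safe spots.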
  • FIG. 7D shows a schematic illustrating a process of the retreat planner 619, according to an embodiment of the present disclosure.
  • the retreat planner 619 begins with a current location 737, which is likely leading to collision with the obstacle vehicle 109. In the beginning, since the priority queue only contains the root node, the retreat planner 619 selects the root node to expand. Applying a motion primitive 739 ends up with a collision-free new node 741a. Repeating the application of motion primitives at the root node results in adding another four new nodes to the tree: 741b, 741c, 741d, and 741e. All five new nodes 741a, 741b, 741c, 741d, and 741e are pushed into the priority queue.
  • The node with the lowest node cost is identified as the best node and selected for expansion. Repeating the process eventually yields a new node 745, which satisfies the safe clearance.
  • FIG. 8A illustrates an example scenario, where the obstacle vehicle 109 stops on a reference trajectory 801, according to an embodiment of the present disclosure.
  • the MPC-based safety controller 625 commands the autonomous ego-vehicle 115 to stop on the reference trajectory 801 when an area in front is infeasible.
  • the MPC-based safety controller 625 stops the autonomous ego-vehicle 115 and waits for the obstacle vehicle 109 to clear.
  • Some embodiments are based on the realization that the obstacle vehicle 109 can be treated as a static obstacle and the reference trajectory 801 can be updated/repaired by the repair planner 623 to form a repaired path 803 such that the autonomous ego-vehicle 115 can go around the obstacle vehicle 109 and merge back to the reference trajectory 801.
  • the repaired path 803 lies in the same homotopy class as the reference trajectory 801, which makes an optimization-based repair planner a viable solution.
  • the optimization-based repair planner takes the original waypoints 805 as initialization and moves them to optimal collision-free locations 807 to form the repaired path 803.
  • the kinematic model is
  • a decision variable for each time step is u k
  • an input vector that is optimized is denoted as u := [u_1^T, u_2^T, ..., u_H^T]^T, where H is a planning horizon.
  • a resulting state vector is z
  • the cost function is quadratic, which is convex and regular.
  • the first term penalizes a distance from the goal x_f, and the second term penalizes the input.
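The quadratic cost can be sketched for a 1-D single-integrator stand-in model (the disclosure's model and exact cost terms are not reproduced here; `q` and `r` are assumed weights, and only the terminal state is penalized in this simplification):

```python
def quadratic_repair_cost(u_seq, x0, x_goal, q=1.0, r=0.1):
    """Quadratic repair-planner cost sketch for a 1-D single integrator
    z_{k+1} = z_k + u_k: the first term penalizes the distance of the
    terminal state from the goal x_f, the second penalizes the inputs."""
    z = x0
    input_cost = 0.0
    for u in u_seq:
        z += u            # roll the single-integrator state forward
        input_cost += r * u * u
    return q * (z - x_goal) ** 2 + input_cost
```

An input sequence that exactly reaches the goal incurs only the input penalty, which is the trade-off the quadratic form encodes.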
  • Some embodiments are based on an understanding that the repaired path 803, despite being collision-free, cannot always be followed accurately because the repaired path 803 may not satisfy the autonomous ego-vehicle's 115 kinematics or dynamics, causing the autonomous ego-vehicle 115 to collide with obstacles. Therefore, there is a need to verify the feasibility of the repaired path 803 before accepting it.
  • FIG. 8B shows a block diagram for verifying the feasibility of the repaired path 803, according to an embodiment of the present disclosure.
  • an iterative linear quadratic regulator (ILQR) tracking controller is used.
  • the ILQR is an optimization-based controller configured to generate a motion plan that tracks the repaired path according to the car kinematics if the repaired path is close to a kinematically feasible trajectory. If the ILQR fails, it means that the repaired path cannot be followed and, thus, the repair planner has failed.
  • the ILQR is an extension of linear quadratic regulator (LQR) control, and determines a sequence of controls over a given time horizon by solving an optimization problem.
  • the processor 125 determines if the repaired path 803 can be followed accurately. If the repaired path 803 can be followed accurately, then, at block 813, the processor 125 updates the reference trajectory 801 based on the repaired path 803. If it is determined that the repaired path 803 cannot be followed accurately, then, at block 815, the processor 125 requests the central controller 601 to provide a new reference trajectory.
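As a minimal illustration of the LQR machinery that ILQR iterates on locally linearized dynamics, the finite-horizon backward Riccati recursion can be sketched as follows (a textbook construction, not the disclosure's specific tracker; the double-integrator matrices in the example are assumed for demonstration):

```python
import numpy as np

def lqr_gains(A, B, Q, R, horizon):
    """Finite-horizon discrete-time LQR via the backward Riccati recursion;
    ILQR repeats this on locally linearized dynamics at each iteration."""
    P = Q.copy()
    gains = []
    for _ in range(horizon):
        # K_t = (R + B' P B)^-1 B' P A; P updated backward in time.
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]   # gains ordered from t = 0 to horizon - 1
```

Simulating the closed loop x_{k+1} = (A - B K_k) x_k on a discretized double integrator drives the state toward the origin, which is the stabilizing behavior the tracking controller relies on.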
  • FIG. 9 is a schematic diagram illustrating a system that can be used for implementing systems and methods of the present disclosure.
  • the system 900 includes one or combination of a transceiver 901, an inertial measurement unit (IMU) 903, a display 905, a sensor(s) 907, a memory 909, and a processor 911, operatively coupled to each other through connections 913.
  • the connections 913 can comprise buses, lines, fibers, links, or combination thereof.
  • the transceiver 901 can, for example, include a transmitter enabled to transmit one or more signals over one or more types of wireless communication networks and a receiver to receive one or more signals transmitted over the one or more types of wireless communication networks.
  • the transceiver 901 can permit communication with wireless networks based on a variety of technologies such as, but not limited to, femtocells, Wi-Fi networks or Wireless Local Area Networks (WLANs), which may be based on the IEEE 802.11 family of standards, Wireless Personal Area Networks (WPANs) such as Bluetooth, Near Field Communication (NFC), networks based on the IEEE 802.15x family of standards, and/or Wireless Wide Area Networks (WWANs) such as LTE, WiMAX, etc.
  • the system 900 can also include one or more ports for communicating over wired networks.
  • the system 900 can comprise image sensors such as CCD or CMOS sensors, lasers and/or camera, which are hereinafter referred to as "sensor 907".
  • the sensor 907 can convert an optical image into an electronic or digital image and can send acquired images to processor 911. Additionally, or alternatively, the sensor 907 can sense the light reflected from a target object in a scene and submit the intensities of the captured light to the processor 911.
  • the sensor 907 can include color or grayscale cameras, which provide "color information."
  • color information refers to color and/or grayscale information.
  • a color image or color information can be viewed as comprising 1 to N channels, where N is some integer dependent on the color space being used to store the image.
  • an RGB image comprises three channels, with one channel each for Red, Blue, and Green information.
  • the sensor 907 can include a depth sensor for providing "depth information."
  • the depth information can be acquired in a variety of ways using depth sensors.
  • depth sensor is used to refer to functional units that may be used to obtain depth information independently and/or in conjunction with some other cameras.
  • the depth sensor and the optical camera can be part of the sensor 907.
  • the sensor 907 includes RGBD cameras, which may capture per-pixel depth (D) information when the depth sensor is enabled, in addition to color (RGB) images.
  • the sensor 907 can include a 3D Time Of Flight (3DTOF) camera.
  • the depth sensor can take the form of a strobe light coupled to the 3DTOF camera, which can illuminate objects in a scene, and reflected light can be captured by a CCD/CMOS sensor in the sensor 907. Depth information can be obtained by measuring the time that the light pulses take to travel to the objects and back to the sensor.
  • the depth sensor can take the form of a light source coupled to the sensor 907.
  • the light source projects a structured or textured light pattern, which can include one or more narrow bands of light, onto objects in a scene. Depth information is obtained by exploiting geometrical distortions of the projected pattern caused by the surface shape of the object.
  • One embodiment determines depth information from stereo sensors such as a combination of an infra-red structured light projector and an infra-red camera registered to an RGB camera.
  • the sensor 907 includes stereoscopic cameras.
  • a depth sensor may form part of a passive stereo vision sensor, which may use two or more cameras to obtain depth information for a scene. The pixel coordinates of points common to both cameras in a captured scene may be used along with camera pose information and/or triangulation techniques to obtain per-pixel depth information.
  • the system 900 can be operatively connected to multiple sensors 907, such as dual front cameras and/or a front and rear-facing cameras, which may also incorporate various sensors.
  • the sensors 907 can capture both still and video images.
  • the sensor 907 can include RGBD or stereoscopic video cameras capable of capturing images at, e.g., 30 frames per second (fps).
  • images captured by the sensor 907 can be in a raw uncompressed format and can be compressed prior to being processed and/or stored in memory 909.
  • image compression can be performed by the processor 911 using lossless or lossy compression techniques.
  • the processor 911 can also receive input from IMU 903.
  • the IMU 903 can comprise 3-axis accelerometer(s), 3-axis gyroscope(s), and/or magnetometer(s).
  • the IMU 903 can provide velocity, orientation, and/or other position related information to the processor 911.
  • the IMU 903 can output measured information in synchronization with the capture of each image frame by the sensor 907.
  • the output of the IMU 903 is used in part by the processor 911 to fuse the sensor measurements and/or to further process the fused measurements.
  • the system 900 can also include a screen or display 905 rendering images, such as color and/or depth images.
  • the display 905 can be used to display live images captured by the sensor 907, fused images, augmented reality (AR) images, graphical user interfaces (GUIs), and other program outputs.
  • the display 905 can include and/or be housed with a touchscreen to permit users to input data via some combination of virtual keyboards, icons, menus, or other GUIs, user gestures and/or input devices such as styli and other writing implements.
  • the display 905 can be implemented using a liquid crystal display (LCD) display or a light emitting diode (LED) display, such as an organic LED (OLED) display.
  • the display 905 can be a wearable display.
  • the result of the fusion can be rendered on the display 905 or submitted to different applications that can be internal or external to the system 900.
  • Exemplary system 900 can also be modified in various ways in a manner consistent with the disclosure, such as, by adding, combining, or omitting one or more of the functional blocks shown.
  • the system 900 does not include the IMU 903 or the transceiver 901.
  • the system 900 can include a variety of other sensors (not shown) such as an ambient light sensor, microphones, acoustic sensors, ultrasonic sensors, laser range finders, etc.
  • portions of the system 900 take the form of one or more chipsets, and/or the like.
  • the processor 911 can be implemented using a combination of hardware, firmware, and software.
  • the processor 911 can represent one or more circuits configurable to perform at least a portion of a computing procedure or process related to sensor fusion and/or methods for further processing the fused measurements.
  • the processor 911 retrieves instructions and/or data from the memory 909.
  • the processor 911 can be implemented using one or more application specific integrated circuits (ASICs), central and/or graphical processing units (CPUs and/or GPUs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, embedded processor cores, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
  • the memory 909 can be implemented within the processor 911 and/or external to the processor 911.
  • the term "memory” refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of physical media upon which memory is stored.
  • the memory 909 holds program codes that facilitate the automated parking.
  • the memory 909 can store the measurements of the sensors, such as still images, depth information, video frames, program results, as well as data provided by the IMU 903 and other sensors.
  • the memory 909 can store a geometry of the vehicle, a map of the parking space, a kinematic model of the autonomous ego-vehicle, and a dynamic model of the autonomous ego-vehicle.
  • the memory 909 can represent any data storage mechanism.
  • the memory 909 can include, for example, a primary memory and/or a secondary memory.
  • the primary memory can include, for example, a random-access memory, read only memory, etc.
  • Secondary memory can include, for example, the same or similar type of memory as primary memory and/or one or more data storage devices or systems, such as, for example, flash/USB memory drives, memory card drives, disk drives, optical disc drives, tape drives, solid state drives, hybrid drives etc.
  • secondary memory can be operatively receptive of, or otherwise configurable to a non-transitory computer-readable medium in a removable media drive (not shown).
  • the non-transitory computer readable medium forms part of the memory 909 and/or the processor 911.
  • individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments.
  • a process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function’s termination can correspond to a return of the function to the calling function or the main function.
  • embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically.
  • Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof.
  • the program code or code segments to perform the necessary tasks may be stored in a machine readable medium.
  • a processor(s) may perform the necessary tasks.
  • Various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
  • Embodiments of the present disclosure may be embodied as a method, of which an example has been provided.
  • the acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Transportation (AREA)
  • Mechanical Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Combustion & Propulsion (AREA)
  • Automation & Control Theory (AREA)
  • Traffic Control Systems (AREA)

Abstract

The present disclosure provides a system and a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area. The method includes collecting measurements of a state of the dynamic environment, a state of one or multiple stationary vehicles and one or multiple obstacle vehicles moving in the parking area. The method further includes executing a path planner configured to produce a trajectory based on the state of the dynamic environment and executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles. The method further includes determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles and parking the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.

Description

[DESCRIPTION]
[Title of Invention]
SYSTEM AND METHOD FOR PARKING AN AUTONOMOUS EGO-VEHICLE IN A DYNAMIC ENVIRONMENT OF A PARKING AREA
[Technical Field]
[0001] The present disclosure relates generally to motion planning for autonomous vehicle parking, and more specifically to a system and a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area.
[Background Art]
[0002] Several parking systems are used in autonomous vehicles to automatically park the autonomous vehicle at a parking spot in a parking area. A parking task can be formulated as determining a path or motion of the autonomous vehicle from an initial state to a target state defining the parking spot and then controlling actuators of the autonomous vehicle, e.g., vehicle’s gas pedal and steering wheel, to ensure that the autonomous vehicle follows the determined path or motion.
[0003] However, the parking area may include obstacles, such as moving obstacle vehicles or pedestrians, which the autonomous vehicle should not collide with during parking at the parking spot. Further, the autonomous parking may also involve motion planning in a narrow space. Additionally, in contrast to driving on roads or highways, vehicle motions in the parking area do not have a clear set of rules to follow and largely depend on the driver's intention or even skill level. These make motion prediction of the obstacle vehicles and the motion planning of the autonomous vehicle for parking challenging.
[0004] The motion prediction, i.e., predicting future states of the obstacle vehicles, is crucial as it determines safety constraints that planning and control modules of the autonomous vehicle have to consider, and thus impacts feasibility and smoothness of the motion planning or the path. Particularly, an accurate short-term motion prediction enables the autonomous vehicle to plan and react safely to the obstacle vehicles, whereas long-term plan/mode prediction allows the autonomous vehicle to plan efficiently and smoothly. Some approaches use an interacting multiple model (IMM) to predict short-term trajectories in parking, while others use classifiers to extract information from movements of the obstacle vehicles to generate short-term motion predictions. However, such data-driven methods are not desirable due to lack of guarantees and performance inconsistency across different data sets, and the data-driven methods may present a larger prediction error if the data set is chosen poorly.
[0005] Additionally, general motion planning algorithms are not directly suitable for parking in the presence of the obstacle vehicles, which requires rapid re-planning for complicated driving maneuvers. On the other hand, motion planners specialized in autonomous parking either fail to integrate short-horizon planning with long-horizon planning or cannot adequately incorporate online path repairing upon the appearance of new obstacles in the parking area.
[0006] Accordingly, there is still a need for a system and a method for parking the autonomous vehicle in a dynamic environment of the parking area.
[Summary of Invention]
[0007] It is an object of some embodiments to provide a system and a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area. Additionally, it is an object of some embodiments to provide a strategic motion planner that varies the control strategy of the autonomous ego-vehicle based on different situations in the parking area, for parking the autonomous vehicle. The parking area includes parking spots for parking vehicles. The parking area may further include stationary vehicles parked in the parking area and an obstacle vehicle moving in the parking area.
[0008] It is an object of some embodiments to park the autonomous ego-vehicle in such a dynamic environment of the parking area, without colliding with the stationary vehicles, the obstacle vehicle, and other obstacles that may be present in the parking area. For example, the autonomous ego-vehicle has an initial state and needs to be parked at a parking spot defined by a target state, without colliding with the stationary vehicles and the obstacle vehicle. Each state, e.g., the initial state and the target state, defines at least a position (x, y) in a 2D plane and an orientation θ of the autonomous ego-vehicle.
[0009] To achieve such an objective, some embodiments first collect measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of each of the stationary vehicles, and a state of the obstacle vehicle. In an embodiment, the state of the obstacle vehicle may include a position and an orientation of the obstacle vehicle, and geometric information of the obstacle vehicle, for example, its wheelbase. Further, a path planner is executed to produce a trajectory for parking the autonomous ego-vehicle, based on the state of the dynamic environment in the parking area.
[0010] Some embodiments are based on the realization that the motion of the obstacle vehicle needs to be considered for achieving collision-free parking of the autonomous ego-vehicle in the parking area. To that end, some embodiments aim to predict a path of the obstacle vehicle.
[0011] However, some embodiments are based on the realization that, considering the dynamic environment of the parking area, it is insufficient to predict only the path of the obstacle vehicle; it is also useful to determine a mode of the motion of the obstacle vehicle. Some embodiments are based on the observation that the obstacle vehicle runs in various modes in the parking area, such as a cruising mode and a maneuvering mode. In the cruising mode, the obstacle vehicle has small or steady steering angles. The obstacle vehicle runs in the cruising mode when the obstacle vehicle enters the parking area and is approaching a parking spot or when the obstacle vehicle is exiting the parking area. In the maneuvering mode, the obstacle vehicle changes the steering angle frequently and deviates from its path to park in or leave a narrow parking spot.
[0012] To that end, it is an objective of some embodiments to predict both the path and the mode of motion of the obstacle vehicle. To predict the path and the mode of motion of the obstacle vehicle, an environment predictor configured to predict the path and the mode of motion of the obstacle vehicle is executed.
[0013] Further, a safety constraint for the obstacle vehicle is determined based on the predicted path and the mode of motion of the obstacle vehicle. For example, if the predicted mode is the cruising mode, then the safety constraint is determined around the state of the obstacle vehicle along its path. The safety constraint determined around the state of the obstacle vehicle along its path is referred to as a safety margin. The safety margin is determined as the safety constraint for the cruising mode because, in the cruising mode, the obstacle vehicle takes less aggressive steering actions, and its motion can be predicted with relatively high confidence. However, in the maneuvering mode, it is difficult to predict the motion of the obstacle vehicle, and the obstacle vehicle tends to occupy a larger space to adjust its orientation. To that end, for the maneuvering mode, the safety constraint is determined as safety bounds in all driving directions around the state of the obstacle vehicle. Thus, if the predicted mode is the maneuvering mode, then the safety bounds are determined for the obstacle vehicle. The safety bounds define an area that the autonomous ego-vehicle should avoid if the obstacle vehicle is in the maneuvering mode.
[0014] Further, the autonomous ego-vehicle is parked by controlling the autonomous ego-vehicle based on the trajectory and the determined safety constraint.
[0015] Some embodiments are based on the realization that the motion of the autonomous ego-vehicle in the parking area may be constantly adapted in response to changes of the parking area environment. Hence, even after the trajectory for the autonomous ego-vehicle is determined, a need to adjust the trajectory may arise at any time. To that end, to control the parking of the autonomous ego-vehicle, some embodiments use a strategic motion planner that varies control strategy based on different situations in the parking area.
[0016] For example, in some embodiments, the strategic motion planner uses three different strategies based on different situations. The strategic motion planner includes a model predictive control-based safety controller for trajectory tracking, a search-based retreat planner for finding an evasion path in an emergency, and an optimization-based repair planner for modifying the path of the trajectory. The strategic motion planner strikes a balance between safety, plan feasibility, and smooth maneuvering by leveraging advantages of optimization-based and search-based approaches.
[0017] Accordingly, one embodiment discloses an integrated system for parking an autonomous ego-vehicle in a dynamic environment of a parking area. The integrated system comprises a processor; and a memory having instructions stored thereon that, when executed by the processor, cause the integrated system to: collect measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; execute a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; execute an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determine a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and park the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
[0018] Accordingly, another embodiment discloses a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area. The method comprises collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
[0019] Accordingly, yet another embodiment discloses a non-transitory computer-readable storage medium embodied thereon a program executable by a processor for performing a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area. The method comprises collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
[0020] The presently disclosed embodiments will be further explained with reference to the attached drawings. The drawings shown are not necessarily to scale, with emphasis instead generally being placed upon illustrating the principles of the presently disclosed embodiments.
[Brief Description of Drawings]
[0021]
[Fig. 1A]
FIG. 1A illustrates an example parking area, according to some embodiments of the present disclosure.
[Fig. 1B]
FIG. 1B shows a block diagram of a system for parking an autonomous ego-vehicle, according to an embodiment of the present disclosure.
[Fig. 1C]
FIG. 1C shows a block diagram of a method for parking the autonomous ego-vehicle in a dynamic environment of the parking area, according to an embodiment of the present disclosure.
[Fig. 1D]
FIG. 1D shows a schematic illustrating different paths along which an obstacle vehicle may traverse to exit the parking area, according to an embodiment of the present disclosure.
[Fig. 1E]
FIG. 1E shows a schematic illustrating a safety margin determined around the state of the obstacle vehicle along its path, according to an embodiment of the present disclosure.
[Fig. 1F]
FIG. 1F shows a schematic illustrating safety bounds, according to an embodiment of the present disclosure.
[Fig. 2]
FIG. 2 shows a schematic of an environment predictor, according to an embodiment of the present disclosure.
[Fig. 3A]
FIG. 3A shows a schematic of a motion estimator based on a multi-stage estimation framework, according to an embodiment of the present disclosure.
[Fig. 3B]
FIG. 3B shows a schematic for motion estimation based on a dynamic model, according to an embodiment of the present disclosure.
[Fig. 3C]
FIG. 3C shows schematics of paths of an obstacle vehicle predicted using different models, according to an embodiment of the present disclosure.
[Fig. 3D]
FIG. 3D illustrates an example short-term motion prediction result using a bicycle model and an example short-term motion prediction result using a unicycle model, according to an embodiment of the present disclosure.
[Fig. 4A]
FIG. 4A shows a schematic of the long-term mode predictor for estimation of a probability of each mode, according to an embodiment of the present disclosure.
[Fig. 4B]
FIG. 4B shows a schematic for formulation of a cost map, according to an embodiment of the present disclosure.
[Fig. 5]
FIG. 5 shows a block diagram for determining a safety constraint and a safety set, according to an embodiment of the present disclosure.
[Fig. 6]
FIG. 6 shows a block diagram of a motion prediction and planning framework, according to an embodiment of the present disclosure.
[Fig. 7A]
FIG. 7A shows an example scenario where the safety constraint is violated and a retreat planner is triggered, according to an embodiment of the present disclosure.
[Fig. 7B]
FIG. 7B shows a block diagram of a method of the retreat planner for determining a path between a current location and a safe spot for retreating, according to an embodiment of the present disclosure.
[Fig. 7C]
FIG. 7C shows a schematic illustrating an evaluation of node cost, according to an embodiment of the present disclosure.
[Fig. 7D]
FIG. 7D shows a schematic illustrating a process of the retreat planner, according to an embodiment of the present disclosure.
[Fig. 8A]
FIG. 8A illustrates an example scenario, where the obstacle vehicle stops on a reference trajectory, according to an embodiment of the present disclosure.
[Fig. 8B]
FIG. 8B shows a block diagram for verifying feasibility of a repaired path, according to an embodiment of the present disclosure.
[Fig. 9]
FIG. 9 is a schematic diagram illustrating a system that can be used for implementing systems and methods of the present disclosure.
[Description of Embodiments]
[0022] In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the present disclosure. It will be apparent, however, to one skilled in the art that the present disclosure may be practiced without these specific details. In other instances, apparatuses and methods are shown in block diagram form only in order to avoid obscuring the present disclosure.
[0023] As used in this specification and claims, the terms “for example,” “for instance,” and “such as,” and the verbs “comprising,” “having,” “including,” and their other verb forms, when used in conjunction with a listing of one or more components or other items, are each to be construed as open ended, meaning that the listing is not to be considered as excluding other, additional components or items. The term “based on” means at least partially based on. Further, it is to be understood that the phraseology and terminology employed herein are for the purpose of the description and should not be regarded as limiting. Any heading utilized within this description is for convenience only and has no legal or limiting effect.
[0024] FIG. 1A illustrates an example parking area 100, according to some embodiments of the present disclosure. The parking area 100 includes parking spots, such as a spot 101, for parking vehicles. The parking area 100 further includes one or multiple stationary vehicles, such as vehicles 103, 105, and 107, parked in the parking area 100, and one or multiple obstacle vehicles moving in the parking area 100, for example, an obstacle vehicle 109. The obstacle vehicle 109 may be leaving the parking area 100 by following a path 111. Additionally, the parking area 100 includes static obstacles such as a wall 113 and/or columns of the parking area 100.
[0025] It is an object of some embodiments to park an autonomous ego-vehicle 115 in such a dynamic environment of the parking area 100, without colliding with the stationary vehicles 103, 105, and 107, the obstacle vehicle 109, the wall 113, and other obstacles that may be present in the parking area 100. For example, the autonomous ego-vehicle 115 has an initial state 117 and needs to be parked at a parking spot defined by a target state 119, without colliding with the stationary vehicles 103, 105, and 107, the obstacle vehicle 109, and the wall 113. Each state, e.g., the initial state 117 and the target state 119, defines at least a position (x, y) in a 2D plane and an orientation θ of the autonomous ego-vehicle 115.
[0026] To achieve such an objective, some embodiments provide a system for parking the autonomous ego-vehicle 115. The system for parking the autonomous ego-vehicle 115 is explained in FIG. 1B.
[0027] FIG. 1B shows a block diagram of a system 123 for parking the autonomous ego-vehicle 115, according to an embodiment of the present disclosure. The system 123 includes a processor 125 and a memory 127. The processor 125 may be a single core processor, a multi-core processor, a computing cluster, or any number of other configurations. The memory 127 may include random access memory (RAM), read only memory (ROM), flash memory, or any other suitable memory systems. Additionally, in some embodiments, the memory 127 may be implemented using a hard drive, an optical drive, a thumb drive, an array of drives, or any combinations thereof. In an embodiment, the system 123 may be integrated into the autonomous ego-vehicle 115. The processor 125 is configured to execute various functions to park the autonomous ego-vehicle 115, as explained in FIG. 1C.
[0028] FIG. 1C shows a block diagram of a method 129 for parking the autonomous ego-vehicle 115 in the dynamic environment of the parking area 100, according to an embodiment of the present disclosure. At block 131, the processor 125 collects measurements of a state of the dynamic environment in the parking area 100 indicative of available parking spots in the parking area 100, a state of each of the stationary vehicles 103, 105, and 107, and a state of the obstacle vehicle 109. In an embodiment, the state of the obstacle vehicle 109 may include a position and an orientation of the obstacle vehicle 109, and geometric information of the obstacle vehicle 109, for example, its wheelbase.
[0029] At block 133, the processor 125 executes a path planner configured to produce a trajectory for parking the autonomous ego-vehicle 115 based on the state of the dynamic environment in the parking area 100. For example, the processor 125 executes the path planner to produce a trajectory 121, shown in FIG. 1A, for parking the autonomous ego-vehicle 115 at the target state 119. In an embodiment, the path planner may be stored in the memory 127.
[0030] Some embodiments are based on the realization that the motion of the obstacle vehicle 109 needs to be considered for achieving collision-free parking of the autonomous ego-vehicle 115 in the parking area 100. To that end, some embodiments aim to predict a path of the obstacle vehicle 109. FIG. 1D shows a schematic illustrating different paths along which the obstacle vehicle 109 may traverse to exit the parking area 100, according to an embodiment of the present disclosure. The obstacle vehicle 109 may, for example, traverse along a path 109a and/or a path 109b.
[0031] Some embodiments are based on the realization that, considering the dynamic environment of the parking area 100, it is insufficient to predict only the path of the obstacle vehicle 109; it is also useful to determine a mode of the motion of the obstacle vehicle 109. Some embodiments are based on the observation that the obstacle vehicle 109 runs in various modes in the parking area 100, such as a cruising mode and a maneuvering mode. In the cruising mode, the obstacle vehicle 109 has small or steady steering angles. The obstacle vehicle 109 runs in the cruising mode when the obstacle vehicle 109 enters the parking area 100 and is approaching a parking spot or when the obstacle vehicle 109 is exiting the parking area 100. The cruising mode is also referred to as a first mode. In the maneuvering mode, the obstacle vehicle 109 changes the steering angle frequently and deviates from its path to park in or leave a narrow parking spot. The maneuvering mode is also referred to as a second mode.
[0032] To that end, it is an objective of some embodiments to predict both the path and the mode of motion of the obstacle vehicle 109. To predict the path and the mode of motion of the obstacle vehicle 109, at block 135, the processor 125 executes an environment predictor configured to predict the path and the mode of motion of the obstacle vehicle 109. In an embodiment, the environment predictor may be stored in the memory 127.
[0033] Further, at block 137, the processor 125 determines a safety constraint for the obstacle vehicle 109 based on the predicted path and the mode of motion of the obstacle vehicle 109. For example, if the predicted path is the path 109b and the predicted mode is the cruising mode, then the safety constraint is determined around the state of the obstacle vehicle 109 along its path. The safety constraint determined around the state of the obstacle vehicle 109 along its path is referred to as a safety margin. FIG. 1E shows a schematic illustrating a safety margin 141 determined around the state of the obstacle vehicle 109 along its path 109b, according to an embodiment of the present disclosure. The safety margin 141 is determined as the safety constraint for the cruising mode because, in the cruising mode, the obstacle vehicle 109 takes less aggressive steering actions, and its motion can be predicted with relatively high confidence. However, in the maneuvering mode, it is difficult to predict the motion of the obstacle vehicle 109, and the obstacle vehicle 109 tends to occupy a larger space to adjust its orientation. To that end, for the maneuvering mode, the safety constraint is determined as safety bounds in all driving directions around the state of the obstacle vehicle 109.
[0034] Thus, for example, if the predicted path is the path 109b and the predicted mode is the maneuvering mode, then the safety bounds are determined for the obstacle vehicle 109. FIG. 1F shows a schematic illustrating safety bounds 143a and 143b, according to an embodiment of the present disclosure. The safety bounds 143a and 143b define an area 145 that the autonomous ego-vehicle 115 should avoid if the obstacle vehicle 109 is traversing the path 109b in the maneuvering mode.
[0035] Further, at block 139, the processor 125 parks the autonomous ego-vehicle 115 by controlling the autonomous ego-vehicle 115 based on the trajectory 121 and the determined safety constraint.
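The mode-dependent safety constraint described above can be sketched in code. The following illustrative Python fragment (not part of the disclosure; the box geometry, the function name safety_region, and the margin/bound values are assumptions made for illustration) builds a keep-out box that is narrow and elongated along the heading in the cruising mode, and uniformly larger in all driving directions in the maneuvering mode:

```python
import math

def safety_region(x, y, theta, mode, margin=1.0, bound=3.0):
    """Axis-aligned keep-out box around an obstacle-vehicle state (x, y, theta).

    Cruising mode: a narrow margin stretched along the heading direction,
    mimicking a safety margin around the state along the predicted path.
    Maneuvering mode: a larger bound in all driving directions.
    Shapes and numbers are illustrative only."""
    if mode == "cruising":
        dx = abs(math.cos(theta)) * 2 * margin + margin
        dy = abs(math.sin(theta)) * 2 * margin + margin
    else:  # "maneuvering"
        dx = dy = bound
    return (x - dx, y - dy, x + dx, y + dy)

# A vehicle heading along +x in the cruising mode gets a box elongated in x;
# the same state in the maneuvering mode gets a uniformly larger box.
print(safety_region(0.0, 0.0, 0.0, "cruising"))     # (-3.0, -1.0, 3.0, 1.0)
print(safety_region(0.0, 0.0, 0.0, "maneuvering"))  # (-3.0, -3.0, 3.0, 3.0)
```

The ego-vehicle's planner would then treat any state inside the returned box as violating the safety constraint.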
[0036] According to an embodiment, the trajectory 121 may be determined based on a kinematic model of the autonomous ego-vehicle 115 that describes a motion of the autonomous ego-vehicle 115 without consideration of a mass/inertia of the autonomous ego-vehicle 115 or forces/torques that cause the motion. One example of such a kinematic model is given by

ẋ = cos(θ)u₁
ẏ = sin(θ)u₁
θ̇ = u₁u₂,   (1)

where (x, y) are coordinates of a midpoint of a rear wheel-axis, θ an orientation of the autonomous ego-vehicle 115, u₁ a velocity of the midpoint of the rear wheel axis (named after the longitudinal velocity of the autonomous ego-vehicle 115), and u₂ = tan(δ)/b, with δ an angle between the front wheels and the orientation, and b a distance between (x, y) and the midpoint of the front wheels (called the wheelbase). Without loss of generality, u₂ is referred to as a (normalized) steering angle.
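As an illustration only (the disclosure does not specify an implementation), model (1) can be integrated forward in time with a simple forward-Euler scheme; the function name, inputs, and step size below are assumptions made for the sketch:

```python
import math

def kinematic_step(state, u1, u2, dt):
    """One forward-Euler step of the kinematic model (1).

    state = (x, y, theta): rear-axle midpoint position and heading.
    u1: longitudinal velocity of the rear-axle midpoint.
    u2: normalized steering angle, u2 = tan(delta) / wheelbase."""
    x, y, theta = state
    x += math.cos(theta) * u1 * dt
    y += math.sin(theta) * u1 * dt
    theta += u1 * u2 * dt
    return (x, y, theta)

# Driving straight (u2 = 0) keeps the heading constant:
s = (0.0, 0.0, 0.0)
for _ in range(10):
    s = kinematic_step(s, u1=1.0, u2=0.0, dt=0.1)
# after 1 s at 1 m/s: x = 1.0, y = 0.0, theta = 0.0
```

A nonzero u₂ would make the heading rate proportional to the velocity, which is why the model is purely kinematic: standing still (u₁ = 0) cannot change the pose.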
[0037] In some embodiments, the kinematic model can include an additional state variable to ensure continuous curvature. In another embodiment, the kinematic model can include a longitudinal velocity, a steering angle, and a steering angle rate as additional state variables.
[0038] In some embodiments, the motion of the autonomous ego-vehicle 115, specifying how fast the autonomous ego-vehicle 115 moves along the path, is determined based on a dynamic model of the autonomous ego-vehicle 115. As used herein, the dynamic model of the autonomous ego-vehicle 115 accounts for time-dependent changes in the state of the autonomous ego-vehicle 115. The dynamic models are represented by ordinary differential equations. In one embodiment, the dynamic model of the autonomous ego-vehicle 115 is a fifth-order differential equation:

ẋ = cos(θ)v
ẏ = sin(θ)v
θ̇ = v tan(δ)/b
v̇ = a
δ̇ = vₛ,   (2)

where a is a longitudinal acceleration, and vₛ is the steering rate (steering angular velocity).
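The fifth-order model (2) can likewise be stepped numerically. The sketch below (illustrative Python, not from the disclosure; the wheelbase value and step size are assumptions) integrates (2) with forward Euler and shows the velocity ramping up under constant acceleration:

```python
import math

def dynamic_step(state, a, vs, b, dt):
    """One forward-Euler step of the fifth-order dynamic model (2).

    state = (x, y, theta, v, delta); a: longitudinal acceleration;
    vs: steering rate; b: wheelbase."""
    x, y, theta, v, delta = state
    return (
        x + math.cos(theta) * v * dt,
        y + math.sin(theta) * v * dt,
        theta + v * math.tan(delta) / b * dt,
        v + a * dt,
        delta + vs * dt,
    )

# Accelerate from rest at 1 m/s^2 with the wheels straight:
s = (0.0, 0.0, 0.0, 0.0, 0.0)
for _ in range(100):
    s = dynamic_step(s, a=1.0, vs=0.0, b=2.7, dt=0.01)
# after 1 s: v ≈ 1.0 m/s, and x ≈ 0.495 m (Euler sum of a linearly growing v)
```

Unlike model (1), the inputs here are the acceleration and the steering rate, so velocity and steering angle become state variables with their own continuity.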
[0039] A path/motion is kinematically feasible if it is a solution of the kinematic model (1). An autonomous ego-vehicle state X = (x, y, θ) is collision free only if the autonomous ego-vehicle 115 located at X does not collide with any obstacle. All collision-free vehicle states constitute a collision-free configuration space Cfree. The initial state 117 is abbreviated as X₀ = (x₀, y₀, θ₀), and the target state 119 is denoted by Xf = (xf, yf, θf). For a specific parking task with a parking area represented by a rectangle L × H, the autonomous ego-vehicle state belongs to a state space X: [0, L) × [0, H) × [0, 2π).
[0040] In some embodiments, a path/motion is generated according to the dynamic model (2). A dynamically feasible path/motion is a solution of the dynamic model (2). Accordingly, an autonomous ego-vehicle state is X = (x, y, θ, v, δ). Since a vehicle typically starts and ends with zero velocity and steering angle, the initial state 117 and the target state 119 for the dynamic model (2) are X₀ = (x₀, y₀, θ₀, 0, 0) and Xf = (xf, yf, θf, 0, 0), respectively.
[0041] According to an embodiment, the path of the obstacle vehicle 109 can be determined based on a position and a velocity of the obstacle vehicle 109. However, some embodiments are based on the realization that the position and the velocity are insufficient to estimate the modes of the motion of the obstacle vehicle 109. This is because a vehicle at the same position and having the same velocity at any instance of time can be in different modes. To that end, some embodiments are based on the realization that the steering angle can be more indicative of the modes of the motion because, for the obstacle vehicle 109 in the cruising mode, the steering angle is more aligned with the velocity and/or orientation of the obstacle vehicle 109 than the steering angle of the obstacle vehicle 109 in the maneuvering mode.
[0042] Therefore, in some embodiments, the environment predictor estimates the position, the velocity, and the steering angle of the obstacle vehicle 109 at different instances of time and estimates the path of the obstacle vehicle 109 based on the position and the velocity of the obstacle vehicle 109 at the different instances of time. The environment predictor further estimates the mode of the motion of the obstacle vehicle 109 based on the steering angle of the obstacle vehicle 109 at the different instances of time. Additionally, or alternatively, in some embodiments, the environment predictor determines the mode of the motion of the obstacle vehicle 109 based on a misalignment of the orientation of the obstacle vehicle 109 and the steering angle at the different instances of time. The environment predictor used to determine the path and the mode of motion of the obstacle vehicle 109 is explained in detail below in FIG. 2.
[0043] FIG. 2 shows a schematic 200 of the environment predictor, according to an embodiment of the present disclosure. Based on measurements, a motion estimator 201 estimates a current state 203 of the obstacle vehicle 109, and a motion predictor 205 predicts a path 207 over a finite time horizon. The measurements include one or a combination of the target state 119, map information, the position and the orientation of the obstacle vehicle 109, and the geometric information of the obstacle vehicle 109. Further, based on the current state 203 and the predicted path 207, a long-term mode predictor 209 determines a probability 211 of each mode that the obstacle vehicle 109 possibly belongs to. Further, based on the current state 203, the predicted path 207, and the mode probability 211, the processor 125 determines a safety constraint 213.
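As a purely hypothetical illustration of how steering-angle behavior over time could separate the two modes (this heuristic, its threshold, and the function name are assumptions for illustration; the disclosure's actual predictor is the probabilistic pipeline of FIGS. 2 through 4B):

```python
def classify_mode(steering_angles, threshold=0.1):
    """Heuristic mode label from recent steering-angle samples (radians).

    Small, steady steering suggests cruising; large or frequently changing
    steering suggests maneuvering. The threshold is an illustrative guess."""
    mean_abs = sum(abs(s) for s in steering_angles) / len(steering_angles)
    changes = [abs(b - a) for a, b in zip(steering_angles, steering_angles[1:])]
    mean_change = sum(changes) / len(changes) if changes else 0.0
    return "maneuvering" if mean_abs > threshold or mean_change > threshold else "cruising"

print(classify_mode([0.00, 0.01, 0.00, -0.01]))   # cruising
print(classify_mode([0.35, -0.30, 0.40, -0.35]))  # maneuvering
```

In practice, a probability over modes, as produced by the long-term mode predictor 209, is more robust than a hard label like this.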
[0044] In some embodiments, the motion estimator 201 can be realized by implementing an Extended Kalman Filter (EKF) based on discretization of the following continuous-time unicycle model

ẋ = v cos(θ) + q₁
ẏ = v sin(θ) + q₂
θ̇ = ω + q₃
v̇ = q₄
ω̇ = q₅,   (3)

with measurements of (x, y, θ) corrupted by noises (r₁, r₂, r₃), where ω represents a yaw rate of the obstacle vehicle 109, qᵢ, 1 ≤ i ≤ 5, are Gaussian processes, with zero mean and certain covariances, representing model uncertainties, and rᵢ, 1 ≤ i ≤ 3, are Gaussian processes, with zero mean and certain covariances, representing measurement noises. An output of the EKF, e.g., the estimated current state 203, is denoted by (x̂, ŷ, θ̂, v̂, ω̂). Some embodiments are based on the recognition that it is computationally advantageous to use two EKFs. One EKF estimates (x, y, v) based on measurement of (x, y), and another EKF estimates (θ, ω) based on measurement (θ, v). However, performing motion estimation, or the motion estimator 201, based on the model (3) implicitly restricts the velocity and yaw rate to be constant over the entire prediction time horizon to obtain the predicted path 207 of the obstacle vehicle 109, and thus a mean of the predicted path 207 is always a circle in the 2D plane.
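The remark that the predicted mean path degenerates to a circle can be checked numerically. The sketch below (illustrative Python with assumed values, not part of the disclosure) rolls the unicycle model forward with v and ω frozen, as the EKF-based predictor implicitly does over the horizon:

```python
import math

def predict_unicycle(x, y, theta, v, omega, dt, horizon):
    """Roll the unicycle model forward with v and omega held constant.

    Returns the predicted (x, y) sequence; with constant v and omega the
    mean path is a circular arc of radius v / omega."""
    path = [(x, y)]
    for _ in range(horizon):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
        path.append((x, y))
    return path

# v = 2 m/s, omega = 0.5 rad/s gives a turning radius v/omega = 4 m; every
# predicted point stays close to a circle of radius 4 centered at (0, 4),
# up to a small forward-Euler drift.
path = predict_unicycle(0.0, 0.0, 0.0, v=2.0, omega=0.5, dt=0.05, horizon=100)
```

This circular-arc restriction is exactly why the disclosure argues for richer models when the obstacle vehicle is maneuvering.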
[0045] In some embodiments, the motion estimator 201 can be realized based on discretization of the following continuous-time bicycle model

ẋ = v cos(θ) + q₁
ẏ = v sin(θ) + q₂
θ̇ = v tan(δ)/b + q₃
v̇ = q₄
δ̇ = q₅,   (4)

where δ represents the steering angle of the obstacle vehicle 109, qᵢ, 1 ≤ i ≤ 5, are Gaussian processes representing model uncertainties, and rᵢ, 1 ≤ i ≤ 3, are Gaussian processes representing measurement noises. Given the nonlinear model (4), state estimators such as the EKF or a particle filter are applied for the state estimation. Some embodiments are based on an observation that it is not straightforward to tune the EKF for accurate estimation of the steering angle. This is partially attributed to the term v tan(δ), which involves multiplication of unmeasured states. Meanwhile, heavy computation also presents a hurdle for the adoption of the particle filter.
[0046] To address such shortcomings, some embodiments provide the motion estimator 201 based on a multi-stage estimation framework. The motion estimator 201 based on the multi-stage estimation framework is explained below in FIG. 3A.
[0047] FIG. 3A shows a schematic of the motion estimator 201 based on the multi-stage estimation framework, according to an embodiment of the present disclosure. An EKF 301 is configured to estimate (x, y, v) of the obstacle vehicle 109, based on the measurements and a partial dynamic model given by
dx/dt = v cos(θ) + q_1
dy/dt = v sin(θ) + q_2
dv/dt = q_3
z = [x + r_1, y + r_2]^T    (5)
[0048] In an embodiment, the EKF 301 integrates measurements of the position of the obstacle vehicle 109 to estimate the velocity of the obstacle vehicle 109. Thus, the EKF 301 outputs an estimate 303 denoted as (x̂, ŷ, v̂). Further, the estimations of the velocity at the different instances of time may be integrated forward in time to produce estimations of the steering angle at the different instances of time. Specifically, in an embodiment, the estimate 303 drives an adaptive state estimator 305 to estimate the steering angle based on the following model
dθ/dt = (v/L) tan(δ)
dδ/dt = 0
z_θ = θ    (6)
[0049] The adaptive state estimator's 305 output is denoted as (θ̂, δ̂). The estimate of the current state 203 of the obstacle vehicle 109 is a combination of the estimate 303 and the adaptive state estimator output, i.e., (x̂, ŷ, θ̂, v̂, δ̂), where in some embodiments θ̂ is from the EKF 301, and in another embodiment θ̂ is from the adaptive state estimator 305. It is noteworthy that this model, capturing the explicit nonlinearity in the θ-dynamics, offers a better prediction performance than estimation based on assuming dθ/dt = ω = constant.

[0050] Another embodiment is based on the realization that it is not always valid to assume a constant velocity and steering angle of the obstacle vehicle 109 over a specified prediction horizon. Such an assumption is particularly inadequate to characterize future motion when the obstacle vehicle 109 is parking, where frequent changes of moving direction (involving repeated acceleration and deceleration) and steering actions (involving a non-constant steering velocity) are anticipated. To accurately predict a short-term motion of the obstacle vehicle 109 and thus improve a confidence of successive mode estimation, it is advantageous to reconstruct a control input u, including the longitudinal acceleration a and a steering velocity v_s.
[0051] To that end, some embodiments provide the motion estimator 201 based on the following dynamic model of the obstacle vehicle 109
dx/dt = v cos(θ) + q_1
dy/dt = v sin(θ) + q_2
dθ/dt = s v + q_3
dv/dt = a + q_4
ds/dt = (1/L + L s^2) v_s + q_5
da/dt = q_6
dv_s/dt = q_7    (7)
where s = tan(δ)/L, and q_6, q_7 are Gaussian processes with zero mean and certain covariances. A multi-stage framework, similar to the multi-stage framework described in FIG. 3A, is employed to solve the motion estimation based on the dynamic model (7).
[0052] FIG. 3B shows a schematic for the motion estimation based on the dynamic model (7), according to an embodiment of the present disclosure. The EKF 301 estimates (x, y, v, a) based on discretization of the dynamic model
dx/dt = v cos(θ) + q_1
dy/dt = v sin(θ) + q_2
dv/dt = a + q_3
da/dt = q_4
z = [x + r_1, y + r_2]^T    (8)
[0053] Thus, the EKF 301 outputs an estimate 307 denoted as (x̂, ŷ, v̂, â). The adaptive state estimator 305, addressing the nonlinearity in the θ- and s-dynamics, performs estimation based on the remaining dynamics
dθ/dt = s v
ds/dt = (1/L + L s^2) v_s
dv_s/dt = 0
z = [θ, v]^T    (9)
[0054] The adaptive state estimator 305 produces an estimate (θ̂, ŝ, v̂_s). The estimate of the current state 203 of the obstacle vehicle 109 is a combination of the estimate 307 and the estimate (θ̂, ŝ, v̂_s), i.e., (x̂, ŷ, θ̂, v̂, â, ŝ, v̂_s), where in some embodiments θ̂ in the current state 203 is from the EKF 301, and in another embodiment θ̂ in the current state 203 is from the adaptive state estimator 305. In some embodiments, the adaptive state estimator 305, adopting the multi-stage framework, first estimates s based on v̂, θ̂ from the EKF 301, and then estimates v_s based on the dynamics of s, v_s, while treating ŝ as a measurement. By estimating the control actions v_s, a along with (x, y, θ, v, δ), the path of the obstacle vehicle 109 can be predicted accurately.
[0055] FIG. 3C shows schematics of paths of the obstacle vehicle 109 predicted using different models, according to an embodiment of the present disclosure. An example path predicted using the unicycle model (3) is represented by 309. An example path predicted using the bicycle model (4) is represented by 311. An example path predicted using the dynamic model (7) is represented by 313.
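The qualitative difference between the predicted paths 309, 311, and 313 can be reproduced with a simple forward-simulation sketch. The function below is an illustration, not the patent's implementation: the wheelbase, time step, default parameter values, and the name `predict_path` are assumptions. It propagates the unicycle model (3), the bicycle model (4), and the input-augmented model (7) with their respective quantities held constant.

```python
# Sketch: forward path prediction under the three motion hypotheses
# (constant yaw rate, constant steering angle, constant control inputs).
# Parameter values and naming are illustrative assumptions.
import math

def predict_path(x, y, th, v, steps, dt, mode,
                 L=2.5, omega=0.0, delta=0.0, a=0.0, vs=0.0):
    path = [(x, y)]
    for _ in range(steps):
        if mode == "unicycle":       # model (3): constant v and yaw rate omega
            th += omega * dt
        elif mode == "bicycle":      # model (4): constant v and steering delta
            th += (v / L) * math.tan(delta) * dt
        else:                        # model (7): inputs a, vs held constant
            th += (v / L) * math.tan(delta) * dt
            delta += vs * dt
            v += a * dt
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        path.append((x, y))
    return path
```

With a nonzero steering angle the bicycle hypothesis curves away from the straight unicycle path, mirroring the deviation between paths 309 and 311.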
[0056] In an embodiment, the EKF 301 and the adaptive state estimator 305 are used to estimate a state [x, y, θ, δ, v]^T of the obstacle vehicle 109, based on the bicycle model (4). Here, the EKF 301 is based on the following discrete-time model (obtained by applying the Euler discretization method):
x_{k+1} = x_k + Δt v_k cos(θ_k) + q_{1,k}
y_{k+1} = y_k + Δt v_k sin(θ_k) + q_{2,k}
θ_{k+1} = θ_k + Δt ω_k + q_{3,k}
v_{k+1} = v_k + q_{4,k}
ω_{k+1} = ω_k + q_{5,k}
z_k = X_{ov,k} + r_k    (10)
where k is the time step, q_k is a disturbance, z_k = [z_{x,k}, z_{y,k}, z_{θ,k}]^T is the measurement at the time step k, X_{ov,k} = [x_k, y_k, θ_k]^T, and r_k is a zero-mean Gaussian process. The EKF 301 outputs X_{EKF,k} = [x̂_k, ŷ_k, θ̂_k, v̂_k, ω̂_k]^T. Estimation of the steering angle is based on the following continuous-time model

dθ/dt = v s,  z_θ = θ,    (11)

where z_θ is the measurement and s = tan(δ)/L. Further, the adaptive state estimator 305 can be designed as follows:
[Equation (12): the adaptive state estimator design, comprising update laws for the auxiliary signal M and the estimates θ̂_a and ŝ.]
where M is an auxiliary signal, g, λ ∈ ℝ are tuning gains, θ̂_a is an estimated heading angle, and v̂ is a velocity estimated by the EKF 301. The estimated steering angle can be calculated from ŝ, i.e., δ̂ = tan⁻¹(ŝL). An output of the adaptive state estimator 305 is denoted as X_{cc,k} = [x̂_k, ŷ_k, θ̂_k, δ̂_k, v̂_k]^T. In another embodiment, the output of the adaptive state estimator 305 is X_{cc,k} = [x̂_k, ŷ_k, θ̂_k, δ̂_k, v̂_k, â_k, v̂_{s,k}]^T, where â_k and v̂_{s,k} denote an estimated longitudinal acceleration (control action) and steering velocity at the kth time step, respectively.
[0057] For the sake of computational efficiency, it is assumed that the short-term motion of the obstacle vehicle 109 is fully captured by a mean value of X_{cc} and its covariance. For the mean value, the estimated current state 203 is propagated forward, based on its dynamic model, and a short-term motion prediction X_{H,k} = [X̂_{k+1}, ..., X̂_{k+H}]^T for H future steps of the time horizon is obtained. Similarly, forward propagation is conducted to obtain covariance matrices P_{H,k} = {P_{k+1}, ..., P_{k+H}} according to the EKF's forward prediction formula.
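The covariance forward propagation can be sketched as follows. For brevity the sketch uses the scalar form of the EKF prediction equation P_{k+1} = A P_k A^T + Q; the numeric values and the function name are illustrative assumptions.

```python
# Sketch: forward propagation of the predicted covariance over H future
# steps, scalar form of P_{k+1} = A P_k A^T + Q. Values are assumptions.
def propagate_covariance(P0, A, Q, H):
    """Return the list [P_{k+1}, ..., P_{k+H}] of predicted covariances."""
    covs = []
    P = P0
    for _ in range(H):
        P = A * P * A + Q   # scalar instance of A P A^T + Q
        covs.append(P)
    return covs
```

The monotone growth of the predicted covariance is what later inflates the safety margins at more distant prediction steps.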
[0058] FIG. 3D illustrates an example short-term motion prediction result 315 based on the bicycle model and an example short-term motion prediction result 317 based on the unicycle model, according to an embodiment of the present disclosure. Bounding-box footprints are shown at 319 and 321. A prediction deviation from the ground truth 323 is smaller when using the constant steering angle provided by the bicycle model than when using the constant yaw rate provided by the unicycle model. For example, one embodiment uses a kinematic simulator that takes constant inputs and records the state changes throughout a period of time.
[0059] Some embodiments are based on the realization that a long-term motion of the obstacle vehicle 109 in the parking area 100 can be characterized by possible paths (including 201a and 201b as an example), possible modes (maneuvering or cruising for each path), and a probability of the obstacle vehicle 109 following a certain mode. Combining the path and mode information, the obstacle vehicle 109 that has N possible paths to follow may have 2N possible modes. According to an embodiment, prediction of the long-term motion, at the kth time step, corresponds to estimation of the probability of each mode by the long-term mode predictor 209.
[0060] FIG. 4A shows a schematic of the long-term mode predictor 209 for the estimation of the probability of each mode, according to an embodiment of the present disclosure. The long-term motion prediction, at the kth time step, corresponds to estimation of the probability of each mode m based on the current state 203 and the predicted path 207, a belief about the mode probability b(m_{k−1}) at the prior time step, a mode belief forward propagation matrix p(m_k | m_{k−1}), and a conditional probability of the current state 203 and the predicted path 207 given the mode m.
[0061] To determine the mode m that the obstacle vehicle 109 belongs to at time step k, i.e., m_k, a Bayesian framework is employed to keep track of a belief of each mode, i.e., b(m_k). At block 401, the processor 125 performs mode belief forward propagation to determine the mode belief forward propagation matrix p(m_k | m_{k−1}). At block 403, the processor 125 performs a mode belief posterior update, based on a cost map 405 and the mode belief forward propagation matrix p(m_k | m_{k−1}), to determine a posterior probability distribution P(m_k | X_{cc,k}, X_{H,k}, M_route). The cost map 405 is used to exploit the obstacle vehicle 109's relative movement against the parking area environment and capture possible long-term movements of the obstacle vehicle 109. The posterior probability distribution is proportional to the prior multiplied with the conditional probability of the current state 203 and the predicted path 207 given the mode, i.e., P(X_{cc,k}, X_{H,k} | m_k, M_route). Further, at block 407, the processor 125 normalizes the posterior probability distribution to obtain P(m_k) 409, based on which, at block 411, the processor 125 selects the value of m_k with the highest belief to be the mode m_k 413 of the obstacle vehicle 109. Further, in some embodiments, at block 415, P(m_k) 409 and the mode m_k 413 are merged as the mode probability 211 for determining a safety set. The determination of the safety set is explained in detail in FIG. 5. The safety set is, for example, the safe area that the autonomous ego-vehicle 115 can occupy, i.e., the area excluding the spaces enclosed by the safety margin and the safety bounds. In some embodiments, the safety set is represented as safety constraints.
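The belief recursion of blocks 401, 403, 407, and 411 can be condensed into a small sketch. The transition matrix and likelihood values below are illustrative assumptions, not the patent's calibrated quantities.

```python
# Sketch of the mode-belief update of FIG. 4A: forward-propagate the prior
# belief through the transition matrix, multiply by the likelihood from the
# cost map, normalize, and pick the most likely mode. Values are assumptions.
def update_mode_belief(belief, transition, likelihood):
    """belief: b(m_{k-1}); transition: p(m_k|m_{k-1}); likelihood: p(obs|m_k)."""
    n = len(belief)
    # Block 401: forward propagation through p(m_k | m_{k-1})
    predicted = [sum(transition[j][i] * belief[j] for j in range(n))
                 for i in range(n)]
    # Block 403: posterior proportional to prior times likelihood
    posterior = [predicted[i] * likelihood[i] for i in range(n)]
    # Block 407: normalization
    total = sum(posterior)
    posterior = [p / total for p in posterior]
    # Block 411: select the mode with the highest belief
    best_mode = max(range(n), key=lambda i: posterior[i])
    return posterior, best_mode
```

For a symmetric prior, a likelihood favoring the first mode pushes the normalized belief toward that mode while keeping the belief vector summing to one.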
[0062] FIG. 4B shows a schematic for the formulation of the cost map 405, according to an embodiment of the present disclosure. According to an embodiment, the cost map 405 may be determined based on a vehicle state X_{cc,k} 405a, a short-term predicted trajectory X_{H,k} 405b, and a route planner 405c. The cost map 405 represents a conditional probability of the obstacle vehicle 109 following the predicted path 207 given that the obstacle vehicle 109 is in the mode m. In some embodiments, a Boltzmann policy is adopted to represent this conditional probability as

P(X_{cc,k}, X_{H,k} | m_k, M_route) ∝ exp(−M_route(m_k, X_{cc,k}, X_{H,k})).    (13)
[0063] The function M_route(m_k, X_{cc,k}, X_{H,k}) measures a distance between the predicted path 207 and the path 109b related to m_k as a weighted sum of squared distances between the short-term predicted waypoints X̂_{k+i} and the corresponding waypoints of the path in mode m_k, where X_{m_k,i} is the ith waypoint of the path in mode m_k and W_1, W_2 are weighting matrices. An additional term is proportional to a magnitude of the obstacle vehicle 109's steering angle δ_{cc,k} and to a deviation of the obstacle vehicle 109's heading angle θ_{cc,k} from a final heading angle θ_{m_k,f} of the path.
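A route-distance score in the spirit of M_route, together with the Boltzmann-style likelihood, can be sketched as follows. The scalar weights standing in for the matrices W_1, W_2, the omission of the steering-magnitude term, and the function names are simplifying assumptions.

```python
# Sketch of a route-distance score and its Boltzmann likelihood: a weighted
# sum of squared waypoint distances plus a heading-deviation penalty.
# Weights and the exact functional form are assumptions, not the patent's.
import math

def route_distance(predicted, route, heading, final_heading,
                   w_pos=1.0, w_head=0.5):
    n = min(len(predicted), len(route))
    pos_cost = sum((predicted[i][0] - route[i][0]) ** 2 +
                   (predicted[i][1] - route[i][1]) ** 2 for i in range(n))
    head_cost = (heading - final_heading) ** 2
    return w_pos * pos_cost + w_head * head_cost

def mode_likelihood(cost):
    """Boltzmann policy: lower route cost -> higher (unnormalized) likelihood."""
    return math.exp(-cost)
```

A predicted trajectory lying exactly on a candidate route with a matching heading scores zero cost, hence the maximal unnormalized likelihood.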
[0064] FIG. 5 shows a block diagram for determining the safety constraint (such as the safety margin 141 and the safety bounds 143a and 143b) and the safety set, according to an embodiment of the present disclosure. At block 501, the mode probability 211 is split into P(m_k) 409 and the mode m_k 413. At block 503, the processor 125 determines if the obstacle vehicle 109 is in the cruising mode, based on the mode m_k 413. If the obstacle vehicle 109 is in the cruising mode, then, at block 505, the processor 125 determines safety margins (for example, the safety margin 141 shown in FIG. 1E). According to an embodiment, the safety margin of the hth future time step is an ellipsoid, and the lengths of its principal semi-axes are proportional to a differential entropy of the mode belief, −Σ_{m_k} b(m_k) log(b(m_k)), and to the covariance P_{k+h} from the motion estimation.
[0065] If the obstacle vehicle 109 is not in the cruising mode, then the processor 125 contemplates that the obstacle vehicle 109 is in the maneuvering mode and, at block 507, the processor 125 determines safety bounds (for example, the safety bounds 143a and 143b shown in FIG. 1F). Further, at block 509, the processor 125 determines a safety set based on the safety margin and/or the safety bounds and a mode belief uncertainty metric. Additionally or alternatively, some embodiments use the differential entropy of the mode belief as the mode belief uncertainty metric.
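The dependence of the safety-margin size on the mode-belief uncertainty can be sketched as follows. The entropy term matches the expression −Σ b(m) log b(m) above, while the base size, scale factors, and function names are illustrative assumptions.

```python
# Sketch: semi-axis lengths of the ellipsoidal safety margin grow with the
# mode-belief entropy and the predicted position covariance. The scale
# factors (base, k_entropy, k_cov) are illustrative assumptions.
import math

def mode_belief_entropy(belief):
    return -sum(b * math.log(b) for b in belief if b > 0.0)

def safety_semi_axes(belief, cov_xx, cov_yy, base=1.0, k_entropy=0.5, k_cov=2.0):
    u = mode_belief_entropy(belief)
    ax = base + k_entropy * u + k_cov * math.sqrt(cov_xx)
    ay = base + k_entropy * u + k_cov * math.sqrt(cov_yy)
    return ax, ay
```

A belief concentrated on one mode has zero entropy and yields the smallest margin; a uniform belief over two modes adds log 2 worth of inflation.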
[0066] Some embodiments are based on the realization that the motion of the autonomous ego-vehicle 115 in the parking area 100 may be constantly adapted in response to changes of the parking area environment. Hence, even after the trajectory 121 for the autonomous ego-vehicle 115 is determined, a need to adjust the trajectory 121 may arise at any time. To that end, to control the parking of the autonomous ego-vehicle 115, some embodiments use a strategic motion planner that varies control strategy based on different situations in the parking area 100.
[0067] For example, in some embodiments, the strategic motion planner uses three different strategies based on different situations. The strategic motion planner includes a model predictive control-based safety controller for trajectory tracking, a search-based retreat planner for finding an evasion path, and an optimization-based repair planner for planning a new trajectory/repair path when a reference trajectory is invalidated. The strategic motion planner strikes a balance between safety, plan feasibility, and smooth maneuvering by leveraging advantages of optimization-based and search-based approaches.
[0068] Further, some embodiments are based on the realization that the strategic motion planner can be integrated with the environment predictor to form an integrated motion prediction and planning framework. Such a framework is explained below in FIG. 6.
[0069] FIG. 6 shows a block diagram of a motion prediction and planning framework 600, according to an embodiment of the present disclosure. The motion prediction and planning framework 600 includes a central controller 601, an environment predictor 603, and a strategic motion planner 605. The environment predictor 603 is the same as the environment predictor described in FIG. 2. In an embodiment, the central controller 601, the environment predictor 603, and the strategic motion planner 605 are stored in the memory 127.
[0070] During run-time, the central controller 601 first initializes 607 a path planner 611. The path planner 611 generates a reference trajectory T_ref 613, based on a parking area map 609, without considering the moving obstacle vehicle 109. In an embodiment, a Bi-directional A-search Guide Tree (BIAGT) algorithm is used as the path planner 611, as it is guaranteed to plan a trajectory that brings the autonomous ego-vehicle 115 to a target state.
[0071] The environment predictor 603 monitors the parking area environment and produces a prediction 615 of movements of the obstacle vehicle 109. Based on the prediction 615, the strategic motion planner 605 checks 617 if a current state of the autonomous ego-vehicle 115 is violating the safety constraint. If there is a safety violation (i.e., a violation of the safety constraint), then a retreat planner 619 is triggered. The retreat planner 619 determines a retreat path to avoid collision with the obstacle vehicle 109 and updates the reference trajectory T_ref.
[0072] In case the current state of the autonomous ego-vehicle 115 is not violating the safety constraint, the strategic motion planner 605 checks 621 if the reference trajectory T_ref needs to be repaired. For example, the strategic motion planner 605 checks if it is possible to track the path of the reference trajectory T_ref 613 without stopping the autonomous ego-vehicle 115. If it is not possible to track the path of the reference trajectory T_ref 613 without stopping the autonomous ego-vehicle 115, then a repair planner 623 is activated. The repair planner 623 updates the reference trajectory T_ref.
[0073] Further, the strategic motion planner 605 determines if it is possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115. If it is determined that it is possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115, then a model predictive controller (MPC)-based safety controller 625 is executed to track the updated reference trajectory. In an embodiment, the MPC-based safety controller 625 submits control commands, e.g., steering angle and velocity, to the autonomous ego-vehicle 115 to track the updated reference trajectory.
[0074] If the repair planner 623 cannot succeed, for example, if it is determined that it is not possible to track the updated reference trajectory without stopping the autonomous ego-vehicle 115, then the repair planner 623 requests the central controller 601 to re-plan 627 by updating the parking area map 609 and regenerating a reference trajectory.
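The run-time decision flow among the retreat planner, the repair planner, the MPC-based tracker, and the re-planning request can be condensed into a small dispatch sketch; the predicate names are assumptions standing in for the checks 617 and 621 and the repair outcome.

```python
# Sketch of the decision flow of FIG. 6: on each cycle the strategic motion
# planner chooses between retreating, repairing, tracking, or re-planning.
# Predicate names are illustrative assumptions.
def plan_step(safety_violated, needs_repair, repair_succeeds):
    if safety_violated:
        return "retreat"             # retreat planner 619: find an evasion path
    if needs_repair:
        if repair_succeeds:
            return "track_repaired"  # repaired trajectory handed to the MPC 625
        return "replan"              # request 627 to the central controller 601
    return "track"                   # MPC-based safety controller 625
```

The safety check dominates: a violation triggers a retreat regardless of whether the reference trajectory could otherwise be repaired.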
[0075] In an embodiment, the MPC-based safety controller 625 computes tracking motions to follow the reference trajectory T_ref in an MPC framework. Let X_{ref,k} be a segment of the reference trajectory T_ref to track at time step k. X_{ref,k} is selected and trimmed so that it violates neither the safety margins (in all modes) nor the safety bounds (in the maneuvering mode). The trajectory tracking problem is formulated as follows: given a planning horizon H, a model of the autonomous ego-vehicle 115, and the segment X_{ref,k}, solve the following optimization problem for a sequence of control actions u_k, ..., u_{k+H−1}:

min_{u_k, ..., u_{k+H−1}} Σ_{h=1}^{H} ||X_{k+h} − X_{ref,k+h}||²_Q + ||u_{k+h−1}||²_R
subject to the vehicle model and X_{k+h} ∈ X_{free,k+h},    (14)

where X_{free,k+h} are configurations that maintain the autonomous ego-vehicle 115 clear of the safety margins and safety bounds. The optimization problem (14) can be readily formulated as a non-convex optimization problem using various software tools and solved using nonlinear programming solvers, e.g., IPOPT.
[0076] FIG. 7A shows an example scenario where the safety constraint is violated and the retreat planner 619 is triggered, according to an embodiment of the present disclosure. The upper row 701a shows a prediction at time step k, where a short-term motion predicted trajectory 703 and a safety margin 705 are shown. The processor 125 checks if the safety margin 705 at any future time step within the prediction horizon is in collision with the autonomous ego-vehicle's 115 current location. In other words, the processor 125 checks if the safety constraint, i.e., the safety margin 705, is violated. A safety constraint violation normally does not happen unless the obstacle vehicle 109 changes its movement drastically. As shown in the bottom row 701b, the obstacle vehicle 109 changes its movement at time step k + 1, and the autonomous ego-vehicle 115 will suddenly be in collision with the safety margin 705 in H/2 time steps and eventually in collision with the obstacle vehicle 109 if the autonomous ego-vehicle 115 does not retreat. In such a scenario, the retreat planner 619 determines a safe spot and a path between the current location and the safe spot for the autonomous ego-vehicle 115 to retreat.

[0077] FIG. 7B shows a block diagram of a method of the retreat planner 619 for determining the path between the current location and the safe spot for retreating (i.e., the retreat path), according to an embodiment of the present disclosure. The retreat planner 619 constructs a tree composed of a node set V and an edge set E, where a node in the node set V represents a collision-free state that can be reached from the current location, and an edge in the edge set E represents a feasible path between two nodes. The collision-free state space, representing all possible states where the autonomous ego-vehicle 115 does not overlap any obstacles, is implicitly obtained by checking collisions with obstacles in the parking area map. Let M denote a finite set of motion primitives pre-computed through available control actions, and let V_max denote a maximum number of nodes allowed.
[0078] At block 707, the processor 125 adds the current location as a root node to a priority queue. At block 709, the processor 125 selects the node with the lowest cost, pops it from the priority queue, and expands it by applying the precomputed motion primitives. Further, at block 711, the processor 125 checks if all the precomputed motion primitives have been applied. If not all the precomputed motion primitives have been applied, then, at block 713, the processor 125 applies a new motion primitive to produce an edge 715. At block 717, the processor 125 determines if the edge 715 is collision-free. If the edge is not collision-free, then another new motion primitive is applied to produce a new edge. If the edge is collision-free, then, at block 719, the processor 125 records the child node at which the edge ends. Further, at block 721, the processor 125 determines if the child node satisfies a safe clearance, for example, if it is a certain distance away from the obstacle vehicle 109. If the child node does not satisfy the safe clearance, then, at block 723, the processor 125 evaluates the node cost. Further, at block 725, the processor 125 pushes the child node into the priority queue. If the child node satisfies the safe clearance, then, at block 727, the processor 125 calculates a path from the current location to the child node. The path to the child node corresponds to the retreat path.
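The search of blocks 707 through 727 can be sketched on a grid as follows. The grid world, the four unit-step motion primitives, the Manhattan-distance clearance test, and the threat-based heuristic are all illustrative assumptions standing in for the patent's precomputed primitives and collision-cost map.

```python
# Sketch of the retreat search of FIG. 7B on a grid: nodes are popped from a
# priority queue by node cost (arrival cost plus heuristic), expanded with
# motion primitives, and the first node with safe clearance ends the search.
import heapq

def retreat_path(start, threat, obstacles, clearance=3, max_nodes=500):
    primitives = [(1, 0), (-1, 0), (0, 1), (0, -1)]  # stand-ins for primitives
    def heuristic(n):    # collision-cost proxy: closer to threat -> higher cost
        return -(abs(n[0] - threat[0]) + abs(n[1] - threat[1]))
    queue = [(heuristic(start), 0, start, [start])]
    seen = {start}
    while queue and len(seen) < max_nodes:
        _, g, node, path = heapq.heappop(queue)
        if abs(node[0] - threat[0]) + abs(node[1] - threat[1]) >= clearance:
            return path                              # safe spot reached (727)
        for dx, dy in primitives:
            child = (node[0] + dx, node[1] + dy)
            if child in obstacles or child in seen:
                continue                             # collision check (717)
            seen.add(child)
            heapq.heappush(queue,
                           (g + 1 + heuristic(child), g + 1, child,
                            path + [child]))
    return None
```

The queue priority mirrors the node cost function of FIG. 7C: an arrival cost (path length so far) plus a heuristic that penalizes proximity to the threat.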
[0079] The evaluation of node cost performed at block 723 is explained in detail below in FIG. 7C.
[0080] FIG. 7C shows a schematic illustrating the evaluation of the node cost, according to an embodiment of the present disclosure. A node cost function T(·) 735 sums up a heuristic value h(·) and an arrival cost g(·) 733. The heuristic value is calculated based on a collision-cost map 729 measuring a risk of collision with the obstacle vehicle 109 if the autonomous ego-vehicle 115 stays at the node. The heuristic value can alternatively be interpreted as a potential cost to a safe spot. Typically, the closer the node is to the safe spot, the smaller the heuristic value. In some embodiments, the arrival cost 733 of a node represents a distance that the autonomous ego-vehicle 115 has to travel from the root node (representing the current state) to the node.
[0081] In an embodiment, the collision-cost map 729 is a weighted sum of Gaussian distributions centered at the waypoints of both the predicted trajectory X_{H,k}, i.e., X̂_{k+h}, and the routes on the cost map, i.e., X_{m_k,i}, where c is a weighting constant, and W_4, W_5 are weighting matrices for the predicted-trajectory waypoints and the route waypoints, respectively.
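A collision-cost evaluation of this form can be sketched as follows; the scalar weights standing in for W_4 and W_5 and the isotropic Gaussian bumps are simplifying assumptions.

```python
# Sketch of the collision-cost map of [0081]: a weighted sum of Gaussian
# bumps centered at predicted waypoints and route waypoints. The scalar
# weights (c, w4, w5) are illustrative assumptions for W_4, W_5.
import math

def collision_cost(p, predicted_waypoints, route_waypoints, c=2.0, w4=1.0, w5=0.5):
    def bump(center, w):
        d2 = (p[0] - center[0]) ** 2 + (p[1] - center[1]) ** 2
        return math.exp(-w * d2)
    return (c * sum(bump(x, w4) for x in predicted_waypoints)
            + sum(bump(x, w5) for x in route_waypoints))
```

A query point on top of a predicted waypoint scores the full weight c, while a point far from every waypoint scores near zero, which is what makes the map usable as the heuristic of FIG. 7C.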
[0082] FIG. 7D shows a schematic illustrating a process of the retreat planner 619, according to an embodiment of the present disclosure. The retreat planner 619 begins with a current location 737, which is likely leading to a collision with the obstacle vehicle 109. In the beginning, since the priority queue only contains the root node, the retreat planner 619 selects the root node to expand. Applying a motion primitive 739 ends up with a collision-free new node 741a. Repeating the application of motion primitives at the root node results in adding another four new nodes to the tree: 741b, 741c, 741d, and 741e. All five new nodes 741a, 741b, 741c, 741d, and 741e are pushed into the priority queue. In the next iteration, the node with the lowest node cost is identified as the best node and selected for expansion. Repeating the process eventually yields a new node 745, which satisfies the safe clearance. The tree construction process then stops, and the retreat planner 619 returns a retreat goal, being the node 745, and a path 747 from the root node to the node 745.
[0083] FIG. 8A illustrates an example scenario, where the obstacle vehicle 109 stops on a reference trajectory 801, according to an embodiment of the present disclosure. In such a scenario, the MPC-based safety controller 625 commands the autonomous ego-vehicle 115 to stop on the reference trajectory 801 when an area in front is infeasible. Unless it receives a new reference trajectory, the MPC-based safety controller 625 stops the autonomous ego-vehicle 115 and waits for the obstacle vehicle 109 to clear. However, it is not desirable to wait if the obstacle vehicle 109 stops for a long time. Some embodiments are based on the realization that the obstacle vehicle 109 can be treated as a static obstacle and the reference trajectory 801 can be updated/repaired by the repair planner 623 to form a repaired path 803 such that the autonomous ego-vehicle 115 can go around the obstacle vehicle 109 and merge back onto the reference trajectory 801.
[0084] According to an embodiment, the repaired path 803 lies in the same homotopy class as the reference trajectory 801, which makes an optimization-based repair planner a viable solution. The optimization-based repair planner takes the original waypoints 805 as an initialization and moves them to optimal collision-free locations 807 to form the repaired path 803.
[0085] To obtain the repaired path 803 quickly, some embodiments conduct repair planning over the 2D space. A simple kinematic model is used to model the vehicle on a 2D plane. Denote the state at time step k as z_k = [x_k, y_k]^T, the input velocity as u_k, and the robot configuration as θ = [x, y]^T. The kinematic model is

z_{k+1} = z_k + u_k Δt.    (15)

[0086] In the optimization problem, the decision variable for each time step is u_k, and the input vector that is optimized is denoted as u = [u_0^T, u_1^T, ..., u_{H−1}^T]^T, where H is a planning horizon. Similarly, the resulting state vector is z = [z_1^T, z_2^T, ..., z_H^T]^T.

[0087] Given an initial state z_0, z = f_kin(z_0, u) is obtained by concatenating the kinematic function throughout the planning horizon. For simplicity, denote the kinematic function as f_kin(·). In order to obtain an optimal solution u* given a constrained feasible set F and an input constraint set X, the following optimization problem needs to be solved:

u* = argmin_u J(z, u)  subject to  z = f_kin(z_0, u),  z ∈ F,  u ∈ X.    (16)
[0088] In some embodiments, the cost function is quadratic, having the form

J(z, u) = ||z_H − z_f||²_Q + Σ_{k=0}^{H−1} ||u_k||²_R,

which is convex and regular. The first term penalizes a distance from the goal z_f, and the second term penalizes the input.

[0089] Some embodiments are based on an understanding that the repaired path 803, despite being collision-free, cannot always be followed accurately, because the repaired path 803 may not satisfy the autonomous ego-vehicle's 115 kinematics or dynamics, causing the autonomous ego-vehicle 115 to collide with obstacles. Therefore, there is a need to verify the feasibility of the repaired path 803 before accepting it.
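The waypoint-moving repair described above can be sketched with a penalty-based gradient descent: waypoints start on the original path and are pushed by a smoothness term and an obstacle-clearance term. The penalty form, the step size, the fixed endpoints, and the function name are illustrative assumptions rather than the patent's exact formulation.

```python
# Sketch of optimization-based path repair: gradient steps trade off path
# smoothness (a stand-in for the input penalty) against clearance from a
# circular obstacle. All numeric values are illustrative assumptions.
import math

def repair_path(waypoints, obstacle, radius, iters=200, step=0.05, w_obs=4.0):
    pts = [list(p) for p in waypoints]
    for _ in range(iters):
        for i in range(1, len(pts) - 1):          # endpoints stay fixed
            # smoothness term: pull toward the neighbors' midpoint
            gx = pts[i][0] - 0.5 * (pts[i - 1][0] + pts[i + 1][0])
            gy = pts[i][1] - 0.5 * (pts[i - 1][1] + pts[i + 1][1])
            # obstacle term: push away when inside the inflated radius
            dx, dy = pts[i][0] - obstacle[0], pts[i][1] - obstacle[1]
            d = math.hypot(dx, dy)
            if 1e-9 < d < radius:
                gx -= w_obs * (radius - d) * dx / d
                gy -= w_obs * (radius - d) * dy / d
            pts[i][0] -= step * gx
            pts[i][1] -= step * gy
    return [tuple(p) for p in pts]
```

Starting from a straight path grazing an obstacle, the interior waypoints bow away from it while the endpoints stay anchored, so the repaired path rejoins the original path at both ends, as in FIG. 8A.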
[0090] FIG. 8B shows a block diagram for verifying the feasibility of the repaired path 803, according to an embodiment of the present disclosure. At block 809, an iterative linear quadratic regulator (ILQR) tracking controller is used. The ILQR is an optimization-based controller configured to generate a motion plan that tracks the repaired path according to the car kinematics if the repaired path is close to a kinematically feasible trajectory. If the ILQR fails, it means that the repaired path cannot be followed and, thus, the repair planner has failed. The ILQR is an extension of LQR control and determines a sequence of controls over a given time horizon by solving an optimization problem.
[0091] At block 811, the processor 125 determines if the repaired path 803 can be followed accurately. If the repaired path 803 can be followed accurately, then, at block 813, the processor 125 updates the reference trajectory 801 based on the repaired path 803. If it is determined that the repaired path 803 cannot be followed accurately, then, at block 815, the processor 125 requests the central controller 601 to provide a new reference trajectory.
[0092] FIG. 9 is a schematic diagram illustrating a system that can be used for implementing systems and methods of the present disclosure. The system 900 includes one or a combination of a transceiver 901, an inertial measurement unit (IMU) 903, a display 905, a sensor(s) 907, a memory 909, and a processor 911, operatively coupled to each other through connections 913. The connections 913 can comprise buses, lines, fibers, links, or a combination thereof.

[0093] The transceiver 901 can, for example, include a transmitter enabled to transmit one or more signals over one or more types of wireless communication networks and a receiver to receive one or more signals transmitted over the one or more types of wireless communication networks. The transceiver 901 can permit communication with wireless networks based on a variety of technologies such as, but not limited to, femtocells, Wi-Fi networks or Wireless Local Area Networks (WLANs), which may be based on the IEEE 802.11 family of standards, Wireless Personal Area Networks (WPANs) such as Bluetooth, Near Field Communication (NFC), networks based on the IEEE 802.15x family of standards, and/or Wireless Wide Area Networks (WWANs) such as LTE, WiMAX, etc. The system 900 can also include one or more ports for communicating over wired networks.
[0094] In some embodiments, the system 900 can comprise image sensors such as CCD or CMOS sensors, lasers and/or camera, which are hereinafter referred to as "sensor 907". For example, the sensor 907 can convert an optical image into an electronic or digital image and can send acquired images to processor 911. Additionally, or alternatively, the sensor 907 can sense the light reflected from a target object in a scene and submit the intensities of the captured light to the processor 911.
[0095] For example, the sensor 907 can include color or grayscale cameras, which provide "color information." The term "color information" as used herein refers to color and/or grayscale information. In general, as used herein, a color image or color information can be viewed as comprising 1 to N channels, where N is some integer dependent on the color space being used to store the image. For example, an RGB image comprises three channels, with one channel each for Red, Blue, and Green information.
[0096] For example, the sensor 907 can include a depth sensor for providing "depth information." The depth information can be acquired in a variety of ways using depth sensors. The term "depth sensor" is used to refer to functional units that may be used to obtain depth information independently and/or in conjunction with some other cameras. For example, in some embodiments, the depth sensor and the optical camera can be part of the sensor 907. For example, in some embodiments, the sensor 907 includes RGBD cameras, which may capture per-pixel depth (D) information when the depth sensor is enabled, in addition to color (RGB) images.
[0097] As another example, in some embodiments, the sensor 907 can include a 3D Time Of Flight (3DTOF) camera. In embodiments with a 3DTOF camera, the depth sensor can take the form of a strobe light coupled to the 3DTOF camera, which can illuminate objects in a scene, and the reflected light can be captured by a CCD/CMOS sensor in the sensor 907. Depth information can be obtained by measuring the time that the light pulses take to travel to the objects and back to the sensor.
[0098] As a further example, the depth sensor can take the form of a light source coupled to the sensor 907. In one embodiment, the light source projects a structured or textured light pattern, which can include one or more narrow bands of light, onto objects in a scene. Depth information is obtained by exploiting geometrical distortions of the projected pattern caused by the surface shape of the object. One embodiment determines depth information from stereo sensors such as a combination of an infra-red structured light projector and an infra-red camera registered to an RGB camera. [0099] In some embodiments, the sensor 907 includes stereoscopic cameras. For example, a depth sensor may form part of a passive stereo vision sensor, which may use two or more cameras to obtain depth information for a scene. The pixel coordinates of points common to both cameras in a captured scene may be used along with camera pose information and/or triangulation techniques to obtain per-pixel depth information.
[0100] In some embodiments, the system 900 can be operatively connected to multiple sensors 907, such as dual front cameras and/or front- and rear-facing cameras, which may also incorporate various sensors. In some embodiments, the sensors 907 can capture both still and video images. In some embodiments, the sensor 907 can include RGBD or stereoscopic video cameras capable of capturing images at, e.g., 30 frames per second (fps). In one embodiment, images captured by the sensor 907 can be in a raw uncompressed format and can be compressed prior to being processed and/or stored in memory 909. In some embodiments, image compression can be performed by the processor 911 using lossless or lossy compression techniques.
[0101] In some embodiments, the processor 911 can also receive input from the IMU 903. In some embodiments, the IMU 903 can comprise 3-axis accelerometer(s), 3-axis gyroscope(s), and/or magnetometer(s). The IMU 903 can provide velocity, orientation, and/or other position-related information to the processor 911. In some embodiments, the IMU 903 can output measured information in synchronization with the capture of each image frame by the sensor 907. In some embodiments, the output of the IMU 903 is used in part by the processor 911 to fuse the sensor measurements and/or to further process the fused measurements.
[0102] The system 900 can also include a screen or display 905 rendering images, such as color and/or depth images. In some embodiments, the display 905 can be used to display live images captured by the sensor 907, fused images, augmented reality (AR) images, graphical user interfaces (GUIs), and other program outputs. In some embodiments, the display 905 can include and/or be housed with a touchscreen to permit users to input data via some combination of virtual keyboards, icons, menus, or other GUIs, user gestures, and/or input devices such as styli and other writing implements. In some embodiments, the display 905 can be implemented using a liquid crystal display (LCD) or a light emitting diode (LED) display, such as an organic LED (OLED) display. In other embodiments, the display 905 can be a wearable display. In some embodiments, the result of the fusion can be rendered on the display 905 or submitted to different applications that can be internal or external to the system 900.
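To illustrate one way IMU output synchronized with image frames might be used in the fusion mentioned in paragraph [0101], velocity can be dead-reckoned between two consecutive frames by integrating accelerometer samples. This is a hedged sketch under assumed sampling rates; the function name and rates are not from the patent:

```python
def dead_reckon_velocity(v0_m_s: float, accel_samples_m_s2: list[float], dt_s: float) -> float:
    """Propagate a velocity estimate between two image frames by
    forward-Euler integration of accelerometer samples."""
    v = v0_m_s
    for a in accel_samples_m_s2:
        v += a * dt_s
    return v

# Assuming a 100 Hz accelerometer (dt = 0.01 s), ten samples of 1 m/s^2
# between frames add about 0.1 m/s to the velocity estimate.
print(dead_reckon_velocity(0.0, [1.0] * 10, 0.01))
```

A real fusion pipeline would correct this open-loop integration with camera-derived measurements (e.g., in a Kalman filter) to bound the drift inherent in dead reckoning.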
[0103] Exemplary system 900 can also be modified in various ways in a manner consistent with the disclosure, such as by adding, combining, or omitting one or more of the functional blocks shown. For example, in some configurations, the system 900 does not include the IMU 903 or the transceiver 901. Further, in certain example implementations, the system 900 can include a variety of other sensors (not shown) such as an ambient light sensor, microphones, acoustic sensors, ultrasonic sensors, laser range finders, etc. In some embodiments, portions of the system 900 take the form of one or more chipsets, and/or the like.
[0104] The processor 911 can be implemented using a combination of hardware, firmware, and software. The processor 911 can represent one or more circuits configurable to perform at least a portion of a computing procedure or process related to sensor fusion and/or methods for further processing the fused measurements. The processor 911 retrieves instructions and/or data from the memory 909. The processor 911 can be implemented using one or more application specific integrated circuits (ASICs), central and/or graphical processing units (CPUs and/or GPUs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, micro-controllers, microprocessors, embedded processor cores, electronic devices, other electronic units designed to perform the functions described herein, or a combination thereof.
[0105] The memory 909 can be implemented within the processor 911 and/or external to the processor 911. As used herein the term "memory" refers to any type of long term, short term, volatile, nonvolatile, or other memory and is not to be limited to any particular type of memory or number of memories, or type of physical media upon which memory is stored. In some embodiments, the memory 909 holds program codes that facilitate the automated parking.
[0106] For example, the memory 909 can store the measurements of the sensors, such as still images, depth information, video frames, and program results, as well as data provided by the IMU 903 and other sensors. The memory 909 can also store a geometry of the vehicle, a map of the parking space, a kinematic model of the autonomous ego-vehicle, and a dynamic model of the autonomous ego-vehicle. In general, the memory 909 can represent any data storage mechanism. The memory 909 can include, for example, a primary memory and/or a secondary memory. The primary memory can include, for example, a random-access memory, read-only memory, etc.
[0107] Secondary memory can include, for example, the same or similar type of memory as primary memory and/or one or more data storage devices or systems, such as, for example, flash/USB memory drives, memory card drives, disk drives, optical disc drives, tape drives, solid state drives, hybrid drives, etc. In certain implementations, secondary memory can be operatively receptive of, or otherwise configurable to couple to, a non-transitory computer-readable medium in a removable media drive (not shown). In some embodiments, the non-transitory computer-readable medium forms part of the memory 909 and/or the processor 911.
[0108] The following description provides exemplary embodiments only and is not intended to limit the scope, applicability, or configuration of the disclosure. Rather, the following description of the exemplary embodiments will provide those skilled in the art with an enabling description for implementing one or more exemplary embodiments. Various changes may be made in the function and arrangement of elements without departing from the spirit and scope of the subject matter disclosed as set forth in the appended claims.
[0109] Specific details are given in the following description to provide a thorough understanding of the embodiments. However, one of ordinary skill in the art will understand that the embodiments may be practiced without these specific details. For example, systems, processes, and other elements in the subject matter disclosed may be shown as components in block diagram form in order not to obscure the embodiments in unnecessary detail. In other instances, well-known processes, structures, and techniques may be shown without unnecessary detail in order to avoid obscuring the embodiments. Further, like reference numbers and designations in the various drawings indicate like elements.
[0110] Also, individual embodiments may be described as a process which is depicted as a flowchart, a flow diagram, a data flow diagram, a structure diagram, or a block diagram. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel or concurrently. In addition, the order of the operations may be rearranged. A process may be terminated when its operations are completed but may have additional steps not discussed or included in a figure. Furthermore, not all operations in any particularly described process may occur in all embodiments. A process may correspond to a method, a function, a procedure, a subroutine, a subprogram, etc. When a process corresponds to a function, the function’s termination can correspond to a return of the function to the calling function or the main function.
[0111] Furthermore, embodiments of the subject matter disclosed may be implemented, at least in part, either manually or automatically. Manual or automatic implementations may be executed, or at least assisted, through the use of machines, hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware, or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine-readable medium. A processor(s) may perform the necessary tasks.
[0112] Various methods or processes outlined herein may be coded as software that is executable on one or more processors that employ any one of a variety of operating systems or platforms. Additionally, such software may be written using any of a number of suitable programming languages and/or programming or scripting tools, and also may be compiled as executable machine language code or intermediate code that is executed on a framework or virtual machine. Typically, the functionality of the program modules may be combined or distributed as desired in various embodiments.
[0113] Embodiments of the present disclosure may be embodied as a method, of which an example has been provided. The acts performed as part of the method may be ordered in any suitable way. Accordingly, embodiments may be constructed in which acts are performed in an order different than illustrated, which may include performing some acts concurrently, even though shown as sequential acts in illustrative embodiments.
[0114] Although the present disclosure has been described with reference to certain preferred embodiments, it is to be understood that various other adaptations and modifications can be made within the spirit and scope of the present disclosure. Therefore, it is intended that the appended claims cover all such variations and modifications as come within the true spirit and scope of the present disclosure.

Claims

[CLAIMS]
[Claim 1]
An integrated system for parking an autonomous ego-vehicle in a dynamic environment of a parking area, comprising: a processor; and a memory having instructions stored thereon that, when executed by the processor, cause the integrated system to: collect measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; execute a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; execute an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determine a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and park the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
[Claim 2] The system of claim 1, wherein the environment predictor is configured to: estimate a position, a velocity, a steering angle, an acceleration, and a steering angular rate of the obstacle vehicle at different instances of time; estimate the path of the obstacle vehicle based on the position, the velocity, the steering angle, the acceleration, and the steering angular rate of the obstacle vehicle at the different instances of time; and estimate the mode of the motion of the obstacle vehicle based on the steering angle of the obstacle vehicle at the different instances of time.
[Claim 3]
The system of claim 2, wherein the environment predictor is further configured to: execute a Kalman filter integrating measurements of the position of the obstacle vehicle to estimate the velocity and acceleration of the obstacle vehicle at the different instances of time; and integrate the estimations of the velocity in time to produce estimations of the steering angle and the steering angular rate of the obstacle vehicle at the different instances of time.
[Claim 4]
The system of claim 1, wherein the environment predictor is configured to: estimate a position, a velocity, and a steering angle of the obstacle vehicle at different instances of time; estimate the path of the obstacle vehicle based on the position and the velocity of the obstacle vehicle at the different instances of time; and estimate the mode of the motion of the obstacle vehicle based on the steering angle of the obstacle vehicle at the different instances of time.
[Claim 5]
The system of claim 4, wherein the environment predictor is further configured to determine the mode of the motion of the obstacle vehicle based on a misalignment of an orientation of the obstacle vehicle and the steering angle of the obstacle vehicle at the different instances of time.
[Claim 6]
The system of claim 4, wherein the environment predictor is further configured to: execute a Kalman filter integrating measurements of the position of the obstacle vehicle to estimate the velocity of the obstacle vehicle at the different instances of time; and integrate the estimations of the velocity at the different instances of time forward in time to produce estimations of the steering angle at the different instances of time.
[Claim 7]
The system of claim 4, wherein the environment predictor is further configured to: estimate a likelihood of the obstacle vehicle moving on different paths in the parking area; estimate a belief of the mode of motion of the obstacle vehicle for each of the different paths in the parking area; and assign the mode of the motion to the obstacle vehicle corresponding to the path with the highest likelihood.
[Claim 8]
The system of claim 7, wherein the environment predictor is further configured to use a Bayesian framework to track the beliefs and the likelihoods of each mode for each of the different paths in the parking area.
[Claim 9]
The system of claim 1, wherein to park the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles, the processor is further configured to execute a strategic motion planner configured to select one of a model predictive controller for tracking a path of the trajectory, a search-based retreat planner for finding an evasion path for the autonomous ego-vehicle, and an optimization-based repair planner for modifying the path of the trajectory.
[Claim 10]
The system of claim 9, wherein the strategic motion planner is further configured to evaluate the trajectory against the safety constraint for each of the obstacle vehicles and execute the model predictive controller if it is possible to track the path of the trajectory without stopping the autonomous ego-vehicle.
[Claim 11]
The system of claim 9, wherein the model predictive controller is further configured to treat the safety constraint as constraints of a trajectory optimization problem, and wherein the trajectory optimization problem optimizes for at least one feasible and smooth trajectory that follows the trajectory.
[Claim 12] The system of claim 9, wherein the strategic motion planner is further configured to evaluate the trajectory against the safety constraint for each of the obstacle vehicles and execute the repair planner to adjust the trajectory if it is not possible to track the path of the trajectory without stopping the autonomous ego-vehicle.
[Claim 13]
The system of claim 12, wherein the repair planner is configured to move original waypoints of the trajectory to optimal collision-free locations to produce a repaired path.
[Claim 14]
The system of claim 9, wherein the strategic motion planner is further configured to evaluate the trajectory against the safety constraint for each of the obstacle vehicles and execute the retreat planner if a current state of the autonomous ego-vehicle following the trajectory violates the safety constraint.
[Claim 15]
The system of claim 14, wherein the retreat planner is configured to: obtain one or more new nodes by applying one or more precomputed motion primitives on a root node; select a node with the lowest cost from the one or more new nodes and apply one or more precomputed motion primitives at the selected node; repeat the selecting and applying until a safe node satisfying a safe clearance is found or time runs out; and compute, as a retreat path, a path to the safe node or, if time runs out, to the node having the lowest cost.
[Claim 16] A method for parking an autonomous ego-vehicle in a dynamic environment of a parking area, the method comprising: collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
[Claim 17]
The method of claim 16, wherein the environment predictor is configured to estimate a position, a velocity, and a steering angle of the obstacle vehicle at different instances of time; estimate the path of the obstacle vehicle based on the position and the velocity of the obstacle vehicle at the different instances of time; and estimate the mode of the motion of the obstacle vehicle based on the steering angle of the obstacle vehicle at the different instances of time.
[Claim 18]
The method of claim 16, wherein the environment predictor is further configured to determine the mode of the motion of the obstacle vehicle based on a misalignment of an orientation of the obstacle vehicle and the steering angle of the obstacle vehicle at the different instances of time.
[Claim 19]
The method of claim 18, wherein the environment predictor is further configured to: execute a Kalman filter integrating measurements of the position of the obstacle vehicle to estimate the velocity of the obstacle vehicle at the different instances of time; and integrate the estimations of the velocity at the different instances of time forward in time to produce estimations of the steering angle at the different instances of time.
[Claim 20]
A non-transitory computer-readable storage medium having embodied thereon a program executable by a processor for performing a method for parking an autonomous ego-vehicle in a dynamic environment of a parking area, the method comprising: collecting measurements of a state of the dynamic environment in the parking area indicative of available parking spots in the parking area, a state of one or multiple stationary vehicles parked in the parking area, and a state of one or multiple obstacle vehicles moving in the parking area; executing a path planner configured to produce a trajectory for parking the autonomous ego-vehicle based on the state of the dynamic environment in the parking area; executing an environment predictor configured to predict a path and a mode of motion for each of the obstacle vehicles, wherein the mode of motion includes a first mode of motion or a second mode of motion; determining a safety constraint for each of the obstacle vehicles based on the path and the mode of motion for each of the obstacle vehicles, wherein the safety constraint is determined around the state of an obstacle vehicle along its path if the obstacle vehicle is in the first mode of motion, and wherein the safety constraint is determined in all driving directions around the state of the obstacle vehicle if the obstacle vehicle is in the second mode of motion; and parking the autonomous ego-vehicle based on the trajectory for parking and the safety constraint for each of the obstacle vehicles.
PCT/JP2022/048715 2022-03-01 2022-12-22 System and method for parking an autonomous ego- vehicle in a dynamic environment of a parking area WO2023166845A1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US202263268725P 2022-03-01 2022-03-01
US63/268,725 2022-03-01
US17/654,709 2022-03-14
US17/654,709 US20230278593A1 (en) 2022-03-01 2022-03-14 System and Method for Parking an Autonomous Ego-Vehicle in a Dynamic Environment of a Parking Area

Publications (1)

Publication Number Publication Date
WO2023166845A1

Family

ID=85157267

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/048715 WO2023166845A1 (en) 2022-03-01 2022-12-22 System and method for parking an autonomous ego- vehicle in a dynamic environment of a parking area

Country Status (1)

Country Link
WO (1) WO2023166845A1 (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20210116930A1 (en) * 2018-02-28 2021-04-22 Sony Corporation Information processing apparatus, information processing method, program, and mobile object



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22851045

Country of ref document: EP

Kind code of ref document: A1