WO2017197170A1 - Safe control of an autonomous entity in the presence of intelligent agents - Google Patents

Safe control of an autonomous entity in the presence of intelligent agents

Info

Publication number
WO2017197170A1
Authority
WO
WIPO (PCT)
Prior art keywords
autonomous entity
entities
safety
motion
control signal
Prior art date
Application number
PCT/US2017/032243
Other languages
English (en)
Inventor
Changliu LIU
Masayoshi Tomizuka
Original Assignee
The Regents Of The University Of California
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by The Regents Of The University Of California
Publication of WO2017197170A1


Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1674Programme controls characterised by safety, monitoring, diagnostic
    • B25J9/1676Avoiding collision or forbidden zones
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/30Surgical robots
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0214Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory in accordance with safety or protection criteria, e.g. avoiding hazardous areas

Definitions

  • This application relates generally to computer-controlled entities and more specifically to a controller for safely controlling an autonomous entity.
  • a method, a non-transitory computer-readable storage medium, and a controller device control an autonomous entity.
  • a state estimator estimates a state of the autonomous entity based on first sensor measurements.
  • a motion predictor predicts a motion of one or more other entities based on second sensor measurements and a motion model for the one or more other entities.
  • a baseline controller estimates a baseline control signal for controlling the autonomous entity to complete a task based on the estimated state of the autonomous entity and an input goal, the baseline control signal representing a baseline trajectory of the autonomous entity that enables the autonomous entity to complete the task in accordance with an optimization criterion.
  • an enhanced baseline controller (also referred to herein as the "efficiency controller") further bases the estimation of the baseline control signal on the long-term predicted motion of the one or more other entities.
  • a safety controller generates a safety control signal for controlling the autonomous entity based on the baseline control signal, the predicted motion of the one or more other entities, and the estimated state of the autonomous entity.
  • the safety control signal represents a modification to the baseline trajectory of the autonomous entity determined by the baseline controller to cause the autonomous entity to deviate from the baseline trajectory to meet a safety constraint with respect to predicted motion of the one or more other entities.
  • the safety controller generates a safety index based on the estimated state of the autonomous entity and the predicted motion of one or more other entities.
  • the safety index indicates whether or not the estimated state of the autonomous entity is within a range of safe states given the predicted motion of the one or more other entities, and indicates a distance between the estimated state of the autonomous entity and a boundary of the range of safe states given the predicted motion of the one or more other entities.
  • the autonomous entity and the one or more other entities are modeled as geometric capsules, and the safety index is generated by determining the distance between the geometric capsules.
  • a criteria controller generates a criteria signal based on the baseline control signal and the safety index.
  • the criteria signal indicates whether the safety index indicates that the estimated state of the autonomous entity is within the range of safe states with respect to the predicted motion of the one or more other entities or whether the safety index indicates that the estimated state of the autonomous entity is outside the range of safe states with respect to the predicted motion of the one or more other entities.
  • a control modification controller generates a modification signal based on the baseline control signal, the estimated state of the autonomous entity, and the predicted motion of the one or more other entities.
  • the modification signal represents an optimal change in trajectory relative to the baseline trajectory to put the autonomous entity in the range of safe states relative to the predicted motion of the one or more other entities.
  • the modification signal is generated by causing a change in the state of the autonomous entity that results in a decrease in the safety index.
  • the motion predictor receives a plurality of motion models learned from an offline classifier. The motion predictor then updates parameters of the plurality of motion models based on observed motion of the one or more other entities, and generates the prediction from the plurality of motion models with the updated parameters.
  • the motion predictor models a state of each of the one or more other entities as a plurality of features.
  • the motion predictor applies a classifier to the plurality of features to determine probabilities of each of the one or more other entities taking different actions from a predefined set of possible actions.
  • the motion predictor then predicts the motion based on the determined probabilities.
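The classifier-then-predict flow described in the bullets above can be sketched as follows. The feature set, the linear softmax classifier, the three-action set, and the constant-velocity rollouts with lateral drift are all illustrative assumptions for a lane-driving entity, not the patent's actual model:

```python
import numpy as np

# Hypothetical discrete action set for a surrounding entity.
ACTIONS = ["keep_lane", "change_left", "change_right"]

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def action_probabilities(features, weights):
    """Apply a classifier (here: linear softmax) to the entity's feature
    vector to get a probability for each action in the predefined set."""
    return softmax(weights @ features)

def predict_motion(position, velocity, probs, horizon=10, dt=0.1):
    """Blend a constant-velocity rollout per action, weighted by the
    classifier probabilities (lateral lane offsets are illustrative)."""
    lateral = {"keep_lane": 0.0, "change_left": +3.5, "change_right": -3.5}
    trajs = []
    for a, p in zip(ACTIONS, probs):
        traj = np.array([position + velocity * (k + 1) * dt for k in range(horizon)])
        traj[:, 1] += np.linspace(0.0, lateral[a], horizon)  # drift toward target lane
        trajs.append(p * traj)
    return np.sum(trajs, axis=0)  # expected trajectory under the action distribution

# Example: entity at the origin moving forward at 10 m/s.
features = np.array([10.0, 0.0, 1.0])  # e.g. speed, lateral offset, bias (assumed)
weights = np.random.default_rng(0).normal(size=(3, 3))
probs = action_probabilities(features, weights)
expected = predict_motion(np.zeros(2), np.array([10.0, 0.0]), probs)
```

The expected trajectory mixes the per-action rollouts rather than committing to the single most likely action, which keeps the prediction smooth as the probabilities shift online.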
  • FIG. 1 is a high-level block diagram of an embodiment of a controller for controlling an autonomous entity.
  • FIG. 2 is a detailed block diagram of a first embodiment of a control module for controlling an autonomous entity.
  • FIG. 3 is a diagram illustrating an example of a control outcome achieved by the first embodiment of the controller.
  • FIG. 4 is a detailed block diagram of a second embodiment of a control module for controlling an autonomous entity.
  • FIG. 5 is a diagram illustrating an example of a control outcome achieved by the second embodiment of the controller.
  • FIG. 6 is a diagram illustrating a first embodiment of safe states for operating an autonomous entity.
  • FIG. 7 is a diagram illustrating an example embodiment of a human model.
  • FIG. 8 is a diagram illustrating an example embodiment of a safety index and corresponding safe states for controlling an autonomous entity.
  • FIG. 9 is a diagram illustrating an example control outcome when controlling an autonomous entity.
  • FIG. 10 is a diagram illustrating an example scenario for controlling an autonomous vehicle.
  • FIG. 11 is a diagram illustrating an example embodiment of a vehicle model.
  • FIG. 12 is a diagram illustrating a sample interaction between an autonomously-controlled vehicle and other entities.
  • FIG. 13 is a block diagram illustrating an example embodiment of a motion predictor.
  • FIG. 14 is a block diagram illustrating an example embodiment of a controller for controlling an autonomous vehicle.
  • FIG. 15 is a diagram illustrating a second embodiment of a safety index and corresponding safe states for operating an autonomous entity.
  • FIG. 16 is a diagram illustrating a performance of a controller controlling an autonomous vehicle according to a first embodiment.
  • FIG. 17 is a diagram illustrating a performance of a controller controlling an autonomous vehicle according to a second embodiment.
  • a controller and control algorithm enables an autonomous entity such as a robot or autonomous vehicle to safely operate in environments where humans or other intelligent agents are present.
  • the described embodiments enable humans and robots to be co-workers and co-inhabitants in flexible production lines while ensuring that humans and robots interact efficiently without harming each other.
  • the robot motion planning and control problem in a human involved environment is posed as a constrained optimal control problem.
  • a modularized parallel controller structure solves the problem online, which includes a baseline controller that ensures efficiency, and a safety controller that addresses real time safety by making a safe set invariant. Capsules are used to represent the complicated geometry of humans and robots.
  • human-friendly industrial co-robots are equipped with abilities such as, for example, (1) collecting and interpreting environmental data, (2) adapting to different tasks and different environments, and (3) tailoring themselves to the human workers' needs.
  • the control scheme enables the robots to cope with complex and time-varying human motion and assure real-time safety without sacrificing efficiency.
  • a constrained optimal control problem describes this problem mathematically and a modularized controller architecture solves the problem.
  • the controller architecture is based on two online algorithms: a "safe set algorithm” (SSA) and a "safe exploration algorithm” (SEA), which confine the robot motion in a safe region regarding the predicted human motion.
  • the modularized architecture beneficially 1) treats the efficiency goal and the safety goal separately and allows more freedom in designing robot behaviors, 2) is compatible with existing robot motion control algorithms and can deal with complicated robot dynamics, 3) guarantees real time safety, and 4) is good for parallel computation.
  • the controller controls the driving behavior of an automated vehicle to prevent or minimize occurrences of collisions among vehicles and obstacles while maintaining efficiency (e.g., maintaining high speed on a freeway).
  • the controller will make the driving decisions, e.g. finding a safe and efficient trajectory for the automated vehicle.
  • the controller operates in a freeway driving scenario such that at each time step, the automated vehicle predicts the future courses of all surrounding vehicles and confines its trajectory in a safe region regarding the prediction.
  • the decision making problem is posed as an optimal control problem, which is solved by 1) behavior classification and trajectory prediction of the surrounding vehicles, and 2) a unique parallel planner architecture which addresses the efficiency goal and the safety goal separately.
  • the controller controls another type of autonomous entity to operate safely in the presence of humans or other intelligent agents. While the specification describes details in the context of two specific types of autonomous entities (a robot arm and an autonomous vehicle), the methods, apparatuses, and principles described with respect to one of these embodiments may also be applied to the other embodiment or to a different type of autonomous entity. Similarly, while the specification describes details in the context of operating an autonomous entity in the presence of humans (in the robot arm case) or human-driven vehicles (in the autonomous vehicle case), the methods, apparatuses, and principles described with respect to either or both of these embodiments may also apply to the autonomous entity avoiding other intelligent agents (e.g., animals), human-controlled vehicles or objects, or other autonomously controlled vehicles, objects, or robots.
  • FIG. 1 is a block diagram illustrating an embodiment of a controller 100 for controlling an autonomous entity which may operate according to any of the principles described herein.
  • the controller 100 receives one or more input parameters 102.
  • the input parameters 102 may comprise, for example, sensor data used to determine the autonomous entity's position, velocity, acceleration, orientation, joint configuration, or proximity of surrounding objects, control inputs that specify a particular objective for the autonomous entity, or other inputs that guide operation of the autonomous entity as described herein.
  • the controller 100 produces one or more control outputs 104 to control movement of the autonomous entity.
  • the controller 100 comprises a processor 120 and a memory 110.
  • the processor 120 processes data signals and may comprise various computing architectures such as a complex instruction set computer (CISC) architecture, a reduced instruction set computer (RISC) architecture, or an architecture implementing a combination of instruction sets. Although only a single processor 120 is shown in FIG. 1, multiple processors may be included.
  • the processor 120 comprises an arithmetic logic unit, a microprocessor, a general purpose computer, or some other information appliance equipped to transmit, receive and process electronic data signals from the memory 110 or from external inputs.
  • the memory 110 comprises a non-transitory computer-readable storage medium that stores computer-executable instructions and computer-readable data.
  • the instructions may comprise code for performing any and/or all of the techniques described herein.
  • the memory 110 may furthermore temporarily or persistently store data inputted to the controller 100 (e.g., input parameter(s) 102), data to be outputted by the controller 100 (e.g., control output(s) 104), and any intermediate data used to carry out the process steps of the controller 100 described herein.
  • Memory 110 may comprise, for example, a dynamic random access memory (DRAM) device, a static random access memory (SRAM) device, Flash RAM (nonvolatile storage), combinations of the above, or some other memory device known in the art.
  • the processor 120 loads the computer-executable instructions and/or data from the memory 110 and executes the instructions to carry out the functions described herein.
  • the memory 110 stores a control module 112.
  • the control module 112 includes computer-executable program instructions that, when executed by the processor 120, cause the controller 100 to receive the input parameter(s) 102, process the input parameter(s) 102 according to the techniques described herein, and generate the control output(s) 104 to control the autonomous entity.
  • the control module 112 generates control outputs 104 that control the autonomous entity to achieve its objective (or as near as possible) while operating safely in the presence of humans or other intelligent or non-intelligent entities in the environment of the autonomous entity.
  • the controller 100 may include more or fewer components than those shown in FIG. 1 without departing from the scope of the embodiments.
  • controller 100 may include additional memory, such as, for example, a first or second level cache, or one or more application specific integrated circuits (ASICs).
  • the controller 100 may be implemented entirely in hardware, in firmware, or using a combination of software, firmware, and hardware.
  • FIG. 2 illustrates an embodiment of a controller 200 for controlling an autonomous entity that may be used as the control module 112 in the controller 100 of FIG. 1.
  • the controller 200 comprises a state estimator 202, a human motion predictor 204, a baseline controller 218, a safety controller 210, and a combining block 222.
  • Alternative embodiments may include additional or different components.
  • the state estimator 202 estimates a state 253 of the autonomous entity based on measurements 259.
  • the measurements 259 may be taken from various sensors that sense, for example, a position, velocity, acceleration, orientation, or configuration of the autonomous entity.
  • the motion predictor 204 predicts a motion 205 of one or more other entities (e.g., humans or other moving objects or intelligent agents) in the vicinity of the autonomous entity based on measurements 263 and a motion model 257.
  • the measurements 263 could include sensor data such as radar, LiDAR, images, video, or other sensed data that can be used to predict a state of the other entities that may include, for example, position, velocity, acceleration, orientation, or other configuration of the other entities.
  • the motion predictor 204 applies the motion model 257 to the states of the other entities to predict a future trajectory of the other entities as will be described in further detail below.
  • the baseline controller 218 receives the estimated state 253 of the autonomous entity and a goal 251 (e.g., embodied as an input control signal) and generates a baseline control signal 225.
  • the baseline control signal 225 may represent a baseline trajectory of the autonomous entity that, if followed, would enable the autonomous entity to meet the goal 251 in accordance with some optimization criterion defined in further detail below.
  • the baseline controller 218 may generate a baseline control signal 225 that would control the autonomous entity to reach the desired position according to an optimal path (e.g., shortest distance).
  • the safety controller 210 generates a safety control signal 211 based on the baseline control signal 225, the predicted motion 205 of the other entities, and the estimated state 253 of the autonomous entity.
  • the safety control signal 211 represents a modification to apply to the baseline trajectory specified in the baseline control signal 225 in order to ensure that the autonomous entity follows a safe trajectory to achieve the goal 251 that avoids collisions with the other entities.
  • the safety control signal 211, when combined with the baseline control signal 225, may cause the autonomous entity to deviate from the baseline trajectory when desirable to meet a safety constraint relating to the predicted motion 205 of the one or more other entities in the vicinity of the autonomous entity.
  • the combining block 222 combines the baseline control signal 225 with the safety control signal 211 to generate a combined control signal 261 for controlling the autonomous entity to follow a safe trajectory that meets the safety constraint while enabling the autonomous entity to complete the goal 251.
  • the safety controller 210 comprises a control modification module 212, a safety constraint module 214, a criteria module 216, and a combining block 224.
  • the safety constraint module 214 generates a safety index 255 based on the estimated state 253 of the autonomous entity and the predicted motion 205 of the one or more other entities.
  • the safety index 255 indicates whether or not the estimated state 253 of the autonomous entity is within a range of safe states given the predicted motion 205 of the one or more other entities, and, if outside the range, may indicate a distance between the estimated state of the autonomous entity and a boundary of the range of safe states.
  • the criteria module 216 generates a criteria signal 217, based on the baseline control signal 225 and the safety index 255, that indicates whether the estimated state of the autonomous entity is within the range of safe states or outside of it.
  • the criteria signal 217 may comprise a binary value.
  • the control modification module 212 generates a modification signal 213 based on the baseline control signal 225, the estimated state 253 of the autonomous entity, and the predicted motion 205 of the one or more other entities.
  • the modification signal 213 represents an optimal change in trajectory relative to the baseline trajectory that will put the autonomous entity in the range of safe states relative to the predicted motion of the one or more other entities.
  • the combining block 224 combines the criteria signal 217 and the modification signal 213 to generate the safety control signal 211.
  • the combining block 224 acts as a multiplication block that multiplies the modification signal 213 by the binary value of the criteria signal 217 (e.g., zero or one) such that the safety control signal 211 is zero (providing no modification to the baseline control signal 225 when combined at combining block 222) when the autonomous entity is already within the safe space, and the safety control signal 211 has a non-zero value (modifying the baseline control signal 225 when combined at combining block 222) when the autonomous entity is outside the safe space.
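The signal flow through the baseline controller 218, safety constraint module 214, criteria module 216, control modification module 212, and combining blocks 222/224 can be sketched in miniature. The point-mass model, distance-based safety index, and gains below are illustrative stand-ins, not the patent's design:

```python
import numpy as np

def baseline_control(state, goal, k_p=1.0):
    """Baseline controller 218: head straight toward the goal (shortest path)."""
    return k_p * (goal - state)

def safety_index(state, predicted_other, d_min=1.0):
    """Safety constraint module 214: non-negative when too close (unsafe)."""
    return d_min - np.linalg.norm(state - predicted_other)

def control_step(state, goal, predicted_other):
    u0 = baseline_control(state, goal)            # baseline control signal 225
    phi = safety_index(state, predicted_other)    # safety index 255
    criteria = 1.0 if phi >= 0 else 0.0           # criteria signal 217 (binary)
    # Control modification module 212: push directly away from the other entity.
    away = state - predicted_other
    away = away / (np.linalg.norm(away) + 1e-9)
    modification = 2.0 * away                     # modification signal 213
    u_safe = criteria * modification              # combining block 224 (multiply)
    return u0 + u_safe                            # combining block 222 (add)

state = np.array([0.0, 0.0])
goal = np.array([5.0, 0.0])
other = np.array([0.5, 0.0])   # predicted to be directly ahead, within d_min
u = control_step(state, goal, other)
```

When the other entity is predicted to be far away, the criteria signal is zero and the combined control reduces to the baseline control; when it is within the unsafe range, the modification is switched in.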
  • the expected outcome of the controller 200 is shown in FIG. 3 in an example scenario involving an autonomous vehicle 302.
  • the baseline controller 218 will command the autonomous vehicle 302 to go straight towards its goal 304 according to the baseline trajectory 312.
  • the human motion predictor 204 predicts that the human 306 will go to the prediction point 308 and is very likely to appear within the uncertainty range 310. Since the baseline trajectory 312 is no longer in the safe region 314, the safety controller 210 causes a modified trajectory 316 to be followed towards the goal 304 that avoids the human 306.
  • FIG. 4 illustrates an alternative embodiment of a controller 400 that can be used as the control module 112 in the controller 100 of FIG. 1.
  • the baseline controller 218 comprises an enhanced baseline controller, also referred to herein as an "efficiency controller” 418.
  • the efficiency controller 418 determines an enhanced baseline control signal (also referred to herein as an "efficiency signal") 425 representing an efficient trajectory that will stay within the safe space based on a long-term motion prediction 205 (together with the estimated state 253 and goal 251).
  • a benefit of the efficiency controller 418 is to provide the autonomous entity global guidance in the long term in order to avoid local optima. Because the efficient trajectory in the efficiency signal 425 is based on a long-term motion prediction 205, it may not be completely safe if the prediction is inaccurate. Thus, the efficiency signal may still be modified by the safety control signal 211 at the combiner 222 to find a safe trajectory in real time for controlling the autonomous entity according to a combined control signal 461. However, relative to the baseline controller 218, the efficiency controller 418 may generate an initial trajectory more efficiently and reduce the computational burden on the safety controller 210 to find the modification that results in a safe trajectory.
  • for example, in the example of FIG. 5, a robot 510 is configured to reach the goal 502, which is blocked by the moving obstacle 504.
  • the baseline controller 218 will generate a straight trajectory 506 towards the goal 502.
  • the safety controller 210 determines a large detour, which may be computationally intensive and may take a long time to calculate since the robot 510 actually moves away from the goal 502 by detouring according to the trajectory 512. Such behavior is undesired in the short term.
  • the efficiency controller 418 will generate a reference detour in the long term. Although the trajectory may not be completely safe, since the long-term prediction of the moving obstacle may not be accurate, the long-term detour can better guide the robot 510, making it easier for the safety controller 210 to find a safe and efficient trajectory in real time. While the efficiency controller 418 may achieve better performance, a trade-off with the above-described baseline controller 218 is that the baseline controller 218 may have lower computational complexity.
  • Example operations are described in the context of the controller 100 controlling a robot arm as the autonomous entity and in the context of the controller 100 controlling an autonomous vehicle as the autonomous entity. Other types of autonomous entities may similarly be controlled by the controller 100 according to the principles described herein.
  • FIGs. 6-9 and the associated description describe a controller for controlling an autonomous entity and operating principles for the same in the context of a robot arm operating in the presence of humans.
  • the methods, apparatuses, and principles described with respect to FIGs. 6-9 may similarly apply to controlling an autonomous vehicle in the presence of other vehicles, or to controlling another type of autonomous entity in the presence of intelligent agents.
  • the following examples discuss operation of the controller 100 in a scenario in which co-robots co-operate and co-inhabit with human workers.
  • Safety in co-inhabitance and contactless co-operation form the basic interaction types during human-robot interactions. Since the interaction is contactless, robots and humans are independent of one another in the sense that the humans' inputs will not affect the robots' dynamics in the open loop. However, humans and robots are coupled together in the closed loop, since each will react to the others' motions.
  • the state of the robot of interest (e.g., state 253) may be denoted as $x_R \in \mathbb{R}^n$ and the robot's control input as $u_R \in \mathbb{R}^m$, where $n, m \in \mathbb{N}$.
  • the robot dynamics is affine, i.e., $\dot{x}_R = f(x_R) + h(x_R)\, u_R$ (1).
  • the task or the goal for the robot (e.g., goal 251) is denoted as G R , which can be, for example, 1) a settle point in the Cartesian space (e.g. a work piece the robot needs to get), 2) a settle point in the configuration space (e.g. a posture), 3) a path in the Cartesian space or 4) a trajectory in the configuration space.
  • the robot fulfills the aforementioned tasks safely.
  • FIG. 6 illustrates the safe set $X_S$ as the off-diagonal (upper-left and lower-right) regions of the system's state space.
  • the safe region for the robot is defined as $R_S(x_H) := \{x_R : (x_R, x_H)^T \in X_S\}$, which is time varying with $x_H$.
  • two steps may be applied to safely control the robot motion: 1) predicting the human motion; and 2) finding the safe region for the robot (marked area along x R axis in FIG. 6) based on the prediction.
  • the goal of the co-robot is to finish the tasks $G_R$ efficiently while staying in the safe region $R_S(x_H)$, which leads to the following optimization problem:

    $\min_{u_R} J(x_R, u_R, G_R)$  (2)
    $\text{s.t. } u_R \in \Omega,\; x_R \in \Gamma$  (3)
    $x_R \in R_S(x_H)$  (4)

  • where $\Omega$ is the constraint on control inputs and $\Gamma$ is the state space constraint (e.g., joint limits, stationary obstacles).
  • the safety constraint RS(XH) may be nonlinear, non-convex and time varying with unknown dynamics.
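The constrained optimal control problem above can be instantiated numerically at a small scale. The quadratic cost, single-integrator dynamics, box input bounds, and disc-shaped safe region below are illustrative choices, and a single planning step over one control input is shown rather than a full trajectory optimization:

```python
import numpy as np
from scipy.optimize import minimize

x_R = np.array([0.0, 0.0])   # current robot state (position)
G_R = np.array([2.0, 0.0])   # goal
x_H = np.array([1.0, 0.0])   # predicted human position (directly in the way)
d_min = 0.8                  # radius of the unsafe region around the human
dt = 1.0

def cost(u):
    # Efficiency objective: get close to the goal with small control effort.
    x_next = x_R + dt * u    # single-integrator robot dynamics (assumed)
    return np.sum((x_next - G_R) ** 2) + 0.1 * np.sum(u ** 2)

def safety(u):
    # Safe region: next state must stay at least d_min from the predicted human.
    x_next = x_R + dt * u
    return np.linalg.norm(x_next - x_H) - d_min

res = minimize(
    cost,
    x0=np.zeros(2),
    bounds=[(-1.0, 1.0)] * 2,                      # input constraint
    constraints=[{"type": "ineq", "fun": safety}], # safety constraint
)
u_star = res.x
```

The solver is forced off the straight line toward the goal because the cheapest inputs would place the robot inside the unsafe disc, which mirrors why the safety constraint makes the problem non-convex in general.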
  • the problem may be solved using non-convex optimization methods, e.g., sequential convex optimization, A* search, and Monte-Carlo based rapidly-exploring random tree (RRT) methods.
  • the computation loads may be too high for online applications on industrial co-robots.
  • analytical methods such as potential field methods and sliding mode methods may be used which have low computation loads.
  • these methods may fail to emphasize optimality.
  • moreover, since the humans react to the robot's motion, $x_H$ may be a function of $x_R$.
  • a generalized method may be used in a modularized controller architecture that can handle three-dimensional (3D) interactions as described below.
  • the baseline controller 218 discussed above solves equations (2)-(3), which is time-invariant and can be solved offline.
  • the safety controller 210 enforces the time varying safety constraint (4), which computes whether the baseline control signal 225 is safe to execute or not (in the safety constraint module 214 and the criteria module 216) based on the predictions 205 made in the human motion predictor 204, and what the modification signal 213 should be (in the control modification module 212).
  • the efficiency controller 418 solves equations (2)-(4). By solving the equations together in the efficiency controller 418, global guidance can be achieved in the long term, avoiding local optima.
  • the baseline controller 218 in the embodiment of FIG. 2 solves equations (2)-(3), which is similar to the controller in use when the robot is working in the cage.
  • the control policy can be obtained by solving the problem offline.
  • Basic collision avoidance algorithms can be used to avoid stationary obstacles described by the constraint $x_R \in \Gamma$. This controller 218 is included to ensure that the robot can still perform the tasks properly when the safety constraint $R_S(x_H)$ is satisfied.
  • the efficiency controller 418 in the embodiment of FIG. 4 operates similarly to the baseline controller 218 but solves equations (2)-(4) above.
  • the human body may be represented at various levels of details.
  • for 2D interactions (e.g., with automated guided vehicles (AGVs), mobile robots, and planar arms), a human can be tracked as a rigid body in the 2D plane, with the state $x_H$ being the position and velocity of the center of mass and the rotation around it.
  • the choice of the human model depends on the human's distance to the robot.
  • when the human is far from the robot, the human may be treated as one rigid body to simplify the computation. In close proximity, however, the human's limb movements may be considered.
  • the human may be modeled as a connection of ten rigid parts: part 1 is the head; part 2 is the trunk; parts 3, 4, 5 and 6 are the upper limbs; and parts 7, 8, 9 and 10 are the lower limbs.
  • the joint positions can be tracked using 3D sensors.
  • the human's state XH can be described by a combination of the states of all rigid parts.
  • the prediction of future human motion XH by the motion predictor 204 may be done in two steps: inference of the human's goal GH and prediction of the trajectory to the goal. Once the goal is identified, a linearized reaction model can be assumed for trajectory prediction, e.g.
  • $\dot{x}_H = A x_H + B_1 G_H + B_2 x_R + w_H$  (5)
  • where $w_H$ is the noise, and $A$, $B_1$, and $B_2$ are unknown matrix parameters which encode the dependence of future human motion on the human's current posture, the goal, and the robot motion.
  • Those parameters can be identified using parameter identification algorithms, while the prediction can be made using the identified parameters. Note that to account for the human's time-varying behaviors, the parameters may be identified online. This method is based on the assumption that the human does not 'change' very fast.
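The online identification of the parameters in (5) can be sketched with recursive least squares and a forgetting factor to track slowly time-varying behavior. The scalar-state setup, ground-truth parameters, and excitation signals below are synthetic, chosen only to make the example self-contained:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic ground-truth discrete-time model (5): x_{k+1} = A x_k + B1 G + B2 xR + w
A_true, B1_true, B2_true = 0.9, 0.05, -0.02

# Recursive least squares over theta = [A, B1, B2].
theta = np.zeros(3)
P = np.eye(3) * 100.0   # parameter covariance
lam = 0.98              # forgetting factor: discounts old data (tracks drift)

x = 0.0
for k in range(500):
    G, xR = 1.0, np.sin(0.05 * k)            # goal and robot state (excitation)
    x_next = A_true * x + B1_true * G + B2_true * xR + 0.001 * rng.normal()
    phi = np.array([x, G, xR])               # regressor
    K = P @ phi / (lam + phi @ P @ phi)      # RLS gain
    theta = theta + K * (x_next - phi @ theta)
    P = (P - np.outer(K, phi @ P)) / lam
    x = x_next

A_hat, B1_hat, B2_hat = theta
```

Once the estimates converge, predictions are made by rolling (5) forward with the identified parameters; the forgetting factor lets the estimates follow a human whose behavior changes slowly.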
  • the safe set X_S is a collision-free subspace of the system's state space, which depends on the relative distances among humans and robots. Since humans and robots have complicated geometric features, simple geometric representations may be used for efficient online distance calculation. For example, in one embodiment, ellipsoids may be used.
  • capsules, or spherocylinders, may also be used.
  • a spherocylinder consists of a cylinder body and two hemisphere ends. Spherocylinders are introduced to bound the geometric figures.
  • a sphere is considered a generalized capsule with the length of the cylinder being zero.
  • the distance between two capsules can be calculated analytically: it equals the distance between their center lines minus their radii, as shown in FIG. 7B. In the case of a sphere, the center line reduces to a point. In this way, the relative distance among complicated geometric objects can be calculated using just several skeletons and points.
  • the skeleton representation is also ideal for tracking the human motion.
  • the design of X_S may mainly be the design of the minimum distances among the capsules.
  • the design should not be too conservative, while larger buffer volumes may be used to bound critical body parts such as the head and the trunk, as shown in FIG. 7A.
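The analytic capsule-to-capsule distance described above (center-line distance minus radii) can be sketched as follows. The segment-distance routine is a standard closest-point computation between two line segments; the function names are illustrative.

```python
import numpy as np

def closest_dist_segments(p1, q1, p2, q2, eps=1e-9):
    """Minimum distance between segments p1-q1 and p2-q2 (the center lines)."""
    d1, d2, r = q1 - p1, q2 - p2, p1 - p2
    a, e, f = d1 @ d1, d2 @ d2, d2 @ r
    if a <= eps and e <= eps:              # both degenerate: point-point
        return float(np.linalg.norm(r))
    if a <= eps:                           # first is a point
        s, t = 0.0, np.clip(f / e, 0.0, 1.0)
    else:
        c = d1 @ r
        if e <= eps:                       # second is a point
            t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
        else:                              # general case
            b = d1 @ d2
            denom = a * e - b * b
            s = np.clip((b * f - c * e) / denom, 0.0, 1.0) if denom > eps else 0.0
            t = (b * s + f) / e
            if t < 0.0:
                t, s = 0.0, np.clip(-c / a, 0.0, 1.0)
            elif t > 1.0:
                t, s = 1.0, np.clip((b - c) / a, 0.0, 1.0)
    return float(np.linalg.norm((p1 + s * d1) - (p2 + t * d2)))

def capsule_distance(p1, q1, r1, p2, q2, r2):
    """Capsule-capsule distance: center-line distance minus the two radii.
    A sphere is the special case p == q (zero-length cylinder)."""
    return closest_dist_segments(p1, q1, p2, q2) - r1 - r2
```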
  • the safe set in the 3D interactions can be designed as:
  • X_S = {x : d_ij(x_H, x_R) > d_ij^min, ∀ i = 1, ..., 10, ∀ j ∈ I}    (6)
  • where d_ij(x_H, x_R) measures the minimum distance from the capsule of body part i on the human (or the robot) j to the capsules of the robot R, and d_ij^min ∈ ℝ+ is the designed minimum safe distance.
  • a safety index φ (e.g., safety index 255 in FIG. 2) is introduced (as generated by safety constraint module 214), which is a Lyapunov-like function over the system's state space, as illustrated in FIG. 8.
  • the safety index may be selected to satisfy three conditions: 1) the relative degree from φ to the robot control u_R in the Lie derivative sense is one (to ensure that the robot's control input can drive the state to the safe set directly); 2) φ is differentiable almost everywhere; 3) the control strategy that keeps φ̇ < 0 whenever φ ≥ 0 will make the set X_S invariant, i.e. x(t) ∈ X_S, ∀ t > t_0 if x(t_0) ∈ X_S.
  • the safety index for the safe set in (6) can be designed, for example, in the form φ = (d^min + η)^c − d^c − k·ḋ, where d is the distance evaluated at the critical point:
  • the set I contains the closest point (the critical point) to the robot R.
  • l ∈ ℕ is the relative degree from the function d(·, x_R) to u_R in the Lie derivative sense.
  • l = 2, since the robot's control input can affect joint accelerations.
  • c > 1 is a tunable parameter; a larger c means heavier penalties on small relative distances.
  • η > 0 is a safety margin that can uniformly enlarge the capsules in FIG. 7.
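For illustration only, a concrete index of this general shape can be evaluated as below. The expression (d_min + η)^c − d^c − k·ḋ and all gain values are assumptions chosen to be consistent with the stated conditions (c > 1 penalizes small distances, η enlarges the capsules, and the ḋ term makes the relative degree from φ to the control one); they are not reproduced from the description.

```python
def safety_index(d, d_dot, d_min=0.5, eta=0.1, c=2.0, k=1.0):
    """Illustrative safety index
        phi = (d_min + eta)**c - d**c - k * d_dot,
    where d is the minimum capsule distance and d_dot its rate of change.
    phi <= 0 is treated as safe; phi grows when the robot is close to the
    human (small d) and approaching (negative d_dot)."""
    return (d_min + eta) ** c - d ** c - k * d_dot
```

For example, a far and separating state gives φ < 0 (safe), while a state inside the margin that is still approaching gives φ > 0 (unsafe).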
  • the criteria module 216 determines whether or not a modification signal 213 should be added to the baseline control signal 225.
  • the first criterion defines a reactive safety behavior, i.e. the control signal is modified once the safety constraint is violated.
  • the second criterion defines a forward-looking safety behavior, i.e. the safety controller considers whether the safety constraint will be violated ⁇ time ahead.
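The reactive and forward-looking criteria can be combined into a single check, sketched below; the probabilistic branch handles the case of uncertain predictions, and the function name and threshold value are illustrative assumptions.

```python
def needs_modification(phi_now, phi_pred, prob_violation=None, eps=0.2):
    """Decide whether a modification signal should be added.
    Criterion (I), reactive: the safety constraint is already violated.
    Criterion (II), forward-looking: the index predicted a short time
    ahead is violated, or, when predictions are uncertain, the violation
    probability is non-trivial (above an assumed threshold eps)."""
    if phi_now >= 0:                     # criterion (I)
        return True
    if prob_violation is not None:       # probabilistic criterion (II)
        return prob_violation > eps
    return phi_pred >= 0                 # deterministic criterion (II)
```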
  • the prediction in the second criterion is made upon the estimated human dynamics and the baseline control law. In the case when the prediction of future human motion is uncertain, the modification signal may be added when the probability for criterion (II) to happen is non-trivial, e.g. Pr(φ(t + Δt) > 0) > ε for some ε ∈ (0,1).

THE SET OF SAFE CONTROL AND THE CONTROL MODIFICATION
  • the set of safe control U_S is the equivalent safety constraint on the control space, i.e. the set of controls that can drive the system state into the safe set, as shown in FIG. 8.
  • the robot can drive the system state into the safe set through the safety index, i.e. by choosing a control such that φ̇ < 0 whenever φ ≥ 0. The set of safe control can accordingly be written as
  • U_S(t) = {u_R(t) : φ̇ ≤ −λ when φ ≥ 0}    (9)
  • where λ ∈ ℝ+ is a margin and the estimate of ẋ_H comes from the human motion predictor.
  • let Γ be the compact set that contains the major probability mass of ẋ_H.
  • Δu_R = argmin_{u : u_R^0 + u ∈ U_S ∩ U_2 ∩ U_R} u^T Q u    (10), where Q ∈ ℝ^{m×m} is positive definite and determines a metric on the robot's control space.
  • Q may be chosen close to the metric imposed by the cost function J in (2), e.g. Q ≈ ∂²J/∂u_R², where J is assumed to be convex in u_R.
  • U_2 is the equivalent constraint on the control space of the state space constraint Γ, which can be constructed following the same procedure as U_S. Equation (10) is a convex optimization problem. In the case that U_S ∩ U_2 ∩ U_R is empty, a smaller margin λ can be chosen so that the feasible control set becomes nonempty.
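When the set of safe control reduces to a single linearized constraint L·u ≤ S on the control (as obtained from requiring φ̇ ≤ −λ), problem (10) has a closed-form solution. The following non-limiting sketch (names illustrative, single-constraint case only) projects the baseline control onto the constraint under the metric Q.

```python
import numpy as np

def modify_control(u0, L, S, Q):
    """Closed-form solution of min u^T Q u s.t. L @ (u0 + u) <= S for a
    single linear safety constraint: if the baseline control u0 already
    satisfies the constraint, no modification is needed; otherwise the
    optimum lies on the constraint boundary and follows from the KKT
    conditions of the equality-constrained quadratic program."""
    violation = L @ u0 - S
    if violation <= 0:
        return np.zeros_like(u0)          # baseline already safe
    Qinv_L = np.linalg.solve(Q, L)        # Q^{-1} L^T
    return -Qinv_L * (violation / (L @ Qinv_L))
```

A non-identity Q shifts the modification toward the "cheap" control directions, which is how the metric can be tuned toward energy efficiency.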
  • in one embodiment, the described controller 100 and control algorithm control a planar robot arm.
  • the state space equation of the planar robot is affine:
  • the vertical displacement of the robot arm may be ignored in one embodiment.
  • the baseline controller 218 and efficiency controller 418 may comprise a computed torque controller with set point G_R.
  • a sampling frequency of 20 Hz may be used. Due to the limitation of bandwidth, both reactive and forward-looking criteria may be used, in order not to violate the safety constraints between two samples.
  • the computation of U_S follows from (9).
  • the computation of U_2 is similar.
  • the metric Q in this embodiment is chosen to put larger penalties on the torque modification applied to the heavier link, and is thus energy efficient.
  • Example results of a sample interaction are illustrated in FIG. 9.
  • the first plot 910 in FIG. 9 shows the critical point on the arm that is the closest to the human capsule.
  • the second plot 920 shows the distance from the robot endpoint to the robot's goal position.
  • the third plot 930 shows the relative distance d between the robot capsules and the human capsule, where the red area represents the danger zone {d < d_min}.
  • the bars in the fourth plot 940 illustrate whether the safety controller 210 is active or not at each time step.
  • the safety controller went active and performed real-time trajectory planning to let the robot arm detour to avoid the human.
  • the separation offers more freedom in designing the robot behavior and is good for parallel computation.
  • FIGs. 10-17 describe a controller for controlling an autonomous entity and operating principles for the same in the context of an autonomous vehicle operating in the presence of other vehicles.
  • the methods, apparatuses, and principles described with respect to FIGs. 10- 17 may similarly apply to controlling a robot arm in the presence of humans, or to controlling another type of autonomous entity in the presence of intelligent agents.
  • the scenario shown in FIG. 10 may be modeled in the framework of multi-agent systems.
  • the lanes in the freeway are along the x-direction with no curvature.
  • the lanes are indexed increasing from the right lane (bottom of FIG. 10) to the left lane (top of FIG. 10).
  • a multi-agent system comprises a system composed of multiple interacting intelligent agents within an environment. All vehicles (the automated vehicle and other manually driven vehicles) on freeway are viewed as agents, which have several important characteristics: 1) autonomy: the agents are self-aware and autonomous; 2) local views: no agent has a full global view of the system; 3) decentralization: there is no designated controlling agent.
  • agent i chooses the control u_i based on its information set and its objective G_i (which can be intended behaviors or a desired speed).
  • the controller for the automated vehicle, i.e. the function g_0, may be chosen by balancing the following two factors: 1) Efficiency: the objective G_0 should be achieved in an optimal manner by minimizing a cost function J(x_0, u_0, G_0); 2) Safety: the efficiency requirement should be fulfilled safely.
  • X_S, which is a closed subset of the state space X, is the set of system states x that are collision free.
  • g_0 should be chosen such that ∀ t, x(t) ∈ X_S.
  • An optimal control problem similar to that presented in (2- 4) may be formulated. For example, in one embodiment, the following optimal control problem can be formulated:
  • the control space constraint accounts for vehicle stability, and depends on model uncertainties and disturbances.
  • the state space constraint concerns the speed limit.
  • human drivers may classify other drivers' intended behavior first. If the intended driving behaviors are understood, the future trajectories can be predicted using empirical models. Mimicking what humans would do, the learning structure in FIG. 13 is designed for the automated vehicle to make predictions of the surrounding vehicles, where the process is divided into two steps: 1) the behavior classification, where the observed trajectory of a vehicle goes through an offline trained classifier; and 2) the trajectory prediction, where the future trajectory is predicted based on the identified behavior, by using an empirical model which contains adjustable parameters to accommodate the driver's time-varying behavior.
  • the behavior classification is a backward process to identify G t
  • the classification step may be beneficial when the communications among vehicles are limited. Otherwise, vehicles can broadcast their planned behaviors.
  • the intended behavior of vehicle i at time step k may be denoted as b_i(k). At least the following five behaviors may be considered:
  • B 1 is the steady state behavior
  • B 2 , B 3 and B 4 are driving maneuvers
  • B_5 is the exiting behavior. It is assumed that there are gaps (lane following) between two maneuvers (B_2, B_3 and B_4). Let β_i(k) ∈ ℝ^5 be the probability vector that vehicle i intends to conduct B_1, ..., B_5 at time step k, given information up to time step k.
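A minimal sketch of producing such a probability vector (denoted here β_i(k), a notational assumption) over the five behaviors from classifier evidence scores is shown below. How the scores themselves are produced by the offline-trained classifier is left abstract, and the softmax normalization is an assumed, not prescribed, choice.

```python
import numpy as np

def behavior_probabilities(scores):
    """Normalize per-behavior evidence scores for B1..B5 into a probability
    vector via a numerically stable softmax."""
    z = np.asarray(scores, dtype=float)
    z = z - z.max()          # shift for numerical stability
    p = np.exp(z)
    return p / p.sum()
```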
  • ẋ_i = f(x_i) + B_1[k_1 + k_2 f_i^5] + B_2[k_3 + k_4 f_i^10]    (23), where k_1, k_2, k_3, k_4 ∈ ℝ are online-adjustable parameters.
  • the online parameter adaptation is desirable to capture the time-varying behavior of drivers.
  • once k_1 and k_2 are identified using historical data, they can be used to predict the trajectories in the near future, provided the vehicle's behavior does not change very fast compared to the sampling rate.
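Given identified parameters, the near-future trajectory can be obtained by rolling the one-step model forward, as in this sketch; the constant-velocity example predictor in the usage note is an illustrative assumption, not the model from the description.

```python
import numpy as np

def rollout(x0, step_fn, horizon):
    """Predict the near-term trajectory by iterating the identified
    one-step model x(k+1) = f_hat(x(k)), under the stated assumption that
    the driver's behavior changes slowly relative to the sampling rate."""
    traj = [np.asarray(x0, dtype=float)]
    for _ in range(horizon):
        traj.append(np.asarray(step_fn(traj[-1]), dtype=float))
    return np.stack(traj)
```

For instance, with an identified longitudinal speed k1 and sampling time 0.1 s, a lane-following predictor could be `step = lambda x: np.array([x[0] + 0.1 * k1, x[1]])`.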
  • the automated vehicle may find a safe and efficient trajectory satisfying the optimal control problem (19-21).
  • the decision making architecture in FIG. 14 may be used in one embodiment, which is designed to be a parallel combination of a baseline planner 1402 that solves the problem in a long time horizon without the safety constraint (21), and a safety planner 1404 that takes care of the safety constraint in real time.
  • the controller in FIG. 14 has similarities to the controller 200 of FIG. 2 or the controller 400 of FIG. 4 described above.
  • this controller separates the baseline planner 1402 from a safety planner 1404 in which the baseline planner 1402 operates to control the vehicle to achieve a desired objective, while the safety planner 1404 operates to modify the trajectory if needed to avoid collisions.
  • the baseline planner 1402 may operate similarly to the efficiency controller 418 of FIG. 4 to generate an efficient trajectory based on the safety constraint.
  • the baseline planner 1402 and the safety planner 1404 both operate based on inputs from a vehicle state estimator 1406 that provides an estimated state of the vehicle, and a learning center and predictor 1408 that provides predictions based on a learned model.
  • the baseline planner 1402 solves the optimal control problem (19- 20) to ensure efficiency, which is similar to the planner in use when the automated vehicle is navigating in an open environment.
  • when the cost function and the control constraint are designed to be convex, (19-20) becomes a convex optimization problem.
  • the baseline planner 1402 tries to plan a trajectory to accomplish Go.
  • the cost function is designed as
  • the computation in the baseline planner 1402 can be done offline.
  • the resulting control policy will be stored for online application, to ensure real-time planning.
  • the baseline planner 1402 may instead operate similarly to the efficiency controller 418 of FIG. 4 to solve the optimal control problem in (19)-(21), thus generating an efficient trajectory based on the safety constraint.
  • the safety planner 1404 modifies the trajectory planned by the baseline planner 1402 locally, to ensure in real time that it will lie in the safe set X_S.
  • the safety planner 1404 may control the automated vehicle to keep a safe headway.
  • d_1(x) calculates the minimum distance between the automated vehicle and the vehicles or obstacles in front of it.
  • the safety constraint controls the automated vehicle to keep a safe distance from vehicles on both lanes.
  • in the lane changing mode, the safe set may be written as X_S = {x : d_2(x) ≥ d_min}, where d_2(x) calculates the minimum distance between the automated vehicle and all surrounding vehicles and obstacles in the two lanes.
  • the safe set is described using a safety index φ, which is a real-valued continuously differentiable function on the system's state space.
  • the state x is considered safe only if φ(x) ≤ 0.
  • An example of the safety index was described earlier with respect to FIG. 8, and another example is shown in FIG. 15.
  • the safe region for the automated vehicle is affected by the future trajectories of the surrounding vehicles. Based on the prediction of other vehicles, if the baseline trajectory leads to φ ≥ 0 now or in the near future, the safety planner will generate a modification signal to decrease the safety index by making φ̇ < 0.
  • the safety index needs to satisfy the following conditions: 1) φ(x) ≤ 0 implies that x is safe; 2) the unsafe set {x : φ(x) > 0} is not reachable given the control law in the set of safe control
  • U_S(t) = {u_0(t) : φ̇ ≤ −η_0 when φ ≥ 0}, where η_0 ∈ ℝ+ is a safety margin.
  • if the control signal planned by the baseline planner is not safe, the safety controller will map it to the set of safe control U_S(t) according to a quadratic cost function in which W is a positive definite matrix that defines a metric in the vehicle's control space.
  • W should be close enough to the metric imposed by the cost function J in (25) and (26), e.g. W ≈ ∂²J/∂u_0², where J is convex in u_0.
  • the above explanation follows the same general principles as described above with respect to (7)-(10), with some variation in notation when applied to the autonomous vehicle example. In the lane following mode, if the lateral deviation is large due to obstacle avoidance and the safety controller continues to generate a nonzero turning signal, the vehicle will enter the lane changing mode.
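As a concrete longitudinal (headway-keeping) illustration of mapping a baseline command into U_S(t), the sketch below uses an assumed index φ = d_min − d − k·ḋ; the index form, all gains, and the function name are assumptions for the example, not values from the description.

```python
def safe_longitudinal_accel(a_des, d, d_dot, d_min=10.0, k=2.0, eta=0.5, a_front=0.0):
    """With headway d and rate d_dot, take phi = d_min - d - k*d_dot, so
    phi_dot = -d_dot - k*(a_front - a_ego).  When phi >= 0, requiring
    phi_dot <= -eta caps the ego acceleration at
        a_ego <= a_front + (d_dot - eta) / k;
    otherwise the baseline (desired) acceleration passes through."""
    phi = d_min - d - k * d_dot
    if phi < 0:
        return a_des                      # baseline command already safe
    a_max = a_front + (d_dot - eta) / k   # boundary of the safe control set
    return min(a_des, a_max)
```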
  • the sampling time may be set to 0.05 s.
  • An example objective of following the lane is considered under two different cases.
  • FIG. 16 shows the case when the automated vehicle suddenly notices a stationary obstacle 40 m ahead.
  • the safety controller goes active.
  • the command for deceleration and turn is generated.
  • the automated vehicle slows down and changes lane to the left to avoid the obstacle.
  • the vehicle accelerates to the desired speed again.
  • FIG. 17 shows another example case, when the front vehicle is too slow. To illustrate the interaction, the trajectories of both vehicles are down-sampled and shown in the last (bottom) plot in FIG. 17, where circles represent the automated vehicle and squares represent the slow vehicle. Different shades correspond to different time steps; the lighter the shade, the earlier the time step. At the beginning, since the automated vehicle cannot keep the desired speed behind the slow car, it starts to change lane to the left. After changing lanes, it overtakes the slow vehicle.
  • the multi-agent traffic model described above solves an optimal control problem for vehicle trajectory planning.
  • the behaviors of surrounding vehicles are identified and their future trajectories predicted.
  • the optimal control problem is solved online using a parallel combination of a baseline planner, which solves the problem without the safety constraint, and a safety planner, which handles the safety constraint online.
  • Certain aspects of the embodiments include process steps and instructions described herein in the form of an algorithm. It should be noted that the process steps and instructions of the embodiments can be embodied in software, firmware or hardware, and when embodied in software, could be downloaded to reside on and be operated from different platforms used by a variety of operating systems. The embodiments can also be in a computer program product which can be executed on a computing system.
  • the embodiments also relate to an apparatus for performing the operations herein.
  • This apparatus may be specially constructed for the required purposes, e.g., a specific computer, or it may comprise a general-purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • Such a computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), EPROMs, EEPROMs, magnetic or optical cards, application specific integrated circuits (ASICs), or any type of media suitable for storing electronic instructions, each coupled to a computer system bus.
  • Memory can include any of the above and/or other devices that can store information/data/programs and can be transient or non-transient medium, where a non-transient or non-transitory medium can include memory/storage that stores information for more than a minimal duration.
  • computers referred to in the specification may include a single processor or may be architectures employing multiple processor designs for increased computing capability.


Abstract

A controller controls the motion of an autonomous entity. The controller determines a baseline trajectory for the autonomous entity that achieves a particular objective (e.g., moves the autonomous entity to a particular location). A safety controller then modifies the baseline trajectory to avoid other entities (e.g., humans or other entities) in the vicinity of the autonomous entity, based on predicted motion of those entities. The controller may be applied, for example, to a robot working in a factory alongside humans, or to an autonomous vehicle navigating around other vehicles.
PCT/US2017/032243 2016-05-12 2017-05-11 Commande sécurisée d'une entité autonome en présence d'agents intelligents WO2017197170A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201662335373P 2016-05-12 2016-05-12
US62/335,373 2016-05-12

Publications (1)

Publication Number Publication Date
WO2017197170A1 (fr) 2017-11-16

Family

ID=60267948

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2017/032243 WO2017197170A1 (fr) 2016-05-12 2017-05-11 Commande sécurisée d'une entité autonome en présence d'agents intelligents

Country Status (1)

Country Link
WO (1) WO2017197170A1 (fr)



Legal Events

Date Code Title Description
NENP Non-entry into the national phase

Ref country code: DE

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17796875

Country of ref document: EP

Kind code of ref document: A1

122 Ep: pct application non-entry in european phase

Ref document number: 17796875

Country of ref document: EP

Kind code of ref document: A1