EP3753684A1 - Method and system for robot manipulation planning - Google Patents

Method and system for robot manipulation planning

Info

Publication number
EP3753684A1
EP3753684A1 (application EP19181874.9A)
Authority
EP
European Patent Office
Prior art keywords
manipulation
skills
sequence
given
skill
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
EP19181874.9A
Other languages
German (de)
French (fr)
Other versions
EP3753684B1 (en)
Inventor
Mathias Buerger
Markus Spies
Patrick Kesper
Philipp Christian Schillinger
Leonel Rozo
Marco Todescato
Nicolai Waniek
Markus Giftthaler
Andras Gabor Kupcsik
Meng Guo
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Robert Bosch GmbH
Original Assignee
Robert Bosch GmbH
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Robert Bosch GmbH filed Critical Robert Bosch GmbH
Priority to EP19181874.9A priority Critical patent/EP3753684B1/en
Priority to US16/892,811 priority patent/US11498212B2/en
Priority to CN202010564725.9A priority patent/CN112109079A/en
Publication of EP3753684A1 publication Critical patent/EP3753684A1/en
Application granted granted Critical
Publication of EP3753684B1 publication Critical patent/EP3753684B1/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1679Programme controls characterised by the tasks executed
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00Machine learning
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00Computing arrangements using knowledge-based models
    • G06N5/01Dynamic search techniques; Heuristics; Dynamic trees; Branch-and-bound
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39205Markov model
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40113Task planning
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40391Human to robot skill transfer
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40429Stochastic, probabilistic generation of intermediate points
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40465Criteria is lowest cost function, minimum work path
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40629Manipulation planning, consider manipulation task, path, grasping
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N7/00Computing arrangements based on specific mathematical models
    • G06N7/01Probabilistic graphical models, e.g. probabilistic networks


Abstract

The invention relates to a method for planning a manipulation task of an agent, particularly a robot, comprising the steps of:
- Learning (S1) a number of manipulation skills (ah ) wherein a symbolic abstraction of the respective manipulation skill is generated;
- Determining (S3) a concatenated sequence of manipulation skills (ah ) selected from the number of learned manipulation skills (ah ) based on their symbolic abstraction so that a given goal specification (G) indicating a given complex manipulation task is satisfied;
- Executing (S4) the sequence of manipulation skills (ah ).

Description

    Technical field
  • The present invention relates to automatic processes for planning tasks by determining a sequence of manipulation skills. Particularly, the present invention relates to a motion planning framework for robot manipulation.
  • Technical background
  • General use of robots for performing various tasks is challenging, as it is almost impossible to preprogram all robot capabilities that may potentially be required in later applications. Training a skill whenever it is needed renders the use of robots inconvenient and will not be accepted by users. Further, simply recording and replaying a demonstrated manipulation is often insufficient, because changes in the environment, such as varying robot and/or object poses, would render any such attempt unsuccessful.
  • Therefore, the robot needs to recognize and encode the intentions behind these demonstrations and should be capable to generalize the trained manipulation to unforeseen situations. Furthermore, several skills need to be performed in a sequence to accomplish complex tasks. The task planning problem aims to define the right sequence of actions and needs a prespecified definition of the planning model and of the preconditions and effects of all available skills. Due to the large variation of skills, the definition of such a planning model quickly becomes impractical.
  • Summary of the invention
  • According to the invention, a method for planning an object manipulation according to claim 1 and a system for planning an object manipulation according to the further independent claim are provided.
  • Further embodiments are indicated in the dependent subclaims.
  • According to a first aspect a method for planning a manipulation task of an agent, particularly a robot, is provided, comprising the steps of:
    • Learning (Training) a number of manipulation skills (ah ) wherein a symbolic abstraction of the respective manipulation skill is generated;
    • Determining a concatenated sequence of manipulation skills selected from the number of learned manipulation skills based on their symbolic abstraction so that a given goal specification indicating a given complex manipulation task is satisfied;
    • Executing the sequence of manipulation skills.
  • Further, the learning of the number of manipulation skills may be performed in that a plurality of manipulation trajectories for each respective manipulation skill is recorded, particularly by demonstration, a task parametrized Hidden Semi-Markov model (TP-HSMM) is determined depending on the plurality of manipulation trajectories for each respective manipulation skill and the symbolic abstraction of the respective manipulation skill is generated.
  • The above task planning framework allows high-level planning of a task by sequencing general manipulation skills. Manipulation skills are action skills in general, which may also include translations or movements. The general manipulation skills are object-oriented and respectively relate to a single action performed on an object, such as a grasping skill, a dropping skill, a moving skill or the like. These manipulation skills may have different instances, which means that the skills can be carried out in different ways (instances) according to what needs to be done next. Furthermore, the general skills are provided with object-centric symbolic action descriptions for the logic-based planning.
  • The above method is based on the idea of learning from demonstration by fitting a prescribed skill model, such as a Gaussian mixture model, to a handful of demonstrations. Generally, a task-parametrized Gaussian mixture model (TP-GMM) may be learned, which can then be used during execution to reproduce a trajectory for the learned manipulation skill. The TP-GMM is defined with respect to one or more specific frames (coordinate systems), each of which indicates a translation and rotation with respect to a world frame. After observation of the actual frames, the learned TP-GMM can be converted into a single GMM. One advantage of the TP-GMM is that the resulting GMM can be updated in real time according to the observed task parameters. Hence, the TP-HSMM allows adaptation to changes in the objects during the execution of the manipulation task.
  • Furthermore, the generating of the symbolic abstraction of the manipulation skills may comprise constructing a PDDL model, wherein objects, initial state and goal specification define a problem instance, while predicates and actions define the domain of a given manipulation, wherein particularly the symbolic abstraction of the manipulation skills uses the classical PDDL planning language.
  • It may be provided that the concatenated sequence of manipulation skills is determined such that the probability of achieving the given goal specification is maximized, wherein particularly a PDDL planning step is used to find a sequence of actions that fulfills the given goal specification, starting from a given initial state.
  • According to an embodiment, the transition probability between states of the TP-HSMM may be determined using Expectation-Maximization.
  • Moreover, a task-parametrized Hidden Semi-Markov Model (TP-HSMM) may be determined by cascading manipulation skills, wherein a Viterbi algorithm is used to retrieve the sequence of states from the single TP-HSMM based on the determined concatenated sequence of manipulation skills.
  • Parameters of the TP-HSMM may be learned through a classical Expectation-Maximization algorithm.
  • Furthermore, the symbolic abstractions of the demonstrated manipulation skills may be determined by mapping low-variance geometric relations of segments of manipulation trajectories into the set of predicates.
  • According to an embodiment, the step of determining the concatenated sequence of manipulation skills may comprise an optimization process, particularly with the goal of minimizing the total length of the trajectory.
  • Particularly, determining the concatenated sequence of manipulation skills may comprise selectively reproducing one or more of the manipulation skills of a given sequence of manipulation skills so as to maximize the probability of satisfying the given goal specification.
  • Furthermore, determining the concatenated sequence of manipulation skills may include the steps of:
    • Cascading the TP-HSMMs of consecutive manipulation skills into one complete model by computing transition probabilities according to a divergence of emission probabilities between end states and initial states of different manipulation skills;
    • Searching the most-likely complete state sequence between the initial and goal states of the manipulation task using a modified Viterbi algorithm.
  • Particularly, the modified Viterbi algorithm may include missing observations and duration probabilities.
  • According to a further embodiment, a device for planning a manipulation task of an agent, particularly a robot, is provided, wherein the device is configured to:
    • learn a number of manipulation skills, wherein a symbolic abstraction of the respective manipulation skill is generated;
    • determine a concatenated sequence of manipulation skills selected from the number of learned manipulation skills based on their symbolic abstraction so that a given goal specification indicating a complex manipulation task is satisfied; and
    • instruct execution of the sequence of manipulation skills.
    Brief description of the drawings
  • Embodiments are described in more detail in conjunction with the accompanying drawings, in which:
  • Figure 1
    schematically shows a robot arm; and
    Figure 2
    shows a flowchart illustrating the method for manipulating an object by sequencing manipulation skills.
    Figure 3
    shows illustrations for HSMM states of a learned manipulation skill wherein demonstration trajectories and transition probabilities are illustrated.
    Description of embodiments
  • Figure 1 shows a system (agent) 1 including a controllable robot arm 2 as an example of an object manipulator. The robot arm 2 is a multi-DoF robotic arm with several links 21 and an end-effector 22 whose state $x \in \mathbb{R}^3 \times \mathbb{S}^3 \times \mathbb{R}^1$ describes the Cartesian position, the orientation and the gripper state in a global coordinate system (frame); the arm operates within a static or dynamic but known workspace. Also, within the reach of the robot arm 2, there are objects of interest denoted by $O = \{o_1, o_2, \dots, o_J\}$.
  • Within this setup, a human user can perform several kinesthetic demonstrations on the arm to manipulate one or several objects for certain manipulation skills. Denote by $A = \{a_1, a_2, \dots, a_H\}$ the set of demonstrated skills. Moreover, for manipulation skill $a_h \in A$, the set of objects involved is given by $O_{a_h}$ and the set of available demonstrations is denoted by $D_{a_h}$.
  • The robot arm 2 is controlled by means of a control unit 3 which may actuate actuators to move the robot arm 2 and activate the end effector 22. Sensors may be provided at the robot arm 2 or at the robot workspace to record the state of objects in the robot workspace. Furthermore, the control unit 3 is configured to record movements made with the robot arm 2 and to obtain information about objects in the workspace from the sensors and further to perform a task planning process as described below. The control unit 3 has a processing unit where the algorithm as described below is implemented in hardware and/or software.
  • All demonstrations are described by the structure of TP-GMMs (task-parametrized Gaussian mixture models). The basic idea is to fit a prescribed skill model such as a GMM to multiple demonstrations. GMMs are well known in the art, as disclosed e.g. in S. Niekum et al., "Learning grounded finite-state representations from unstructured demonstrations", The International Journal of Robotics Research, 34(2), pages 131-157, 2015. For M given demonstrations (trajectory measurement results), each of which contains $T_m$ data points, the dataset comprises $N = \sum_{m=1}^{M} T_m$ total observations $\xi = \{\xi_t\}_{t=1}^{N}$ with $\xi_t \in \mathbb{R}^d$. It is further assumed that the same demonstrations are recorded from the perspective of P different coordinate systems (given by the task parameters, such as the objects of interest). One common way to obtain such data is to transform the demonstrations from the global frame (global coordinate system) into frame p by
    $$\xi_t^{(p)} = \big(A_t^{(p)}\big)^{-1}\big(\xi_t - b_t^{(p)}\big),$$
    where $\{(b_t^{(p)}, A_t^{(p)})\}_{p=1}^{P}$ is the translation and rotation of frame p with respect to the global frame at time t. A TP-GMM is then described by the parameters $\{\pi_k, \{\mu_k^{(p)}, \Sigma_k^{(p)}\}_{p=1}^{P}\}_{k=1}^{K}$, where K represents the number of Gaussian components in the mixture model, $\pi_k$ is the prior probability of each component, and $(\mu_k^{(p)}, \Sigma_k^{(p)})$ are the parameters of the k-th component within frame p. K may be assigned manually, although approaches exist to set K automatically.
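  • The following minimal sketch illustrates this frame transformation (Python/NumPy; the demonstration data and the frame parameters `A`, `b` are illustrative placeholders, not values from the patent):

```python
import numpy as np

def to_frame(xi, A, b):
    """Project global-frame observations xi (T x d) into a task frame
    with rotation A (d x d) and translation b (d,):
    xi^(p)_t = A^{-1} (xi_t - b)."""
    return (np.linalg.inv(A) @ (xi - b).T).T

# Illustrative 2-D demonstration and one object-attached frame.
xi = np.random.rand(50, 2)                  # T = 50 data points
A = np.array([[0.0, -1.0], [1.0, 0.0]])     # 90-degree rotation
b = np.array([0.5, 0.2])                    # frame origin
xi_p = to_frame(xi, A, b)
```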
  • Unlike standard GMM learning, the mixture model above cannot be learned independently for each frame p. Indeed, the mixing coefficients $\pi_k$ are shared by all frames p, and the k-th component in frame p must map to the corresponding k-th component in the global frame. Expectation-maximization (EM) is a well-established method to learn such models. In general, an EM algorithm is an iterative method to find maximum-likelihood or maximum a posteriori (MAP) estimates of the parameters of a statistical model that depends on unobserved latent variables.
  • Once learned, the TP-GMM can be used during execution to reproduce a trajectory for the learned skill. Namely, given the observed frames $\{(b_t^{(p)}, A_t^{(p)})\}_{p=1}^{P}$, the learned TP-GMM is converted into a single GMM with parameters $\{\pi_k, (\hat\mu_{t,k}, \hat\Sigma_{t,k})\}_{k=1}^{K}$ by multiplying the affine-transformed Gaussian components across the different frames, as follows:
    $$\hat\Sigma_{t,k} = \Big(\sum_{p=1}^{P} \big(\hat\Sigma_{t,k}^{(p)}\big)^{-1}\Big)^{-1}, \qquad \hat\mu_{t,k} = \hat\Sigma_{t,k} \sum_{p=1}^{P} \big(\hat\Sigma_{t,k}^{(p)}\big)^{-1} \hat\mu_{t,k}^{(p)},$$
    where the parameters of the updated Gaussian at each frame p are computed as $\hat\mu_{t,k}^{(p)} = A_t^{(p)} \mu_k^{(p)} + b_t^{(p)}$ and $\hat\Sigma_{t,k}^{(p)} = A_t^{(p)} \Sigma_k^{(p)} A_t^{(p)\top}$.
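  • As an illustration, a minimal sketch of this product of affine-transformed Gaussians (Python/NumPy; it assumes the per-frame components for one time step and one mixture component, mapped and fused exactly as in the equations above):

```python
import numpy as np

def map_to_global(mu, sigma, A, b):
    """Affine-transform one frame-local Gaussian component into the
    global frame: mu_hat = A mu + b, Sigma_hat = A Sigma A^T."""
    return A @ mu + b, A @ sigma @ A.T

def fuse_frame_gaussians(mus_hat, sigmas_hat):
    """Fuse the affine-transformed per-frame Gaussians into one global
    Gaussian via the product of Gaussians given above."""
    precisions = [np.linalg.inv(S) for S in sigmas_hat]
    sigma = np.linalg.inv(sum(precisions))
    mu = sigma @ sum(P @ m for P, m in zip(precisions, mus_hat))
    return mu, sigma
```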
  • Hidden semi-Markov Models (HSMMs) have been successfully applied, in combination with TP-GMMs, for robot skill encoding to learn spatio-temporal features of the demonstrations, such as manipulation trajectories of a robot or trajectories of a movable agent.
  • Hidden semi-Markov Models (HSMMs) extend standard hidden Markov Models (HMMs) by embedding temporal information of the underlying stochastic process. That is, while in HMM the underlying hidden process is assumed to be Markov, i.e., the probability of transitioning to the next state depends only on the current state, in HSMM the state process is assumed semi-Markov. This means that a transition to the next state depends on the current state as well as on the elapsed time since the state was entered.
  • More specifically, a task-parametrized HSMM model consists of the parameters
    $$\Theta = \Big\{ \{a_{kh}\}_{h=1}^{K},\ \big(\mu_k^{D}, \sigma_k^{D}\big),\ \pi_k,\ \big\{\mu_k^{(p)}, \Sigma_k^{(p)}\big\}_{p=1}^{P} \Big\}_{k=1}^{K},$$
    where $a_{kh}$ is the transition probability from state k to state h; $(\mu_k^{D}, \sigma_k^{D})$ describes the Gaussian distribution for the duration of state k, i.e. the probability of staying in state k for a certain number of consecutive steps; and $\{\pi_k, \{\mu_k^{(p)}, \Sigma_k^{(p)}\}_{p=1}^{P}\}_{k=1}^{K}$ equals the TP-GMM introduced earlier and, for each k, describes the emission probability, i.e. the probability of an observation, corresponding to state k. In an HSMM the number of states corresponds to the number of Gaussian components in the "attached" TP-GMM. In general, HSMM states are Gaussian distributions, which means that the observation probability distribution is represented as a classical GMM. To render the HSMM object-centric, the observation probabilities can be task-parametrized as in the TP-GMM, which yields a TP-HSMM.
  • Given a certain sequence of observed data points $\{\xi_\ell\}_{\ell=1}^{t}$, the associated sequence of states is given by $s_t = s_1 s_2 \dots s_t$. The probability of data point $\xi_t$ belonging to state k (i.e. $s_t = k$) is given by the forward variable $\alpha_t(k) = p\big(s_t = k, \{\xi_\ell\}_{\ell=1}^{t}\big)$:
    $$\alpha_t(k) = \sum_{\tau=1}^{t-1} \sum_{h=1}^{K} \alpha_{t-\tau}(h)\, a_{hk}\, \mathcal{N}\big(\tau \mid \mu_k^{D}, \sigma_k^{D}\big)\, o_\tau^{t},$$
    where
    $$o_\tau^{t} = \prod_{\ell=t-\tau+1}^{t} \mathcal{N}\big(\xi_\ell \mid \hat\mu_{\ell,k}, \hat\Sigma_{\ell,k}\big)$$
    is the emission probability and $(\hat\mu_{\ell,k}, \hat\Sigma_{\ell,k})$ are derived from the product of affine-transformed Gaussians shown above. Furthermore, the same forward variable can also be used during reproduction to predict future steps until $T_m$. In this case, however, since future observations are not available, only transition and duration information are used, i.e. by setting $\mathcal{N}(\xi_\ell \mid \hat\mu_{\ell,k}, \hat\Sigma_{\ell,k}) = 1$ for all k and $\ell > t$ in the recursion above. At last, the sequence of most-likely states $s_{T_m}^{*} = s_1^{*} s_2^{*} \dots s_{T_m}^{*}$ is determined by choosing $s_t^{*} = \arg\max_k \alpha_t(k)$ for $1 \le t \le T_m$.
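  • A minimal sketch of this forward recursion (Python/NumPy with SciPy; the model parameters are illustrative placeholders, not learned values, and probabilities are kept in linear space for brevity):

```python
import numpy as np
from scipy.stats import norm

def hsmm_forward(pi, trans, dur_mu, dur_sigma, log_emission):
    """Forward variable alpha[t, k] ~ p(s_t = k, xi_1..t) following the
    recursion above; log_emission[t, k] = log N(xi_t | mu_hat_k, Sigma_hat_k).
    For prediction beyond the data, pass log_emission rows of zeros
    (i.e. N(.) = 1), so only transition and duration information is used."""
    T, K = log_emission.shape
    alpha = np.zeros((T, K))
    alpha[0] = pi * norm.pdf(1, dur_mu, dur_sigma) * np.exp(log_emission[0])
    for t in range(1, T):
        for k in range(K):
            acc = 0.0
            for tau in range(1, t + 1):          # duration spent in state k
                o = np.exp(log_emission[t - tau + 1 : t + 1, k].sum())
                acc += (alpha[t - tau] @ trans[:, k]) \
                       * norm.pdf(tau, dur_mu[k], dur_sigma[k]) * o
            alpha[t, k] = acc
    return alpha  # np.argmax(alpha, axis=1) gives the marginally most likely states
```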
  • All demonstrations are recorded from multiple frames. Normally, these frames are closely attached to the objects in $O_{a_h}$. For example, the skill "insert the peg in the cylinder" involves the objects "peg" and "cylinder", and the associated demonstrations are recorded from the robot frame, the "peg" frame and the "cylinder" frame, respectively.
  • In addition, consider a set of pre-defined predicates, denoted by $B = \{b_1, b_2, \dots, b_L\}$, representing possible geometric relations among the objects of interest. Predicates $b \in B$ are abstracted as Boolean functions that take as inputs the status of several objects and output whether the associated geometric relation holds. For instance, $grasp: O \to \mathbb{B}$ indicates whether an object is grasped by the robot arm; $within: O \times O \to \mathbb{B}$ indicates whether an object is inside another object; and $onTop: O \times O \to \mathbb{B}$ indicates whether an object is on top of another object. Note that these predicates are not bound to specific manipulation skills but rather shared among them. Usually, such predicate functions can easily be evaluated from the robot arm state and the object states (e.g. positions and orientations).
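  • A minimal sketch of such predicate functions (Python; the object-state representation, bounding-box geometry and tolerances are illustrative assumptions):

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ObjState:
    position: np.ndarray     # Cartesian position of the object center
    half_extent: np.ndarray  # half-dimensions of the object's bounding box

def within(a: ObjState, b: ObjState, tol: float = 1e-3) -> bool:
    """True if object a lies inside the bounding box of object b."""
    return bool(np.all(np.abs(a.position - b.position) <= b.half_extent + tol))

def on_top(a: ObjState, b: ObjState, tol: float = 5e-3) -> bool:
    """True if a rests on the upper face of b (aligned in x/y, stacked in z)."""
    xy_ok = np.all(np.abs(a.position[:2] - b.position[:2]) <= b.half_extent[:2])
    z_ok = abs(a.position[2]
               - (b.position[2] + b.half_extent[2] + a.half_extent[2])) <= tol
    return bool(xy_ok and z_ok)
```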
  • Finally, a goal specification G is given as a propositional logic expression over the predicates B, i.e., via nested conjunction, disjunction and negation operators. In general, the goal specification G represents the desired configuration of the arm and the objects, assumed to be feasible. As an example, one specification could be "within(peg, cylinder) ∧ onTop(cylinder, box)", i.e., "the peg should be inside the cylinder and the cylinder should be on top of the box".
  • In the following, the problem can be defined as follows: given a set of demonstrations D for the skills A and the goal G, the objective is
    1. a) to learn a TP-HSMM model $M_{a_h}$ of the form given above for each demonstrated skill $a_h$ and to reproduce the skill for a given final configuration G; for the reproduction of the skill, the sequence of states obtained through the Viterbi algorithm can be used;
    2. b) to construct a PDDL model P of the form $P = \{P_D, P_P\}$, where objects, initial state and goal specification define a problem instance $P_P$, while predicates and actions define the domain $P_D$;
    3. c) to derive and subsequently execute the sequence of manipulation skills such that the probability of achieving the given goal specification G is maximized. Once the domain and problem files are specified, a PDDL (Planning Domain Definition Language) planner has to find a sequence of actions that fulfills the given goal specification, starting from the initial state.
  • The PDDL model P includes a domain for the demonstrated skills and a problem instance given the goal specification G.
  • The Planning Domain Definition Language (PDDL) is the standard classical planning language. Formally, the language consists of the following key ingredients (a minimal construction sketch follows the list):
    • Objects: everything of interest in the world;
    • Predicates: object properties and relations;
    • Initial state: the set of grounded predicates that hold initially;
    • Goal specification: the goal states; and
    • Actions: how predicates are changed by an action, together with the preconditions on the actions.
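  • The sketch below renders one abstracted skill as a PDDL action (Python; the skill name, parameters and predicates are illustrative assumptions, not output generated by the patented method):

```python
def pddl_action(name, params, precond, effect):
    """Render a symbolic skill abstraction as a PDDL :action block."""
    return (
        f"(:action {name}\n"
        f"  :parameters ({' '.join(params)})\n"
        f"  :precondition (and {' '.join(precond)})\n"
        f"  :effect (and {' '.join(effect)}))"
    )

print(pddl_action(
    "pick_peg",
    ["?o - object"],
    ["(reachable ?o)", "(not (grasped ?o))"],
    ["(grasped ?o)"],
))
```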
  • In the embodiment described herein, motion planning is performed at the end-effector/gripper trajectory level. This means it is assumed that a low-level motion controller is used to track the desired trajectory.
  • The method for planning a manipulation task is described in detail with respect to the flowchart of Figure 2.
  • Firstly, a TP-HSMM model $M_{a_h}$ is to be learned for each demonstrated skill $a_h$, and the skill is to be reproduced for a given final configuration G.
  • One demonstrated skill $a_h \in A$ is considered. As described above, the set of available demonstrations, recorded in P frames, is given by $D_{a_h} = \{\xi_t\}_{t=1}^{N}$. Furthermore, the set of objects involved with skill $a_h$ is given by $O_{a_h}$. The P frames are directly attached to the objects in $O_{a_h}$.
  • Given a properly chosen number of components K, which correspond to the TP-HSMM states (the Gaussian components representing the observation probability distributions), the TP-HSMM model $M_{a_h}$, abstracting the spatio-temporal features of the trajectories related to skill $a_h$, can be learned in step S1 using e.g. an EM-like algorithm. This is beneficial because only one model is constructed for the general skill. Figure 3 shows an example of an HSMM for a "pick the peg" skill trained from 10 demonstrations of "pick from top" and "pick from side"; it illustrates the demonstration trajectories in 2D (left) and the transition probabilities between the associated states. The learned HSMM model in the global frame has a single initial HSMM state, from which two branches encode the two different instantiations of the same "pick" skill.
  • A final goal configuration G is provided in step S2, which can be translated into the final state of the end-effector 22, $x_G \in \mathbb{R}^3 \times \mathbb{S}^3 \times \mathbb{R}^1$. This configuration can be imposed as the desired final observation of the reproduction, i.e. $\xi_{T_m} = x_G$. Similarly, the initial configuration of the end-effector 22 can be imposed as the initial observation, i.e. $\xi_0 = x_0$. What is sought next is the most likely state sequence $s_{T_m}^{*}$ given only $\xi_0$ and $\xi_{T_m}$.
  • The forward variable above allows computing the sequence of marginally most probable states, whereas here the jointly most probable sequence of states given the last observation $\xi_{T_m}$ is sought. As a result, when using the above formula there is no guarantee that the returned sequence $s_{T_m}^{*}$ will match both the spatio-temporal patterns of the demonstrations and the final observation. In terms of the example in Figure 3, it may return the lower branch as the most likely sequence (i.e. grasping the object from the side), even if the desired final configuration is that the end-effector 22 is on top of the object.
  • To overcome this issue, in step S3 a modification of the Viterbi algorithm is used. Whereas the classical Viterbi algorithm has been extensively used to find the most likely sequence of states (also called the Viterbi path) in classical HMMs that results in a given sequence of observed events, the modified implementation differs in that (a) it works on an HSMM instead of an HMM, and (b) most observations, except the first and the last one, are missing.
  • Specifically, in the absence of observations the Viterbi algorithm becomes
    $$\delta_t(j) = \max_{d \in D} \max_{i \neq j} \delta_{t-d}(i)\, a_{ij}\, p_j(d) \prod_{\ell=t-d+1}^{t} \tilde b_j(\xi_\ell), \qquad \delta_1(j) = b_j(\xi_1)\, \pi_j\, p_j(1),$$
    where
    $$\tilde b_j(\xi_\ell) = \begin{cases} \mathcal{N}\big(\xi_\ell \mid \hat\mu_j, \hat\Sigma_j\big), & \ell = 1 \ \text{or} \ \ell = T, \\ 1, & 1 < \ell < T. \end{cases}$$
  • The Viterbi algorithm is thus modified to include missing observations, which is captured by the definition of $\tilde b_j$ above. Moreover, the inclusion of the duration probabilities $p_j(d)$ in the computation of $\delta_t(j)$ makes it work for HSMMs.
  • At each time t and for each state j, the two arguments that maximize $\delta_t(j)$ are recorded, and a simple backtracking procedure can then be used to find the most probable state sequence $s_{T_m}^{*}$.
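  • A minimal sketch of this modified Viterbi recursion (Python/NumPy with SciPy; probabilities are kept in linear space and durations are truncated to `d_max` for brevity — an illustration of the recursion above, not the patent's implementation):

```python
import numpy as np
from scipy.stats import norm, multivariate_normal

def modified_viterbi(pi, trans, dur_mu, dur_sigma, mus, sigmas,
                     xi_first, xi_last, T, d_max):
    """Most-likely HSMM state sequence when only the first and the last
    observations are available (b~_j(xi_t) = 1 for 1 < t < T)."""
    K = len(pi)

    def b(t, j):                                  # emission term b~_j
        if t == 0:
            return multivariate_normal.pdf(xi_first, mus[j], sigmas[j])
        if t == T - 1:
            return multivariate_normal.pdf(xi_last, mus[j], sigmas[j])
        return 1.0                                # missing observation

    delta = np.zeros((T, K))
    back = np.ones((T, K, 2), dtype=int)          # (previous state, duration)
    delta[0] = [pi[j] * norm.pdf(1, dur_mu[j], dur_sigma[j]) * b(0, j)
                for j in range(K)]
    back[0, :, 0] = 0
    for t in range(1, T):
        for j in range(K):
            best, arg = 0.0, (0, 1)
            for d in range(1, min(d_max, t) + 1):
                o = np.prod([b(s, j) for s in range(t - d + 1, t + 1)])
                p_d = norm.pdf(d, dur_mu[j], dur_sigma[j])
                for i in range(K):
                    if i == j:
                        continue                  # max over i != j, as above
                    v = delta[t - d, i] * trans[i, j] * p_d * o
                    if v > best:
                        best, arg = v, (i, d)
            delta[t, j], back[t, j] = best, arg
    # backtrack from the best final state
    seq, t, j = [], T - 1, int(np.argmax(delta[T - 1]))
    while t >= 0:
        i, d = back[t, j]
        seq[:0] = [j] * min(d, t + 1)
        t, j = t - d, i
    return seq
```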
  • The above modified Viterbi algorithm provides the most likely state sequence of a single TP-HSMM model that produces the final observation $\xi_T$. As multiple skills are used, these models need to be sequenced, and $\delta_t(j)$ has to be computed for each individual model $M_{a_h}$. However, only $\xi_1$ and $\xi_T$ can be observed, so some models will not produce any observations. An additional challenge emerges when sequencing HSMMs: as the state transition probabilities between subsequent HSMM states are unknown, the Viterbi algorithm cannot be applied directly to find the optimal state sequence.
  • As a next step, symbolic abstractions of the demonstrated skills allow the robot to understand the meaning of each skill on a symbolic level, instead of at the data level of the HSMM. This may generalize a demonstrated and learned skill. Hence, the high-level reasoning of the PDDL planner described herein needs to understand how a skill can be incorporated into an action sequence in order to achieve a desired goal specification starting from an initial state. A PDDL model contains the problem instance $P_P$ and the domain $P_D$.
  • While the problem $P_P$ can easily be specified given the objects O, the initial state and the goal specification G, the key ingredient of the symbolic abstraction is to construct the action description in the domain $P_D$ for each demonstrated skill, wherein $P_D$ should be invariant to different task parameters.
  • Consider the symbolic representation of one demonstrated skill $a_h \in A$ in PDDL form. The learned TP-HSMM $M_{a_h}$ contains the task-parametrized model $\{\pi_k, \{\mu_k^{(p)}, \Sigma_k^{(p)}\}_{p=1}^{P}\}_{k=1}^{K}$. For this model it is possible to identify two sets containing the initial and the final states, denoted by $\mathcal{J}, \mathcal{F} \subseteq \{1, \dots, K\}$, respectively.
  • To construct the preconditions of a skill, the segments of demonstrations that belong to any of the initial states are identified, and the low-variance geometric relations are derived, which can be mapped into the set of predicates B. For each initial state $i \in \mathcal{J}$, its corresponding component in frame p is given by $(\mu_i^{(p)}, \Sigma_i^{(p)})$ for $p = 1, \dots, P$. These frames correspond to the objects $\{o_1, \dots, o_P\}$, i.e. skill $a_h$ interacts with these objects. For each demonstration $\{\xi_t\}_{t=1}^{T_m} \in D_{a_h}$, the most-likely sequence $s_{T_m}^{*} = s_1^{*} s_2^{*} \dots s_{T_m}^{*}$ can be computed as described above. If the first $T_i$ time steps correspond to state i, namely $s_1^{*} = s_2^{*} = \dots = s_{T_i}^{*} = i$, one instance of a predicate $b \in B$ can be evaluated at state i along this segment: $b(o_1, \dots, o_P) = \top$ if the predicate holds for more than a fraction $\eta$ of the time steps of this segment, where $0 < \eta < 1$ is a design parameter (probability threshold), and $(o_1^t, \dots, o_P^t)$ are the object states computed based on the recorded frame coordinates $\{(b_t^{(p)}, A_t^{(p)})\}_{p=1}^{P}$ and the geometric dimensions of the objects. Denote by $B_i$ the set of instantiated predicates that are true within state i, for $i \in \mathcal{J}$. As a result, the overall precondition of skill $a_h$ is given by the disjunction over the initial states of the conjunction of these predicates, i.e.
    $$\mathrm{PreCond}_{a_h} = \bigvee_{i \in \mathcal{J}} \bigwedge_{b \in B_i} b,$$
    where $\vee$ and $\wedge$ are the disjunction and conjunction operations. Similarly, to construct the effect of a skill, the procedure described above can be applied to the set of final states $\mathcal{F}$. In particular, for each final state $f \in \mathcal{F}$, the set of instantiated predicates that are true within f is denoted by $B_f$. However, in contrast to the precondition, the effect cannot contain a disjunction of predicates. Consequently, the effect of skill $a_h$ is given by
    $$\mathrm{Effect}_{a_h} = \bigwedge_{f \in \mathcal{F}} \bigwedge_{b \in B_f} b,$$
    i.e. the invariant predicates common to all final states. Based on the above elements, the PDDL model P can be generated in an automated way. More importantly, the domain $P_D$ can be constructed incrementally whenever a new skill is demonstrated and its description is abstracted as above. On the other hand, the problem instance $P_P$ needs to be re-constructed whenever a new initial state or goal specification is given.
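  • A minimal sketch of this abstraction step (Python; `predicates` maps predicate names to Boolean functions over recorded object states, and the threshold logic follows the description above — the data layout is an illustrative assumption):

```python
def true_predicates(predicates, segment, eta=0.9):
    """Instantiated predicates that hold for more than a fraction eta
    of the time steps of a demonstration segment."""
    return {
        name for name, pred in predicates.items()
        if sum(pred(objs) for objs in segment) / len(segment) > eta
    }

def precondition(predicates, initial_segments):
    """Disjunction over initial states of the conjunction of predicates:
    one predicate set per initial state."""
    return [true_predicates(predicates, seg) for seg in initial_segments]

def effect(predicates, final_segments):
    """Invariant predicates common to all final states (no disjunction)."""
    sets = [true_predicates(predicates, seg) for seg in final_segments]
    return set.intersection(*sets) if sets else set()
```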
  • The following relates to the planning and sequencing of the trained and abstracted skills. The PDDL definition P has been constructed and can be fed directly into any compatible PDDL planner. Different optimization techniques can be enforced during the planning, e.g. minimizing the total length of the plan or the total cost. Denote by $a_D^{*} = a_1^{*} a_2^{*} \dots a_D^{*}$ the generated optimal sequence of skills, where $a_d^{*} \in A$ holds for each skill. Moreover, denote by $M_{a_d^{*}}$ the learned TP-HSMM associated with $a_d^{*}$.
  • Given this sequence $a_D^{*}$, each skill within $a_D^{*}$ is reproduced at the end-effector trajectory level, so as to maximize the probability of satisfying the given goal G.
  • The learned TP-HSMM encapsulates a general skill that might have several plausible paths, and the choice relies heavily on the desired initial and final configurations. To avoid incompatible transitions from one skill to the next, a compatibility measure is embedded while concatenating the skills within $a_D^{*}$. Particularly, the proposed solution contains three main steps:
    1. a) Cascade the TP-HSMMs of each skill within $a_D^{*}$ into one complete model $\hat M_{a_D^{*}}$ by creating transition probabilities according to the divergence between the final states/components of the previous manipulation skill and the initial states of the current manipulation skill. Since the transition from one skill to another is never demonstrated, such transition probabilities are computed from the divergence of emission probabilities between the sets of final and starting states. Particularly, consider two consecutive skills $a_d^{*}$ and $a_{d+1}^{*}$ in $a_D^{*}$. The transition probability from one final state f of $M_{a_d^{*}}$ to one starting state i of $M_{a_{d+1}^{*}}$ is given by
    $$a_{fi} \propto \exp\Big(-\alpha \sum_{p \in P_c} \mathrm{KL}\big(\mathcal{N}(\mu_f^{(p)}, \Sigma_f^{(p)}) \,\big\|\, \mathcal{N}(\mu_i^{(p)}, \Sigma_i^{(p)})\big)\Big),$$
    where $\mathrm{KL}(\cdot\|\cdot)$ is the Kullback-Leibler divergence (see Kullback, S.; Leibler, R. A., "On Information and Sufficiency", Ann. Math. Statist. 22 (1951), no. 1, pages 79-86, doi:10.1214/aoms/1177729694), $P_c$ is the set of common frames between these two skills, and $\alpha \geq 0$ is a design parameter. The outgoing probabilities of any final state should be normalized. This process is repeated for all pairs of starting and final states between consecutive skills in $a_D^{*}$. In this way, one complete model $\hat M_{a_D^{*}}$ is created for the desired sequence of skills (a minimal sketch follows this list).
    2. b) Find the most-likely complete state sequence $\hat s_{T_D}^{*}$ within $\hat M_{a_D^{*}}$ given the initial and goal configurations. Given the derived complete TP-HSMM $\hat M_{a_D^{*}}$, the observed initial state and the goal specification, the above modified Viterbi algorithm can be applied to find the most-likely state sequence $\hat s_{T_D}^{*}$.
    3. c) Generate the robot end-effector trajectory that optimally tracks $\hat s_{T_D}^{*}$, namely, reproduce all skills in $a_D^{*}$. Given the state sequence $\hat s_{T_D}^{*}$, a linear quadratic regulator (LQR) can be applied to generate the control commands needed to reproduce this optimal state sequence. A step-wise reference is obtained from $\hat\mu_{s_t}$, $s_t \in \hat s_{T_D}^{*}$, which is tracked by the LQR using the associated tracking precision matrices $\hat\Sigma_{s_t}^{-1}$. As the robot state x lies in the Riemannian manifold $\mathcal{M}_R = \mathbb{R}^3 \times \mathbb{S}^3 \times \mathbb{R}^1$, the emission probabilities are Riemannian Gaussians, and the step-wise reference is therefore also given in $\mathcal{M}_R$. This implies that the required linear system dynamics for the end-effector 22 cannot be defined on $\mathcal{M}_R$, since it is not a vector space. However, the linear tangent spaces can be exploited to achieve a similar result. Specifically, the state error between the desired reference $\hat\mu_{s_t}$ and the current robot state $x_t$ can be computed using the logarithmic map $\mathrm{Log}_{\hat\mu_{s_t}}(x_t)$, which projects the minimum-length path between $\hat\mu_{s_t}$ and $x_t$ into the Euclidean tangent space. The covariance matrices $\hat\Sigma_{s_t}$ describe the variance and correlation of the robot state variables in this tangent space. Under the above assumptions, the control objective in the tangent space can be formulated as
    $$c(u) = \int \Big( \mathrm{Log}_{\hat\mu_{s_t}}(x_t)^{\top}\, \hat\Sigma_{s_t}^{-1}\, \mathrm{Log}_{\hat\mu_{s_t}}(x_t) + u_t^{\top} R\, u_t \Big)\, \mathrm{d}t,$$
    where u is the control input and R regulates the control effort. To retrieve only the reference trajectory, double-integrator dynamics can be assumed, which provide smooth transitions between consecutive $\hat\mu_{s_t}$ (a simplified tracking sketch is given after step S4 below).
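  • As an illustration of step a), a minimal sketch of the KL-based transition probabilities between two consecutive skill models (Python/NumPy; the per-frame Gaussian parameters are illustrative placeholders keyed by frame):

```python
import numpy as np

def kl_gaussian(mu0, S0, mu1, S1):
    """KL divergence KL(N0 || N1) between two multivariate Gaussians."""
    d = len(mu0)
    S1_inv = np.linalg.inv(S1)
    diff = mu1 - mu0
    return 0.5 * (np.trace(S1_inv @ S0) + diff @ S1_inv @ diff - d
                  + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

def skill_transitions(final_states, start_states, common_frames, alpha=1.0):
    """a_fi ~ exp(-alpha * sum_p KL(N_f^(p) || N_i^(p))), normalized per
    final state f; each state is a dict {frame p: (mu, Sigma)}."""
    a = np.zeros((len(final_states), len(start_states)))
    for f, fs in enumerate(final_states):
        for i, ss in enumerate(start_states):
            div = sum(kl_gaussian(*fs[p], *ss[p]) for p in common_frames)
            a[f, i] = np.exp(-alpha * div)
        a[f] /= a[f].sum()          # normalize outgoing probabilities of f
    return a
```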
  • Finally, in step S4 the concatenated sequence of manipulation skills is executed.
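  • For the execution, the following simplified sketch tracks the step-wise references with a finite-horizon discrete-time LQR over double-integrator dynamics (Python/NumPy; the Riemannian logarithmic map is replaced by a plain Euclidean state error, which is an assumption for illustration only):

```python
import numpy as np

def track_references(refs, precisions, R, dt=0.01, x0=None):
    """Track step-wise references mu_hat (list of (d,) arrays) with a
    discrete LQR on double-integrator dynamics; precisions[t] weights
    the tracking error as Sigma_hat^{-1}, R penalizes control effort."""
    d = refs[0].shape[0]
    A = np.block([[np.eye(d), dt * np.eye(d)],
                  [np.zeros((d, d)), np.eye(d)]])
    B = np.vstack([0.5 * dt**2 * np.eye(d), dt * np.eye(d)])
    T = len(refs)
    # backward Riccati recursion with time-varying state cost
    P = np.zeros((2 * d, 2 * d)); Ks = []
    for t in reversed(range(T)):
        Q = np.zeros((2 * d, 2 * d)); Q[:d, :d] = precisions[t]
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        Ks.append(K)
    Ks.reverse()
    # forward rollout, regulating toward each reference (zero velocity)
    x = np.zeros(2 * d) if x0 is None else x0
    traj = []
    for t in range(T):
        target = np.concatenate([refs[t], np.zeros(d)])
        u = -Ks[t] @ (x - target)
        x = A @ x + B @ u
        traj.append(x[:d].copy())
    return np.array(traj)
```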

Claims (15)

  1. Computer-implemented method for planning a manipulation task of an agent (1), particularly a robot, comprising the steps of:
    - Learning (S1) a number of manipulation skills (ah ) wherein a symbolic abstraction of the respective manipulation skill is generated;
    - Determining (S3) a concatenated sequence of manipulation skills (ah ) selected from the number of learned manipulation skills (ah ) based on their symbolic abstraction so that a given goal specification (G) indicating a given complex manipulation task is satisfied;
    - Executing (S4) the sequence of manipulation skills (ah ).
  2. Method according to claim 1, wherein the learning (S1) of the number of manipulation skills (ah ) is performed in that a plurality of manipulation trajectories for each respective manipulation skill is recorded, particularly by demonstration (A), a task parametrized Hidden Semi-Markov model (TP-HSMM) is determined depending on the plurality of manipulation trajectories for each respective manipulation skill (ah ) and the symbolic abstraction of the respective manipulation skill (ah ) is generated.
  3. Method according to claim 2, wherein the generating of the symbolic abstraction of the manipulation skills (ah ) comprises constructing a PDDL model, wherein objects, initial state and goal specification (G) define a problem instance, while predicates and actions define a domain of a given manipulation, wherein particularly the symbolic abstraction of the manipulation skills (ah ) uses a classical PDDL planning language.
  4. Method according to any of the claims 2 to 3, where the determining of the concatenated sequence of manipulation skills (ah ) is performed, such that the probability of achieving the given goal specification (G) is maximized, wherein particularly a PDDL planning step is used to find a sequence of actions to fulfill the given goal specification (G), starting from a given initial state.
  5. Method according to any of the claims 2 to 4, where the transition probabilities between states of the TP-HSMM are determined using Expectation-Maximization.
  6. Method according to any of the claims 2 to 5, wherein the task parametrized Hidden Semi-Markov model (TP-HSMM) is determined by cascading manipulation skills (ah ), wherein a Viterbi algorithm is used to retrieve the sequence of states from the single TP-HSMM based on the determined concatenated sequence of manipulation skills (ah ).
  7. Method according to claim 6, wherein parameters of the TP-HSMM are learned through a classical Expectation-Maximization algorithm.
  8. Method according to any of the claims 2 to 7, wherein the symbolic abstractions of the demonstrated manipulation skills are determined by mapping low-variance geometric relations of segments of manipulation trajectories into the set of predicates B.
  9. Method according to any of the claims 2 to 8, wherein determining a concatenated sequence of manipulation skills (ah ) comprises an optimization process, particularly with the goal of minimizing the total length of the trajectory.
  10. Method according to claim 9, wherein determining the concatenated sequence of manipulation skills comprises selectively reproducing one or more of the manipulation skills of a given sequence of manipulation skills (ah ) so as to maximize the probability of satisfying the given goal specification (G).
  11. Method according to claim 9 or 10, wherein determining the concatenated sequence of manipulation skills (ah ) includes the steps of:
    - Cascading the TP-HSMMs of consecutive manipulation skills (ah ) into one complete model by computing transition probabilities according to a divergence of emission probabilities between end states and initial states of different manipulation skills (ah );
    - Searching the most-likely complete state sequence between the initial and goal states of the manipulation task (ah ) using a modified Viterbi algorithm.
  12. Method according to claim 11, wherein the modified Viterbi algorithm includes missing observations and duration probabilities (pj (d)).
  13. Device for planning a manipulation task of an agent, particularly a robot, wherein the device is configured to:
    - learn a number of manipulation skills (ah), wherein a symbolic abstraction of the respective manipulation skill (ah) is generated;
    - determine a concatenated sequence of manipulation skills selected from the number of learned manipulation skills (ah) based on their symbolic abstraction so that a given goal specification (G) indicating a complex manipulation task is satisfied; and
    - instruct execution of the sequence of manipulation skills (ah).
  14. A computer program product comprising instructions which, when the program is executed by a data processing unit, cause the data processing unit to carry out the method of any of the claims 1 to 12.
  15. A machine-readable storage medium having stored thereon a computer program comprising a set of instructions for causing the machine to perform the method of any of the claims 1 to 12.
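
The Python sketches below are editorial illustrations of the techniques referenced in claims 2 to 12; they are minimal examples under stated assumptions, not the patented implementation. For the learning step of claim 2, a task-parametrized model can be approximated by projecting every demonstrated trajectory into a set of task frames and fitting a Gaussian mixture per frame; a full TP-HSMM would additionally estimate transition and state-duration distributions. The data and the names demos and frames are illustrative.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Illustrative demonstrations: five (T, 3) end-effector position trajectories.
    rng = np.random.default_rng(0)
    demos = [np.cumsum(rng.normal(size=(50, 3)) * 0.01, axis=0) for _ in range(5)]

    # Task frames as (rotation A, origin b) pairs, e.g. object pose and goal pose.
    frames = [(np.eye(3), np.zeros(3)), (np.eye(3), np.array([0.3, 0.0, 0.1]))]

    # Project every demonstration into each frame and fit one GMM per frame;
    # the frame-local Gaussians play the role of the TP-HSMM emission models.
    local_models = []
    for A, b in frames:
        X = np.vstack([(traj - b) @ A for traj in demos])
        local_models.append(GaussianMixture(n_components=4, covariance_type="full",
                                            random_state=0).fit(X))
    print([m.means_.shape for m in local_models])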
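
For the symbolic abstraction of claim 3, a PDDL model separates a domain (predicates and actions) from a problem instance (objects, initial state, goal specification G). Below is a minimal sketch with one hypothetical pick action; any off-the-shelf PDDL planner can then search for an action sequence that satisfies the goal, which is the planning step of claim 4.

    # Hypothetical predicates and action; the real abstraction is derived
    # from the demonstrated skills, not written by hand.
    DOMAIN = """(define (domain manipulation)
      (:requirements :strips)
      (:constants gripper)
      (:predicates (free ?g) (grasped ?o) (at ?o ?l))
      (:action pick
        :parameters (?o ?l)
        :precondition (and (free gripper) (at ?o ?l))
        :effect (and (grasped ?o) (not (free gripper)) (not (at ?o ?l)))))"""

    PROBLEM = """(define (problem move-peg)
      (:domain manipulation)
      (:objects peg table)
      (:init (free gripper) (at peg table))
      (:goal (grasped peg)))"""

    with open("domain.pddl", "w") as f:
        f.write(DOMAIN)
    with open("problem.pddl", "w") as f:
        f.write(PROBLEM)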
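
For claims 5 and 7, transition probabilities and Gaussian emission parameters can be estimated with Expectation-Maximization (Baum-Welch). The sketch uses the hmmlearn library's plain HMM as a stand-in; an HSMM would in addition estimate the explicit duration distributions pj(d).

    import numpy as np
    from hmmlearn.hmm import GaussianHMM  # plain HMM; durations handled separately

    rng = np.random.default_rng(1)
    demos = [np.cumsum(rng.normal(size=(50, 3)) * 0.01, axis=0) for _ in range(5)]
    X, lengths = np.vstack(demos), [len(d) for d in demos]

    # Baum-Welch (EM) iteratively re-estimates transitions and emissions.
    model = GaussianHMM(n_components=4, covariance_type="full",
                        n_iter=50, random_state=0).fit(X, lengths)
    print(np.round(model.transmat_, 2))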
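
For the cascading step of claims 6 and 11, a transition probability between the end state of one skill and the initial state of the next can be derived from a divergence of their Gaussian emission models, e.g. the Kullback-Leibler divergence (Kullback and Leibler, 1951, cited under Non-Patent Citations). The exponential mapping below is one common monotone choice, not necessarily the patented one; all numbers are illustrative.

    import numpy as np

    def gaussian_kl(mu0, S0, mu1, S1):
        # KL(N(mu0, S0) || N(mu1, S1)) for multivariate Gaussians.
        k = mu0.shape[0]
        S1_inv = np.linalg.inv(S1)
        d = mu1 - mu0
        return 0.5 * (np.trace(S1_inv @ S0) + d @ S1_inv @ d - k
                      + np.log(np.linalg.det(S1) / np.linalg.det(S0)))

    # Emission model of an end state of skill a1 and of an initial state of
    # skill a2; a small divergence yields a large cascade transition weight.
    mu_end, S_end = np.array([0.3, 0.0, 0.1]), 0.010 * np.eye(3)
    mu_init, S_init = np.array([0.31, 0.0, 0.1]), 0.012 * np.eye(3)
    weight = np.exp(-gaussian_kl(mu_end, S_end, mu_init, S_init))
    print(weight)  # normalize over successor states to obtain a probability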
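
For claim 8, geometric relations whose variance across demonstrations is low can be read as intentional outcomes and mapped into the set of predicates B: if, say, the end-effector-to-object offset at the end of every demonstration is nearly identical, the segment is abstracted as achieving a grasp. The threshold and predicate name below are hypothetical.

    import numpy as np

    # Illustrative end-of-segment offsets between end effector and object,
    # one row per demonstration.
    rng = np.random.default_rng(2)
    end_offsets = rng.normal(loc=[0.0, 0.0, 0.02], scale=0.002, size=(10, 3))

    VAR_THRESHOLD = 1e-4  # hypothetical tuning constant
    predicates = set()
    if np.all(end_offsets.var(axis=0) < VAR_THRESHOLD):
        predicates.add(("grasped", "object"))  # low variance -> symbolic relation
    print(predicates)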
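
For claim 12, a duration-explicit Viterbi recursion scores every candidate segment length d with the duration probability pj(d) and treats a missing observation as uninformative (log-likelihood 0), so planned but not yet observed steps do not break the decoding. A compact sketch under these assumptions:

    import numpy as np

    def hsmm_viterbi(log_b, log_A, log_pi, log_pd, d_max):
        # log_b[t, j]: log emission likelihood (0.0 where the observation is
        # missing); log_pd[j, d-1]: log probability of staying d steps in j.
        T, N = log_b.shape
        delta = np.full((T, N), -np.inf)
        psi = np.zeros((T, N, 2), dtype=int)          # (previous state, duration)
        cum = np.vstack([np.zeros(N), np.cumsum(log_b, axis=0)])  # prefix sums
        for t in range(T):
            for j in range(N):
                for d in range(1, min(d_max, t + 1) + 1):
                    emit = cum[t + 1, j] - cum[t + 1 - d, j]
                    if d == t + 1:                    # segment starts the sequence
                        score, prev = log_pi[j] + log_pd[j, d - 1] + emit, -1
                    else:
                        cand = delta[t - d] + log_A[:, j]
                        prev = int(np.argmax(cand))
                        score = cand[prev] + log_pd[j, d - 1] + emit
                    if score > delta[t, j]:
                        delta[t, j], psi[t, j] = score, (prev, d)
        states, t, j = [], T - 1, int(np.argmax(delta[T - 1]))
        while t >= 0:                                  # backtrack over segments
            prev, d = psi[t, j]
            states, t, j = [j] * d + states, t - d, prev
        return states

    # Tiny usage: two states, six steps, the fourth observation missing.
    log_b = np.log(np.array([[0.9, 0.1]] * 3 + [[1.0, 1.0]] + [[0.1, 0.9]] * 2))
    log_A = np.log(np.full((2, 2), 0.5))
    log_pi = np.log(np.array([0.5, 0.5]))
    log_pd = np.log(np.full((2, 6), 1.0 / 6))
    print(hsmm_viterbi(log_b, log_A, log_pi, log_pd, d_max=6))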
EP19181874.9A 2019-06-21 2019-06-21 Method and system for robot manipulation planning Active EP3753684B1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
EP19181874.9A EP3753684B1 (en) 2019-06-21 2019-06-21 Method and system for robot manipulation planning
US16/892,811 US11498212B2 (en) 2019-06-21 2020-06-04 Method and system for robot manipulation planning
CN202010564725.9A CN112109079A (en) 2019-06-21 2020-06-19 Method and system for robot maneuver planning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
EP19181874.9A EP3753684B1 (en) 2019-06-21 2019-06-21 Method and system for robot manipulation planning

Publications (2)

Publication Number Publication Date
EP3753684A1 true EP3753684A1 (en) 2020-12-23
EP3753684B1 EP3753684B1 (en) 2022-08-10

Family

ID=67003225

Family Applications (1)

Application Number Title Priority Date Filing Date
EP19181874.9A Active EP3753684B1 (en) 2019-06-21 2019-06-21 Method and system for robot manipulation planning

Country Status (3)

Country Link
US (1) US11498212B2 (en)
EP (1) EP3753684B1 (en)
CN (1) CN112109079A (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2021094677A (en) * 2019-12-19 2021-06-24 本田技研工業株式会社 Robot control device, robot control method, program and learning model
CN112828894B (en) * 2021-01-21 2022-09-02 中国科学院重庆绿色智能技术研究院 Position and force hybrid control method of redundant manipulator
CN114043478B (en) * 2021-11-24 2023-07-07 深圳大学 Method and device for expressing complex operation skills of robot, intelligent terminal and medium
CN114578727A (en) * 2022-01-29 2022-06-03 深圳市云鼠科技开发有限公司 Service logic control method for cleaning robot
DE102022201116A1 (en) 2022-02-02 2023-08-03 Robert Bosch Gesellschaft mit beschränkter Haftung Method of controlling a robotic device
CN115070764B (en) * 2022-06-24 2023-05-23 中国科学院空间应用工程与技术中心 Mechanical arm movement track planning method, system, storage medium and electronic equipment
CN115048282B (en) * 2022-08-15 2022-10-25 北京弘玑信息技术有限公司 Extraction method of repeated operation, electronic device and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5835684A (en) * 1994-11-09 1998-11-10 Amada Company, Ltd. Method for planning/controlling robot motion
JP4169063B2 (en) * 2006-04-06 2008-10-22 ソニー株式会社 Data processing apparatus, data processing method, and program
JP5440840B2 (en) * 2009-06-11 2014-03-12 ソニー株式会社 Information processing apparatus, information processing method, and program
AT509927B1 (en) * 2010-06-08 2015-05-15 Keba Ag METHOD FOR PROGRAMMING OR PRESENTING MOVEMENTS OR RUNS OF AN INDUSTRIAL ROBOT
DE102015012961B4 (en) * 2015-10-08 2022-05-05 Kastanienbaum GmbH robotic system
CN106826789A (en) * 2017-03-10 2017-06-13 蒙柳 A kind of modular remote operating machinery arm controller
CN108393884B (en) * 2018-01-18 2021-01-05 西北工业大学 Petri network-based collaborative task planning method for multi-mechanical-arm teleoperation system
GB2577312B (en) * 2018-09-21 2022-07-20 Imperial College Innovations Ltd Task embedding for device control
CN109176532B (en) * 2018-11-09 2020-09-29 中国科学院自动化研究所 Method, system and device for planning path of mechanical arm
DE102020214231A1 (en) * 2020-11-12 2022-05-12 Robert Bosch Gesellschaft mit beschränkter Haftung METHOD OF CONTROLLING A ROBOT DEVICE AND ROBOT CONTROLLER

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040186717A1 (en) * 2003-03-17 2004-09-23 Rensselaer Polytechnic Institute System for reconstruction of symbols in a sequence
US20130218340A1 (en) * 2010-11-11 2013-08-22 The John Hopkins University Human-machine collaborative robotic systems
KR20160080349A (en) * 2014-12-29 2016-07-08 한양대학교 산학협력단 Method for Learning Task Skill and Robot Using Thereof

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
CHRIS PAXTON ET AL: "Do What I Want, Not What I Did: Imitation of Skills by Planning Sequences of Actions", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 5 December 2016 (2016-12-05), XP080736730, DOI: 10.1109/IROS.2016.7759556 *
EMMANUEL PIGNAT ET AL: "Learning adaptive dressing assistance from human demonstration", ROBOTICS AND AUTONOMOUS SYSTEMS, vol. 93, 1 July 2017 (2017-07-01), AMSTERDAM, NL, pages 61 - 75, XP055674005, ISSN: 0921-8890, DOI: 10.1016/j.robot.2017.03.017 *
KULLBACK, S.; LEIBLER, R. A.: "On Information and Sufficiency", ANN. MATH. STATIST., vol. 22, no. 1, 1951, pages 79 - 86, XP008024182, Retrieved from the Internet <URL:https://projecteuclid.org/euclid.aoms/1177729694>
REZA AHMADZADEH S ET AL: "Visuospatial Skill Learning for Robots", ARXIV.ORG, CORNELL UNIVERSITY LIBRARY, 201 OLIN LIBRARY CORNELL UNIVERSITY ITHACA, NY 14853, 3 June 2017 (2017-06-03), XP080767339 *
S. NIEKUM ET AL.: "Learning grounded finite-state representations from unstructured demonstrations", THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH, vol. 34, no. 2, 2015, pages 131 - 157, XP055447099, DOI: 10.1177/0278364914554471

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021244819A1 (en) * 2020-06-05 2021-12-09 Robert Bosch Gmbh Method for controlling a robot and robot controller
DE102021204961A1 (en) 2021-05-17 2022-11-17 Robert Bosch Gesellschaft mit beschränkter Haftung Method of controlling a robotic device
DE102021204961B4 (en) 2021-05-17 2023-06-07 Robert Bosch Gesellschaft mit beschränkter Haftung Method of controlling a robotic device
DE102022206381A1 (en) 2022-06-24 2024-01-04 Robert Bosch Gesellschaft mit beschränkter Haftung Method for controlling a robotic device

Also Published As

Publication number Publication date
EP3753684B1 (en) 2022-08-10
CN112109079A (en) 2020-12-22
US11498212B2 (en) 2022-11-15
US20200398427A1 (en) 2020-12-24

Similar Documents

Publication Publication Date Title
EP3753684B1 (en) Method and system for robot manipulation planning
Pertsch et al. Accelerating reinforcement learning with learned skill priors
Qureshi et al. Motion planning networks
Bechtle et al. Meta learning via learned loss
Chandak et al. Learning action representations for reinforcement learning
Calinon et al. Encoding the time and space constraints of a task in explicit-duration hidden Markov model
Osa et al. Guiding trajectory optimization by demonstrated distributions
US20230202034A1 (en) Method for controlling a robot and robot controller
Zhou et al. Clone swarms: Learning to predict and control multi-robot systems by imitation
Francis et al. Stochastic functional gradient for motion planning in continuous occupancy maps
Duan et al. Learning to avoid obstacles with minimal intervention control
Tavassoli et al. Learning skills from demonstrations: A trend from motion primitives to experience abstraction
Oh et al. Bayesian Disturbance Injection: Robust imitation learning of flexible policies for robot manipulation
CN113867137A (en) Method and device for operating a machine
Braun et al. Incorporation of expert knowledge for learning robotic assembly tasks
Alibeigi et al. A fast, robust, and incremental model for learning high-level concepts from human motions by imitation
Kulić et al. Incremental learning of full body motion primitives
Alizadeh et al. Exploiting the task space redundancy in robot programming by demonstration
Vallon et al. Task decomposition for MPC: A computationally efficient approach for linear time-varying systems
Tan A behavior generation framework for robots to learn from demonstrations
Pérez-Dattari et al. Deep Metric Imitation Learning for Stable Motion Primitives
Bruno et al. Learning adaptive movements from demonstration and self-guided exploration
Hagos Estimation of Phases for Compliant Motion
Puranic et al. Learning performance graphs from demonstrations via task-based evaluations-supplemental material
Shaffer et al. Expanding Kinodynamic Optimization Solutions with Recurrent Neural Networks and Path-tracking Control

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN PUBLISHED

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

AX Request for extension of the european patent

Extension state: BA ME

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20210623

RBV Designated contracting states (corrected)

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

17Q First examination report despatched

Effective date: 20210813

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTG Intention to grant announced

Effective date: 20211123

GRAJ Information related to disapproval of communication of intention to grant by the applicant or resumption of examination proceedings by the epo deleted

Free format text: ORIGINAL CODE: EPIDOSDIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: EXAMINATION IS IN PROGRESS

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: GRANT OF PATENT IS INTENDED

INTC Intention to grant announced (deleted)
INTG Intention to grant announced

Effective date: 20220201

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE PATENT HAS BEEN GRANTED

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

REG Reference to a national code

Ref country code: AT

Ref legal event code: REF

Ref document number: 1510153

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220815

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: DE

Ref legal event code: R096

Ref document number: 602019017998

Country of ref document: DE

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: NL

Ref legal event code: MP

Effective date: 20220810

REG Reference to a national code

Ref country code: LT

Ref legal event code: MG9D

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: RS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221212

Ref country code: NO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221110

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: LV

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: LT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

REG Reference to a national code

Ref country code: AT

Ref legal event code: MK05

Ref document number: 1510153

Country of ref document: AT

Kind code of ref document: T

Effective date: 20220810

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: IS

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221210

Ref country code: HR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20221111

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SM

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

REG Reference to a national code

Ref country code: DE

Ref legal event code: R097

Ref document number: 602019017998

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: AL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

26N No opposition filed

Effective date: 20230511

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20230817

Year of fee payment: 5

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20220810

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

REG Reference to a national code

Ref country code: BE

Ref legal event code: MM

Effective date: 20230630

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20230621

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230621

REG Reference to a national code

Ref country code: IE

Ref legal event code: MM4A

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230621