WO2011067621A1 - Robot control system, motion data creation apparatus and its creating method - Google Patents


Info

Publication number
WO2011067621A1
WO2011067621A1 (PCT/IB2009/007954)
Authority
WO
WIPO (PCT)
Prior art keywords
robot
joint angle
posture
data creation
creation apparatus
Prior art date
Application number
PCT/IB2009/007954
Other languages
French (fr)
Inventor
Sébastien DALIBARD
Alireza Nakhaei Sarvedani
Jean-Paul Laumond
Toru Miyagawa
Kunimatsu Hashimoto
Original Assignee
Toyota Jidosha Kabushiki Kaisha
Centre National De La Recherche Scientifique
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Jidosha Kabushiki Kaisha, Centre National De La Recherche Scientifique filed Critical Toyota Jidosha Kabushiki Kaisha
Priority to PCT/IB2009/007954 priority Critical patent/WO2011067621A1/en
Publication of WO2011067621A1 publication Critical patent/WO2011067621A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1661Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1602Programme controls characterised by the control system, structure, architecture
    • B25J9/1607Calculation of inertia, jacobian matrixes and inverses
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1656Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
    • B25J9/1666Avoiding collision or forbidden zones
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39079Solve inverse differential kinematics in closed, feedback loop, iterate
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39217Keep constant orientation of handled object while moving manipulator
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40338Task priority redundancy
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40352Combination of priority, basic task, tip position, and task for link movement

Definitions

  • the present invention relates to a robot control system that makes, for example, a motion plan for a robot to which a plurality of tasks are provided, a motion data creation apparatus, and its creating method.
  • the robot has to be able to operate autonomously.
  • the created posture or motion must be collision free with objects, except for the phase of physical interaction.
  • as technologies to optimize the motion creation of a robot and to optimize the generated motion, technologies disclosed in Non-patent documents 1 to 5 have been known. Further, Non-patent document 6 and Patent documents 1 and 2 disclose technologies using task control with priorities. Citation List
  • NPL 1 Marcelo Kallmann, Amaury Aubel, Tolga Abaci and Daniel Thalmann, "Planning Collision-Free Reaching Motions for
  • NPL 2 J.J. Kuffner, S. Kagami, K. Nishiwaki, M. Inaba, and H. Inoue, "Dynamically-stable motion planning for humanoid robots", Autonomous Robots (special issue on Humanoid Robotics), 12, 2002.
  • NPL 3 Dmitry Berenson, Siddhartha S. Srinivasa, Dave Ferguson, Alvaro Collet, James J. Kuffner, "Manipulation Planning with Workspace Goal Regions", ICRA 2009.
  • NPL 4 Marc Toussaint, Michael Gienger and Christian Goerick, "Optimization of sequential attractor-based movement for compact behaviour generation", HUMANOIDS 2007.
  • NPL 5 Hiroki Sanada, Eiichi Yoshida and Kazuhito Yokoi, "Passing Under Obstacles with Humanoid Robots", IROS 2007.
  • NPL 6 Nikolaus Vahrenkamp, Dmitry Berenson, Tamim Asfour, James Kuffner and Rudiger Dillmann, "Humanoid Motion Planning for Dual-Arm Manipulation and Re-Grasping Tasks", IROS 2009.
  • an exemplary object of the present invention is to provide a robot control system capable of planning a trajectory of joint angles reaching the target final position/posture when only the final target value of the task (for example, the final position and posture of the fingertips) is provided, as well as a motion data creation apparatus and its creating method.
  • a robot control system in accordance with the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein, when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a sampling-based search technique, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
  • the prioritized task may be set for a plurality of portions or the center of mass of the robot.
  • the motion data creation apparatus may be configured to perform the whole body cooperation calculation such that the difference between the target value of a constraint point specified by the prioritized task and the current value converges by generating a third node from a first node extracted in a random manner or in a predetermined manner and a second node that is a node on the trajectory data and closest to the first node, and using a constraint point obtained from a joint angle corresponding to the third node as an initial value.
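The first-node/second-node/third-node extension described above can be sketched in code. The following Python fragment is only an illustration of that scheme, not the patented implementation: the function names, the fixed step size, and the Newton-style projection loop are assumptions, and the constraint is stood in by a generic error/Jacobian pair.

```python
import numpy as np

def extend_with_projection(tree, q_rand, constraint_error, constraint_jac,
                           step=0.2, tol=1e-6, max_iter=100):
    """One sampling-based extension step with constraint projection.

    tree             : list of joint-angle vectors already on the trajectory data
    q_rand           : first node, drawn at random (or in a predetermined manner)
    constraint_error : function q -> task-space error vector to drive to zero
    constraint_jac   : function q -> Jacobian of that error at q
    """
    # Second node: the node on the trajectory data closest to the first node.
    q_near = min(tree, key=lambda q: np.linalg.norm(q - q_rand))
    # Third node: step from the second node toward the first node.
    d = q_rand - q_near
    q_new = q_near + step * d / max(np.linalg.norm(d), 1e-12)
    # Feedback iteration: converge the difference between the target value of
    # the constraint point and its current value (Newton-Raphson style).
    for _ in range(max_iter):
        e = constraint_error(q_new)
        if np.linalg.norm(e) < tol:
            break
        q_new = q_new - np.linalg.pinv(constraint_jac(q_new)) @ e
    return q_new
```

If the projected node is collision-free, it would then be added to the trajectory data; otherwise it is discarded, as described below.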
  • the motion data creation apparatus may be configured to perform optimization processing of the trajectory data by dividing the generated trajectory data into predetermined units, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges. In this way, an optimized posture can be realized for each point on the trajectory.
  • the motion data creation apparatus may further include an object data storage unit that stores shape/configuration data of objects within an environment and of the robot, and geometrical configuration data of the robot.
  • the motion data creation apparatus may be configured to check, by referring to information stored in the object data storage unit, whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space, and when they collide, not to newly add the calculated joint angle to the trajectory data. In this way, a trajectory avoiding a collision with an object can be obtained.
  • a robot control system in accordance with another aspect of the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
  • the prioritized task may be set for a plurality of portions or the center of mass of the robot.
  • the motion data creation apparatus may be configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
  • the motion data creation apparatus may further include an object data storage unit that stores shape/configuration data of objects within an environment and of the robot, and geometrical configuration data of the robot.
  • the motion data creation apparatus may be configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.
  • a motion data creation apparatus in accordance with the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein, when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a sampling-based search technique, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
  • a motion data creation apparatus in accordance with another aspect of the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
  • a motion data creation method in accordance with the present invention is a motion data creation method for creating a joint angle to cause a robot to move, wherein, when trajectory data of a joint angle reaching from an initial posture of the robot to a target posture is searched for in a joint space by using a sampling-based search technique, a joint angle satisfying a constraint is calculated by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
  • a motion data creation method in accordance with another aspect of the present invention is a motion data creation method that creates a joint angle to cause a robot to move, wherein a joint angle satisfying a constraint is calculated in a joint space by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
  • a robot control system capable of planning a trajectory of joint angles reaching the target final position/posture when only the final target value of the task (for example, the final position and posture of the fingertips) is provided, a motion data creation apparatus, and its creating method can thus be provided.
  • Fig. 1 is a configuration diagram of a robot control system in accordance with a first exemplary embodiment
  • Fig. 2 is a diagram of a model of a robot in accordance with a first exemplary embodiment
  • Fig. 3 is a diagram for explaining a relation between manipulable space and redundant space
  • Fig. 4 is a functional configuration diagram of a whole body cooperation IK calculation module in accordance with a first exemplary embodiment
  • Fig. 5 is a process flowchart for a whole body cooperation IK calculation module in accordance with a first exemplary embodiment
  • FIG. 6 is an overall flowchart of motion data creation processing in accordance with a first exemplary embodiment
  • FIG. 7 is a process flowchart for a motion/posture generation unit in accordance with a first exemplary embodiment
  • Fig. 8 is a functional block diagram including feedback calculation in accordance with a first exemplary embodiment
  • Fig. 9 is a process flowchart for an optimization unit in accordance with a first exemplary embodiment
  • Fig. 10 is a functional block diagram showing a
  • Fig. 11 is a flowchart of motion trajectory generation processing by a motion/posture generation unit in accordance with a first exemplary embodiment
  • Fig. 12 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment
  • Fig. 13 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment
  • Fig. 14 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment
  • Fig. 15 is a flowchart for motion trajectory optimization by an optimization unit in accordance with a first exemplary embodiment.
  • Fig. 1 shows a configuration diagram of a robot control system according to the first embodiment.
  • a control system 1 includes an interface unit 10, a motion data creation unit 20, a robot control unit 60, and a robot 100.
  • Each component is realized by hardware or software included in a computer.
  • the motion data creation unit 20 is mainly realized by software
  • the robot control unit 60 and the robot 100 are mainly realized by hardware.
  • the interface unit 10 is a user interface or a data interface to receive command values to the robot 100.
  • a plurality of tasks are provided as the command values.
  • the task means the target for which control is desired in creating motion data.
  • the task may be set for a portion of the robot 100 whose motion is the control target, or for the position of the center of mass of the robot 100.
  • the task is provided as coordinate values of six dimensions at maximum indicating the position and the posture of the portion which is the control target.
  • the robot control unit 60 calculates a torque control value to drive each joint using a target joint angle generated by the motion data creation unit 20 and a current joint angle measured by an encoder (not shown) .
  • the robot control unit 60 includes a motor driver (not shown) and a controller thereof (not shown), and drives each joint by operating an actuator in accordance with the torque control value that is calculated. Accordingly, the motion in accordance with the motion data generated by the motion data creation unit 20 is realized.
  • the robot 100 executes the operation based on motion data generated by the motion data creation unit 20.
  • the robot 100 is a leg type moving robot.
  • the robot 100 includes at least one arm in order to achieve the task.
  • the robot 100 operates under restrictions such as keeping its balance and restrictions imposed by a plurality of tasks, as an example.
  • Fig. 2 is a model diagram showing the robot 100 with joints and links connecting the joints.
  • the robot 100 includes left and right two leg links LR and LL, and left and right two arm links AR and AL.
  • the robot 100 is supplied with the motion data generated by the motion data creation unit 20 so as to be able to walk with the leg links LR and LL and execute operations such as carrying an object with the arm links AR and AL.
  • the position of the center of mass of the robot 100 is specified and the task is set for a swing leg and a stance leg. Further, the task is set for fingertips when the robot carries out work with its hand, for example.
  • the robot 100 is formed of two leg links LR and LL, two arm links AR and AL, and a body trunk BD to which these leg links and arm links are connected.
  • a neck joint that supports a head 101 includes a joint of roll direction 102, a joint of pitch direction 103, and a joint of yaw direction 104.
  • the right arm link AR includes a shoulder joint of pitch direction 105, a shoulder joint of roll direction 106, an upper arm joint of yaw direction 107, an elbow joint of pitch direction 108, and a wrist joint of yaw direction 109, and a hand 141R that functions as an end-effector that holds an object and carries it is provided in an end of the right arm link AR.
  • the mechanism of the hand 141R may be determined by the shape or the type of the object that is to be held, and the hand may have a multi-degree-of-freedom, multi-joint configuration having a plurality of fingers, for example.
  • the left arm link AL has a configuration similar to that of the right arm link AR. More specifically, the left arm link AL includes five joints 110 to 114, and includes a hand 141L in the end thereof.
  • the right leg link LR includes a waist joint of yaw direction 117, a waist joint of pitch direction 118, a waist joint of roll direction 119, a knee joint of pitch direction 120, an ankle joint of pitch direction 121, and an ankle joint of roll direction 122.
  • a sole 130 that contacts with a floor surface is provided in the lower part of the ankle joint 122.
  • the left leg link LL has a configuration similar to that of the right leg link LR. More specifically, the left leg link LL includes six joints 123 to 128, and includes a sole 131 in the end thereof.
  • the body trunk 143 includes a joint of roll direction 115 and a joint of pitch direction 116.
  • the motion data creation unit 20 includes a motion/posture generation unit 30, an object data store unit 40, and an optimization unit 50.
  • the motion data creation unit 20 includes a module that performs whole body cooperation IK calculation that will be described later (whole body cooperation IK control unit 70), and the motion/posture generation unit 30 and the optimization unit 50 call the whole body cooperation IK module.
  • the motion/posture generation unit 30 generates the posture and the motion with which the robot 100 does not collide with objects in the environment, using the task input from the interface unit 10 and the information stored in the object data store unit 40.
  • in the object data store unit 40, the shape/texture data regarding the robot 100 and the objects in the environment, and the geometrical data of the robot 100, are stored.
  • the motion/posture generation unit 30 has a characteristic feature in that, regarding the generation of the collision-free posture, it combines a method of sampling configurations randomly or in a predefined manner with an inverse kinematics calculation under balance and task restrictions (local path planning - whole body cooperation IK calculation).
  • the whole body cooperation IK calculation means the inverse kinematics calculation with prioritized task restrictions for a redundant-degrees-of-freedom system using the null space of the Jacobian matrix.
  • the configurations that satisfy balance and/or task restrictions are output. Note that the details of the whole body cooperation IK will be described later.
  • the whole body cooperation IK is a method that has been developed over several years and has now been applied to humanoid robots.
  • the motion/posture generation unit 30 has a characteristic feature in that, regarding the generation of the collision-free motion, it combines a configuration search algorithm that samples randomly or in a predefined manner with the inverse kinematics calculation under balance and/or task restrictions (local path planning - whole body cooperation IK calculation). Further, the motion/posture generation unit 30 is able to combine the above method with two-dimensional path planning for further applications.
  • the optimization unit 50 has a characteristic feature in that it combines a method of the whole body cooperation IK calculation and a mathematical optimization method regarding the optimization of the collision-free posture.
  • as a mathematical optimization method, a steepest descent method, a gradient descent method or the like can be employed.
  • the optimization unit 50 may perform optimization of motion trajectory (joint angular trajectory) for further applications.
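The combination of whole body cooperation IK and gradient-style optimization can be illustrated for the simple case of a linear task: moving toward an ideal posture inside the null space of the task Jacobian leaves the task value untouched. The sketch below is a minimal illustration under that linear-task assumption, with illustrative names, not the patented optimization procedure.

```python
import numpy as np

def optimize_posture(theta, theta_ideal, J, gain=0.5, iters=50):
    """Move theta toward theta_ideal inside the null space of the task
    Jacobian J, so that the task value J @ theta is preserved."""
    # Null-space projector N = I - J# J; motions N @ v do not change J @ theta.
    N = np.eye(len(theta)) - np.linalg.pinv(J) @ J
    for _ in range(iters):
        # Gradient-descent step toward the ideal posture, projected into N.
        theta = theta + gain * N @ (theta_ideal - theta)
    return theta
```

For a nonlinear task the same idea applies locally, with the projector recomputed at each posture and a feedback correction of any task drift.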
  • the examples of the prioritized task control when there is one task include a case of specifying the position of the center of mass and setting the tasks for the swing leg and the stance leg while the robot is walking, and a case of setting the task for the fingertips while the robot is operating with its hands.
  • the degree of freedom of the control increases as the redundant degree of freedom (the number of joints of the robot) increases.
  • the redundant degree of freedom means that, for example, when the movement of an arm having redundant degrees of freedom is considered on the two-dimensional plane, there are a plurality of trajectories of the arm even when the position and the posture of the final joint are fixed.
  • Other properties of the prioritized task control include that there is a Jacobian matrix for each task, and the tasks can be prioritized with the least squares solution. Further, it is also known that the calculation load increases in accordance with an increase in the number of joints or of tasks.
  • priority orders can be added to the tasks and the task can be set for an arbitrary portion.
  • all the tasks are satisfied when there is sufficient redundancy, while tasks having higher priority orders are satisfied when the redundancy is insufficient. Any point (center of mass, fingertips, or the like) other than those portions can be controlled as long as its Jacobian matrix can be obtained.
  • an n-dimensional joint angle θ and the space of an m-dimensional task r are considered (where n > m).
  • the space which can be spanned by the joint angular velocity (denoted by θ with a dot in Fig. 3, which means the derivative value of θ) based on the current joint angle θ is defined as a manipulable space
  • the space of joint angular velocities that produce no motion of the task is defined as a redundant space (null space).
  • the above technique is applied to a method of obtaining the joint angle using the Jacobian matrix when there is one task that is to be set.
  • the Jacobian matrix J, the task (constraint), and the joint angle have the relation shown in the following expression (4).
  • ẋ (x with a dot in the upper part) indicates the task velocity, and is an r-dimensional vector.
  • J(θ) is a Jacobian matrix, and is an r×c matrix.
  • θ̇ (θ with a dot in the upper part) indicates the joint angular velocity, and is a c-dimensional vector.
  • ẋ = J(θ)θ̇ … (4)
  • the task (constraint) is input to the following expression (5), obtained by rearranging the above equation (4), so as to obtain the joint angle.
  • J# (J with a superscript #) indicates a pseudo-inverse matrix of the Jacobian matrix J, which is a c×r matrix. Further, by integrating the joint angular velocity obtained by the following expression (5), the joint angle is calculated.
  • θ̇ = J#(θ)ẋ … (5)
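Expressions (4) and (5) amount to resolved-rate inverse kinematics: evaluate the Jacobian, multiply the task velocity by its pseudo-inverse, and integrate. The snippet below is a sketch on a toy planar two-link arm; the unit link lengths, step size, and function names are illustrative choices, not values from the patent.

```python
import numpy as np

def ik_step(theta, jacobian, x_dot, dt=0.01):
    """One step of expressions (4)-(5): theta_dot = J#(theta) x_dot, integrated."""
    J_pinv = np.linalg.pinv(jacobian(theta))   # c x r pseudo-inverse J#
    return theta + (J_pinv @ x_dot) * dt       # integrate the joint velocity

# Toy planar 2-link arm (unit link lengths); the task is the tip (x, y) position.
def fk(theta):
    return np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + np.sin(theta[0] + theta[1])])

def jac(theta):
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-s1 - s12, -s12],
                     [ c1 + c12,  c12]])
```

Feeding the error (target minus fk(theta)) in as the commanded task velocity turns this into a feedback method in the sense of the text: the difference between the target and current values of the task converges, provided the arm stays away from singular postures.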
  • the redundant space is employed in the prioritized task control so as to satisfy the plurality of constraints.
  • k is defined as shown in the following expression (6) . Note that g is a feedback gain.
  • k may be specified by using a potential method expressed by expression (7) shown below.
  • P is obtained by forming the restrictions that should be satisfied into formulas.
  • the second task is shown by the following expression (9) .
  • the expression shown by the underline is the expression to solve k1 as the least squares solution.
  • N is defined from the following expression (11) .
  • k is a derivative value (joint angular velocity) of the joint angle of the robot posture which is a standard.
  • the current posture angle vector is θ_cur
  • g_k (0 ≤ g_k ≤ 1) is an arbitrary gain
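The least-squares, null-space treatment of a second task outlined in expressions (9) to (11) can be written compactly. The helper below is an illustration of the standard two-priority resolution (primary task exact, secondary task solved in least squares within the primary null space); the function name is illustrative and the code is not taken from the patent.

```python
import numpy as np

def prioritized_velocity(J1, x1_dot, J2, x2_dot):
    """Joint velocity that satisfies task 1 exactly and task 2 as a
    least-squares solution inside the null space of task 1."""
    J1_pinv = np.linalg.pinv(J1)
    N1 = np.eye(J1.shape[1]) - J1_pinv @ J1        # null-space projector of task 1
    dq1 = J1_pinv @ x1_dot                         # primary-task velocity
    # k chosen so that task 2 is met as well as possible without disturbing task 1.
    k = np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ dq1)
    return dq1 + N1 @ k
```

When the redundancy is sufficient, both task velocities are realized; when it is not, the pseudo-inverse yields the least-squares compromise for the lower-priority task, matching the behavior described above.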
  • Fig. 4 is a diagram showing the function configuration of the whole body cooperation IK calculation module.
  • Fig. 5 is a diagram showing a process flow by the whole body cooperation IK calculation module.
  • in the whole body cooperation IK calculation module, in the joint space (CS: Configuration Space), the posture that satisfies the set task (constraint) is obtained from the provided initial value of the joint angle by a Newton-Raphson method or a feedback method.
  • the whole body cooperation IK control unit 70 is supplied with a robot-base coordinate p_cur, a current joint angle θ_cur of the robot 100, an arbitrary parameter k, and a plurality of tasks with prioritization (task 1 to task n), and performs feedback calculation based on the difference between the current value and the target value of the constraint points (center of mass, fingertips, and the like) defined by the tasks, so as to output the joint angle θ_out that satisfies the constraint.
  • the initial position is indicated by the robot-base coordinate and the initial posture is indicated by the current joint angle of the robot 100, and then these values are updated so as to satisfy the constraint points by the feedback calculation.
  • each task is provided by the six-dimensional coordinate values at maximum including the position and the posture, and the velocity which is the derivative value of these values is input.
  • the position of the center of mass is also input similarly to the other tasks (constraints) for the purpose of simplicity of illustration. However, it may be input as the Jacobian matrix reference coordinate, not as a task.
  • the joint angular velocity calculation unit 71 calculates the joint angular velocity (denoted by ⁇ with a dot in the upper part in the drawings) which is the derivative value of the joint angle.
  • the joint angular velocity calculated by the joint angular velocity calculation unit 71 is integrated with an integrator 72 so as to obtain the joint angle.
  • the joint angle that is obtained is input to a real machine (robot control unit 60 and robot 100) .
  • the Jacobian matrix calculation unit 74 provides a unit velocity to each joint, so as to calculate the Jacobian matrix for each task. Note that a known method may be employed as the method of calculating the Jacobian matrix, and the detailed description is omitted here.
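Providing a unit velocity to each joint of the numerical model and reading off the task-space change is, in effect, a finite-difference Jacobian. A minimal sketch (the function name and step size are illustrative assumptions):

```python
import numpy as np

def numerical_jacobian(fk, theta, eps=1e-6):
    """Task Jacobian by giving each joint a small unit displacement and
    observing the task-space change via the forward kinematics model fk."""
    x0 = fk(theta)
    J = np.zeros((len(x0), len(theta)))
    for j in range(len(theta)):
        dtheta = np.zeros_like(theta)
        dtheta[j] = eps                       # perturb joint j only
        J[:, j] = (fk(theta + dtheta) - x0) / eps
    return J
```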
  • a robot forward kinematics model 73 performs the forward kinematics calculation based on the robot geometrical data.
  • the target joint angle θ_out may be provided.
  • since the information parameters of the robot 100, such as the joint angle θ_out and each task position x1, x2, ..., xn, are output as well, it is possible to use these information parameters of the robot 100 when more complicated control needs to be performed, for example, when performing compliance control using a force sensor.
  • the whole body cooperation IK control unit 70 uses a forward kinematics model (numerical model) of the robot 100 to provide the unit velocity to each joint of the numerical model, thereby calculating a Jacobian matrix (SI01) .
  • the whole body cooperation IK control unit 70 acquires and determines input data such as the velocity of a task and an arbitrary parameter (S102) .
  • the whole body cooperation IK control unit 70 calculates an equation corresponding to the prioritized task control with redundant degrees of freedom, and calculates a joint angular velocity (S103).
  • the whole body cooperation IK control unit 70 integrates the calculated joint angular velocity, thereby calculating the joint angles of the robot (S104).
  • the position of the center of mass of the robot can also be calculated from the joint angles of the robot by using the numerical model.
  • the motion data creation processing includes processing of determining a goal posture (S201, S202) and processing of obtaining a trajectory from an initial position to the goal posture (S203, S204) .
  • the motion/posture generation unit 30 of the motion data creation apparatus 20 generates the goal posture which satisfies restrictions such as the position of the center of mass and the position of finger tips of the robot 100 and which is free from collision with any object (S201) .
  • the optimization unit 50 of the motion data creation apparatus 20 optimizes the generated goal posture (S202) . That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.
  • the motion/posture generation unit 30 generates a motion trajectory (trajectory of the joint angle) of the robot 100 which is a trajectory from the initial posture to the goal posture and which is free from collision with any object, while satisfying the restrictions (S203) .
  • the optimization unit 50 optimizes the generated motion trajectory (S204) . That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.
  • Although Steps S201 to S204 are described as a series of processes, each process may be combined with processing of another motion data creation apparatus.
  • the goal posture generation processing may be carried out using another conventional motion data creation apparatus, and the obtained goal posture may be combined with trajectory generation processing (S203) or trajectory optimization processing (S204) according to this embodiment.
  • Figs. 7 and 8 are diagrams illustrating the goal posture generation processing performed by the motion/posture generation unit 30.
  • Fig. 7 is a diagram illustrating the processing flow performed by the motion/posture generation unit 30.
  • the motion/posture generation unit 30 generates a joint angle for each axis of the robot randomly in the joint space (CS) by using the following function (S301) .
  • Let the randomly generated joint angle be θ_rand.
  • θ_rand is a joint angle vector, because values are generated for all the joints.
  • θ_rand ← Rand(CS)
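The Rand(CS) sampling above can be sketched as follows; the four-axis joint limits and the seeded random generator are assumptions made for illustration.

```python
import numpy as np

def rand_cs(lower, upper, rng):
    """Rand(CS): draw a joint-angle vector theta_rand uniformly within the
    joint limits, one value per joint."""
    return rng.uniform(lower, upper)

# Assumed joint limits of a four-axis example robot (radians).
lower = np.deg2rad([-170.0, -120.0, -170.0, -120.0])
upper = np.deg2rad([170.0, 120.0, 170.0, 120.0])
rng = np.random.default_rng(0)
theta_rand = rand_cs(lower, upper, rng)
```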
  • the motion/posture generation unit 30 provides the generated joint angles to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture is free from collision with any object within the environment (S302).
  • collision checking is carried out using the following function.
  • When a collision is detected, the process returns to S301 to randomly generate the joint angles of the robot 100 again.
  • checking for collision with any object within the environment may be performed using well-known techniques. For instance, the robot 100 and objects within the environment are approximated by simple shapes such as a rectangular solid or a sphere, and these approximated models are stored in the object data storage unit 40. Then, by referring to the stored information, it can be checked whether the robot 100 collides with any object within the environment.
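The approximated-shape collision check described above can be sketched as follows, using spheres as the simple shapes; the sphere centers and radii are hypothetical stand-ins for the approximated models stored in the object data storage unit 40.

```python
import numpy as np

def spheres_collide(centers_a, radii_a, centers_b, radii_b):
    """Report True when any sphere of set A interpenetrates any sphere of
    set B.  The sets stand for the robot 100 and the environment objects,
    each approximated by simple shapes (here spheres)."""
    for ca, ra in zip(centers_a, radii_a):
        for cb, rb in zip(centers_b, radii_b):
            if np.linalg.norm(np.asarray(ca) - np.asarray(cb)) < ra + rb:
                return True
    return False

# Hypothetical approximated models: two spheres for the robot body,
# one sphere each for a nearby and a distant object.
robot_centers, robot_radii = [(0.0, 0.0, 1.0), (0.0, 0.0, 0.5)], [0.3, 0.3]
near_object = ([(0.5, 0.0, 0.7)], [0.4])
far_object = ([(2.0, 0.0, 0.7)], [0.4])
```

A rectangular-solid (box) approximation would follow the same pattern with an overlap test per axis instead of the center-distance test.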
  • the motion/posture generation unit 30 performs local path planning and calculates the posture that satisfies configured tasks (constraints) (S303) .
  • the posture that satisfies the constraints is obtained by the Newton-Raphson method or the feedback method using the prioritized task control, with the generated random posture set as an initial value.
  • An example using the feedback method is herein described.
  • Fig. 8 is a functional block diagram including feedback operation for use in the local path planning.
  • the motion/posture generation unit 30 sets the current posture angle θ_cur as the initial value of the posture angle (θ_cur ← θ_rand).
  • the motion/posture generation unit 30 obtains the current robot base coordinate P_cur and the positions x_1, …, x_n of constraint points at the current posture angle θ_cur, by the robot forward kinematics model.
  • the motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70, at each constraint point, from a difference between a target value and a current value of each constraint point.
  • θ_ref in the target values represents an ideal posture and is set in advance by the user, together with the target tasks x_1^ref, …, x_n^ref.
  • The target tasks x_1^ref, …, x_n^ref are satisfied in order of priority, and a joint angle closest to θ_ref is selected. Specifically, the value is selected so that the square norm of the difference between the target joint angle θ_ref and the current joint angle θ_cur becomes minimum.
  • the gain 75 is an arbitrary adjustment parameter.
  • the motion/posture generation unit 30 then obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70, as shown in the following expression (16).
  • the motion/posture generation unit 30 then confirms whether the difference between the target value and the current value is within a predetermined threshold, at all the constraint points. When the difference is within the predetermined threshold, the processing of the local path planning is ended. Otherwise, θ_cur is updated to the newly obtained θ_out (θ_cur ← θ_out), and the process returns to the processing (1).
  • the motion/posture generation unit 30 provides the joint angles calculated in the local path planning to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture collides with any object within the environment (S304) . As a result of the collision checking, when the robot collides with any object, the process returns to S301 to randomly generate the joint angles of the robot 100 again. Otherwise, the joint angles obtained in the local path planning are output, and the goal posture generation processing is ended. Thus, it is possible to generate the goal posture which satisfies restrictions such as the position of the center of mass and the position of finger tips of the robot 100 and which is free from collision with any object.
  • Figs. 9 and 10 are diagrams illustrating goal posture optimization processing performed by the optimization unit 50.
  • Fig. 9 is a diagram illustrating the processing flow performed by the optimization unit 50
  • Fig. 10 is a functional block diagram showing the feedback method for use in whole body cooperation IK calculation.
  • the optimization unit 50 inputs the current joint angle θ_cur in the joint space (CS) (S401).
  • the goal posture obtained by the motion/posture generation unit 30 is input as θ_cur.
  • the optimization unit 50 calculates a difference between an ideal posture and a current posture as shown in the following expression (17) (S402) .
  • θ_ref represents an ideal posture
  • g_k represents an arbitrarily determined gain
  • the ideal posture can be set by the user in advance, and any posture that looks natural to people may be set as the ideal posture. In the case of an upright posture, for example, a posture in which the shoulders are held straight is ideal.
  • the optimization unit 50 obtains joint angles closest to the ideal posture from the initial posture while satisfying the constraints, by the feedback method using the prioritized task control.
  • Fig. 10 is a functional block diagram including feedback operation for use in optimization processing.
  • the amount of input with respect to a constraint point of a task is set to zero, as shown in the following expression (18), since the posture input in Step S401 already satisfies the constraints.
  • the optimization unit 50 calculates the joint angles of the whole body of the robot 100 by using the whole body cooperation IK calculation module, as shown in the following expression (19) .
  • the optimization unit 50 provides the joint angles, which are calculated using the whole body cooperation IK module, to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture collides with any object within the environment (S404) . As a result of the collision checking, when the robot collides with any object, the joint angles previously calculated in S403 are output, and the processing is ended (S405) .
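The optimization step above (expression (18): zero task input, joints pulled toward the ideal posture θ_ref only through the redundancy of the task) can be sketched as follows. A planar three-link arm stands in for the whole-body model; the small feedback term that holds the constraint point in place, and all numeric values, are assumptions of this sketch.

```python
import numpy as np

L = np.array([0.6, 0.5, 0.4])  # assumed link lengths (planar three-link arm)

def fk(q):
    a = np.cumsum(q)
    return np.array([np.sum(L * np.cos(a)), np.sum(L * np.sin(a))])

def jacobian(q):
    a = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):
        J[0, i] = -np.sum(L[i:] * np.sin(a[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(a[i:]))
    return J

def optimize_posture(theta0, theta_ref, gain=0.2, iters=200):
    """The task input is zero because the input posture already satisfies
    the constraint; the joints are pulled toward the ideal posture
    theta_ref only through the task null space, while a small feedback
    term (an assumption of this sketch) holds the constraint point at
    its original position."""
    q = np.array(theta0, dtype=float)
    x0 = fk(q)                                   # constraint point to preserve
    for _ in range(iters):
        J = jacobian(q)
        J_pinv = np.linalg.pinv(J)
        N = np.eye(3) - J_pinv @ J               # null-space projector
        q = q + J_pinv @ (x0 - fk(q)) + N @ (gain * (theta_ref - q))
    return q

theta0 = np.array([0.8, -0.9, 1.1])              # posture from S401 (assumed)
theta_ref = np.zeros(3)                          # assumed ideal posture
theta_opt = optimize_posture(theta0, theta_ref)
```

The result is a posture measurably closer to θ_ref while the constraint point stays (to numerical precision) where it was, which is the behavior expressions (18) and (19) describe.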
  • Fig. 11 is a diagram illustrating the flow of motion trajectory generation processing performed by the motion/posture generation unit 30.
  • the motion/posture generation unit 30 generates N goal postures in the joint space (CS) (S501, S506) .
  • Here, CS denotes the joint space.
  • the technique illustrated in Fig. 7 may be used.
  • other conventional techniques may be employed to generate the goal posture .
  • In the following steps, random search techniques such as RRT (Rapidly-exploring Random Trees) are used.
  • the motion/posture generation unit 30 generates a joint angle for each axis of the robot 100 randomly (S503) .
  • Let the randomly generated joint angle be θ_rand.
  • θ_rand is a joint angle vector, because values are generated for all the joints.
  • the motion/posture generation unit 30 checks whether the robot 100 is free from collision with any object within the environment (S504) .
  • the collision checking is performed using the following function.
  • P_cur is herein provided as an argument of the function.
  • Although the robot base coordinate (robot standing position) P_cur may be randomly searched, the description will be given assuming that the robot base coordinate is provided by the user.
  • When a collision is detected, the process returns to S503 to randomly generate the joint angle of the robot 100 again.
  • the motion/posture generation unit 30 performs local path planning (S505) .
  • the feedback method using the prioritized task control in the local path planning will be described by way of example.
  • the motion/posture generation unit 30 sets the current posture angle θ_cur as the initial value (θ_cur ← θ_rand).
  • the motion/posture generation unit 30 obtains the current robot base coordinate P_cur and the positions x_1, …, x_n of constraint points at the current posture angle θ_cur, by the robot forward kinematics model.
  • the motion/posture generation unit 30 sets the target values (constraints) obtained in the processing (2) .
  • Operation Example 1: A case where the right hand reaches an arbitrary goal position from the initial posture in which the robot 100 holds nothing in its hands.
  • Task 1: The position of the center of mass (the user designates a three-dimensional position), as shown in the following expression (20).
  • Task 2: The position and posture of the right hand, as shown in the following expression (21) (reaching a six-dimensional goal position (position and posture)).
  • Task 1: The position of the center of mass (the user designates a three-dimensional position).
  • Task 2: (none).
  • Operation Example 2: A case where the robot 100 places a glass at a certain goal position from the initial posture in which the robot 100 holds the glass of water.
  • Task 1: The position of the center of mass (the user designates a three-dimensional position), as shown in the following expression (22).
  • Task 2: The right hand (six-dimensional reaching goal position), as shown in the following expression (23).
  • Task 1: The position of the center of mass (the user designates a three-dimensional position).
  • Task 2: The position and posture of the right hand (set as horizontal constraints; only two dimensions of the three-dimensional posture are constrained. In order to keep the glass horizontal, the target value is set to "0" in two dimensions, as shown in the following expression (24)).
  • the motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70, at each constraint point, from the difference between the target value and the current value of each constraint point (finger tips, center of mass, or the like).
  • the motion/posture generation unit 30 obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70.
  • the motion/posture generation unit 30 confirms whether the difference between the target value and the current value is within the predetermined threshold, at all the constraint points. When the difference is within the predetermined threshold, the processing of the local path planning is ended. Otherwise, θ_cur is updated to the newly obtained θ_out (θ_cur ← θ_out), and the process returns to the processing (1).
  • the motion/posture generation unit 30 checks whether the joint angles obtained in the local path planning are free from collision with any object (S506).
  • the collision checking is performed using the following function. Note that when a collision occurs, the process returns to S503 to randomly generate the joint angles of the robot 100 again.
  • the motion/posture generation unit 30 determines whether the target value of each task (each constraint) reaches a final target value (S507) . When the target value reaches the final target value, it is determined that a path from the initial posture has been found, and the processing is ended.
  • In the case where the target value has not reached the final target value, the target value is still an intermediate target value. Accordingly, nodes and edges are added to a tree constructed by the RRT method (S508), and the process returns to S503 to continue searching.
  • The final target value is, for example, the final position and posture of the finger tips.
  • a trajectory to the final target position and posture can be generated while avoiding the collision with any object.
  • a most basic example of the RRT method is herein described.
  • many other techniques such as a method that uses RRT-Connect which forms branches from both the start and the goal, and a method that prepares a plurality of goal postures in advance can be employed.
  • the motion/posture generation unit 30 searches the joint space (CS) for a path from a start node to a goal node.
  • a node currently generated by the motion/posture generation unit 30 is assumed as a node 81.
  • a node N is created using information about robot base coordinates, joint angles, and tasks (constraints) , and is expressed by the following expression (25) .
  • a tree constructed by the RRT method is an assembly of nodes which are linked by edges from a start node and edges which are formed between the nodes.
  • the motion/posture generation unit 30 links the currently generated node 81 and a point (node), which is nearest to the node 81 among the nodes that belong to the tree, with an edge.
  • When a distance "d" between the nodes exceeds a predetermined threshold set in advance, one or more nodes are created in the middle of the edge leading to the node.
  • Various techniques can be employed as a method for obtaining the distance "d". For instance, as the simplest method, a square norm of a difference between the joint angles θ_out at the nodes may be employed, as shown in the following expression (26).
  • the motion/posture generation unit 30 uses information about previous and next nodes (nodes 83 and 84) . Details are given below.
  • the motion/posture generation unit 30 obtains a joint angle at the created node 85, as shown in the following expression (27).
  • the motion/posture generation unit 30 executes the local path planning with the obtained joint angle set as an initial input.
  • the motion/posture generation unit 30 checks if a collision with any object occurs, and when no collision occurs, the nodes are registered in the tree. When a collision occurs, all the subsequent nodes (including the node 84) which are created in the middle of the edge leading to the node 84 are discarded.
  • the motion/posture generation unit 30 tries to create an edge between the node 86 and the goal node.
  • When the edge can be created, it is determined that a path can be created, and the processing is ended.
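The tree construction described above (nearest node under the square-norm distance of expression (26), with intermediate nodes created when an edge exceeds the threshold) can be sketched as follows; the collision check and the local path planning call performed per node are omitted, and the threshold and joint ranges are assumptions of this sketch.

```python
import numpy as np

def distance(qa, qb):
    """Expression (26): square norm of the joint-angle difference."""
    d = np.asarray(qa) - np.asarray(qb)
    return float(d @ d)

def nearest(tree, q):
    """Index of the tree node nearest to q under the distance above."""
    return min(range(len(tree)), key=lambda i: distance(tree[i]["q"], q))

def extend(tree, q_rand, d_max=0.25):
    """Link q_rand to its nearest tree node.  When the edge is longer
    than the threshold d_max, one or more intermediate nodes are created
    in the middle of the edge (collision checking and local path
    planning, performed per node in the embodiment, are omitted here)."""
    i_near = nearest(tree, q_rand)
    q_near = tree[i_near]["q"]
    length = np.sqrt(distance(q_near, q_rand))
    n_seg = max(1, int(np.ceil(length / d_max)))
    parent = i_near
    for k in range(1, n_seg + 1):
        q_new = q_near + (q_rand - q_near) * (k / n_seg)
        tree.append({"q": q_new, "parent": parent})
        parent = len(tree) - 1
    return parent

tree = [{"q": np.zeros(3), "parent": None}]      # start node
rng = np.random.default_rng(1)
for _ in range(20):                              # assumed joint ranges [-1, 1]
    extend(tree, rng.uniform(-1.0, 1.0, 3))
```

Each node keeps a parent index, so a path from any node back to the start node can be recovered by following the parent links, as in S507.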
  • Fig. 15 is a diagram illustrating the flow of motion trajectory optimization processing performed by the optimization unit 50.
  • the optimization unit 50 obtains a trajectory closest to the ideal posture from a reference motion trajectory (motion posture trajectory) .
  • the optimization unit 50 inputs a robot motion trajectory (joint angle trajectory), which is obtained before the optimization, in the joint space (CS) (S601) .
  • the motion trajectory obtained by the motion/posture generation unit 30 is input .
  • the optimization unit 50 divides the input joint angle trajectory into M segments based on a predetermined unit (S602) .
  • For instance, a constraint-point moving distance of 2 to 5 cm or a joint angle range of 1 to 5 degrees is set as the predetermined unit.
  • the optimization unit 50 performs optimization processing on each node (joint angle) among divided nodes m (0 ⁇ m ⁇ M) (S603) .
  • the optimization unit 50 repeats the optimization processing on M nodes (S604, S605) .
  • an interpolation between the divided nodes is carried out.
  • linear interpolation or prioritized task control may be employed so as to perform the interpolation in more detailed units.
  • In this way, the trajectory generated using random search techniques can be made smoother.
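The interpolation between divided nodes can be sketched with plain linear interpolation as follows; the node values and the number of interpolation steps per segment are assumptions made for illustration.

```python
import numpy as np

def interpolate_nodes(nodes, steps_per_segment=10):
    """Linearly interpolate between the divided nodes to obtain a finer,
    smoother joint-angle trajectory."""
    nodes = [np.asarray(n, dtype=float) for n in nodes]
    traj = [nodes[0]]
    for a, b in zip(nodes[:-1], nodes[1:]):
        for k in range(1, steps_per_segment + 1):
            traj.append(a + (b - a) * (k / steps_per_segment))
    return np.vstack(traj)

# Hypothetical divided nodes (two joints, three nodes).
nodes = [[0.0, 0.0], [0.5, -0.2], [1.0, 0.3]]
traj = interpolate_nodes(nodes)
```

Prioritized task control could replace the linear rule inside the inner loop when the interpolated postures must also respect the constraints.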
  • A joint angle trajectory reaching from the initial posture to the final target posture is generated in the joint space (CS) by combining a search algorithm based on sampling with whole body cooperation IK calculation using task control with priorities. Therefore, by providing only the final target value of a task (for example, the final position and posture of the finger tips), a joint angle trajectory satisfying the constraint can be planned. Further, when a joint angle trajectory is to be calculated, a joint angle trajectory reaching the final target value can be planned while avoiding a collision with an object within the environment by performing a collision check between the posture of the robot 100 obtained from the calculated joint angles and the objects within the environment. Furthermore, the collision-free posture or trajectory can be optimized by combining the whole body cooperation IK calculation with a mathematical optimization technique and updating the joint angle so as to get closer to an ideal posture at the final target value.
  • priority is set for a plurality of constraint points, so that when a plurality of constraint positions that cannot be simultaneously satisfied are input, calculation can be performed such that the constraints are satisfied in accordance with the priority.
  • When no priority is set, only results that do not satisfy the constraints can be obtained in most cases.
  • priority may be assigned in a similar manner to that of the present invention so that the risk that the robot could topple down can be reduced.
  • the present invention is not limited to the above-described exemplary embodiments and can be modified as appropriate without departing from the spirit of the present invention.
  • the optimization unit 50 may perform optimization processing on a trajectory generated by a trajectory creating method in the related art.
  • the present invention is applicable to a robot control system that makes, for example, a motion plan of a robot to which a plurality of tasks are provided, a motion data creation apparatus and its creating method.


Abstract

To plan a trajectory of joint angles reaching the target value by providing only the final target value of a task. A robot control system in accordance with the present invention includes a motion data creation apparatus and the robot. When the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.

Description

DESCRIPTION
Title of Invention
ROBOT CONTROL SYSTEM, MOTION DATA CREATION APPARATUS AND ITS CREATING METHOD
Technical Field
[0001]
The present invention relates to a robot control system that makes, for example, a motion plan of a robot to which a plurality of tasks are provided, a motion data creation apparatus and its creating method.
Background Art
[0002]
To realize a useful humanoid robot for environments such as a home and a hospital, the robot has to be able to operate autonomously. In order to autonomously create posture or motion of a robot, the created posture or motion must be free from collisions with objects, except during phases of physical interaction.
[0003]
As a technology to optimize the motion creation of a robot and to optimize the generated motion, technologies disclosed in Non-patent documents 1 to 5 have been known. Further, Non-patent document 6 and Patent documents 1 and 2 disclose technologies using task control with priorities.
Citation List
Patent Literature
[0004]
PTL 1 : Japanese Unexamined Patent Application Publication No.
2008-36761
PTL 2 : Japanese Unexamined Patent Application Publication No.
2009-214268
Non Patent Literature
[0005]
NPL 1: Marcelo Kallmann, Amaury Aubel, Tolga Abaci and Daniel Thalmann, "Planning Collision-Free Reaching Motions for Interactive Object Manipulation and Grasping", EUROGRAPHICS 2003.
NPL 2: J.J. Kuffner, S. Kagami, K. Nishiwaki, M. Inaba, and H. Inoue, "Dynamically-stable motion planning for humanoid robots", Autonomous Robots (special issue on Humanoid Robotics), 12, 2002.
NPL 3: Dmitry Berenson, Siddhartha S. Srinivasa, Dave Ferguson, Alvaro Collet, James J. Kuffner, "Manipulation Planning with Workspace Goal Regions", ICRA 2009.
NPL 4: Marc Toussaint, Michael Gienger and Christian Goerick, "Optimization of sequential attractor-based movement for compact behaviour generation", HUMANOIDS 2007.
NPL 5: Hiroki Sanada, Eiichi Yoshida and Kazuhito Yokoi, "Passing Under Obstacles with Humanoid Robots", IROS 2007.
NPL 6: Nikolaus Vahrenkamp, Dmitry Berenson, Tamim Asfour, James Kuffner and Rudiger Dillmann, "Humanoid Motion Planning for Dual-Arm Manipulation and Re-Grasping Tasks", IROS 2009.
Summary of Invention
Technical Problem
[0006]
However, in the related technology, in the case of motion to grasp an object, for example, values for all the joints in the final posture that the robot should ultimately take to grasp the object are provided. That is, to realize the final position/posture of the finger tips, the portions other than the finger tips can take various postures. However, in the related art, values for all the joints, which are strictly specified by the user, are provided. Then, it is calculated how the robot arrives at the provided final posture from the current posture. Further, it is not ensured that all the tasks are necessarily satisfied during the process of calculating the final posture.
[0007]
Therefore, a technology capable of creating motion by providing only values of the final position/posture of the finger tips, instead of providing those for all the joints in the target posture, has been desired. This means that the robot is constrained only in the position/posture of the finger tips and that the other joints can take any position/posture without restraint. Therefore, it is based on the conception that providing only the target position/posture of the finger tips is sufficient to generate the motion.
[0008]
Accordingly, an exemplary object of the present invention is to provide a robot control system capable of planning a trajectory of joint angles reaching the target final position/posture by providing only the final target value of the task (for example, the final position and posture of the finger tips), a motion data creation apparatus and its creating method.
Solution to Problem
[0009]
A robot control system in accordance with the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a
Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
[0010]
In this way, it is possible to plan a trajectory of a joint angle reaching the target final position/posture by providing only the final target value of the task.
[0011]
Further, the prioritized task may be set for a plurality of portions or the center of mass of the robot.
[0012]
Further, the motion data creation apparatus may be configured to perform the whole body cooperation calculation such that the difference between the target value of a constraint point specified by the prioritized task and the current value converges by generating a third node from a first node extracted in a random manner or in a predetermined manner and a second node that is a node on the trajectory data and closest to the first node, and using a constraint point obtained from a joint angle corresponding to the third node as an initial value.
[0013]
Further, the motion data creation apparatus may be configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges. In this way, an optimized posture can be realized for each point on the trajectory.
[0014]
Further, the motion data creation apparatus may further include an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot. With that, the motion data creation apparatus may be configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data. In this way, a trajectory avoiding a collision with an object can be obtained.
[0015]
A robot control system in accordance with another aspect of the present invention includes: a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and the robot that moves according to a joint angle created by the motion data creation apparatus; wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a
Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
[0016]
Further, the prioritized task may be set for a plurality of portions or the center of mass of the robot.
[0017]
Further, the motion data creation apparatus may be configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[0018]
Further, the motion data creation apparatus may further include an object data storage unit that stores data including shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot. With that, the motion data creation apparatus may be configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the joint space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.
[0019]
A motion data creation apparatus in accordance with the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a joint space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of the robot.
[0020] Further, a motion data creation apparatus in accordance with another aspect of the present invention is a motion data creation apparatus that creates a joint angle to cause a robot to move, wherein the motion data creation apparatus calculates, in a joint space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of the robot.
[0021]
Further, a motion data creation method in accordance with the present invention is a motion data creation method for creating a joint angle to cause a robot to move, wherein when trajectory data of a joint angle reaching from an initial posture of the robot to a target posture is searched for in a joint space by using a search technique based on sampling, a joint angle satisfying a constraint is calculated by using a constraint point obtained from a joint angle to be newly added to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of the robot.
[0022]
Further, a motion data creation method in accordance with another aspect of the present invention is a motion data creation method that creates a joint angle to cause a robot to move, wherein a joint angle satisfying a constraint is calculated in a joint space by using a constraint point obtained from a
randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
Advantageous Effects of invention
[0023]
In accordance with the present invention, a robot control system capable of planning a trajectory of joint angles reaching the final target position/posture by providing only the final target value of the task (for example, the final position and posture of the finger tips), a motion data creation apparatus, and its creating method can be provided.
Brief Description of Drawings
[0024]
[Fig. 1]
Fig. 1 is a configuration diagram of a robot control system in accordance with a first exemplary embodiment;
[Fig. 2]
Fig. 2 is a diagram of a model of a robot in accordance with a first exemplary embodiment;
[Fig. 3]
Fig. 3 is a diagram for explaining a relation between manipulable space and redundant space;
[Fig. 4]
Fig. 4 is a functional configuration diagram of a whole body cooperation IK calculation module in accordance with a first exemplary embodiment;
[Fig. 5]
Fig. 5 is a process flowchart for a whole body cooperation IK calculation module in accordance with a first exemplary embodiment;
[Fig. 6]
Fig. 6 is an overall flowchart of motion data creation processing in accordance with a first exemplary embodiment;
[Fig. 7]
Fig. 7 is a process flowchart for a motion/posture generation unit in accordance with a first exemplary embodiment;
[Fig. 8]
Fig. 8 is a functional block diagram including feedback calculation in accordance with a first exemplary embodiment;
[Fig. 9]
Fig. 9 is a process flowchart for an optimization unit in accordance with a first exemplary embodiment;
[Fig. 10]
Fig. 10 is a functional block diagram showing a
configuration of feedback calculation in accordance with a first exemplary embodiment;
[Fig. 11]
Fig. 11 is a flowchart of motion trajectory generation processing by a motion/posture generation unit in accordance with a first exemplary embodiment;
[Fig. 12]
Fig. 12 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment;
[Fig. 13]
Fig. 13 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment;
[Fig. 14]
Fig. 14 is a diagram for explaining a specific example of motion trajectory generation processing by an RRT method in accordance with a first exemplary embodiment; and
[Fig. 15]
Fig. 15 is a flowchart for motion trajectory optimization by an optimization unit in accordance with a first exemplary embodiment.
Description of Embodiments
[0025]
First Embodiment
Hereinafter, the embodiments of the present invention will be described with reference to the drawings. Fig. 1 shows a configuration diagram of a robot control system according to the first embodiment. As shown in Fig. 1, a control system 1 includes an interface unit 10, a motion data creation unit 20, a robot control unit 60, and a robot 100. Each component is realized by hardware or software included in a computer. In the present embodiment, the motion data creation unit 20 is mainly realized by software, and the robot control unit 60 and the robot 100 are mainly realized by hardware.
[0026]
The interface unit 10 is a user interface or a data interface to receive command values for the robot 100. In the present embodiment, a plurality of tasks are provided as the command values. Note that, in the following description, a task means a target to be controlled in creating motion data. For example, a task may be set for a portion of the robot 100 whose motion is to be controlled, or for the position of the center of mass of the robot 100. The task is provided as coordinate values of up to six dimensions indicating the position and the posture of the portion which is the control target.
[0027]
The robot control unit 60 calculates a torque control value to drive each joint using a target joint angle generated by the motion data creation unit 20 and a current joint angle measured by an encoder (not shown). The robot control unit 60 includes a motor driver (not shown) and a controller thereof (not shown), and drives each joint by operating an actuator in accordance with the calculated torque control value. Accordingly, the motion in accordance with the motion data generated by the motion data creation unit 20 is realized.
[0028]
The robot 100 executes the operation based on motion data generated by the motion data creation unit 20. In the present embodiment, it is assumed that the robot 100 is a legged mobile robot. Further, the robot 100 includes at least one arm in order to achieve the task. Further, as an example, the robot 100 is subject to the restriction of keeping its balance and to restrictions imposed by a plurality of tasks.
[0029]
Fig. 2 is a model diagram showing the robot 100 with joints and links connecting the joints. The robot 100 includes left and right two leg links LR and LL, and left and right two arm links AR and AL. The robot 100 is supplied with the motion data generated by the motion data creation unit 20 so as to be able to walk with the leg links LR and LL and execute operations such as carrying an object with the arm links AR and AL. For example, when the robot 100 walks, the position of the center of mass of the robot 100 is specified and the task is set for a swing leg and a stance leg. Further, the task is set for fingertips when the robot carries out work with its hand, for example.
[0030]
The robot 100 is formed of two leg links LR and LL, two arm links AR and AL, and a body trunk BD to which these leg links and arm links are connected.
[0031]
A neck joint that supports a head 101 includes a joint of roll direction 102, a joint of pitch direction 103, and a joint of yaw direction 104. The right arm link AR includes a shoulder joint of pitch direction 105, a shoulder joint of roll direction 106, an upper arm joint of yaw direction 107, an elbow joint of pitch direction 108, and a wrist joint of yaw direction 109, and a hand 141R that functions as an end-effector for holding and carrying an object is provided at the end of the right arm link AR. Note that the mechanism of the hand 141R may be determined by the shape or the type of the object to be held, and the hand may have a multi-degree-of-freedom, multi-joint configuration having a plurality of fingers, for example.
[0032]
The left arm link AL has a configuration similar to that of the right arm link AR. More specifically, the left arm link AL includes five joints 110 to 114, and includes a hand 141L at its end.
[0033]
The right leg link LR includes a waist joint of yaw direction 117, a waist joint of pitch direction 118, a waist joint of roll direction 119, a knee joint of pitch direction 120, an ankle joint of pitch direction 121, and an ankle joint of roll direction 122.
A sole 130 that contacts with a floor surface is provided in the lower part of the ankle joint 122.
[0034]
The left leg link LL has a configuration similar to that of the right leg link LR. More specifically, the left leg link LL includes six joints 123 to 128, and includes a sole 131 at its end.
[0035]
The body trunk 143 includes a joint of roll direction 115 and a joint of pitch direction 116.
[0036]
The motion data creation unit 20 includes a motion/posture generation unit 30, an object data store unit 40, and an optimization unit 50. The motion data creation unit 20 also includes a module that performs the whole body cooperation IK calculation described later (whole body cooperation IK control unit 70), and the motion/posture generation unit 30 and the optimization unit 50 call this whole body cooperation IK module.
[0037]
The motion/posture generation unit 30 generates a posture and a motion in which the robot 100 does not collide with objects in the environment, using the tasks input from the interface unit 10 and the information stored in the object data store unit 40. In the object data store unit 40, the shape/texture data regarding the robot 100 and the objects in the environment, and the geometrical data of the robot 100, are stored.
[0038]
The motion/posture generation unit 30 has a characteristic feature in that, regarding the generation of the collision-free posture, it combines a method of sampling configurations randomly or in a predefined manner with an inverse kinematics calculation under balance and task restrictions (local path planning - whole body cooperation IK calculation). The whole body cooperation IK calculation means the inverse kinematics calculation with prioritized task restrictions for a redundant-degree-of-freedom system using the null space of the Jacobian matrix. The whole body cooperation IK calculation outputs configurations that satisfy balance and/or task restrictions. Note that the details of the whole body cooperation IK will be described later. The whole body cooperation IK is a method that has been developed over several years and has now been applied to humanoid robots.
[0039]
Further, the motion/posture generation unit 30 has a characteristic feature in that, regarding the generation of the collision-free motion, it combines a configuration search algorithm based on random or predefined sampling with the inverse kinematics calculation under balance and/or task restrictions (local path planning - whole body cooperation IK calculation). Further, the motion/posture generation unit 30 is able to combine the above method with two-dimensional path planning for further applications. Note that, as the sampling-based search algorithm, a method that is widely employed for robot path or motion planning may be used, such as RRT or RRT-Connect (http://msl.cs.uiuc.edu/rrt/).
[0040]
The optimization unit 50 has a characteristic feature in that it combines a method of the whole body cooperation IK calculation and a mathematical optimization method regarding the optimization of the collision-free posture. As the mathematical optimization method, a steepest descent method, a gradient descent method or the like can be employed. Further, the optimization unit 50 may perform optimization of motion trajectory (joint angular trajectory) for further applications.
[0041]
Hereinafter, the prioritized task control is described first and thereafter the whole body cooperation IK calculation will be described for the purpose of better understanding of the present invention. First, examples, properties, and
characteristics of the prioritized task control will be described in brief, and a method of solving the task control expression when there are a plurality of tasks will be described.
[0042]
The examples of the prioritized task control when there is one task include a case of specifying the position of the center of mass and setting the tasks for the swing leg and the stance leg while the robot is walking, and a case of setting the task for the fingertips while the robot is operating with its hands.
[0043]
One of the properties of the prioritized task control is that the degree of freedom of the control increases as the redundant degree of freedom (the number of joints of the robot) increases. The redundant degree of freedom means that, for example, when the movement of an arm having redundant degrees of freedom is considered on a two-dimensional plane, there are a plurality of trajectories of the arm even when the position and the posture of the final joint are fixed. Other properties of the prioritized task control are that a Jacobian matrix exists for each task, and that the tasks can be prioritized by means of the least squares solution. Further, it is also known that the calculation load increases with the number of joints and the number of tasks.
[0044]
Further properties of the prioritized task control are that priority orders can be assigned to the tasks and that a task can be set for an arbitrary portion. In the prioritized task control, all the tasks are satisfied when there is sufficient redundancy, while only the tasks having higher priority orders are satisfied when the redundancy is insufficient. Any point (center of mass, fingertips, or the like) can be controlled as long as its Jacobian matrix can be obtained.
[0045]
Now, the relation between the manipulable space and the redundant space will be described. As shown in Fig. 3, the space of the n-dimensional joint angle θ and the space of the m-dimensional task r are considered (where n > m). The space which can be reached by the joint angular velocity (denoted by θ with a dot in Fig. 3, which means the derivative value of θ) based on the current joint angle θ is defined as the manipulable space, and the joint angular space whose motion leaves the task in the task space unchanged is defined as the redundant space (null space). Then, the following relation is established for the number of joints:
Number of joints = (dimension of manipulable space) + (dimension of redundant space)
[0046]
Further, the least squares solution will be described in brief. First, regarding the relation between vector y and vector x, the following expression (1) is established.
y = Ax, x ∈ R^n, y ∈ R^m, n > m ...(1)
The expression (1) is replaced with the optimization problem shown below.
||y − Ax|| = 0 → min ||y − Ax||
Then, the solution can be obtained from the following expression (2).
x = A#y + (I_n − A#A)z, A# = A^T(AA^T)^{-1} ...(2)
The expression gives a solution even when z is set to an arbitrary value.
[0047]
Among the solutions of the expression (2), the one that satisfies min ||x|| is obtained by the following expression (3).
x = A#y ...(3)
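As a concrete illustration, the minimum-norm and general least squares solutions above can be sketched in NumPy. The matrix A and vector y below are arbitrary examples, and the function names are ours, not part of the embodiment:

```python
import numpy as np

def right_pseudo_inverse(A):
    """A# = A^T (A A^T)^{-1} for a full-row-rank, wide matrix A (expression (2))."""
    return A.T @ np.linalg.inv(A @ A.T)

def min_norm_solution(A, y):
    """Minimum-norm x satisfying y = A x (expression (3))."""
    return right_pseudo_inverse(A) @ y

def general_solution(A, y, z):
    """Family of solutions x = A# y + (I - A# A) z (expression (2))."""
    n = A.shape[1]
    A_pinv = right_pseudo_inverse(A)
    return A_pinv @ y + (np.eye(n) - A_pinv @ A) @ z

A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])   # m = 2 equations, n = 3 unknowns
y = np.array([2.0, 3.0])
x0 = min_norm_solution(A, y)
x1 = general_solution(A, y, z=np.array([5.0, -1.0, 0.5]))
# Both satisfy y = A x; x0 additionally has the smallest norm.
```

Any choice of z yields a valid solution, which is exactly the redundancy exploited by the prioritized task control below.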
[0048]
Next, the above technique is applied to a method of obtaining the joint angle using a Jacobian matrix when there is one task to be set. First, the Jacobian matrix J, the task (constraint), and the joint angle have the relation shown in the following expression (4). In the expression, ẋ (x with a dot) indicates the task velocity and is a vector with r elements. Further, J(θ) is a Jacobian matrix of r×c. Further, θ̇ (θ with a dot) indicates the joint angular velocity and is a vector with c elements.
ẋ = J(θ)θ̇ ...(4)
[0049]
Then, the task (constraint) is input into the following expression (5), obtained by rearranging the above equation (4), so as to obtain the joint angle. Note that J# indicates a pseudo-inverse matrix of the Jacobian matrix J, which is a matrix of c×r. Further, by integrating the joint angular velocity obtained by the following expression (5), the joint angle is calculated.
θ̇ = J#ẋ ...(5)
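The single-task differential IK of expressions (4) and (5) can be sketched for an assumed planar two-link arm. The link lengths, feedback gain, step size, and target position are illustrative choices, not values from the embodiment:

```python
import numpy as np

L1, L2 = 1.0, 1.0   # assumed link lengths

def fk(theta):
    """Forward kinematics: end-effector position of a 2-link planar arm."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([L1 * np.cos(t1) + L2 * np.cos(t12),
                     L1 * np.sin(t1) + L2 * np.sin(t12)])

def jacobian(theta):
    """Analytic Jacobian J(theta) of expression (4)."""
    t1, t12 = theta[0], theta[0] + theta[1]
    return np.array([[-L1 * np.sin(t1) - L2 * np.sin(t12), -L2 * np.sin(t12)],
                     [ L1 * np.cos(t1) + L2 * np.cos(t12),  L2 * np.cos(t12)]])

def solve_ik(theta, x_target, gain=1.0, dt=0.05, steps=200):
    """Drive the end-effector toward x_target by integrating expression (5)."""
    for _ in range(steps):
        x_dot = gain * (x_target - fk(theta))                # task velocity toward goal
        theta_dot = np.linalg.pinv(jacobian(theta)) @ x_dot  # expression (5)
        theta = theta + theta_dot * dt                       # integrate joint velocity
    return theta

theta = solve_ik(np.array([0.3, 0.5]), np.array([1.2, 0.8]))
```

Integrating the joint angular velocity step by step, as in the last line of the loop, corresponds to the integration mentioned after expression (5).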
[0050]
When there are a plurality of tasks, the redundant space is employed in the prioritized task control so as to satisfy the plurality of constraints. In short, when there is redundancy, it is possible to satisfy other restrictions while satisfying the constraints. In order to bring the joint angle θ close to the target joint angle θ_ref using this redundancy, k is defined as shown in the following expression (6). Note that g is a feedback gain.
k = g(θ_ref − θ_cur) ...(6)
[0051]
Note that k may be specified by using a potential method expressed by expression (7) shown below, where the potential P is obtained by formulating the restrictions that should be satisfied.
k = −∂P(θ)/∂θ ...(7)
[0052]
Next, the method of solving the prioritized task control will be described.
Using the least squares solution stated above, the first task is given by the following expression (8).
ẋ₁ = J₁θ̇
θ̇ = J₁#ẋ₁ + (I − J₁#J₁)k₁ ...(8)
Then, the second task is given by the following expression (9), in which k₁ is solved as the least squares solution.
ẋ₂ = J₂θ̇ = J₂J₁#ẋ₁ + J₂(I − J₁#J₁)k₁
k₁ = (J₂(I − J₁#J₁))#(ẋ₂ − J₂J₁#ẋ₁) ...(9)
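A minimal sketch of this two-priority scheme, assuming exact pseudo-inverses; the function name and the example Jacobians are ours, chosen only so that both one-dimensional tasks fit inside four joints:

```python
import numpy as np

def prioritized_two_tasks(J1, x1_dot, J2, x2_dot):
    """Two-task priority scheme of expressions (8)-(9):
    the secondary task is solved in the null space of the primary task."""
    n = J1.shape[1]
    J1p = np.linalg.pinv(J1)
    N1 = np.eye(n) - J1p @ J1          # null-space projector of task 1
    # k1 as the least squares solution of expression (9):
    k1 = np.linalg.pinv(J2 @ N1) @ (x2_dot - J2 @ J1p @ x1_dot)
    return J1p @ x1_dot + N1 @ k1      # expression (8)

J1 = np.array([[1.0, 0.0, 0.0, 1.0]])  # 1-dim primary task, 4 joints
J2 = np.array([[0.0, 1.0, 1.0, 0.0]])  # 1-dim secondary task
theta_dot = prioritized_two_tasks(J1, np.array([1.0]), J2, np.array([2.0]))
```

With four joints and two one-dimensional tasks there is enough redundancy here for both task velocities to be met exactly; when redundancy is insufficient, the pseudo-inverse yields the least squares compromise for the lower-priority task only.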
[0053]
Summing up the solution method stated above, the priority level of each task, its Jacobian matrix, and its task position are input as shown below. Note that the robot-base coordinate which is the standard of each task x₁, ..., xₙ is indicated by p_cur.
priority level 1: J₁, x₁
priority level 2: J₂, x₂
...
priority level n: Jₙ, xₙ
Then, as shown in the following expression (10), the Jacobian matrix integrated according to the priority level of each task is obtained.
Ĵᵢ = JᵢNᵢ₋₁ ...(10)
In the equation (10), the null-space projector Nᵢ is defined by the following expression (11), with N₀ = I.
Nᵢ = Nᵢ₋₁(I − Ĵᵢ#Ĵᵢ) ...(11)
[0054]
Next, the method of solving the task control expression when there are a plurality of tasks will be described.
First, the joint angular velocity contribution of each task, considering its priority level, is obtained with the following expression (12), where θ̇₀ = 0.
θ̇ᵢ = θ̇ᵢ₋₁ + Ĵᵢ#(ẋᵢ − Jᵢθ̇ᵢ₋₁) ...(12)
Then, the joint angular velocity obtained in the expression (12) is substituted into the following expression (13), so as to obtain the joint angular velocity considering the whole robot.
θ̇ = θ̇ₙ + Nₙk ...(13)
Note that, in the expression (13), k is a derivative value (joint angular velocity) of the joint angle of the reference robot posture. For example, when θ_ref is the posture angular vector that is desired to be kept, θ_cur is the current posture angular vector, and g_k (0 < g_k < 1) is an arbitrary gain, k can be determined from the following expression (14).
k = g_k(θ_ref − θ_cur) ...(14)
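The recursive resolution of expressions (12) to (14) can be sketched as follows. The task Jacobians, targets, and gain are illustrative assumptions, and the null-space recursion follows the standard formulation rather than the exact notation of the original figures:

```python
import numpy as np

def prioritized_ik(jacobians, task_dots, theta_ref, theta_cur, g_k=0.5):
    """Prioritized joint velocity per expressions (12)-(14):
    resolve tasks in priority order, then add the posture term through
    the remaining null space."""
    n = jacobians[0].shape[1]
    theta_dot = np.zeros(n)
    N = np.eye(n)                              # accumulated null-space projector
    for J, x_dot in zip(jacobians, task_dots):
        Jn = J @ N                             # Jacobian restricted to remaining redundancy
        Jn_pinv = np.linalg.pinv(Jn)
        theta_dot = theta_dot + Jn_pinv @ (x_dot - J @ theta_dot)   # expression (12)
        N = N @ (np.eye(n) - Jn_pinv @ Jn)     # shrink remaining redundancy
    k = g_k * (theta_ref - theta_cur)          # expression (14)
    return theta_dot + N @ k                   # expression (13)

J1 = np.array([[1.0, 0.0, 0.0, 0.0, 1.0]])
J2 = np.array([[0.0, 1.0, 0.0, 1.0, 0.0]])
theta_dot = prioritized_ik([J1, J2], [np.array([1.0]), np.array([0.5])],
                           theta_ref=np.zeros(5), theta_cur=np.ones(5))
```

Because the posture term is projected through Nₙ, it pulls the joints toward θ_ref without disturbing any of the higher-priority task velocities.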
[0055]
Now, the whole body cooperation IK calculation will be described in detail with reference to Figs. 4 and 5. Fig. 4 is a diagram showing the functional configuration of the whole body cooperation IK calculation module. Fig. 5 is a diagram showing the process flow of the whole body cooperation IK calculation module. In the whole body cooperation IK calculation module, in the joint space (CS: Configuration Space), the posture that satisfies the tasks (constraints) is obtained from the provided initial value of the joint angle by the Newton-Raphson method or the feedback method. In the present embodiment, the case of employing the feedback method in the whole body cooperation IK calculation is described as an example.
[0056]
Referring first to Fig. 4, the prioritized task control using the feedback method will be described. Fig. 4 is a functional configuration diagram of the whole body cooperation IK calculation module. The whole body cooperation IK control unit 70 is supplied with a robot-base coordinate p_cur, a current joint angle θ_cur of the robot 100, an arbitrary parameter k, and a plurality of tasks with prioritization (task 1 to task n), and performs feedback calculation based on the difference between the current value and the target value of the constraint points (center of mass, fingertips, and the like) defined by the tasks, so as to output the joint angle θ_out that satisfies the constraints.
In summary, the initial position is indicated by the robot-base coordinate and the initial posture by the current joint angle of the robot 100, and these values are then updated by the feedback calculation so as to satisfy the constraint points. Note that each task is provided by up to six-dimensional coordinate values including the position and the posture, and the velocity which is the derivative value of these values is input. Further, in the present embodiment, the position of the center of mass is also input in the same manner as the other tasks (constraints) for the purpose of simplicity of illustration. However, it may be input as the Jacobian matrix reference coordinate, not as a task.
[0057]
The joint angular velocity calculation unit 71 calculates the joint angular velocity (denoted by θ with a dot in the drawings), which is the derivative value of the joint angle. The joint angular velocity calculation unit 71 substitutes the Jacobian matrix of each task, an arbitrary parameter k, and the velocity input of each task into the control expression for a plurality of tasks, so as to calculate the joint angular velocity. The joint angular velocity calculated by the joint angular velocity calculation unit 71 is integrated by an integrator 72 so as to obtain the joint angle. The joint angle that is obtained is input to a real machine (the robot control unit 60 and the robot 100).
[0058]
The Jacobian matrix calculation unit 74 provides a unit velocity to each joint, so as to calculate the Jacobian matrix for each task. Note that a known method may be employed for calculating the Jacobian matrix, and its detailed description is omitted here.
[0059]
A robot forward kinematics model 73 performs the forward kinematics calculation based on the robot geometrical
configuration, so as to calculate the values of the position and the posture of each task from the joint angle obtained by the integrator 72.
[0060]
In order to operate the robot 100, only the target joint angle θ_out needs to be provided. However, if information parameters of the robot 100 such as the joint angle θ_out and each task position x₁, x₂, ..., xₙ are output as well, it is possible to use these information parameters when more complicated control needs to be performed, for example, compliance control using a force sensor.
[0061]
Referring next to Fig. 5, the processing flow of the whole body cooperation IK calculation module will be described.
First, the whole body cooperation IK control unit 70 uses a forward kinematics model (numerical model) of the robot 100 to provide a unit velocity to each joint of the numerical model, thereby calculating a Jacobian matrix (S101). Next, the whole body cooperation IK control unit 70 acquires and determines input data such as the velocity of a task and an arbitrary parameter (S102). Then, the whole body cooperation IK control unit 70 calculates the expression corresponding to the prioritized task control with redundant degrees of freedom, and calculates a joint angular velocity (S103). Then, the whole body cooperation IK control unit 70 integrates the calculated joint angular velocity, thereby calculating the joint angles of the robot (S104). Note that the position of the center of mass of the robot can also be calculated from the joint angular velocity of the robot by using the numerical model.
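Steps S101 to S104 can be sketched for a single task as follows. The two-link forward kinematics model stands in for the robot's numerical model, and a finite-difference Jacobian approximates the "unit velocity to each joint" procedure; all numbers are illustrative assumptions:

```python
import numpy as np

def fk(theta):
    """Assumed forward kinematics model of a planar 2-link arm (unit links)."""
    return np.array([np.cos(theta[0]) + np.cos(theta[0] + theta[1]),
                     np.sin(theta[0]) + np.sin(theta[0] + theta[1])])

def numeric_jacobian(theta, eps=1e-6):
    """S101: perturb each joint in turn on the numerical model to build J."""
    base = fk(theta)
    J = np.zeros((base.size, theta.size))
    for j in range(theta.size):
        d = np.zeros_like(theta)
        d[j] = eps                            # small displacement of one joint
        J[:, j] = (fk(theta + d) - base) / eps
    return J

def ik_step(theta, x_dot, dt=0.01):
    J = numeric_jacobian(theta)               # S101
    theta_dot = np.linalg.pinv(J) @ x_dot     # S103 (single task, no priorities)
    return theta + theta_dot * dt             # S104: integrate into a joint angle

theta0 = np.array([0.2, 0.7])
theta = ik_step(theta0, x_dot=np.array([0.1, 0.0]))
```

One call of `ik_step` corresponds to one pass through the flowchart of Fig. 5; repeated calls integrate the motion over time.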
[0062]
Referring to Fig. 6, the entire processing of creating motion data by the motion data creation apparatus 20 will be described. The motion data creation processing includes processing of determining a goal posture (S201, S202) and processing of obtaining a trajectory from the initial posture to the goal posture (S203, S204).
[0063]
The motion/posture generation unit 30 of the motion data creation apparatus 20 generates the goal posture which satisfies restrictions such as the position of the center of mass and the position of finger tips of the robot 100 and which is free from collision with any object (S201) . The optimization unit 50 of the motion data creation apparatus 20 optimizes the generated goal posture (S202) . That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.
[0064]
The motion/posture generation unit 30 generates a motion trajectory (trajectory of the joint angle) of the robot 100 which is a trajectory from the initial posture to the goal posture and which is free from collision with any object, while satisfying the restrictions (S203) . The optimization unit 50 optimizes the generated motion trajectory (S204) . That is, the optimization unit 50 calculates a joint angle closest to the target joint angle, while satisfying the restrictions.
[0065]
Note that, in this embodiment, though Steps S201 to S204 are described as a series of processes, each process may be combined with processing of another motion data creation apparatus. For example, the goal posture generation processing may be carried out using another conventional motion data creation apparatus, and the obtained goal posture may be combined with the trajectory generation processing (S203) or the trajectory optimization processing (S204) according to this embodiment.
[0066]
The details of each processing shown in Fig. 6 will be described below.
Figs. 7 and 8 are diagrams illustrating the goal posture generation processing performed by the motion/posture generation unit 30. Fig. 7 is a diagram illustrating the processing flow performed by the motion/posture generation unit 30.
[0067]
In Fig. 7, the motion/posture generation unit 30 randomly generates a joint angle for each axis of the robot in the joint space (CS) by using the following function (S301). Let the randomly generated joint angle be θ_rand. Herein, θ_rand is a joint angle vector, because values for all the joints are generated.
θ_rand = Rand(CS)
[0068]
The motion/posture generation unit 30 provides the generated joint angles to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with those joint angles, and checks whether the robot 100 with the obtained posture is free from collision with any object within the environment (S302). Herein, collision checking is carried out using the following function.
CollisionCheck(θ_rand)
[0069]
As a result of the collision checking, when the robot 100 collides with any object within the environment, the process returns to S301 to randomly generate the joint angles of the robot 100 again. Note that checking for collision with objects within the environment may be performed using well-known techniques. For instance, the robot 100 and the objects within the environment are approximated by simple shapes such as rectangular solids or spheres, and these approximated models are stored in the object data store unit 40. Then, by referring to the stored information, it can be checked whether the robot 100 collides with any object within the environment.
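Such an approximated collision check can be sketched with spheres only; the shapes, positions, and radii below are illustrative assumptions, not data from the object data store unit 40:

```python
import numpy as np

def collision_check(robot_spheres, obstacle_spheres):
    """True if any robot sphere intersects any obstacle sphere.
    Each entry is (center, radius); two spheres collide when the
    distance between centers is less than the sum of the radii."""
    for c1, r1 in robot_spheres:
        for c2, r2 in obstacle_spheres:
            if np.linalg.norm(np.asarray(c1) - np.asarray(c2)) < r1 + r2:
                return True
    return False

# Assumed sphere approximations of two robot links and one obstacle.
robot = [((0.0, 0.0, 1.0), 0.3), ((0.0, 0.0, 0.5), 0.3)]
obstacles = [((1.0, 0.0, 1.0), 0.4)]
in_collision = collision_check(robot, obstacles)   # centers 1.0 apart > 0.3 + 0.4
```

A sphere-only approximation keeps the per-pair test to a single distance comparison, which matters because the check runs once per random sample.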
[0070]
The motion/posture generation unit 30 performs local path planning and calculates the posture that satisfies the configured tasks (constraints) (S303). In the local path planning, the posture that satisfies the constraints is obtained by the Newton-Raphson method using the prioritized task control, or by the feedback method, with the generated random posture set as an initial value. An example using the feedback method is described herein.
[0071]
Fig. 8 is a functional block diagram including feedback operation for use in the local path planning.
(1) The motion/posture generation unit 30 sets the current posture angle θ_cur to the initial value of the posture angle (θ_cur ← θ_rand).
(2) The motion/posture generation unit 30 obtains the current robot base coordinate p_cur and the positions x₁, ..., xₙ of the constraint points at the current posture angle θ_cur, by the robot forward kinematics model.
[0072]
(3) The motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70, at each constraint point, from the difference between a target value (θ_ref, x₁^ref, ..., xₙ^ref) of each constraint point (finger tips, center of mass, or the like) determined by the tasks, and the current value (θ_cur, x₁^cur, ..., xₙ^cur). Note that θ_ref in the target values represents an ideal posture and is set in advance by a user, together with the target tasks x₁^ref, ..., xₙ^ref. The target tasks x₁^ref, ..., xₙ^ref are satisfied in order of priority, and a joint angle closest to θ_ref is selected. Specifically, the value is selected so that the square norm of the difference between the target joint angle θ_ref and the current joint angle θ_cur becomes minimum. The gain 75 is an arbitrary adjustment parameter. Thus, the amount of input (velocity input of each task) to the whole body cooperation IK control unit 70 is calculated as shown in the following expression (15).
ẋᵢ = g(xᵢ^ref − xᵢ^cur) (i = 1, ..., n), k = g_k(θ_ref − θ_cur) ...(15)
[0073]
(4) The motion/posture generation unit 30 then obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70, as shown in the following expression (16).
θ_out = IK(p_cur, θ_cur, k, ẋ₁, ..., ẋₙ) ...(16)
[0074]
(5) The motion/posture generation unit 30 then confirms whether the difference between the target value and the current value is within a predetermined threshold at all the constraint points. When the difference is within the threshold, the processing of the local path planning is ended. Otherwise, θ_out is set as the new θ_cur (θ_cur ← θ_out), and the process returns to the processing (1).
[0075]
Referring back to Fig. 7, description will be continued. The motion/posture generation unit 30 provides the joint angles calculated in the local path planning to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture collides with any object within the environment (S304) . As a result of the collision checking, when the robot collides with any object, the process returns to S301 to randomly generate the joint angles of the robot 100 again. Otherwise, the joint angles obtained in the local path planning are output, and the goal posture generation processing is ended. Thus, it is possible to generate the goal posture which satisfies restrictions such as the position of the center of mass and the position of finger tips of the robot 100 and which is free from collision with any object.
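The overall loop S301 to S304 can be sketched as follows. The collision test and the constraint projection here are toy stand-ins for the object data store and the whole body cooperation IK module; joint limits, dimensions, and the "constraint" are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
JOINT_LIMITS = (-np.pi, np.pi)
N_JOINTS = 4

def collision_check(theta):
    """Toy check: treat a strongly bent first joint as 'in collision'."""
    return abs(theta[0]) > 2.5

def project_to_constraints(theta):
    """Toy stand-in for local path planning (S303): pretend the task
    constraint fixes joint 1 to 0.5 rad."""
    out = theta.copy()
    out[1] = 0.5
    return out

def generate_goal_posture(max_tries=1000):
    for _ in range(max_tries):
        theta = rng.uniform(*JOINT_LIMITS, N_JOINTS)   # S301: random sample
        if collision_check(theta):                     # S302: reject colliding sample
            continue
        theta = project_to_constraints(theta)          # S303: satisfy constraints
        if collision_check(theta):                     # S304: re-check after projection
            continue
        return theta
    return None                                        # no collision-free posture found

goal = generate_goal_posture()
```

The second collision check (S304) is essential: projecting a sample onto the constraints moves it, so a sample that was collision-free before S303 may not be afterwards.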
[0076]
Figs. 9 and 10 are diagrams illustrating goal posture optimization processing performed by the optimization unit 50. Fig. 9 is a diagram illustrating the processing flow performed by the optimization unit 50, and Fig. 10 is a functional block diagram showing the feedback method for use in whole body cooperation IK calculation.
[0077]
In Fig. 9, the optimization unit 50 inputs the current joint angle θ_cur in the joint space (CS) (S401). Herein, the goal posture obtained by the motion/posture generation unit 30 is input as θ_cur.
[0078]
The optimization unit 50 calculates the difference between an ideal posture and the current posture as shown in the following expression (17) (S402). Note that θ_ref represents the ideal posture, and g_k represents an arbitrarily determined gain (0 < g_k < 1). The ideal posture can be set by the user in advance, and any posture which looks beautiful to people may be set as the ideal posture. In the case of an upright posture, for example, a posture in which the shoulders are held straight is ideal.
k = g_k(θ_ref − θ_cur) ...(17)
[0079]
The optimization unit 50 obtains joint angles closest to the ideal posture from the initial posture while satisfying the constraints, by the feedback method using the prioritized task control .
Fig. 10 is a functional block diagram including feedback operation for use in optimization processing. Herein, the amount of input with respect to a constraint point of a task is set to zero as shown in the following expression (18) , since the posture input in Step S401 has already satisfied the constraints.
ẋ₁, ẋ₂, ..., ẋₙ = 0 ...(18)
[0080]
The optimization unit 50 calculates the joint angles of the whole body of the robot 100 by using the whole body cooperation IK calculation module, as shown in the following expression (19).
θ_out = IK(p_cur, θ_cur, k, ẋ₁, ..., ẋₙ) ...(19)
[0081]
The optimization unit 50 provides the joint angles, which are calculated using the whole body cooperation IK module, to the numerical model of the robot 100 to thereby obtain the posture of the robot 100 with the joint angles, and checks whether the robot 100 with the obtained posture collides with any object within the environment (S404) . As a result of the collision checking, when the robot collides with any object, the joint angles previously calculated in S403 are output, and the processing is ended (S405) .
[0082]
As a result of the collision checking, when the robot is free from collision with any object, it is checked whether the output value of the joint angular velocity is within a predetermined threshold (S405). When the output value is within the threshold (that is, when the calculation has substantially converged), the processing is ended. Otherwise, θ_cur is updated (θ_cur ← θ_out) and the process returns to Step S402. Thus, the input current posture can be optimized.
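The optimization loop S401 to S405 can be sketched as follows. The null-space projector N, the gain, and the collision test are simplified assumptions rather than the embodiment's actual quantities; with the task velocities held at zero per expression (18), only the posture term moves the joints:

```python
import numpy as np

def optimize_posture(theta_cur, theta_ref, N, collision_check,
                     g_k=0.2, tol=1e-4, max_iter=500):
    """Pull the posture toward theta_ref through the null space N until
    the update is small (S405) or a collision occurs (S404)."""
    for _ in range(max_iter):
        k = g_k * (theta_ref - theta_cur)     # expression (17)
        theta_dot = N @ k                     # tasks contribute zero, expression (18)
        theta_out = theta_cur + theta_dot
        if collision_check(theta_out):
            return theta_cur                  # keep the last collision-free posture
        if np.linalg.norm(theta_dot) < tol:   # convergence test (S405)
            return theta_out
        theta_cur = theta_out
    return theta_cur

# Example: assume redundancy only in joints 2 and 3, so the projector
# zeroes any motion of joints 0 and 1 (which the tasks fully constrain).
N = np.diag([0.0, 0.0, 1.0, 1.0])
theta = optimize_posture(np.array([0.4, 0.3, 1.0, -1.0]),
                         theta_ref=np.zeros(4), N=N,
                         collision_check=lambda th: False)
```

Because every update passes through N, the task-constrained joints never move: the posture improves only inside the redundancy left over by the tasks.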
[0083]
Fig. 11 is a diagram illustrating the flow of motion trajectory generation processing performed by the motion/posture generation unit 30.
In Fig. 11, the motion/posture generation unit 30 generates N goal postures in the joint space (CS) (S501, S506). Note that, as a goal posture generation method, the technique illustrated in Fig. 7 may be used. Alternatively, other conventional techniques may be employed to generate the goal postures. Further, in random search techniques such as the RRT (http://msl.cs.uiuc.edu/rrt/), setting a relatively large number of goal postures is favorable for the search. Accordingly, N goal postures are generated herein.
[0084]
The motion/posture generation unit 30 randomly generates a joint angle for each axis of the robot 100 (S503). Let the randomly generated joint angle be θrand. Herein, θrand is a joint angle vector, because values are generated for all the joints.
[0085]
The motion/posture generation unit 30 checks whether the robot 100 is free from collision with any object within the environment (S504). Herein, the collision checking is performed using the following function. In order to perform the collision checking more accurately, Pcur is herein provided as an argument of the function. Although the robot base coordinate (robot standing position) Pcur may be randomly searched, the description will be given assuming that the robot base coordinate is provided by the user. When a collision occurs, the process returns to S503 to randomly generate the joint angle of the robot 100 again.
CollisionCheck(Pcur, θrand)
[0086]
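The collision check taking the base coordinate and the joint angle vector can be sketched as follows. This is a hypothetical implementation, not the patent's: `fk` (forward kinematics returning body-point positions) and the spherical obstacle model are assumed stand-ins for the numerical robot model and environment data.

```python
import numpy as np

def collision_check(p_base, theta, fk, obstacles, clearance=0.0):
    """Illustrative CollisionCheck(Pcur, theta): place the robot at base
    coordinate p_base, compute body-point positions by forward kinematics,
    and test each point against spherical obstacles. fk and the sphere
    obstacle model are assumed, not taken from the patent."""
    for p in fk(p_base, theta):
        for center, radius in obstacles:
            if np.linalg.norm(np.asarray(p) - np.asarray(center)) <= radius + clearance:
                return True    # collision found
    return False               # collision-free
```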
The motion/posture generation unit 30 performs local path planning (S505) . Herein, the feedback method using the prioritized task control in the local path planning will be described by way of example.
(1) The motion/posture generation unit 30 sets the current posture angle θcur to the initial value (θcur ← θrand).
(2) The motion/posture generation unit 30 obtains the current robot base coordinate Pcur and the positions x₁, ···, xₙ of the constraint points at the current posture angle θcur, by the robot forward kinematics model.
[0087]
(3) The motion/posture generation unit 30 sets the target values (constraints) obtained in the processing (2) .
Hereinafter, examples of the target values set in a certain state of the operation are described as operation examples.
Operation Example 1: A case where the right hand reaches an arbitrary goal position from an initial posture in which the robot 100 holds nothing in its hands.
(i) A case where the difference between the reaching goal position and the current hand position is within a predetermined threshold (in pursuit of the goal).
Task 1: The position of the center of mass (the user designates a three-dimensional position), as shown in the following expression (20).
(P_cogx, P_cogy, P_cogz)^T ··· (20)
Task 2: The position and posture of the right hand, as shown in the following expression (21) (reaching the six-dimensional goal (position and posture)).
[Equation image (expression (21)) not reproduced.]
(ii) A case where the difference between the reaching goal position and the current hand position is larger than the predetermined threshold (create intermediate nodes in the RRT).
Task 1: The position of the center of mass (the user designates a three-dimensional position).
Task 2: (none).
[0088]
Operation Example 2: A case where the robot 100 places a glass at a certain goal position from an initial posture in which the robot 100 holds a glass of water.
(i) A case where the goal position and the current hand (or glass) position are within an arbitrary threshold (in pursuit of the goal).
Task 1: The position of the center of mass (the user designates a three-dimensional position), as shown in the following expression (22).
(P_cogx, P_cogy, P_cogz)^T ··· (22)
Task 2: The right hand (six-dimensional reaching goal position), as shown in the following expression (23).
[Equation image (expression (23)) not reproduced.]
(ii) A case where the difference between the goal position and the current hand position is larger than the predetermined threshold (create intermediate nodes in the RRT).
Task 1: The position of the center of mass (the user designates a three-dimensional position).
Task 2: The position and posture of the right hand (set as horizontal constraints; only two dimensions of the three-dimensional posture are constrained. In order to keep the hand horizontal, the target value is set to "0" in those two dimensions, as shown in the following expression (24)).
[Equation images (expression (24)) not reproduced.]
[0089]
(4) The motion/posture generation unit 30 calculates the amount of input to the whole body cooperation IK control unit 70 at each constraint point, from the difference between the target value and the current value of each constraint point (fingertips, center of mass, or the like).
(5) The motion/posture generation unit 30 obtains the joint angles of the whole body of the robot 100 by using the whole body cooperation IK control unit 70.
(6) The motion/posture generation unit 30 confirms whether the difference between the target value and the current value is within the predetermined threshold at all the constraint points. When the difference is within the predetermined threshold, the processing of the local path planning is ended. Otherwise, θcur is updated to the new value (θcur ← θout) and the process returns to the processing (1).
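Steps (1) to (6) above can be sketched as a simple feedback loop. The hooks `fk_points` (constraint-point positions by forward kinematics) and `ik_update` (one whole-body cooperation IK step given the per-point errors) are assumptions standing in for the units described in the text.

```python
import numpy as np

def local_path_planning(theta_rand, targets, fk_points, ik_update, eps=1e-3, max_iter=500):
    """Sketch of steps (1)-(6) of the local path planning.

    fk_points(theta)        -> array of constraint-point positions
    ik_update(theta, errors) -> next joint angle vector from the IK unit
    Both hooks are assumed, not the patent's API.
    """
    theta = np.asarray(theta_rand, dtype=float)      # (1) theta_cur <- theta_rand
    for _ in range(max_iter):
        points = fk_points(theta)                    # (2) forward kinematics
        errors = np.asarray(targets) - points        # (3)-(4) target - current
        if np.all(np.linalg.norm(errors, axis=-1) < eps):
            return theta                             # (6) all constraints satisfied
        theta = ik_update(theta, errors)             # (5) whole-body IK output
    return theta
```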
[0090]
The motion/posture generation unit 30 checks whether the joint angles obtained in the local path planning are free from collision with any object (S506). Herein, the collision checking is performed using the following function. Note that when a collision occurs, the process returns to S503 to randomly generate the joint angles of the robot 100 again.
CollisionCheck(Pcur, θout)
[0091]
When the obtained joint angle is free from collision with any object, the motion/posture generation unit 30 determines whether the target value of each task (each constraint) reaches a final target value (S507) . When the target value reaches the final target value, it is determined that a path from the initial posture has been found, and the processing is ended.
[0092]
In the case where the target value has not reached the final target value, the target value is still an intermediate target value. Accordingly, nodes and edges are added to the tree constructed by the RRT method (S508), and the process returns to S503 to continue searching. Thus, merely by providing the final target value (e.g., the final position and posture of the fingertips) of each task, a trajectory to the final target position and posture can be generated while avoiding collision with any object.
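Steps S503 to S508 as a whole form a sampling-based tree search. The following minimal sketch illustrates that loop; `sample`, `extend` (the local path planner), `collides`, and `reached` are hypothetical hooks standing in for the processing described above.

```python
import numpy as np

def rrt_plan(theta_start, theta_goal, sample, extend, collides, reached, max_iter=1000):
    """Minimal sketch of the S503-S508 loop: grow a tree from the start
    posture, extending from the nearest node toward random samples via the
    local path planner, and stop when a node reaches the final target.
    All four hooks are assumptions, not the patent's API."""
    goal = np.asarray(theta_goal, dtype=float)
    tree = {0: (np.asarray(theta_start, dtype=float), None)}   # id -> (theta, parent)
    for _ in range(max_iter):
        theta_rand = sample()                                  # S503: random joint angles
        if collides(theta_rand):                               # S504: reject collisions
            continue
        near = min(tree, key=lambda n: np.sum((tree[n][0] - theta_rand) ** 2))
        theta_new = extend(tree[near][0], theta_rand)          # S505: local path planning
        if collides(theta_new):                                # S506: reject collisions
            continue
        node = len(tree)
        tree[node] = (theta_new, near)                         # S508: add node and edge
        if reached(theta_new, goal):                           # S507: final target reached
            path = []
            while node is not None:                            # walk back to the start
                path.append(tree[node][0])
                node = tree[node][1]
            return path[::-1]
    return None
```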
[0093]
Referring to Figs. 12 to 14, a description is given of a specific example of the motion trajectory generation processing using the RRT, which is performed by the motion/posture generation unit 30. The most basic example of the RRT method is herein described. In addition to the basic RRT method, many other techniques, such as a method that uses RRT-Connect, which grows branches from both the start and the goal, and a method that prepares a plurality of goal postures in advance, can be employed.
[0094]
As shown in Fig. 12, the motion/posture generation unit 30 searches the joint space (CS) for a path from a start node to a goal node.
In the figure, the node currently generated by the motion/posture generation unit 30 is assumed to be a node 81. Note that a node N is created using information about the robot base coordinates, joint angles, and tasks (constraints), and is expressed by the following expression (25). A tree constructed by the RRT method is an assembly of nodes linked by edges from a start node, together with the edges formed between the nodes.
[Equation image (expression (25)) not reproduced.]
[0095]
The motion/posture generation unit 30 links the currently generated node 81 and the node nearest to the node 81 among the nodes that belong to the tree, with an edge. When the distance "d" between the nodes exceeds a predetermined threshold set in advance, one or more nodes are created in the middle of the edge leading to the node. Various techniques can be employed as a method for obtaining the distance "d". For instance, as the simplest method, the square norm of the difference between the joint angles θout at the nodes may be employed, as shown in the following expression (26).
d = ||θout(i) − θout(j)||² ··· (26)
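The squared-norm distance described above is a one-liner; the sketch below is illustrative only.

```python
import numpy as np

def node_distance(theta_a, theta_b):
    """Simplest distance 'd' between two tree nodes: the squared norm of
    the joint-angle difference, as described for expression (26)."""
    diff = np.asarray(theta_a, dtype=float) - np.asarray(theta_b, dtype=float)
    return float(diff @ diff)
```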
[0096]
As shown in Fig. 13, when a node 85 is created in the middle of the edge from the node 83 (the k-th node) to the node 84 (the (k+1)-th node), the motion/posture generation unit 30 uses information about the previous and next nodes (nodes 83 and 84). Details are given below.
(i) The motion/posture generation unit 30 obtains a joint angle at the created node 85 as shown in the following expression (27) .
[Equation image (expression (27)) not reproduced.]
(ii) The motion/posture generation unit 30 executes the local path planning with the obtained joint angle set as an initial input.
(iii) The motion/posture generation unit 30 checks whether a collision with any object occurs, and when no collision occurs, the nodes are registered in the tree. When a collision occurs, all the subsequent nodes (including the node 84) created in the middle of the edge leading to the node 84 are discarded.
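The subdivision of steps (i) to (iii) can be sketched as follows. Linear interpolation between the neighboring nodes is one plausible reading of expression (27), whose image is not reproduced here; `refine` (the local path planner) and `collides` are assumed hooks.

```python
import numpy as np

def subdivide_edge(theta_k, theta_k1, n_mid, refine, collides):
    """Sketch of Fig. 13, steps (i)-(iii): create n_mid intermediate nodes
    on the edge from node k to node k+1, refine each with the local path
    planner, and discard the tail on the first collision. The interpolation
    rule, refine and collides are assumptions."""
    a = np.asarray(theta_k, dtype=float)
    b = np.asarray(theta_k1, dtype=float)
    accepted = []
    for i in range(1, n_mid + 1):
        s = i / (n_mid + 1)
        theta_out = refine((1.0 - s) * a + s * b)   # (i)-(ii) interpolate, then refine
        if collides(theta_out):                     # (iii) discard node k+1 and the rest
            return accepted, False
        accepted.append(theta_out)
    return accepted, True                           # edge to node k+1 is kept
```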
[0097]
As shown in Fig. 14, regarding a node 86 subsequent to the node 85 (the (k+2)-th node), when the distance between the node 86 and the goal node is within a certain range, the motion/posture generation unit 30 tries to create an edge between the node 86 and the goal node. When the edge can be created, it is determined that a path can be created, and the processing is ended.
[0098]
Fig. 15 is a diagram illustrating the flow of motion trajectory optimization processing performed by the optimization unit 50. The optimization unit 50 obtains a trajectory closest to the ideal posture from a reference motion trajectory (motion posture trajectory) .
The optimization unit 50 inputs a robot motion trajectory (joint angle trajectory), obtained before the optimization, in the joint space (CS) (S601). Herein, the motion trajectory obtained by the motion/posture generation unit 30 is input.
[0099]
The optimization unit 50 divides the input joint angle trajectory into M segments based on a predetermined unit (S602) . In general, a constraint point moving distance of 2-5 cm or a joint angle range of 1-5 deg is set as the predetermined unit.
As described with reference to Figs. 9 and 10, the optimization unit 50 performs optimization processing on each node (joint angle) among divided nodes m (0≤m<M) (S603) .
The optimization unit 50 repeats the optimization processing on the M nodes (S604, S605). When the optimization processing on the M nodes is ended, interpolation between the divided nodes is carried out. As the interpolation method, linear interpolation or the prioritized task control may be employed so as to perform the interpolation in finer units. As a result, the trajectory generated using random search techniques can be made smoother.
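The divide-optimize-interpolate flow of Fig. 15 can be sketched as below, using linear interpolation between the optimized nodes; `optimize_node` stands in for the per-posture optimization of Figs. 9 and 10 and is an assumed hook.

```python
import numpy as np

def smooth_trajectory(trajectory, optimize_node, n_interp=4):
    """Sketch of Fig. 15 (S601-S605): optimize each divided node of the
    joint-angle trajectory, then linearly interpolate between optimized
    nodes for a smoother path. optimize_node is an assumption, not the
    patent's API."""
    nodes = [optimize_node(np.asarray(q, dtype=float)) for q in trajectory]  # S603-S605
    smooth = []
    for a, b in zip(nodes[:-1], nodes[1:]):          # interpolate between divided nodes
        for i in range(n_interp):
            s = i / n_interp
            smooth.append((1.0 - s) * a + s * b)
    smooth.append(nodes[-1])
    return smooth
```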
[0100]
As has been explained so far, in accordance with the present invention, a joint angle trajectory reaching from the initial posture to the final target posture is generated in the joint space (CS) by combining a sampling-based search algorithm with the whole body cooperation IK calculation using prioritized task control. Therefore, by providing only the final target value of a task (for example, the final position and posture of the fingertips), a joint angle trajectory satisfying the constraints can be planned. Further, when a joint angle trajectory is calculated, a trajectory reaching the final target value can be planned while avoiding collision with objects within the environment, by performing a collision check between the posture of the robot 100 obtained from the calculated joint angles and the objects within the environment. Furthermore, the collision-free posture or trajectory can be optimized by combining the whole body cooperation IK calculation with a mathematical optimization technique and updating the joint angles so as to approach the ideal posture at the final target value.
[0101]
Further, in accordance with the present invention, priorities are set for a plurality of constraint points, so that when a plurality of constraint positions that cannot be simultaneously satisfied are input, the calculation can be performed such that the constraints are satisfied in accordance with their priorities. In contrast, if no priority is set, only results that do not satisfy any of the constraints would be obtained in most cases. For example, in the case of a bipedal robot or the like, in which the robot itself needs to be controlled into a stable state on the premise that the constraints of the legs are satisfied, priorities may be assigned in a similar manner to that of the present invention so that the risk that the robot topples over can be reduced.
[0102]
Note that the present invention is not limited to the above-described exemplary embodiments and can be modified as appropriate without departing from the spirit and scope of the present invention. For example, although the above-described exemplary embodiments assume that the optimization unit 50 performs optimization processing on a trajectory generated by the motion/posture generation unit 30, the optimization unit 50 may instead perform optimization processing on a trajectory generated by a trajectory creating method in the related art.
Industrial Applicability
[0103]
The present invention is applicable to a robot control system that makes, for example, a motion plan of a robot to which a plurality of tasks are provided, a motion data creation apparatus and its creating method.
Reference Signs List
[0104]
1 CONTROL SYSTEM
10 INTERFACE UNIT
20 MOTION DATA CREATION APPARATUS
30 MOTION/POSTURE CREATION UNIT
40 OBJECT DATA STORAGE UNIT
50 OPTIMIZATION UNIT
60 ROBOT CONTROL UNIT
70 WHOLE BODY COOPERATION IK CONTROL UNIT
71 JOINT ANGULAR VELOCITY CALCULATION UNIT
72 INTEGRATOR
73 ROBOT FORWARD KINEMATICS MODEL
74 JACOBIAN MATRIX CALCULATION UNIT
75, 76 GAIN
81, 83, 84, 86 NODE
82, 85 EDGE
100 ROBOT
101 HEAD
102-128 JOINT
130, 131 SOLE
141R, 141L HAND
LR RIGHT LEG LINK
LL LEFT LEG LINK
AR RIGHT ARM LINK
AL LEFT ARM LINK
BD BODY TRUNK

Claims

[Claim 1]
A robot control system comprising:
a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and
the robot that moves according to a joint angle created by the motion data creation apparatus;
wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
[Claim 2]
The robot control system according to Claim 1, wherein the prioritized task is set for a plurality of portions or a center of mass of the robot.
[Claim 3]
The robot control system according to Claim 1 or 2, wherein the motion data creation apparatus is configured to perform the whole body cooperation calculation such that the difference between the target value of a constraint point specified by the prioritized task and the current value converges by generating a third node from a first node extracted in a random manner or in a predetermined manner and a second node that is a node on the trajectory data and closest to the first node, and using a constraint point obtained from a joint angle corresponding to the third node as an initial value.
[Claim 4]
The robot control system according to any one of Claims 1 to 3, wherein the motion data creation apparatus is configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[Claim 5]
The robot control system according to any one of Claims 1 to 4, wherein
the motion data creation apparatus further comprises an object data storage unit that stores data including a
shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data.
[Claim 6]
A robot control system comprising:
a motion data creation apparatus that receives a prioritized task and generates a joint angle satisfying a constraint specified by the task, the prioritized task being set for a plurality of portions of a robot; and
the robot that moves according to a joint angle created by the motion data creation apparatus;
wherein the motion data creation apparatus calculates, in a configuration space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges.
[Claim 7]
The robot control system according to Claim 6, wherein the prioritized task is set for a plurality of portions or a center of mass of the robot.
[Claim 8]
The robot control system according to Claim 6 or 7, wherein the motion data creation apparatus is configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges .
[Claim 9]
The robot control system according to any one of Claims 6 to 8, wherein
the motion data creation apparatus comprises an object data storage unit that stores data including a shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and
the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.
[Claim 10]
A motion data creation apparatus that creates a joint angle to cause a robot to move, wherein when the motion data creation apparatus searches for trajectory data of a joint angle reaching from an initial posture of the robot to a target posture in a configuration space by using a search technique based on sampling, the motion data creation apparatus calculates a joint angle satisfying a constraint by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
[Claim 11]
The motion data creation apparatus according to Claim 10, wherein the motion data creation apparatus is configured to perform optimization processing of the trajectory data by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[Claim 12]
The motion data creation apparatus according to Claim 10 or 11, wherein
the motion data creation apparatus further comprises an object data storage unit that stores data including a
shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to add the calculated joint angle newly to the trajectory data.
[Claim 13]
A motion data creation apparatus that creates a joint angle to cause a robot to move, wherein the motion data creation apparatus calculates, in a configuration space, a joint angle satisfying a constraint by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
[Claim 14]
The motion data creation apparatus according to Claim 13, wherein the motion data creation apparatus is configured to perform optimization processing of the calculated joint angle by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[Claim 15]
The motion data creation apparatus according to Claim 13 or 14, wherein
the motion data creation apparatus comprises an object data storage unit that stores data including a shape/configuration data of an object within an environment and the robot, and geometrical configuration data of the robot, and
the motion data creation apparatus is configured to check whether or not the robot in a posture obtained from a calculated joint angle collides with an object within the environment in the configuration space by referring to information stored in the object data storage unit, and when they collide, not to adopt the calculated joint angle.
[Claim 16]
A motion data creation method for creating a joint angle to cause a robot to move, wherein when trajectory data of a joint angle reaching from an initial posture of the robot to a target posture is searched for in a configuration space by using a search technique based on sampling, a joint angle satisfying a constraint is calculated by using a constraint point obtained from a joint angle to be added newly to the trajectory data as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by the prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
[Claim 17]
The motion data creation method according to Claim 16, wherein optimization processing of the trajectory data is further performed by dividing the generated trajectory data by a predetermined unit, using a posture corresponding to each divided point as an initial value, and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[Claim 18]
The motion data creation method according to Claim 16 or 17, wherein whether or not the robot in a posture obtained from a calculated joint angle collides with an object within an environment in the configuration space is checked by referring to data including a shape/configuration data of the object within the environment and the robot and geometrical configuration data of the robot, and when they collide, the calculated joint angle is not added newly to the trajectory data.
[Claim 19]
A motion data creation method for creating a joint angle to cause a robot to move, wherein a joint angle satisfying a constraint is calculated in a configuration space by using a constraint point obtained from a randomly-generated joint angle as an initial value and performing whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a target value of a constraint point specified by a prioritized task and a current value converges, the prioritized task being set for a plurality of portions or a center of mass of a robot.
[Claim 20]
The motion data creation method according to Claim 19, wherein optimization processing of the calculated joint angle is further performed by using a posture obtained from the calculated joint angle as an initial value and performing the whole body cooperation calculation using a Newton-Raphson method or a feedback method such that a difference between a provided ideal posture and a current posture converges.
[Claim 21]
The motion data creation method according to Claim 19 or
20, wherein whether or not the robot in a posture obtained from a calculated joint angle collides with an object within an environment in the configuration space is checked by referring to data including a shape/configuration data of the object within the environment and the robot and geometrical configuration data of the robot, and when they collide, the calculated joint angle is not adopted.
PCT/IB2009/007954 2009-12-04 2009-12-04 Robot control system, motion data creation apparatus and its creating method WO2011067621A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2009/007954 WO2011067621A1 (en) 2009-12-04 2009-12-04 Robot control system, motion data creation apparatus and its creating method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2009/007954 WO2011067621A1 (en) 2009-12-04 2009-12-04 Robot control system, motion data creation apparatus and its creating method

Publications (1)

Publication Number Publication Date
WO2011067621A1 true WO2011067621A1 (en) 2011-06-09

Family

ID=41698204

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2009/007954 WO2011067621A1 (en) 2009-12-04 2009-12-04 Robot control system, motion data creation apparatus and its creating method

Country Status (1)

Country Link
WO (1) WO2011067621A1 (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106985140A (en) * 2017-04-19 2017-07-28 广州视源电子科技股份有限公司 Robot point-to-point motion control method and system
JP2017209750A (en) * 2016-05-25 2017-11-30 本田技研工業株式会社 Functional device, and control device and control method for the device
WO2017219681A1 (en) * 2016-06-24 2017-12-28 深圳市前海康启源科技有限公司 Domestic robot control system and method
CN109927031A (en) * 2019-03-28 2019-06-25 大连理工大学 A kind of combination joint and cartesian space six-shaft industrial robot paths planning method
CN110153994A (en) * 2018-02-16 2019-08-23 日本电产三协株式会社 The amendment value calculating method of industrial robot
CN110398967A (en) * 2019-07-24 2019-11-01 西安电子科技大学 A kind of multirobot collaboration trace information processing method using discretization method
WO2019213164A1 (en) * 2018-04-30 2019-11-07 Vanderbilt University A control method for a robotic system
CN110587595A (en) * 2018-06-13 2019-12-20 西门子医疗有限公司 Method for operating a robot, data storage device, robot and robot system
WO2020030284A1 (en) * 2018-08-10 2020-02-13 Abb Schweiz Ag Method for controlling movement of a robot
CN111766870A (en) * 2020-05-29 2020-10-13 广州极飞科技有限公司 Transition path and operation path planning method and related device
CN111844128A (en) * 2019-04-25 2020-10-30 株式会社日立制作所 Track planning device and track planning method
CN111857173A (en) * 2020-08-17 2020-10-30 常州工程职业技术学院 Jumping gait planning control system and method for quadruped robot
CN113276112A (en) * 2021-04-30 2021-08-20 北京卫星制造厂有限公司 Mobile double-robot-based weak rigid member machining process planning method
CN113305845A (en) * 2021-06-03 2021-08-27 广东工业大学 Multi-mechanical arm cooperation method
US11813756B2 (en) 2020-04-16 2023-11-14 Fanuc Corporation Disassembly based assembly planning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008036761A (en) 2006-08-04 2008-02-21 Toyota Motor Corp Movement data creation device and method therefor
JP2009214268A (en) 2008-03-12 2009-09-24 Toyota Motor Corp Body cooperative control device, robot and robot control method

Non-Patent Citations (12)

* Cited by examiner, † Cited by third party
Title
BOUYARMANE K ET AL: "Potential field guide for humanoid multicontacts acyclic motion planning", ROBOTICS AND AUTOMATION, 2009. ICRA '09. IEEE INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 12 May 2009 (2009-05-12), pages 1165 - 1170, XP031509587, ISBN: 978-1-4244-2788-8 *
DMITRY BERENSONY; SIDDLLARTHA S.SRLNIVASAZY; DAVE FERGUSONZY; ALVARO COLLETY; JAMES J. KUTTNER: "Manipulation Planning with Workspace Goal Regions.", ICRA, 2009
EIICHI YOSHIDA ET AL: "Planning 3-D Collision-Free Dynamic Robotic Motion Through Iterative Reshaping", IEEE TRANSACTIONS ON ROBOTICS, IEEE SERVICE CENTER, PISCATAWAY, NJ, US LNKD- DOI:10.1109/TRO.2008.2002312, vol. 24, no. 5, 1 October 2008 (2008-10-01), pages 1186 - 1198, XP011234183, ISSN: 1552-3098 *
EIICHI YOSHIDA ET AL: "Smooth Collision Avoidance: Practical Issues in Dynamic Humanoid Motion", INTELLIGENT ROBOTS AND SYSTEMS, 2006 IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, PI, 1 October 2006 (2006-10-01), pages 827 - 832, XP031006173, ISBN: 978-1-4244-0258-8 *
FERRE E ET AL: "An iterative diffusion algorithm for part disassembly", ROBOTICS AND AUTOMATION, 2004. PROCEEDINGS. ICRA '04. 2004 IEEE INTERN ATIONAL CONFERENCE ON NEW ORLEANS, LA, USA APRIL 26-MAY 1, 2004, PISCATAWAY, NJ, USA,IEEE, US LNKD- DOI:10.1109/ROBOT.2004.1307547, vol. 3, 26 April 2004 (2004-04-26), pages 3149 - 3154, XP010768589, ISBN: 978-0-7803-8232-9 *
HIROKI SANADA; EIICHI YOSHIDA; KAZUHITO YOKOI: "Passing Under Obstacles with Humanoid Robots.", IROS, 2007
J.J. KUFFNER; S. KAGAML; K. NISHIWAKI; M. INABA; H. INOUE: "Dynamically-stable motion planning for humanoid robots", AUTONOMOUS ROBOTS, vol. 12, 2002, XP002605538
KUFFNER J J JR ET AL: "Dynamically-stable motion planning for humanoid robots", AUTONOMOUS ROBOTS KLUWER ACADEMIC PUBLISHERS NETHERLANDS, vol. 12, no. 1, January 2002 (2002-01-01), pages 105 - 118, XP002605538, ISSN: 0929-5593 *
MARC TOUSSAINT; MICHAEL GIENGER; CHRISTIAN GOERICK: "Optimization of sequential attractor-based movement for compact behaviour generation", HUMANOIDS, 2007
NIKOLAUS VAHRENKAMP; DMITRY BERENSON; TAMIM ASFOUR; JAMES KUFFNER; RUDIGER DILLMANN: "Humanoid Motion Planning for Dual-Arm Manipulation and Re-Grasping Tasks", IROS, 2009
VAHRENKAMP N ET AL: "Humanoid motion planning for dual-arm manipulation and re-grasping tasks", INTELLIGENT ROBOTS AND SYSTEMS, 2009. IROS 2009. IEEE/RSJ INTERNATIONAL CONFERENCE ON, IEEE, PISCATAWAY, NJ, USA, 10 October 2009 (2009-10-10), pages 2464 - 2470, XP031580983, ISBN: 978-1-4244-3803-7 *
YOSHIDA E ET AL: "Motion autonomy for humanoids: experiments on HRP-2 No. 14", COMPUTER ANIMATION AND VIRTUAL WORLDS JOHN WILEY & SONS LTD. UK, vol. 20, no. 5-6, September 2009 (2009-09-01), pages 511 - 522, XP002605537, ISSN: 1546-4261 *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2017209750A (en) * 2016-05-25 2017-11-30 本田技研工業株式会社 Functional device, and control device and control method for the device
WO2017219681A1 (en) * 2016-06-24 2017-12-28 深圳市前海康启源科技有限公司 Domestic robot control system and method
CN106985140A (en) * 2017-04-19 2017-07-28 广州视源电子科技股份有限公司 Robot point-to-point motion control method and system
CN106985140B (en) * 2017-04-19 2019-05-07 广州视源电子科技股份有限公司 Robot point-to-point motion control method and system
CN110153994B (en) * 2018-02-16 2022-09-27 日本电产三协株式会社 Method for calculating correction value of industrial robot
CN110153994A (en) * 2018-02-16 2019-08-23 日本电产三协株式会社 The amendment value calculating method of industrial robot
EP3788447A4 (en) * 2018-04-30 2022-01-19 Vanderbilt University A control method for a robotic system
WO2019213164A1 (en) * 2018-04-30 2019-11-07 Vanderbilt University A control method for a robotic system
US11865723B2 (en) 2018-04-30 2024-01-09 Vanderbilt University Control method for a robotic system
CN110587595A (en) * 2018-06-13 2019-12-20 西门子医疗有限公司 Method for operating a robot, data storage device, robot and robot system
CN110587595B (en) * 2018-06-13 2022-11-25 西门子医疗有限公司 Method for operating a robot, data storage device, robot and robot system
WO2020030284A1 (en) * 2018-08-10 2020-02-13 Abb Schweiz Ag Method for controlling movement of a robot
US11717962B2 (en) 2018-08-10 2023-08-08 Abb Schweiz Ag Method for controlling movement of a robot
CN109927031A (en) * 2019-03-28 2019-06-25 大连理工大学 A kind of combination joint and cartesian space six-shaft industrial robot paths planning method
CN111844128A (en) * 2019-04-25 2020-10-30 株式会社日立制作所 Track planning device and track planning method
CN110398967B (en) * 2019-07-24 2021-07-16 西安电子科技大学 Multi-robot cooperative track information processing method adopting discretization method
CN110398967A (en) * 2019-07-24 2019-11-01 西安电子科技大学 A kind of multirobot collaboration trace information processing method using discretization method
US11813756B2 (en) 2020-04-16 2023-11-14 Fanuc Corporation Disassembly based assembly planning
CN111766870A (en) * 2020-05-29 2020-10-13 广州极飞科技有限公司 Transition path and operation path planning method and related device
CN111857173A (en) * 2020-08-17 2020-10-30 常州工程职业技术学院 Jumping gait planning control system and method for quadruped robot
CN113276112A (en) * 2021-04-30 2021-08-20 北京卫星制造厂有限公司 Mobile double-robot-based weak rigid member machining process planning method
CN113305845B (en) * 2021-06-03 2022-11-04 广东工业大学 Multi-mechanical arm cooperation method
CN113305845A (en) * 2021-06-03 2021-08-27 广东工业大学 Multi-mechanical arm cooperation method

Similar Documents

Publication Publication Date Title
WO2011067621A1 (en) Robot control system, motion data creation apparatus and its creating method
Shkolnik et al. Path planning in 1000+ dimensions using a task-space Voronoi bias
Petrič et al. Smooth continuous transition between tasks on a kinematic control level: Obstacle avoidance as a control problem
US8483874B2 (en) Path planning apparatus of robot and method and computer-readable medium thereof
KR101743926B1 (en) Robot and control method thereof
Penco et al. A multimode teleoperation framework for humanoid loco-manipulation: An application for the icub robot
US20110035087A1 (en) Method and apparatus to plan motion path of robot
Oriolo et al. Repeatable motion planning for redundant robots over cyclic tasks
US20220314443A1 (en) Controlling a robot based on constraint-consistent and sequence-optimized pose adaptation
US11787055B2 (en) Controlling a robot using predictive decision making
KR20120031595A (en) Robot and control method thereof
Behnisch et al. Task space motion planning using reactive control
Walker et al. Robot-human handovers based on trust
Tarbouriech et al. Dual-arm relative tasks performance using sparse kinematic control
US20220314437A1 (en) Simulating task performance of virtual characters
Zhang et al. Task-space decomposed motion planning framework for multi-robot loco-manipulation
Lin et al. Intuitive kinematic control of a robot arm via human motion
Petrič et al. Smooth transition between tasks on a kinematic control level: Application to self collision avoidance for two Kuka LWR robots
Vahrenkamp et al. Adaptive motion planning for humanoid robots
Fan et al. A humanoid robot teleoperation approach based on waist–arm coordination
JP6168158B2 (en) Mobile robot movement control method and mobile robot
Gienger et al. Optimization of fluent approach and grasp motions
Xie et al. Fault tolerant motion planning of robotic manipulators based on a nested RRT algorithm
Torielli et al. Manipulability-aware shared locomanipulation motion generation for teleoperation of mobile manipulators
Kawabe et al. A flexible collision-free trajectory planning for multiple robot arms by combining Q-learning and RRT

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09806033

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09806033

Country of ref document: EP

Kind code of ref document: A1