WO2010136961A1 - Control device and method for controlling a robot - Google Patents

Control device and method for controlling a robot

Info

Publication number
WO2010136961A1
Authority
WO
WIPO (PCT)
Prior art keywords
end effector
robot
values
constraint
movement
Application number
PCT/IB2010/052303
Other languages
French (fr)
Inventor
Boudewijn Theodorus Verhaar
Dennis Johannes Hubertinus Bruijnen
Original Assignee
Koninklijke Philips Electronics N.V.
Application filed by Koninklijke Philips Electronics N.V. filed Critical Koninklijke Philips Electronics N.V.
Publication of WO2010136961A1

Classifications

    • G PHYSICS
        • G05 CONTROLLING; REGULATING
            • G05B CONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
                • G05B19/00 Programme-control systems
                    • G05B19/02 Programme-control systems, electric
                        • G05B19/42 Recording and playback systems, i.e. in which the programme is recorded from a cycle of operations, e.g. the cycle of operations being manually controlled, after which this record is played back on the same machine
                            • G05B19/423 Teaching successive positions by walk-through, i.e. the tool head or end effector being grasped and guided directly, with or without servo-assistance, to follow a path
                • G05B2219/00 Program-control systems
                    • G05B2219/30 Nc systems
                        • G05B2219/36 Nc in input of data, input key till input tape
                            • G05B2219/36442 Automatically teaching, teach by showing
                            • G05B2219/36489 Position and force
                        • G05B2219/40 Robotics, robotics mapping to robotics vision
                            • G05B2219/40116 Learn by operator observation, symbiosis, show, watch
                            • G05B2219/40391 Human to robot skill transfer
    • B PERFORMING OPERATIONS; TRANSPORTING
        • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
            • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
                • B25J9/00 Programme-controlled manipulators
                    • B25J9/16 Programme controls
                        • B25J9/1656 Programme controls characterised by programming, planning systems for manipulators
                            • B25J9/1661 Programme controls characterised by programming, planning systems for manipulators characterised by task planning, object-oriented languages

Definitions

  • the present invention relates to a control device for controlling a robot and to a corresponding method as well as to a computer program.
  • the invention further relates to a robot, to a robot system and to a teacher interface.
  • a main focus for providing flexible robots is to simplify the process of programming a robot, for example to simplify the process of programming the robot's tasks.
  • Pre-programming tasks by hand using a keyboard is not suitable, because this is very labor-intensive and/or very difficult.
  • sometimes a task cannot be pre-programmed, because it is impossible to pre-program it in such a way that it is robust enough against varying circumstances.
  • teaching a task to a robot, in particular by imparting a training movement onto a robot is very suitable for programming a robot.
  • a control device for controlling a robot having a robot arm with a number of individual arm sections, an end effector connected to one of the arm sections and a number of actuators for moving at least one of the end effector and at least one of the arm sections, wherein the control device has at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot at least one of the actuators is controlled depending on a number of set points representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector and at least one of the arm sections, wherein the training movement corresponds to the working movement, is presented comprising: an activation unit for activating the training mode, a constraint determination unit for determining a number of constraint values representing a motion constraint imposed on at least one of the end effector and at least one of the arm sections while imparting the training movement, and a set point determination unit for determining the number of set points depending on the number of constraint values.
  • a corresponding computer program comprising program code means for causing a computer to carry out the steps of said method when said computer program is carried out on a computer.
  • the present invention is based on the idea of simultaneously determining constraint values while imparting a training movement onto a robot, wherein said training movement corresponds to a working movement.
  • information about an existing motion constraint is directly obtained on the basis of the training movement.
  • No additional precautions are needed, such as conducting additional movements besides said training movement for defining an obstacle existing in the environment of the robot, or such as defining an obstacle with the help of a 3D-mouse or such as entering position coordinates of an obstacle with the help of a keyboard.
  • the number of set points representing the working movement is determined depending on the number of constraint values.
  • a robot can be completely programmed in a very easy, efficient, quick and cost-saving way, by imparting a training movement onto the robot.
  • the present invention is particularly suitable for programming a robot by demonstration (PbD).
  • a working movement can be programmed in a robust way.
  • the term motion constraint shall not only contain a physical obstacle existing in the environment of a robot, but also an invariant of a working movement, wherein said invariant is specified by a person or teacher imparting a training movement onto the robot, wherein the training movement corresponds to the working movement.
  • ideally, the training movement is identical with the working movement.
  • a working movement can be a complex movement, consisting of a plurality of so-called basic or atomic movements.
  • a working movement can be such a basic movement as well.
  • a training movement and a working movement can also be understood as a task.
  • Programming a robot is realized by teaching a working movement. Teaching the working movement is realized by imparting a training movement.
  • the working mode is also referred to as replay mode.
  • said set point determination unit is adapted for selecting a control policy from a plurality of different control policies, wherein selecting the control policy is carried out depending on the number of constraint values, wherein the number of set points is determined according to the selected control policy.
  • a first aspect of adaptation could be to select a first control policy as long as no constraint is imposed on the end effector or on one of the arm sections, and to select a second control policy as soon as a constraint is imposed.
  • a second aspect of adaptation could be, in case a constraint is imposed, to select a control policy for reacting to changing constraint conditions, e.g. a change in the surface of an existing obstacle.
  • a third control policy could be selected for a first surface condition and a fourth control policy could be selected for a second surface condition. All four control policies shall differ from each other.
  • a control policy is selected, based on information obtained from a human demonstration of a training movement.
  • the selected control policy is based on a reference variable for which the number of set points is determined.
  • For controlling the robot at least one of the actuators is controlled depending on the number of set points. Therefore the robot is controlled according to the selected control policy.
  • the plurality of different control policies comprise a first control policy with a first reference variable corresponding to a first physical quantity and a second control policy with a second reference variable corresponding to a second physical quantity.
  • This measure allows the use of optimally adapted and therefore completely different control policies.
  • a control policy based on a reference variable can be used that would not be used in case no obstacle exists.
  • the plurality of different control policies comprise two different control policies, both based on the same physical quantity but applying different formulas for calculating the set points, so that first set points determined according to the first control policy and second set points determined according to the second control policy differ from each other.
  • the first physical quantity is a position and the second physical quantity is a force.
  • the robot can either be controlled with a position control or with a force control. If necessary, the robot can even be controlled with a control that is a combination of both control policies.
  • the robot is controlled with a position control in case no motion constraint exists and with a force control in case a motion constraint exists.
  • using force set points has the following advantage: on the assumption that a flat surface exists with which the end effector is in contact for a segment of the training movement, a motion constraint perpendicular to the flat surface is imposed on the end effector.
  • While executing the working movement, in other words during replay of the demonstrated task, the robot should in said segment be controlled with a control that is based on a force control policy. This ensures that the surface will not be damaged by the end effector.
  • a position control policy can be used for the other segments of the working movement, for which the end effector is not in contact with the surface. The approach described above is accordingly applicable to a scenario of grabbing an object.
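  • As a rough sketch of this policy switch (hypothetical names such as in_contact, x_set and f_set; the patent prescribes no concrete implementation), replay could use position control in free space and force control while a contact constraint is active:

```python
def select_policy(in_contact: bool) -> str:
    """Pick the control policy for one replay sample."""
    return "force" if in_contact else "position"

def control_command(in_contact, x, x_set, f, f_set, kp=50.0, kf=0.2):
    """Return a velocity command from either a position or a force error."""
    if select_policy(in_contact) == "position":
        return kp * (x_set - x)   # free space: track the recorded position
    return kf * (f_set - f)       # in contact: regulate the recorded force

# Free-space sample: the position error drives the command.
print(control_command(False, x=0.10, x_set=0.12, f=0.0, f_set=0.0))
# In-contact sample: the force error drives the command, so a rigid
# surface is not fought with a stiff position loop.
print(control_command(True, x=0.12, x_set=0.12, f=4.0, f_set=5.0))
```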
  • said training movement is initiated by manually operating a teacher interface attached to the end effector.
  • With a teacher interface, a person programming a robot can directly and therefore very precisely impart a training movement onto the robot, in contrast to programming a robot for example by interacting with a virtual environment.
  • a second advantage results from the position, at which the teacher interface is attached to the robot arm. As the teacher interface is attached to the end effector, the end effector can be maneuvered very precisely to desired positions.
  • the training movement is initiated by moving the robot arm by muscle power, i.e. by a person manually moving the teacher interface.
  • the training movement is initiated using a teacher interface that is adapted as a haptic interface, wherein said haptic interface is for example equipped with a force feedback unit.
  • the arm is moved by controlling at least one of the robot arm actuators according to the movement of the haptic interface.
  • the training movement is initiated by a person interacting with the robot arm.
  • With the haptic interface, the interaction between the person and the robot arm is an indirect one. For example, the person interacts with the haptic interface (“haptic master”) and the haptic interface interacts with the robot arm (“haptic slave”) via a controller.
  • the teacher interface can be attached either permanently or removably to the end effector. In any case, if the teacher interface is sufficiently small and lightweight, it can stay attached to the robot arm during the working mode and hence while executing the working movement.
  • the training movement is initiated by manually operating a teacher interface that is structurally separated from the robot.
  • a feedback unit is provided, enabling remote control of the robot or the robot arm with haptic feedback.
  • said constraint determination unit is adapted for determining a number of end effector position values representing a number of positions the end effector passes through while imparting the training movement and wherein said constraint determination unit is further adapted for determining the number of constraint values depending on the number of end effector position values.
  • said constraint determination unit is further adapted for determining the number of end effector position values depending on a number of position values representing a movement conducted by the actuator.
  • to at least one of said actuators a position sensor is assigned, wherein the position sensor is adapted for sensing a position representing a movement conducted by the actuator and further adapted for providing a position value representing the sensed position, wherein a number of position values is provided by the position sensor while imparting the training movement.
  • that a position sensor is assigned to an actuator shall be understood to mean that the position sensor is spatially assigned to the actuator. Said sensor shall not necessarily directly sense a position at the actuator, for example the position of a movable actuator component; it shall rather be possible to sense the position of the end effector or arm section moved by the actuator.
  • at least one of said actuators is adapted for conducting a rotational movement, wherein said position sensor is further adapted for sensing an angular position. Therefore the position values are available in spherical coordinates. If the end effector position values are to be available in Cartesian coordinates, a transformation is necessary.
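  • As an illustration of such a transformation, a minimal forward-kinematics sketch for an assumed planar two-link geometry (link lengths and function name are illustrative, not the arm of Fig. 1):

```python
import numpy as np

def forward_kinematics(q, link_lengths=(0.4, 0.4)):
    """Map joint angles (rad) to a Cartesian end effector position for a
    planar two-link arm; the real mapping depends on the actual arm
    kinematics."""
    l1, l2 = link_lengths
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

print(forward_kinematics(np.array([0.0, np.pi / 2])))  # -> [0.4 0.4]
```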
  • said constraint determination unit is further adapted for determining a number of interaction values representing at least one of a torque and a force acting on at least one of the end effector and at least one of the arm sections while imparting the training movement, and wherein said constraint determination unit is further adapted for determining the number of constraint values depending on the number of interaction values.
  • said constraint determination unit is further adapted for determining the number of interaction values depending on a number of component torque values representing a torque acting on a component moved by the actuator, wherein the component is the end effector or one of the arm sections.
  • to at least one of said actuators a torque sensor is assigned, wherein the torque sensor is adapted for sensing a torque acting on a component moved by the actuator and further adapted for providing a component torque value representing the sensed torque, wherein the component is the end effector or one of the arm sections, wherein a number of component torque values is provided by the torque sensor while imparting the training movement.
  • said constraint determination unit is further adapted for determining the number of interaction values depending on a number of operating values representing an operation of a teacher interface, wherein the teacher interface is attached to the end effector and manually operated for initiating said training movement.
  • said teacher interface preferably comprises a teacher interface sensor, wherein the teacher interface sensor is adapted for sensing an operation of the teacher interface and further adapted for providing an operating value representing the sensed operation, wherein a number of operating values is provided by the teacher interface sensor while imparting the training movement.
  • a first advantage of this measure is that said teacher interface sensor allows an easy determination of the invariants of a task or training movement.
  • the number of operating values can be examined to determine whether, with regard to one of the directions of motion, the actuation of the teacher interface is unchanged. If such a constellation is detected, it can be assumed that the corresponding direction of motion is surely not an essential direction of the working movement.
  • the number of operating values provided by the teacher interface sensor can be used for controlling the actuators, so that the robot arm conducts a training movement initiated by operating the teacher interface.
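  • A minimal sketch of such an examination, assuming the operating values are logged as a (samples × directions) array (names and the variance threshold are assumptions):

```python
import numpy as np

def invariant_directions(ops: np.ndarray, tol: float = 1e-2) -> np.ndarray:
    """Flag directions whose teacher interface actuation barely varies.

    ops: (T, d) array of operating values, e.g. one force per direction.
    True marks a direction that is surely not an essential direction of
    the working movement.
    """
    return ops.std(axis=0) < tol

# Toy trace: the teacher pushes in x while holding y constant.
ops = np.column_stack([np.sin(np.linspace(0, 3, 200)),  # varying x actuation
                       np.full(200, 0.5)])              # unchanged y actuation
print(invariant_directions(ops))  # -> [False  True]
```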
  • a great benefit in doing so is that the force sensor is not in the main mechanical and thus force transmitting chain of the robot arm itself.
  • otherwise, a compensation for dynamics, such as robot arm inertia, and a compensation for joint forces would be necessary.
  • This putative disadvantage can be avoided by using the teacher interface sensor for sensing the interaction between the end effector or the robot arm and the environment of the robot, especially an obstacle.
  • said teacher interface sensor is adapted for sensing a torque acting on the teacher interface and is further adapted for providing a teacher interface torque value representing the sensed torque.
  • said teacher interface sensor is adapted for sensing a force acting on the teacher interface and further adapted for providing a teacher interface force value representing the sensed force.
  • said teacher interface sensor is adapted for concurrently sensing both a torque and a force. Sensing a force or a torque is advantageous, because these quantities reliably represent an operation of the teacher interface.
  • At least one of said actuators is driven for generating small noisy perturbations while imparting the training movement.
  • These perturbations make sure that the interaction between the end effector and the environment is persistently excited.
  • all principal directions are excited independently, advantageously with sufficiently uncorrelated excitations, as when measuring a multi-dimensional frequency response function (FRF).
  • the actuators of the robot arm purposely give some excitation.
  • the reaction forces both of the environment, especially an existing obstacle, and of the person imparting the training movement give additional information about the motion constraints in the environment and about the invariants of the working movement.
  • all actuators are driven.
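  • A sketch of how such perturbations might be generated (amplitude and names are assumptions): independent white noise per joint excites all principal directions with sufficiently uncorrelated signals:

```python
import numpy as np

rng = np.random.default_rng(0)

def excitation_torques(n_joints: int, amplitude: float = 0.05) -> np.ndarray:
    """Small, mutually uncorrelated noise torques, one per actuator."""
    return amplitude * rng.standard_normal(n_joints)

# Added on top of the nominal joint torques in every control cycle
# while the training movement is imparted.
tau_perturb = excitation_torques(n_joints=7)
print(tau_perturb)
```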
  • the invention further relates to a teacher interface for manually imparting a training movement onto a robot arm, in particular onto an end effector of a robot arm, having a handle section, a fastening section for attaching the teacher interface to the robot arm, in particular to the end effector of a robot arm, and a teacher interface sensor for sensing at least one of a force and a torque, wherein the teacher interface sensor is arranged between the handle section and the fastening section.
  • the teacher interface is constructed in such a way, that the invariants of the training movement and hence of the corresponding working movement can be easily detected.
  • the teacher interface sensor is physically inserted between the handle section and the robot arm, so that all interaction forces between the handle section and the robot arm go via this sensor.
  • the teacher interface sensor is adapted as a so-called 6DOF (6 degrees of freedom) force and torque sensor.
  • Such a sensor is adapted for sensing torque and force with regard to three different axes, wherein these axes are orthogonal to each other.
  • the handle element comprises a tactile sensor.
  • components of the end effector for example individual fingers can be controlled or manipulated by squeezing the handle section.
  • the teacher interface is rigidly attached to the end effector. Thus, by arbitrarily moving the teacher interface, the robot arm will follow.
  • the invention further relates to a robot having a base, a first arm section connected to the base by a first joint, a second arm section connected to the first arm section by a second joint and an end effector connected to the second arm section by a third joint, wherein at least one of the joints contains at least one actuator and a position sensor and a torque sensor, each of the sensors assigned to the actuator, wherein the end effector has a fastening element to which a teacher interface can be attached, in particular a teacher interface according to the present invention.
  • the invention further relates to a robot system comprising a robot having a robot arm with a number of individual arm sections, an end effector connected to one of the arm sections and a number of actuators for moving at least one of the end effector and at least one of the arm sections and a control device according to the present invention.
  • the robot is adapted as described above.
  • the invention further relates to a robot system comprising a robot as described above and a control device according to the invention.
  • Fig. 1 shows an embodiment of a robot system in accordance with the invention
  • Fig. 2 schematically shows an embodiment of a control device according to the present invention
  • Fig. 3 shows a flowchart of an embodiment of the method according to the present invention.
  • Fig. 4 shows set point diagrams.
  • Fig. 1 shows a first embodiment of a robot system 10 comprising a robot 12 and a control device 14 for controlling the robot 12.
  • the robot 12 has a base 16 and a robot arm 18.
  • the robot arm 18 comprises a number of arm sections 20, a number of joints 22 and an end effector 24.
  • a first arm section 26 is connected to the base 16 by a first joint 28, a so- called shoulder joint.
  • a first actuator 30 is arranged in the first joint 28.
  • a first control signal 32 is generated in the control device 14.
  • the first arm section 26 has three rotational degrees of freedom.
  • the first actuator 30 is adapted for conducting the required three rotational movements.
  • a first position sensor 34 for sensing a position representing a movement conducted by the first actuator 30 is arranged in the first joint 28.
  • the first position sensor 34 is adapted for sensing an angular position of a moving element of the first actuator 30 or of the first arm section 26.
  • the first position sensor 34 can sense positions with regard to the three degrees of freedom of the first arm section 26.
  • a first position signal 36 representing a first number of position values provided by the first position sensor 34 is fed to the control device 14. Further a first torque sensor 38 for sensing a torque acting on the first arm section 26 is arranged in the first joint 28. A first torque signal 40 representing a first number of torque values provided by the first torque sensor 38 is fed to the control device 14.
  • a second arm section 42 is connected to the first arm section 26 by a second joint 44, a so-called elbow joint.
  • a second actuator 46 is arranged in the second joint 44.
  • a second control signal 48 is generated in the control device 14.
  • the second arm section 42 has two rotational degrees of freedom.
  • the second actuator 46 is adapted for conducting the required two rotational movements.
  • a second position sensor 50 for sensing a position representing a movement conducted by the second actuator 46 is arranged in the second joint 44.
  • the second position sensor 50 is adapted for sensing an angular position of a moving element of the second actuator 46 or of the second arm section 42.
  • the second position sensor 50 can sense positions with regard to the two degrees of freedom of the second arm section 42.
  • a second position signal 52 representing a second number of position values provided by the second position sensor 50 is fed to the control device 14. Further a second torque sensor 54 for sensing a torque acting on the second arm section 42 is arranged in the second joint 44. A second torque signal 56 representing a second number of torque values provided by the second torque sensor 54 is fed to the control device 14.
  • the end effector 24 is connected to the second arm section 42 by a third joint 58, a so-called wrist joint.
  • the end effector 24 consists of a hand element 60 and a plurality of fingers 62.
  • a third actuator 64 is arranged in the third joint 58.
  • a third control signal 66 is generated in the control device 14.
  • the end effector 24 has two rotational degrees of freedom, excluding the degrees of freedom arising from the plurality of fingers 62.
  • the third actuator 64 is adapted for conducting the required two rotational movements.
  • a third position sensor 68 for sensing a position representing a movement conducted by the third actuator 64 is arranged in the third joint 58.
  • the third position sensor 68 is adapted for sensing an angular position of a moving element of the third actuator 64 or of the end effector 24, more precisely of the hand element 60.
  • the third position sensor 68 can sense positions with regard to the two degrees of freedom of the end effector 24.
  • a third position signal 70 representing a third number of position values provided by the third position sensor 68 is fed to the control device 14.
  • a third torque sensor 72 for sensing a torque acting on the end effector 24 is arranged in the third joint 58.
  • a third torque signal 74 representing a third number of torque values provided by the third torque sensor 72 is fed to the control device 14.
  • instead of the actuators and sensors arranged in the single joints, an alternative embodiment is conceivable.
  • in this alternative, instead of a sole actuator, a plurality of individual actuators, each assigned to one of the degrees of freedom of the corresponding arm section or the end effector, are provided.
  • Electric motors can be used as actuators.
  • Each of the fingers, contained in the plurality of fingers 62 comprises a number of finger actuators and a number of finger sensors. For reasons of clarity these finger actuators and finger sensors are not illustrated in Fig. 1.
  • finger sensor signals 76 and finger control signals 78 are exchanged between the finger actuators or the finger sensors and the control device 14.
  • by the term end effector, not only the end effector as a whole but also an element of the end effector, for example a single finger or even a single finger element, can be meant.
  • An actuator for moving the end effector can also be an actuator for moving an element of the end effector.
  • the end effector 24 further comprises a fastening element 80 to which a teacher interface 82 can be attached.
  • the fastening element 80 is preferably located in the direct vicinity of the third joint 58. With the teacher interface 82 a training movement can manually be imparted onto the robot arm 18, in particular onto the end effector 24.
  • the teacher interface 82 comprises a handle section 84 and a fastening section 86 for attaching the teacher interface 82 to the end effector 24, more precisely to the fastening element 80.
  • the teacher interface 82 further comprises a teacher interface sensor 88 for sensing at least one of a force and a torque.
  • the teacher interface sensor 88 can be adapted as a 6DOF force and torque sensor.
  • the teacher interface sensor provides a teacher interface torque value representing the sensed torque and a teacher interface force value representing the sensed force.
  • the teacher interface sensor 88 is adapted for sensing an operation of the teacher interface 82.
  • the teacher interface sensor 88 is arranged between the handle section 84 and the fastening section 86.
  • the handle section 84 is rigidly connected to the teacher interface sensor 88 and the teacher interface sensor 88 is rigidly connected to the fastening section 86.
  • alternatively, these three components can be realized as one single piece.
  • An operating signal 90 representing a number of operating values is fed to the control device 14, wherein the number of operating values comprises a number of teacher interface torque values and a number of teacher interface force values.
  • the handle section 84 comprises a tactile sensor 92.
  • the tactile sensor 92 is adapted for sensing a squeezing of the handle section 84.
  • a corresponding squeezing signal 94 is fed to the control device 14.
  • the handle section 84 further comprises a force feedback unit 96, with which a pinch force, resulting at the plurality of fingers 62, is fed back to the handle section 84.
  • the force feedback unit 96 is controlled with a feedback signal 98.
  • the tactile sensor 92 and the force feedback unit 96 can be regarded as a haptic interface. With the fastening element 80 and the fastening section 86 the teacher interface 82 is mechanically attached to the end effector 24. In addition an electric connection is realized.
  • the robot arm 18 has in total 7 degrees of freedom.
  • the robot 12 could be extended with a second, preferably mirrored arm.
  • the robot 12 can be extended with additional sensors, such as cameras, attached to the base 16, the end effector 24 or anywhere else on the robot arm 18.
  • a working movement represented by an arrow 100 shall be taught.
  • an object 102 shall be moved from a first position 104 at which the object 102 is located on a first work bench 106 to a second position 108 at which the object 102 is located on a second work bench 110.
  • the arrow 100 does not show the concrete trace the object 102 passes through when the robot 12 executes the working movement.
  • the arrow 100 shall only indicate that the object 102 is moved from the first work bench 106 to the second work bench 110.
  • the working movement is taught by imparting a training movement onto the robot 12 or onto the robot arm 18, using the teacher interface 82.
  • the training movement corresponds to the working movement.
  • the training movement starts at the first position 104, grabbing the object 102, lifting it from the first work bench 106, moving it around an obstacle 112 located between the two work benches 106, 110 and ends at the second position 108, putting the object 102 down on the second work bench 110.
  • Putting the object 102 down shall comprise moving it over a short distance on the surface 114 of the second work bench 110.
  • a first motion constraint arises from moving the object 102 around the obstacle 112.
  • This first motion constraint is an invariant of the working movement and is specified by the person or teacher imparting the training movement onto the robot 12. Because of the obstacle 112 a minimum height is required for transferring the object 102 from the first work bench 106 to the second work bench 110. For this reason the teacher operates the teacher interface 82 in a corresponding manner for keeping this minimum height. From this pattern of operation the invariant of the working movement can be derived.
  • the first motion constraint can for example be seen from the fact that in the sequence of the training movement, at which the obstacle 112 is passed, no change in the height of the end effector 24 occurs and that at the start and at the end of this sequence no change in force and torque acting on the end effector 24 occurs.
  • a second constraint arises from moving the object 102 on the surface 114 of the second work bench 110. With regard to the working movement of the robot arm 18 the second work bench 110 has the character of an obstacle. From the moment the object 102 is in contact with the surface 114, a force is acting on the end effector 24, causing a torque. The direction of said force is perpendicular to the surface 114. Said torque can be sensed with the torque sensors 38, 54, 72.
  • the force acting on the end effector 24 can be derived from the sensed torques. While the object 102 is moved on the surface 114 no change in the height of the end effector 24 occurs. The combination of a torque acting on the end effector 24 without a change in the corresponding position of the end effector 24 is evidence of an obstacle with which the end effector 24 is in contact. Moving the object 102 on the surface 114 also causes an invariant of the working movement.
  • a motion constraint or physical constraint occurring during the training movement is detected by correlating force vector variations and/or torque vector variations with variations in position, velocity and acceleration vectors or variations in angle, angular velocity and angular acceleration vectors.
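  • One way such a correlation could look in code (thresholds and names are assumptions; the patent leaves the detector open): a sustained force along an axis whose velocity stays near zero is read as a motion constraint along that axis:

```python
import numpy as np

def constraint_flags(f: np.ndarray, v: np.ndarray,
                     f_tol: float = 1.0, v_tol: float = 1e-3) -> np.ndarray:
    """Per-sample, per-axis constraint flags from force/velocity traces.

    f, v: (T, d) interaction forces and end effector velocities.
    """
    return (np.abs(f) > f_tol) & (np.abs(v) < v_tol)

# Toy trace: from sample 100 on, z is blocked by a surface, so force
# builds up while the z velocity collapses; only axis 2 gets flagged.
T = 200
f = np.zeros((T, 3)); f[100:, 2] = 5.0
v = np.full((T, 3), 0.01); v[100:, 2] = 0.0
flags = constraint_flags(f, v)
print(flags[:100].any(axis=0), flags[100:].all(axis=0))
```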
  • a number of constraint values are determined, wherein the constraint values represent the first and the second motion constraint imposed on the end effector 24 while imparting the training movement onto the robot 12.
  • the number of constraint values is determined depending on a number of end effector position values, representing a number of positions the end effector 24 passes through while imparting the training movement, on a number of interaction values, representing at least one of a torque and a force acting on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement and on a number of operating values, representing an operation of the teacher interface 82.
  • a number of set points are determined, wherein the number of set points represents the working movement.
  • a working movement can also be or comprise a movement executed for example for machining the object 102.
  • Fig. 2 shows the control device 14 in a more detailed illustration.
  • the control device 14 contains a processor 120, connected to an input/output unit 122.
  • the input/output unit 122 receives input signals from a number of sensors and forwards these signals in an adapted data format to the processor 120. Further the input/output unit 122 generates output signals in dependency of the processor 120 for controlling a number of actuators.
  • the number of sensors comprise the position sensors 34, 50, 68, providing the position signals 36, 52, 70, the torque sensors 38, 54, 72, providing the torque signals 40, 56, 74, the teacher interface sensor 88 providing the operating signal 90, the tactile sensor 92, providing the squeezing signal 94 and finger sensors 124, arranged in the plurality of fingers 62, providing the finger sensor signals 76.
  • the finger sensors 124 are adapted for sensing for example positions, torques and/or forces.
  • the number of actuators comprises the actuators 30, 46, 64 arranged in the joints 28, 44, 58, each of them receiving its control signal 32, 48, 66, the force feedback unit 96, receiving the feedback signal 98 and finger actuators 126, arranged in the plurality of fingers 62, receiving the finger control signals 78.
  • the control device 14 comprises a control memory 128 in which a machine code is stored, representing a control program for controlling the robot 12.
  • the control device 14 has at least two different modes of operation, a working mode and a training mode.
  • the actuators 30, 46, 64, 126 are controlled depending on a number of set points representing a working movement.
  • a training movement is imparted onto at least one of the end effector 24 and at least one of the arm sections 26, 42, wherein the training movement corresponds to the working movement.
  • Imparting a training movement onto the end effector 24 shall include imparting it onto the plurality of fingers 62.
  • the robot 12 can be operated in at least two different modes of operation, encompassing a working mode and a training mode.
  • the control device 14 comprises an activation unit 130 for activating the training mode. Based on condition data 132, the activation unit 130 determines which mode of operation is requested. In case the training mode is requested, the activation unit 130 outputs a corresponding training mode flag 134.
  • the condition data 132 represent signals only existing in case the teacher interface 82 is attached to the end effector 24. Such a signal may indicate that the teacher interface 82 is electrically connected to the end effector 24.
  • the condition data 132 represent a teaching mode request signal, existing in case a teaching mode request button is operated.
  • While the training mode is activated, parts of the control program corresponding to the training mode are processed. Further, while imparting the training movement, teacher interface force values, teacher interface torque values, component torque values representing torques sensed with the torque sensors 38, 54, 72 and angular position values sensed with the position sensors 34, 50, 68 are recorded in a record memory 136. If necessary, squeezing values, represented by the squeezing signal 94, and finger values, represented by the finger sensor signals 76, may also be recorded in the record memory 136.
  • the control device 14 contains a constraint determination unit 138 for determining a number of constraint values 140 representing a motion constraint imposed on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement.
  • the number of constraint values 140 is determined depending on a number of end effector position values, representing a number of positions the end effector 24 passes through while imparting the training movement, and a number of interaction values representing at least one of a torque and a force acting on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement.
  • the number of end effector position values is determined depending on a number of angular position values 142, fed from the record memory 136 to the constraint determination unit 138.
  • the number of interaction values is determined depending on a number of component torque values 144, sensed with the sensors 38, 54, 72, a number of teacher interface torque values 146 and a number of teacher interface force values 148, all fed from the record memory 136 to the constraint determination unit 138. Additionally the dynamics of the robot arm 18 are taken into consideration. Preferably the number of interaction values is determined according to the following approach: it is assumed that the dynamics of the robot arm are known. This basically means that the system equation of the robot arm is known. This in turn means that it is known how the state, in particular the Markov state s of the robot arm evolves given the component torque values, the teacher interface force values, the teacher interface torque values and additionally end effector to environment interaction force values.
  • state(t+Δt) = f(state(t), component torque values, teacher interface force values, teacher interface torque values, end effector to environment interaction force values)
  • As the state of the robot arm can be measured, for example by using position sensors and by using derivatives for velocities, the current and the next state are known.
  • the component torque values, the teacher interface force values and the teacher interface torque values are all measured. Therefore the state equation can be solved for the end effector to environment interaction force values.
  • As the geometry of the robot arm and of its single components is known, end effector to environment interaction torque values can be determined based on the end effector to environment interaction force values.
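  • A one-degree-of-freedom sketch of this solving step (all parameters are illustrative; a real arm needs its full multi-body dynamics): with known dynamics, the environment force is the only unknown left in the state equation and can be isolated directly:

```python
def environment_force(q_dd, tau_joint, f_teacher,
                      m=2.0, g=9.81, r=0.5):
    """Solve assumed single-link dynamics
        m*r**2 * q_dd = tau_joint + r*(f_teacher + f_env) - m*g*r
    for the end effector to environment interaction force f_env,
    given measured acceleration, joint torque and teacher force."""
    return (m * r**2 * q_dd - tau_joint + m * g * r) / r - f_teacher

# Any motion not explained by joint torque and teacher force is
# attributed to contact with the environment.
print(environment_force(q_dd=1.0, tau_joint=10.0, f_teacher=0.0))
```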
  • the number of constraint values 140 is determined by correlating end effector position values with the corresponding interaction values. Especially, variations in the interaction values are compared with variations in the end effector position values.
  • the interaction values have the character of a flag, having two different conditions, depending on whether a motion constraint exists or not.
  • the constraint values can be determined simultaneously while imparting the training movement, directly using the relevant values provided by the corresponding sensors.
  • the motion constraint and therefore the number of constraint values is determined according to the following approach: a motion constraint is determined in dependency on the end effector position values, derivatives thereof and the interaction values, using a mathematical model. The parameters of this model are estimated using the end effector position values.
  • f = c + K·p + B·v + M·a, wherein
  • f is either a 6×1 vector with the generalized 6DOF end effector to environment force values or an n×1 vector with the component torque values,
  • c is a bias vector representing a force bias,
  • K is a stiffness matrix,
  • B is a damping matrix,
  • M is an inertia matrix,
  • p are the end effector position values,
  • v are end effector velocity values determined from the end effector position values,
  • a are end effector acceleration values determined from the end effector position values.
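  • The model parameters can then be estimated over the recorded traces; a minimal sketch assuming an ordinary least-squares estimator (the patent does not fix the estimator):

```python
import numpy as np

def fit_constraint_model(f, p, v, a):
    """Least-squares estimate of c, K, B, M in f = c + K*p + B*v + M*a.

    f: (T, m) force or torque samples; p, v, a: (T, d) position,
    velocity and acceleration samples. Returns (c, K, B, M).
    """
    T, d = p.shape
    X = np.hstack([np.ones((T, 1)), p, v, a])        # regressor matrix
    theta, *_ = np.linalg.lstsq(X, f, rcond=None)    # shape (1 + 3d, m)
    c = theta[0]
    K, B, M = theta[1:1+d].T, theta[1+d:1+2*d].T, theta[1+2*d:].T
    return c, K, B, M

# Toy 1-D check: synthesize f from c=1, K=100, B=2, M=0.5 and recover.
t = np.linspace(0, 1, 500)
p = np.sin(2 * np.pi * t)[:, None]
v = np.gradient(p[:, 0], t)[:, None]
a = np.gradient(v[:, 0], t)[:, None]
f = 1.0 + 100 * p + 2 * v + 0.5 * a
print(fit_constraint_model(f, p, v, a))  # ~ (1, 100, 2, 0.5)
```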
  • the control device 14 further comprises a set point determination unit 150 for determining a number of set points 152 depending on the number of constraint values 140. Concretely, depending on the number of constraint values 140 a control policy is selected from a plurality of different control policies.
  • the number of set points 152 is determined according to the selected control policy. In doing so, the working movement or the training movement is divided into a plurality of individual coherent segments. For each of these segments a control policy is selected. For a segment for which a position control is selected, end effector position values corresponding to this segment are selected from the number of end effector position values 154 determined in the constraint determination unit 138. For a segment for which a force control is selected, force set points are determined depending on interaction values corresponding to this segment, wherein these interaction values are selected from the number of interaction values 156 determined in the constraint determination unit 138. If the interaction values are forces, they can directly be used as force set points. The number of set points 152 is stored in a set point memory 158.
  • the total number of set points is generally constant in time and equal to the number of degrees of freedom to be controlled.
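  • A compact sketch of this segment-wise choice for a single axis (names and the contact flags are assumptions): free segments replay recorded positions, contact segments replay recorded forces:

```python
import numpy as np

def build_set_points(x, f, in_contact):
    """Per-sample policy tags and set points from recorded traces.

    x, f: (T,) recorded end effector position and interaction force
    for one axis; in_contact: (T,) boolean constraint flags.
    """
    policy = np.where(in_contact, "force", "position")
    set_point = np.where(in_contact, f, x)
    return policy, set_point

x = np.array([0.0, 0.1, 0.2, 0.2, 0.2, 0.3])
f = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 0.0])
policy, sp = build_set_points(x, f, in_contact=f > 1.0)
print(policy)  # position set points while free, force set points in contact
print(sp)
```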
  • In a comparison unit 160, the number of set points 152 is compared with a number of corresponding actual values 162.
  • Depending on this comparison, control values 164 for the individual actuators 30, 46, 64, 126 are determined.
  • the control values 164 are fed to the corresponding actuators by forwarding a corresponding control signal to the individual actuator.
  • the units 130, 138, 150, 160 are functional units within a main memory 166. This shall not have any restricting impact on the invention. Of course said units can also be realized as structural units.
  • Fig. 3 shows a flowchart of an embodiment of the method according to the present invention.
  • In a step 180, the teacher interface 82 is attached to the end effector 24. This step is optional; in case the teacher interface 82 is permanently attached to the end effector 24, step 180 is not needed.
  • the training mode is activated.
  • small noisy perturbations having the character of uncorrelated torques are generated by driving at least one of the actuators 30, 46, 64. These perturbations or disturbances give additional information about the correlation between forces acting on the end effector 24 on the one hand, and end effector positions, or velocities and accelerations derived from these end effector positions, on the other hand.
  • a training movement is imparted onto the robot arm 18 or onto the end effector 24.
  • the robot arm 18 is moved by moving the handle section 84.
  • the handle section 84 can be moved with a translation or a rotation; all 6 degrees of freedom are possible.
  • the plurality of fingers 62 is moved by squeezing the handle section 84.
  • a number of teacher interface force values 148, a number of teacher interface torque values 146, a number of component torque values 144 and a number of angular position values 142 are recorded.
  • a number of end effector position values 154 are determined. The number of end effector position values 154 represents all 6 degrees of freedom, thus positions and orientations.
  • a number of interaction values 156 are determined.
  • a number of constraint values 140 are determined.
  • a control policy is selected.
  • set points are determined.
  • the training mode is deactivated.
  • step 202 is also optional.
  • the order of steps 184, 186 as illustrated in Fig. 3 shall not have any restricting impact on the invention. Of course the order of these two steps can be changed.
  • Fig. 4 shows for a working movement a first diagram representing position set points x_s with regard to the x-axis (Fig. 4a) and a second diagram representing force set points F_xs with regard to the x-axis (Fig. 4b).
  • 4 segments are regarded: a first segment starting at the origin of coordinates and ending at point in time t0, a second segment defined by points in time t0 and t1, a third segment defined by points in time t1 and t2 and a fourth segment defined by points in time t2 and t3.
  • the further progress of the working movement after the point in time t3 is not of interest.
  • the curve shapes shown do not correspond to the situation illustrated in Fig. 1.
  • the position set points x_s rise from a first level up to a second level, whereas the force set points F_xs are predominantly close to zero.
  • no motion constraint is imposed on the end effector 24, wherefore a position control is selected as control policy.
  • the position set points x_s stay on the second level, whereas the force set points F_xs show a parabolic progress with a positive peak.
  • a motion constraint is imposed on the end effector 24, wherefore a force control is selected as control policy.
  • the position set points x_s decline from the second level down to a third level, whereas the force set points F_xs are predominantly close to zero.
  • the robot executes the working movement or acquired skill, by executing the position and force set points as indicated in the diagrams of Fig. 4.
  • the transitions between the individual segments are triggered by events, such as the encounter of a motion constraint that was detected while imparting the training movement.
  • the intention of the teacher is an invariant specified by him while imparting the training movement onto the robot.
  • the intention can be represented by a position set point or a force set point or a combination of these set points. Such a combination is employed in case a position with regard to a first direction is fixed and a force with regard to a second direction is fixed.
  • the intention of the teacher is distilled from position traces and force traces.
  • the control policy is chosen such that it coincides with the distilled intentions of the teacher.
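  • A speculative per-axis sketch of such a distillation (labels and thresholds are hypothetical; the patent leaves the exact rule open): an axis held at constant position under a sustained force reads as a force intention, a varying position as a free-motion intention:

```python
import numpy as np

def distill_intention(x, f, x_tol=1e-3, f_tol=0.5):
    """Per-axis teacher intention from (T, d) position/force traces."""
    pos_fixed = x.std(axis=0) < x_tol
    force_present = np.abs(f).mean(axis=0) > f_tol
    return np.where(pos_fixed & force_present, "force set point",
                    np.where(pos_fixed, "invariant", "position set point"))

# Toy: x varies freely, y is pressed against a wall, z is simply held.
T = 100
x = np.column_stack([np.linspace(0, 1, T), np.zeros(T), np.ones(T)])
f = np.column_stack([np.zeros(T), np.full(T, 3.0), np.zeros(T)])
print(distill_intention(x, f))
# -> ['position set point' 'force set point' 'invariant']
```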
  • a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface that is structurally separated from the robot.
  • This embodiment enables remote control of a robot or a robot arm. Therefore, a training movement is imparted or demonstrated using remote control. On the training movement demonstrated with remote control constraint identification is applied. Said constraint identification is based on analyzing position traces and corresponding force traces.
  • a feedback unit is provided, enabling remote control with haptic feedback.
  • a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface that is attached to an end effector of a robot arm.
  • Said teacher interface comprises a 6DOF force and torque sensor for measuring interaction forces and interaction torques between said teacher interface and the robot arm or end effector.
  • This embodiment is used for non-remote demonstration of a training movement. Based on said interaction forces and said interaction torques constraint identification is conducted.
  • a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface, whereby it is irrelevant whether said teacher interface is attached to the end effector or structurally separated from the robot.
  • at least one joint motor contained in the joints of a robot arm is driven for generating additional noisy signals to make all directions persistently excited, thereby improving constraint identification.
  • Said constraint identification can be adapted as described in the first or in the second favorable embodiment.
  • a computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.


Abstract

The present invention relates to a control device for controlling a robot (12) having a robot arm (18) with a number of individual arm sections (26, 42), an end effector (24) connected to one of the arm sections and a number of actuators (30, 46, 64, 126) for moving at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the control device (14) has at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot (12) at least one of the actuators (30, 46, 64, 126) is controlled depending on a number of set points (152) representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the training movement corresponds to the working movement, said control device comprising: an activation unit (130) for activating the training mode, a constraint determination unit (138) for determining a number of constraint values (140) representing a motion constraint imposed on at least one of the end effector (24) and at least one of the arm sections (26, 42) while imparting the training movement, and a set point determination unit (150) for determining the number of set points (152) depending on the number of constraint values (140). The present invention further relates to a corresponding method as well as to a computer program. In addition, the present invention relates to a robot, to a robot system and to a teacher interface.

Description

Control device and method for controlling a robot
FIELD OF THE INVENTION
The present invention relates to a control device for controlling a robot and to a corresponding method as well as to a computer program. The invention further relates to a robot, to a robot system and to a teacher interface.
BACKGROUND OF THE INVENTION
Nowadays robots are mainly applied in production lines, e.g. for assembling or packaging products. Due to decreasing life cycles of products there is a need for flexible production equipment. This is a requirement that current production robots can hardly fulfill. For that reason many products are still assembled or produced manually, because humans are able to flexibly adapt to changing tasks. However, production lines with a high ratio of manual labor are expensive due to the labor costs. Therefore, a need for flexible production robots exists. Such production robots should be usable for many different tasks and should be easily programmable for new tasks.
Moreover, the importance of robots for consumers will increase. In the coming years, it is foreseen that consumer robots will be developed that can make the lives of humans more enjoyable. Consumer robots can assist them with daily tasks. Such tasks have to be completed successfully in environments which are both unknown a priori and subject to change. Such environments can be households, offices or outdoors. Therefore a consumer robot has to be very flexible and easily programmable for new tasks, too.
A main focus for providing flexible robots is to simplify the process of programming a robot, for example to simplify the process of programming the robot's tasks. Pre-programming tasks by hand using a keyboard is not suitable, because this is very labor-intensive and/or very difficult. Furthermore, sometimes a task cannot be pre-programmed, because it is impossible to pre-program it in such a way that it is robust enough against varying circumstances. In contrast, teaching a task to a robot, in particular by imparting a training movement onto a robot, is very suitable for programming a robot.
The publication "Teleoperating space robots, Impact for the design of industrial robots" by Hirzinger et al. (Proceedings of the IEEE International Symposium on Industrial Electronics, pages SS250ff., vol. 1, 1997) discloses several approaches for programming a robot. With one approach, training movements are imparted onto the robot, e.g. by guiding the robot with a robot-mounted sensor. Programming a robot by imparting a training movement onto it contributes to creating a flexible robot. However, with regard to the demand for easy programming of a robot, this approach is still not optimal.
SUMMARY OF THE INVENTION
It is an object of the present invention to provide a control device for controlling a robot and a corresponding method as well as a computer program by which a flexible robot can be programmed in an easier way, thus enabling a quick and hence cost- saving programming of a robot. Said object also applies to a robot, to a robot system comprising said robot and said control device and to a teacher interface attachable to a robot arm of said robot.
In a first aspect of the present invention a control device for controlling a robot having a robot arm with a number of individual arm sections, an end effector connected to one of the arm sections and a number of actuators for moving at least one of the end effector and at least one of the arm sections, wherein the control device has at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot at least one of the actuators is controlled depending on a number of set points representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector and at least one of the arm sections, wherein the training movement corresponds to the working movement, is presented comprising: an activation unit for activating the training mode, a constraint determination unit for determining a number of constraint values representing a motion constraint imposed on at least one of the end effector and at least one of the arm sections while imparting the training movement, and a set point determination unit for determining the number of set points depending on the number of constraint values.
In a further aspect of the present invention a method for controlling a robot having a robot arm with a number of individual arm sections, an end effector connected to one of the arm sections and a number of actuators for moving at least one of the end effector and at least one of the arm sections, wherein the robot can be operated in at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot at least one of the actuators is controlled depending on a number of set points representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector and at least one of the arm sections, wherein the training movement corresponds to the working movement, is presented comprising the steps of: activating the training mode, imparting the training movement, determining a number of constraint values representing a motion constraint imposed on at least one of the end effector and at least one of the arm sections while imparting the training movement, determining a number of set points depending on the number of constraint values.
In a still further aspect of the present invention a corresponding computer program is presented comprising program code means for causing a computer to carry out the steps of said method when said computer program is carried out on a computer.
Preferred embodiments of the invention are defined in the dependent claims. It shall be understood that the claimed method and computer program have similar and/or identical preferred embodiments as the claimed control device and as defined in the dependent claims.
The present invention is based on the idea of simultaneously determining constraint values while imparting a training movement onto a robot, wherein said training movement corresponds to a working movement. Hence information about an existing motion constraint is directly obtained on the basis of the training movement. No additional precautions are needed, such as conducting additional movements besides said training movement for defining an obstacle existing in the environment of the robot, defining an obstacle with the help of a 3D-mouse or entering position coordinates of an obstacle with the help of a keyboard. The number of set points representing the working movement is determined depending on the number of constraint values. Thus, with the present invention a robot can be completely programmed in a very easy, efficient, quick and cost-saving way, by imparting a training movement onto the robot. In addition, programming a robot using the present invention is not very labor-intensive. The present invention is particularly suitable for programming a robot by demonstration (PbD). Using the present invention a working movement can be programmed in a robust way. The term motion constraint shall encompass not only a physical obstacle existing in the environment of a robot, but also an invariant of a working movement, wherein said invariant is specified by a person or teacher imparting a training movement onto the robot, wherein the training movement corresponds to the working movement. Ideally the training movement is identical to the working movement. A working movement can be a complex movement, consisting of a plurality of so-called basic or atomic movements. A working movement can be such a basic movement as well. A training movement and a working movement can also be understood as a task. Programming a robot is realized by teaching a working movement. Teaching the working movement is realized by imparting a training movement. The working mode is also referred to as replay mode.
According to a preferred embodiment said set point determination unit is adapted for selecting a control policy from a plurality of different control policies, wherein selecting the control policy is carried out depending on the number of constraint values, wherein the number of set points is determined according to the selected control policy. With this measure an optimal adaptation of the robot control is possible. A first aspect of adaptation could be to select a first control policy as long as no constraint is imposed on the end effector or on one of the arm sections, and to select a second control policy as soon as a constraint is imposed. A second aspect of adaptation could be, in case a constraint is imposed, to select a control policy for reacting to changing constraint conditions, e.g. a change in the surface of an existing obstacle. Therefore a third control policy could be selected for a first surface condition and a fourth control policy could be selected for a second surface condition. All four control policies shall differ from each other. With this approach a control policy is selected based on information obtained from a human demonstration of a training movement. The selected control policy is based on a reference variable for which the number of set points is determined. For controlling the robot at least one of the actuators is controlled depending on the number of set points. Therefore the robot is controlled according to the selected control policy.
Preferably, the plurality of different control policies comprise a first control policy with a first reference variable corresponding to a first physical quantity and a second control policy with a second reference variable corresponding to a second physical quantity. This measure allows the use of optimally adapted and therefore completely different control policies. For example in case of an existing obstacle a control policy based on a reference variable can be used that would not be used in case no obstacle exists. As a further embodiment, it is conceivable that the plurality of different control policies comprise two different control policies, both based on the same physical quantity but applying different formulas for calculating the set points, so that first set points determined according to the first control policy and second set points determined according to the second control policy differ from each other.
Further, it is proposed that the first physical quantity is a position and the second physical quantity is a force. Thus, depending on the concrete constraint situation the robot can either be controlled with a position control or with a force control. If necessary, the robot can even be controlled with a control that is a combination of both control policies.
In particular it is advantageous if the first control policy is selected in case the number of constraint values indicates that no motion constraint exists and the second control policy is selected in case the number of constraint values indicates that a motion constraint exists. Thus the robot is controlled with a position control in case no motion constraint exists and with a force control in case a motion constraint exists. In case a motion constraint exists, using force set points has the following advantage: on the assumption that a flat surface exists with which the end effector is in contact for a segment of the training movement, a motion constraint perpendicular to the flat surface is imposed on the end effector. While executing the working movement, in other words during replay of the demonstrated task, in said segment the robot should be controlled with a control that is based on a force control policy. This ensures that the surface will not be damaged by the end effector. For the other segments of the working movement, for which the end effector is not in contact with the surface, a position control policy can be used. The approach described above is accordingly applicable to a scenario of grabbing an object.
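The segment-wise switch between the two policies can be pictured with a short sketch. The following Python fragment is a minimal illustration only; the enum names and the boolean per-sample constraint flag are assumptions for illustration, not taken from the disclosure.

```python
# Minimal illustration (assumed names): mapping per-sample constraint
# values onto the first (position) or second (force) control policy.
from enum import Enum

class ControlPolicy(Enum):
    POSITION = 1   # first control policy: position as reference variable
    FORCE = 2      # second control policy: force as reference variable

def select_policy(constraint_exists: bool) -> ControlPolicy:
    """Force control while a motion constraint is detected, else position control."""
    return ControlPolicy.FORCE if constraint_exists else ControlPolicy.POSITION

# Example: a constraint flag per sample of the demonstrated movement
constraint_trace = [False, False, True, True, False]
policies = [select_policy(flag) for flag in constraint_trace]
```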
Preferably, said training movement is initiated by manually operating a teacher interface attached to the end effector. This measure has several advantages. Using a teacher interface, a person programming a robot can directly and therefore very precisely impart a training movement onto the robot, in contrast to programming a robot, for example, by interacting with a virtual environment. A second advantage results from the position at which the teacher interface is attached to the robot arm. As the teacher interface is attached to the end effector, the end effector can be maneuvered very precisely to desired positions.
There are several conceivable embodiments of how a training movement can be initiated. In a first embodiment the training movement is initiated by a person moving the teacher interface and thereby moving the robot arm by muscle power. In a second embodiment the training movement is initiated using a teacher interface that is adapted as a haptic interface, wherein said haptic interface is for example equipped with a force feedback unit. Using a haptic interface, the arm is moved by controlling at least one of the robot arm actuators according to the movement of the haptic interface. In both cases the training movement is initiated by a person interacting with the robot arm. In case of the haptic interface, however, the interaction between the person and the robot arm is an indirect one. For example the person interacts with the haptic interface ("haptic master") and the haptic interface interacts with the robot arm ("haptic slave") via a controller.
The teacher interface can be attached to the end effector either permanently or removably. In any case, if the teacher interface is sufficiently small and lightweight, it can stay attached to the robot arm during the working mode and hence while executing the working movement.
According to another embodiment, the training movement is initiated by manually operating a teacher interface that is structurally separated from the robot. This measure enables a remote control. Advantageously a feedback unit is provided, enabling remote control of the robot or the robot arm with haptic feedback.
According to another embodiment, said constraint determination unit is adapted for determining a number of end effector position values representing a number of positions the end effector passes through while imparting the training movement and wherein said constraint determination unit is further adapted for determining the number of constraint values depending on the number of end effector position values. Thus, one essential piece of information for locating a possibly existing obstacle and therefore a motion constraint is available. The number of positions the end effector passes through forms a trace representing the training movement. Therefore, the number of positions the end effector passes through can be regarded as a position trace representing one single training movement.
According to a further embodiment of the previous measure, said constraint determination unit is further adapted for determining the number of end effector position values depending on a number of position values representing a movement conducted by the actuator. For this purpose, preferably a position sensor is assigned to at least one of said actuators, wherein the position sensor is adapted for sensing a position representing a movement conducted by the actuator and further adapted for providing a position value representing the sensed position, wherein a number of position values is provided by the position sensor while imparting the training movement. With this measure the number of end effector position values can be determined easily and cost-effectively, since sensors are used with which the robot is equipped anyway with regard to controlling the robot. That a position sensor is assigned to an actuator shall be understood to mean that the position sensor is spatially assigned to the actuator. Said sensor need not necessarily directly sense a position at the actuator, for example the position of a movable actuator component. Rather, it shall be possible to sense the position of the end effector or arm section moved by the actuator. Advantageously, at least one of said actuators is adapted for conducting a rotational movement, wherein said position sensor is further adapted for sensing an angular position. Therefore the position values are available as angular coordinates. In case the end effector position values shall be available in Cartesian coordinates, a transformation is necessary.
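The transformation from sensed joint angles to Cartesian end effector positions is the usual forward kinematics computation. Below is a minimal sketch for an assumed planar two-link arm; the link lengths are illustrative placeholders, and the 7-DOF arm of Fig. 1 would use its full kinematic chain instead.

```python
# Minimal sketch of the angular-to-Cartesian transformation for an
# assumed planar two-link arm (link lengths l1, l2 are placeholders).
import numpy as np

def end_effector_position(theta1: float, theta2: float,
                          l1: float = 0.4, l2: float = 0.3) -> np.ndarray:
    """Forward kinematics: joint angles (rad) -> end effector x, y."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return np.array([x, y])

# A trace of sensed angular positions yields an end effector position trace:
angles = [(0.10, 0.20), (0.15, 0.25), (0.20, 0.30)]
position_trace = np.array([end_effector_position(t1, t2) for t1, t2 in angles])
```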
Preferably said constraint determination unit is further adapted for determining a number of interaction values representing at least one of a torque and a force acting on at least one of the end effector and at least one of the arm sections while imparting the training movement, and wherein said constraint determination unit is further adapted for determining the number of constraint values depending on the number of interaction values. Thus, further basic information for locating a possibly existing obstacle and therefore a motion constraint is available. As mentioned above, for the single training movement a position trace exists. Therefore, by combining this position trace with the number of interaction values an interaction trace can be generated. In case the interaction values represent a torque it is a torque trace. In case the interaction values represent a force it is a force trace.
According to another embodiment, said constraint determination unit is further adapted for determining the number of interaction values depending on a number of component torque values representing a torque acting on a component moved by the actuator, wherein the component is the end effector or one of the arm sections. For this purpose, preferably a torque sensor is assigned to at least one of said actuators, wherein the torque sensor is adapted for sensing a torque acting on a component moved by the actuator and further adapted for providing a component torque value representing the sensed torque, wherein the component is the end effector or one of the arm sections, wherein a number of component torque values is provided by the torque sensor while imparting the training movement. This measure has the advantage of easily and reliably determining an impact on a robot caused by an obstacle. That a torque sensor is assigned to an actuator shall be understood in the same way as described with regard to the position sensor.
Preferably, said constraint determination unit is further adapted for determining the number of interaction values depending on a number of operating values representing an operation of a teacher interface, wherein the teacher interface is attached to the end effector and manually operated for initiating said training movement. For this purpose, said teacher interface preferably comprises a teacher interface sensor, wherein the teacher interface sensor is adapted for sensing an operation of the teacher interface and further adapted for providing an operating value representing the sensed operation, wherein a number of operating values is provided by the teacher interface sensor while imparting the training movement. A first advantage of this measure is that said teacher interface sensor allows an easy determination of the invariants of a task or training movement. For example, the number of operating values can be examined to determine whether the actuation of the teacher interface remains unchanged with regard to one of the directions of motion. If such a constellation is detected, it can be assumed that the corresponding direction of motion will surely not be an essential direction of the working movement.
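Such an invariance check can be pictured as follows. This is a hedged sketch: the data layout and the tolerance value are assumptions, not prescriptions from the disclosure.

```python
# Hedged sketch (assumed data layout and tolerance): a direction along
# which the teacher's operating values barely vary over the whole
# demonstration is flagged as inessential for the working movement.
import numpy as np

def inessential_directions(operating_values: np.ndarray, tol: float = 0.2) -> np.ndarray:
    """operating_values: array of shape (T, 3), e.g. teacher interface
    force values per sample. Returns a per-axis boolean flag, True where
    the peak-to-peak variation stays below the tolerance."""
    return np.ptp(operating_values, axis=0) < tol
```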
A second advantage exists with regard to the embodiment according to which a training movement is initiated by moving the robot arm by muscle power. The number of operating values provided by the teacher interface sensor can be used for controlling the actuators so that the robot arm conducts a training movement initiated by operating the teacher interface. A great benefit in doing so is that the force sensor is not in the main mechanical and thus force-transmitting chain of the robot arm itself. However, as the interaction between the teacher interface and the robot arm is sensed, determining the relevant end effector forces requires a compensation for dynamics, such as robot arm inertia, and a compensation for joint forces. This putative disadvantage can be avoided by using the teacher interface sensor for sensing the interaction between the end effector or the robot arm and the environment of the robot, especially an obstacle.
Preferably, said teacher interface sensor is adapted for sensing a torque acting on the teacher interface and is further adapted for providing a teacher interface torque value representing the sensed torque. In an alternative embodiment said teacher interface sensor is adapted for sensing a force acting on the teacher interface and further adapted for providing a teacher interface force value representing the sensed force. In a further alternative embodiment said teacher interface sensor is adapted for concurrently sensing both a torque and a force. Sensing a force or a torque is advantageous, because these quantities represent reliably an operation of the teacher interface.
In a preferred embodiment, at least one of said actuators is driven for generating small noisy perturbations while imparting the training movement. These perturbations ensure that the interaction between the end effector and the environment is persistently excited. Preferably, all principal directions are excited independently, advantageously with sufficiently uncorrelated excitations, comparable to measuring a multi-dimensional frequency response function (FRF). Thus, while imparting the training movement, the actuators of the robot arm purposely provide some excitation. For example the reaction forces both of the environment, especially an existing obstacle, and of the person imparting the training movement give additional information about the motion constraints in the environment and about the invariants of the working movement. Preferably, all actuators are driven.
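One simple way to obtain mutually uncorrelated excitations is independent zero-mean noise per actuator, as in the following sketch; the amplitude and the seed are illustrative assumptions.

```python
# Illustrative sketch: independent zero-mean noise torques per actuator,
# added on top of the nominal commands at every control tick so that all
# principal directions stay persistently excited.
import numpy as np

rng = np.random.default_rng(seed=0)

def perturbation_torques(n_actuators: int = 7, amplitude: float = 0.05) -> np.ndarray:
    """One sample of small, mutually uncorrelated perturbation torques (Nm)."""
    return amplitude * rng.standard_normal(n_actuators)
```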
The invention further relates to a teacher interface for manually imparting a training movement onto a robot arm, in particular onto an end effector of a robot arm, having a handle section, a fastening section for attaching the teacher interface to the robot arm, in particular to the end effector of a robot arm, and a teacher interface sensor for sensing at least one of a force and a torque, wherein the teacher interface sensor is arranged between the handle section and the fastening section.
Teaching a working movement by imparting a training movement with a teacher interface is very efficient. The teacher interface is constructed in such a way that the invariants of the training movement and hence of the corresponding working movement can be easily detected. As the teacher interface sensor is physically inserted between the handle section and the robot arm, all interaction forces between the handle section and the robot arm go via this sensor. Preferably, the teacher interface sensor is adapted as a so-called 6DOF (6 degrees of freedom) force and torque sensor. Such a sensor is adapted for sensing torque and force with regard to three different axes, wherein these axes are orthogonal to each other. In a preferred embodiment, the handle section comprises a tactile sensor. Using such a tactile sensor, components of the end effector, for example individual fingers, can be controlled or manipulated by squeezing the handle section. Preferably, the teacher interface is rigidly attached to the end effector. Thus, by arbitrarily moving the teacher interface, the robot arm will follow.
The invention further relates to a robot having a base, a first arm section connected to the base by a first joint, a second arm section connected to the first arm section by a second joint and an end effector connected to the second arm section by a third joint, wherein at least one of the joints contains at least one actuator and a position sensor and a torque sensor, each of the sensors assigned to the actuator, wherein the end effector has a fastening element to which a teacher interface can be attached, in particular a teacher interface according to the present invention. The invention further relates to a robot system comprising a robot having a robot arm with a number of individual arm sections, an end effector connected to one of the arm sections and a number of actuators for moving at least one of the end effector and at least one of the arm sections and a control device according to the present invention. Preferably the robot is adapted as described above.
The invention further relates to a robot system comprising a robot as described above and a control device according to the invention.
BRIEF DESCRIPTION OF THE DRAWINGS
These and other aspects of the invention will be apparent from and elucidated with reference to the embodiment(s) described hereinafter. In the following drawings
Fig. 1 shows an embodiment of a robot system in accordance with the invention,
Fig. 2 schematically shows an embodiment of a control device according to the present invention,
Fig. 3 shows a flowchart of an embodiment of the method according to the present invention, and
Fig. 4 shows set point diagrams.
DETAILED DESCRIPTION OF THE INVENTION
Fig. 1 shows a first embodiment of a robot system 10 comprising a robot 12 and a control device 14 for controlling the robot 12. The robot 12 has a base 16 and a robot arm 18. The robot arm 18 comprises a number of arm sections 20, a number of joints 22 and an end effector 24.
A first arm section 26 is connected to the base 16 by a first joint 28, a so-called shoulder joint. In the first joint 28 a first actuator 30 is arranged. For controlling the first actuator 30 a first control signal 32 is generated in the control device 14. The first arm section 26 has three rotational degrees of freedom. Hence the first actuator 30 is adapted for conducting the required three rotational movements. A first position sensor 34 for sensing a position representing a movement conducted by the first actuator 30 is arranged in the first joint 28. The first position sensor 34 is adapted for sensing an angular position of a moving element of the first actuator 30 or of the first arm section 26. The first position sensor 34 can sense positions with regard to the three degrees of freedom of the first arm section 26. A first position signal 36 representing a first number of position values provided by the first position sensor 34 is fed to the control device 14. Further a first torque sensor 38 for sensing a torque acting on the first arm section 26 is arranged in the first joint 28. A first torque signal 40 representing a first number of torque values provided by the first torque sensor 38 is fed to the control device 14.
A second arm section 42 is connected to the first arm section 26 by a second joint 44, a so-called elbow joint. In the second joint 44 a second actuator 46 is arranged. For controlling the second actuator 46 a second control signal 48 is generated in the control device 14. The second arm section 42 has two rotational degrees of freedom. Hence the second actuator 46 is adapted for conducting the required two rotational movements. A second position sensor 50 for sensing a position representing a movement conducted by the second actuator 46 is arranged in the second joint 44. The second position sensor 50 is adapted for sensing an angular position of a moving element of the second actuator 46 or of the second arm section 42. The second position sensor 50 can sense positions with regard to the two degrees of freedom of the second arm section 42. A second position signal 52 representing a second number of position values provided by the second position sensor 50 is fed to the control device 14. Further a second torque sensor 54 for sensing a torque acting on the second arm section 42 is arranged in the second joint 44. A second torque signal 56 representing a second number of torque values provided by the second torque sensor 54 is fed to the control device 14.
The end effector 24 is connected to the second arm section 42 by a third joint 58, a so-called wrist joint. The end effector 24 consists of a hand element 60 and a plurality of fingers 62. In the third joint 58 a third actuator 64 is arranged. For controlling the third actuator 64 a third control signal 66 is generated in the control device 14. The end effector 24 has two rotational degrees of freedom, excluding the degrees of freedom arising from the plurality of fingers 62. Hence the third actuator 64 is adapted for conducting the required two rotational movements. A third position sensor 68 for sensing a position representing a movement conducted by the third actuator 64 is arranged in the third joint 58. The third position sensor 68 is adapted for sensing an angular position of a moving element of the third actuator 64 or of the end effector 24, more precisely of the hand element 60. The third position sensor 68 can sense positions with regard to the two degrees of freedom of the end effector 24. A third position signal 70 representing a third number of position values provided by the third position sensor 68 is fed to the control device 14. Further a third torque sensor 72 for sensing a torque acting on the end effector 24 is arranged in the third joint 58. A third torque signal 74 representing a third number of torque values provided by the third torque sensor 72 is fed to the control device 14.
With regard to the actuators and sensors arranged in the single joints an alternative embodiment is conceivable. Instead of a sole actuator, a plurality of individual actuators, each assigned to one of the degrees of freedom of the corresponding arm section or of the end effector, can be provided. The same applies to the position sensors and to the torque sensors. Electric motors can be used as actuators.
Each of the fingers contained in the plurality of fingers 62 comprises a number of finger actuators and a number of finger sensors. For reasons of clarity these finger actuators and finger sensors are not illustrated in Fig. 1. For controlling the plurality of fingers 62, finger sensor signals 76 and finger control signals 78 are exchanged between the finger actuators or the finger sensors and the control device 14. In this context the following shall be pointed out: where the term end effector is used, not only the end effector as a whole but also an element of the end effector, for example a single finger or even a single finger element, can be meant. The same shall apply with regard to the term actuator. An actuator for moving the end effector can also be an actuator for moving an element of the end effector.
The end effector 24 further comprises a fastening element 80 to which a teacher interface 82 can be attached. The fastening element 80 is preferably located in the direct vicinity of the third joint 58. With the teacher interface 82 a training movement can manually be imparted onto the robot arm 18, in particular onto the end effector 24. The teacher interface 82 comprises a handle section 84 and a fastening section 86 for attaching the teacher interface 82 to the end effector 24, more precisely to the fastening element 80. The teacher interface 82 further comprises a teacher interface sensor 88 for sensing at least one of a force and a torque. The teacher interface sensor 88 can be adapted as a 6DOF force and torque sensor. The teacher interface sensor provides a teacher interface torque value representing the sensed torque and a teacher interface force value representing the sensed force. Stated generally, the teacher interface sensor 88 is adapted for sensing an operation of the teacher interface 82. The teacher interface sensor 88 is arranged between the handle section 84 and the fastening section 86. The handle section 84 is rigidly connected to the teacher interface sensor 88 and the teacher interface sensor 88 is rigidly connected to the fastening section 86. Preferably, these three components are realized as one sole piece. An operating signal 90 representing a number of operating values is fed to the control device 14, wherein the number of operating values comprises a number of teacher interface torque values and a number of teacher interface force values. The handle section 84 comprises a tactile sensor 92. The tactile sensor 92 is adapted for sensing a squeezing of the handle section 84. A corresponding squeezing signal 94 is fed to the control device 14. By squeezing the handle section 84 a single finger or even an element of such a finger can be controlled or manipulated. The handle section 84 further comprises a force feedback unit 96, with which a pinch force resulting at the plurality of fingers 62 is fed back to the handle section 84. The force feedback unit 96 is controlled with a feedback signal 98. The tactile sensor 92 and the force feedback unit 96 can be regarded as a haptic interface. With the fastening element 80 and the fastening section 86 the teacher interface 82 is mechanically attached to the end effector 24. In addition an electrical connection is realized.
Excluding the degrees of freedom arising from the plurality of fingers 62, the robot arm 18 has in total 7 degrees of freedom. The robot 12 could be extended with a second, preferably mirrored arm. Furthermore, the robot 12 can be extended with additional sensors, such as cameras, attached to the base 16, the end effector 24 or anywhere else on the robot arm 18.
In the following it is described how the robot 12 is programmed using the control device 14. It is assumed that a working movement, represented by an arrow 100, shall be taught. With the working movement an object 102 shall be moved from a first position 104, at which the object 102 is located on a first work bench 106, to a second position 108, at which the object 102 is located on a second work bench 110. The arrow 100 does not show the concrete trace the object 102 passes through when the robot 12 executes the working movement. In fact the arrow 100 shall only indicate that the object 102 is moved from the first work bench 106 to the second work bench 110. The working movement is taught by imparting a training movement onto the robot 12 or onto the robot arm 18, using the teacher interface 82. The training movement corresponds to the working movement. Therefore the following description of the training movement also applies to the working movement. The training movement starts at the first position 104, grabbing the object 102, lifting it from the first work bench 106, moving it around an obstacle 112 located between the two work benches 106, 110, and ends at the second position 108, putting the object 102 down on the second work bench 110. Putting the object 102 down shall comprise moving it over a short distance on the surface 114 of the second work bench 110.
While imparting the training movement two motion constraints are imposed on at least the end effector 24 or at least one of the arm sections 26, 42. A first motion constraint arises from moving the object 102 around the obstacle 112. This first motion constraint is an invariant of the working movement and is specified by the person or teacher imparting the training movement onto the robot 12. Because of the obstacle 112 a minimum height is required for transferring the object 102 from the first work bench 106 to the second work bench 110. For this reason the teacher operates the teacher interface 82 in a corresponding manner for keeping this minimum height. From this pattern of operation the invariant of the working movement can be derived. The first motion constraint can for example be seen from the fact that in the sequence of the training movement in which the obstacle 112 is passed, no change in the height of the end effector 24 occurs and that at the start and at the end of this sequence no change in force and torque acting on the end effector 24 occurs. A second constraint arises from moving the object 102 on the surface 114 of the second work bench 110. With regard to the working movement of the robot arm 18 the second work bench 110 has the character of an obstacle. From the moment the object 102 is in contact with the surface 114, a force acts on the end effector 24 causing a torque. The direction of said force is perpendicular to the surface 114. Said torque can be sensed with the torque sensors 38, 54, 72. The force acting on the end effector 24 can be derived from the sensed torques. While the object 102 is moved on the surface 114 no change in the height of the end effector 24 occurs. The combination of a torque acting on the end effector 24 without a change in the corresponding position of the end effector 24 is evidence of an obstacle with which the end effector 24 is in contact. Moving the object 102 on the surface 114 also causes an invariant of the working movement.
Generalized, according to the present invention, motion constraints or physical constraints occurring during the training movement are detected by correlating force vector variations and/or torque vector variations with variations in position, velocity and acceleration vectors or variations in angle, angular velocity and angular acceleration vectors.
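A minimal sketch of such a correlation test follows, assuming Cartesian position and force traces; the threshold values are illustrative assumptions, as the disclosure does not prescribe concrete numbers.

```python
# Illustrative sketch: along a direction where the interaction force
# varies while the corresponding position barely changes, a motion
# constraint is assumed. Thresholds pos_eps and force_eps are assumptions.
import numpy as np

def constraint_flags(positions: np.ndarray, forces: np.ndarray,
                     pos_eps: float = 1e-3, force_eps: float = 0.5) -> np.ndarray:
    """positions, forces: arrays of shape (T, 3). Returns a per-axis
    boolean flag: True where force varies but position does not."""
    dp = np.abs(np.diff(positions, axis=0))   # position variations
    df = np.abs(np.diff(forces, axis=0))      # force variations
    return np.any((df > force_eps) & (dp < pos_eps), axis=0)
```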
According to the present invention a number of constraint values are determined, wherein the constraint values represent the first and the second motion constraint imposed on the end effector 24 while imparting the training movement onto the robot 12. The number of constraint values is determined depending on a number of end effector position values, representing a number of positions the end effector 24 passes through while imparting the training movement, on a number of interaction values, representing at least one of a torque and a force acting on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement and on a number of operating values, representing an operation of the teacher interface 82. Depending on the number of constraint values a number of set points are determined, wherein the number of set points represents the working movement.
The working movement and the training movement described above shall not have any restricting impact on the invention. A working movement can also be or comprise a movement executed for example for machining the object 102.
Fig. 2 shows the control device 14 in a more detailed illustration. The control device 14 contains a processor 120, connected to an input/output unit 122. The input/output unit 122 receives input signals from a number of sensors and forwards these signals in an adapted data format to the processor 120. Further the input/output unit 122 generates output signals in dependence on the processor 120 for controlling a number of actuators. The number of sensors comprises the position sensors 34, 50, 68, providing the position signals 36, 52, 70, the torque sensors 38, 54, 72, providing the torque signals 40, 56, 74, the teacher interface sensor 88, providing the operating signal 90, the tactile sensor 92, providing the squeezing signal 94, and finger sensors 124, arranged in the plurality of fingers 62, providing the finger sensor signals 76. The finger sensors 124 are adapted for sensing for example positions, torques and/or forces. The number of actuators comprises the actuators 30, 46, 64 arranged in the joints 28, 44, 58, each of them receiving its control signal 32, 48, 66, the force feedback unit 96, receiving the feedback signal 98, and finger actuators 126, arranged in the plurality of fingers 62, receiving the finger control signals 78.
The control device 14 comprises a control memory 128 in which machine code is stored, representing a control program for controlling the robot 12. According to the control program the control device 14 has at least two different modes of operation, a working mode and a training mode. In the working mode the actuators 30, 46, 64, 126 are controlled depending on a number of set points representing a working movement. In the training mode a training movement is imparted onto at least one of the end effector 24 and at least one of the arm sections 26, 42, wherein the training movement corresponds to the working movement. Imparting a training movement onto the end effector 24 shall include imparting it onto the plurality of fingers 62. According to the modes of operation of the control device 14, the robot 12 can be operated in at least two different modes of operation, encompassing a working mode and a training mode.
The control device 14 comprises an activation unit 130 for activating the training mode. Based on condition data 132, the activation unit 130 determines which mode of operation is requested. In case the training mode is requested, the activation unit 130 outputs a corresponding training mode flag 134. In a first embodiment, the condition data 132 represent signals only existing in case the teacher interface 82 is attached to the end effector 24. Such a signal may indicate that the teacher interface 82 is electrically connected to the end effector 24. In a second embodiment, the condition data 132 represent a teaching mode request signal, existing in case a teaching mode request button is operated.
In case the training mode is activated, the parts of the control program corresponding to the training mode are processed. Further, while imparting the training movement, teacher interface force values, teacher interface torque values, component torque values representing torques sensed with the torque sensors 38, 54, 72 and angular position values sensed with the position sensors 34, 50, 68 are recorded in a record memory 136. If necessary, squeezing values, represented by the squeezing signal 94, and finger values, represented by the finger sensor signals 76, may also be recorded in the record memory 136.
The control device 14 contains a constraint determination unit 138 for determining a number of constraint values 140 representing a motion constraint imposed on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement. The number of constraint values 140 is determined depending on a number of end effector position values, representing a number of positions the end effector 24 passes through while imparting the training movement, and a number of interaction values representing at least one of a torque and a force acting on at least one of the end effector 24 and at least one of the arm sections 26, 42 while imparting the training movement. The number of end effector position values is determined depending on a number of angular position values 142, fed from the record memory 136 to the constraint determination unit 138. The number of interaction values is determined depending on a number of component torque values 144, sensed with the sensors 38, 54, 72, a number of teacher interface torque values 146 and a number of teacher interface force values 148, all fed from the record memory 136 to the constraint determination unit 138. Additionally the dynamics of the robot arm 18 are taken into consideration. Preferably the number of interaction values is determined according to the following approach: it is assumed that the dynamics of the robot arm are known. This basically means that the system equation of the robot arm is known. This in turn means that it is known how the state, in particular the Markov state s of the robot arm, evolves given the component torque values, the teacher interface force values, the teacher interface torque values and, additionally, the end effector to environment interaction force values. The system equation f is given by: state(t+Δt) = f(state(t), component torque values, teacher interface force values, teacher interface torque values, end effector to environment interaction force values). Assuming the state of the robot arm can be measured, for example by using position sensors and by using derivatives for velocities, the current and the next state are known. The component torque values, the teacher interface force values and the teacher interface torque values are all measured. Therefore the state equation can be solved for the end effector to environment interaction force values. As the geometry of the robot arm and of its single components is known, end effector to environment interaction torque values can be determined on the basis of the end effector to environment interaction force values. The end effector to environment interaction force values as well as the end effector to environment interaction torque values are referred to in a generalized manner as interaction values.
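For a single rotational joint this inversion can be pictured as follows. This is a hedged 1-DOF illustration only: the inertia, damping and lever arm values are assumed placeholders, and the real robot would use its full multi-body dynamics model.

```python
# Hedged 1-DOF illustration of solving the system equation for the
# end effector to environment interaction force (assumed parameters).

def interaction_force(qdd: float, qd: float, tau_actuator: float,
                      tau_teacher: float, inertia: float = 0.8,
                      damping: float = 0.1, r: float = 0.5) -> float:
    """Solve inertia*qdd + damping*qd = tau_actuator + tau_teacher + r*F_env
    for the environment force F_env acting at an assumed lever arm r,
    given the measured joint acceleration qdd and velocity qd."""
    tau_env = inertia * qdd + damping * qd - tau_actuator - tau_teacher
    return tau_env / r
```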
The number of constraint values 140 is determined by correlating end effector position values with the corresponding interaction values. In particular, variations in the interaction values are compared with variations in the end effector position values. The interaction values have the character of a flag, having two different conditions, depending on whether a motion constraint exists or not. Instead of determining the constraint values based on recorded values, the constraint values can be determined simultaneously while imparting the training movement, directly using the relevant values provided by the corresponding sensors. Preferably the motion constraint and therefore the number of constraint values is determined according to the following approach: a motion constraint is determined in dependence on the end effector position values, their derivatives and the interaction values, using a mathematical model. The parameters of this model are estimated using the end effector position values. In particular the following model is used: f = c + K·p + B·v + M·a, whereby f is either a 6×1 vector with the generalized 6DOF end effector to environment force values or an n×1 vector with the component torque values; c is a bias representing a force bias; K is a stiffness matrix; B is a damping matrix; M is an inertia matrix; p are the end effector position values; v are end effector velocity values determined from the end effector position values; a are end effector acceleration values determined from the end effector position values. Since f, p, v and a are either measured or determined indirectly, the parameters c, K, B and M can be estimated once a certain number of samples is known. Then, with new samples, the estimates of c, K, B and M can be updated. This can be done in a recursive manner. The stiffness matrix K contains the information about the motion constraints and therefore the constraint values. A principal component analysis can be done on K to obtain the principal directions of the motion constraints and therefore the constraint values.

The control device 14 further comprises a set point determination unit 150 for determining a number of set points 152 depending on the number of constraint values 140. Concretely, depending on the number of constraint values 140 a control policy is selected from a plurality of different control policies. The number of set points 152 is determined according to the selected control policy. In doing so, the working movement or the training movement is divided into a plurality of individual coherent segments. For each of these segments a control policy is selected. For a segment for which a position control is selected, end effector position values corresponding to this segment are selected from the number of end effector position values 154 determined in the constraint determination unit 138. For a segment for which a force control is selected, force set points are determined depending on interaction values corresponding to this segment, wherein these interaction values are selected from the number of interaction values 156 determined in the constraint determination unit 138. If the interaction values are forces, the interaction values can directly be used as force set points. The number of set points 152 is stored in a set point memory 158. In general the number of force set points will be equal to the number of identified motion constraints and therefore motion constraint values.
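The following Python sketch shows one way to carry out such an estimation, under stated assumptions: batch linear least squares stands in for the recursive update mentioned above, and the principal directions are taken from an eigendecomposition of the symmetric part of K.

```python
# Sketch (assumptions: batch least squares instead of a recursive update;
# PCA realized as an eigendecomposition of the symmetric part of K).
import numpy as np

def estimate_model(f, p, v, a):
    """f, p, v, a: arrays of shape (T, d). Fit f = c + K p + B v + M a
    per output dimension via linear least squares."""
    T, d = f.shape
    X = np.hstack([np.ones((T, 1)), p, v, a])        # regressor matrix
    theta, *_ = np.linalg.lstsq(X, f, rcond=None)    # shape (1 + 3d, d)
    c = theta[0]
    K = theta[1:1 + d].T                             # stiffness matrix
    B = theta[1 + d:1 + 2 * d].T                     # damping matrix
    M = theta[1 + 2 * d:].T                          # inertia matrix
    return c, K, B, M

def principal_constraint_directions(K):
    """Principal directions of the motion constraint: eigenvectors of the
    symmetric part of K, ordered by decreasing stiffness."""
    Ks = 0.5 * (K + K.T)
    eigvals, eigvecs = np.linalg.eigh(Ks)
    order = np.argsort(eigvals)[::-1]
    return eigvals[order], eigvecs[:, order]
```

A large eigenvalue of K along some direction then indicates a stiff contact, i.e. a motion constraint, in that direction.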
For example, if during a certain segment the only motion constraint is the surface of a table at height z = h, there will be a force set point trace Fzr(t) in the z direction and position set points xr(t) and yr(t) in the x and y directions. The total number of set points is generally constant in time and equal to the number of degrees of freedom to be controlled.
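The resulting per-axis assignment for such a segment can be pictured as a small data structure; the numeric traces below are placeholders, not values from the example.

```python
# Hedged illustration of the mixed set point assignment for the table
# example above: the z axis gets a force set point trace Fzr(t), while
# x and y keep position set points xr(t), yr(t). Values are placeholders.
set_points = {
    "x": {"policy": "position", "trace": [0.10, 0.12, 0.14]},   # xr(t)
    "y": {"policy": "position", "trace": [0.00, 0.00, 0.00]},   # yr(t)
    "z": {"policy": "force",    "trace": [-5.0, -5.0, -5.0]},   # Fzr(t)
}
# One set point per controlled degree of freedom at every instant, so
# the total number of set points stays constant in time.
```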
After the training mode is deactivated, those parts of the control program are processed that correspond to the working mode. In doing so, in a comparison unit 160 the number of set points 152 is compared with a number of corresponding actual values 162. Depending on the result of this comparison, control values 164 for the individual actuators 30, 46, 64, 126 are determined. The control values 164 are fed to the corresponding actuators by forwarding a corresponding control signal to the individual actuator.
According to the embodiment described above, the units 130, 138, 150, 160 are functional units within a main memory 166. This shall not have any restricting impact on the invention. Of course said units can also be realized as structural units.
Fig. 3 shows a flowchart of an embodiment of the method according to the present invention. In a step 180 the teacher interface 82 is attached to the end effector 24. This step is optional. In case the teacher interface 82 is permanently attached to the end effector 24, step 180 is not needed. In a following step 182 the training mode is activated. In a next step 184 small noisy perturbations having the character of uncorrelated torques are generated by driving at least one of the actuators 30, 46, 64. These perturbations or disturbances give additional information about the correlation between forces acting on the end effector 24 on the one hand and end effector positions, or velocities and accelerations derived from these end effector positions, on the other hand. In a subsequent step 186 a training movement is imparted onto the robot arm 18 or onto the end effector 24. The robot arm 18 is moved by moving the handle section 84. The handle section 84 can be moved with a translation or a rotation; all 6 degrees of freedom are possible. The plurality of fingers 62 is moved by squeezing the handle section 84. In a next step 188 a number of teacher interface force values 148, a number of teacher interface torque values 146, a number of component torque values 144 and a number of angular position values 142 are recorded. In a following step 190 a number of end effector position values 154 are determined. The number of end effector position values 154 represents all 6 degrees of freedom, thus positions and orientations. In a successive step 192 a number of interaction values 156 are determined. In a next step 194 a number of constraint values 140 are determined. In a following step 196 a control policy is selected. In a subsequent step 198 set points are determined. In a further step 200 the training mode is deactivated. In a next step 202 the teacher interface 82 is removed. According to the explanation with regard to step 180, step 202 is also optional. The order of steps 184, 186 as illustrated in Fig. 3 shall not have any restricting impact on the invention. Of course the order of these two steps can be changed.
Fig. 4 shows, for a working movement, a first diagram representing position set points xs with regard to the x-axis (Fig. 4a) and a second diagram representing force set points Fxs with regard to the x-axis (Fig. 4b). In the following, 4 segments are considered: a first segment starting at the origin of coordinates and ending at point in time t0, a second segment defined by the points in time t0 and t1, a third segment defined by the points in time t1 and t2 and a fourth segment defined by the points in time t2 and t3. The further progress of the working movement after the point in time t3 is not of interest. The curve shapes shown do not correspond to the situation illustrated in Fig. 1.
In the first segment the position set points xs rise from a first level up to a second level, whereas the force set points Fxs are predominantly close to zero. Thus, in the first segment no motion constraint is imposed on the end effector 24, wherefore a position control is selected as control policy. In the second segment the position set points xs stay on the second level, whereas the force set points Fxs show a parabolic progress with a positive peak. Hence, in the second segment a motion constraint is imposed on the end effector 24, wherefore a force control is selected as control policy. In the third segment the position set points xs decline from the second level down to a third level, whereas the force set points Fxs are predominantly close to zero. Therefore, in the third segment no motion constraint is imposed on the end effector 24, wherefore a position control is selected as control policy. In the fourth segment the position set points xs stay on the third level, whereas the force set points Fxs show a parabolic progress with a negative peak. Thus, in the fourth segment a motion constraint is imposed on the end effector 24, wherefore a force control is selected as control policy.
In the working mode or replay mode the robot executes the working movement or acquired skill by executing the position and force set points as indicated in the diagrams of Fig. 4. The transitions between the individual segments are triggered by events, such as the encounter of a motion constraint that was identified while imparting the training movement.
According to the present invention it is possible to distill for each basic movement the intention of the teacher from a training movement. The intention of the teacher is an invariant specified by him while imparting the training movement onto the robot. The intention can be represented by a position set point or a force set point or a combination of these set points. Such a combination is employed in case a position with regard to a first direction is fixed and a force with regard to a second direction is fixed. The intention of the teacher is distilled from position traces and force traces. The control policy is chosen such that it coincides with the distilled intentions of the teacher.
Concluding, several embodiments of the invention are mentioned that are particularly favorable. All components needed for these embodiments are described above. According to the specification of the concrete embodiment, the person skilled in the art knows which subset of components has to be selected.
In a first favorable embodiment a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface that is structurally separated from the robot. This embodiment enables remote control of a robot or a robot arm. Therefore, a training movement is imparted or demonstrated using remote control. On the training movement demonstrated with remote control constraint identification is applied. Said constraint identification is based on analyzing position traces and corresponding force traces. Preferably a feedback unit is provided, enabling remote control with haptic feedback.
In a second favorable embodiment a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface that is attached to an end effector of a robot arm. Said teacher interface comprises a 6DOF force and torque sensor for measuring interaction forces and interaction torques between said teacher interface and the robot arm or end effector. This embodiment is used for non-remote demonstration of a training movement. Based on said interaction forces and said interaction torques constraint identification is conducted.
In a third favorable embodiment a control device comprising an activation unit, a constraint determination unit and a set point determination unit is used together with a teacher interface, whereby it is irrelevant whether said teacher interface is attached to the end effector or structurally separated from the robot. In this embodiment at least one joint motor contained in joints of a robot arm is driven for generating additional noisy signals to make all directions persistently excited and thereby improving constraint identification. Said constraint identification can be adapted as described in the first or in the second favorable embodiment.
While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In the claims, the word "comprising" does not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality. A single element or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.
A computer program may be stored/distributed on a suitable medium, such as an optical storage medium or a solid-state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the Internet or other wired or wireless telecommunication systems.
Any reference signs in the claims should not be construed as limiting the scope.

CLAIMS:
1. Control device for controlling a robot (12) having a robot arm (18) with a number of individual arm sections (26, 42), an end effector (24) connected to one of the arm sections and a number of actuators (30, 46, 64, 126) for moving at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the control device (14) has at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot (12) at least one of the actuators (30, 46, 64, 126) is controlled depending on a number of set points (152) representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the training movement corresponds to the working movement, said control device comprising: an activation unit (130) for activating the training mode, a constraint determination unit (138) for determining a number of constraint values (140) representing a motion constraint imposed on at least one of the end effector (24) and at least one of the arm sections (26, 42) while imparting the training movement, and a set point determination unit (150) for determining the number of set points (152) depending on the number of constraint values (140).
2. Control device according to claim 1, wherein said set point determination unit (150) is adapted for selecting a control policy from a plurality of different control policies, wherein selecting the control policy is carried out depending on the number of constraint values (140), wherein the number of set points (152) is determined according to the selected control policy.
3. Control device according to claim 2, wherein the plurality of different control policies comprise a first control policy with a first reference variable corresponding to a first physical quantity and a second control policy with a second reference variable corresponding to a second physical quantity.
4. Control device according to claim 3, wherein the first physical quantity is a position and the second physical quantity is a force, in particular the first control policy is selected in case the number of constraint values (140) indicate that no motion constraint exists and the second control policy is selected in case the number of constraint values (140) indicate that a motion constraint exists.
5. Control device according to claim 1, wherein said constraint determination unit (138) is adapted for determining a number of end effector position values (154) representing a number of positions the end effector (24) passes through while imparting the training movement, and wherein said constraint determination unit (138) is further adapted for determining the number of constraint values (140) depending on the number of end effector position values (154).
6. Control device according to claim 5, wherein said constraint determination unit (138) is further adapted for determining the number of end effector position values (154) depending on a number of position values (142) representing a movement conducted by the actuator (30, 46, 64).
7. Control device according to claim 1, wherein said constraint determination unit (138) is further adapted for determining a number of interaction values (156) representing at least one of a torque and a force acting on at least one of the end effector (24) and at least one of the arm sections (26, 42) while imparting the training movement, and wherein said constraint determination unit (138) is further adapted for determining the number of constraint values (140) depending on the number of interaction values (156).
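Claims 5 to 7 derive the constraint values from end effector positions, themselves reconstructed from actuator position values, and from measured interaction forces or torques. The sketch below assumes a planar two-link arm and simple thresholds; neither is prescribed by the claims.

```python
# Sketch of claims 5-7: end-effector positions come from joint (actuator)
# position values via forward kinematics; a sample counts as constrained
# when a large interaction force coincides with little motion. The planar
# two-link arm, link lengths and thresholds are assumptions.
import math

def forward_kinematics(q1: float, q2: float, l1: float = 0.4, l2: float = 0.3):
    """End-effector position of an assumed planar two-link arm (metres)."""
    x = l1 * math.cos(q1) + l2 * math.cos(q1 + q2)
    y = l1 * math.sin(q1) + l2 * math.sin(q1 + q2)
    return x, y

def constrained_sample(positions, force, f_thresh=5.0, motion_eps=1e-3) -> bool:
    """Constrained when the end effector pushes hard (force above the
    assumed 5 N threshold) but hardly moves between consecutive samples."""
    if len(positions) < 2:
        return False
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return force > f_thresh and math.hypot(x1 - x0, y1 - y0) < motion_eps
```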
8. Control device according to claim 7, wherein said constraint determination unit (138) is further adapted for determining the number of interaction values (156) depending on a number of component torque values (144) representing a torque acting on a component moved by the actuator (30, 46, 64), wherein the component is the end effector (24) or one of the arm sections (26, 42).
9. Control device according to claim 7 or 8, wherein said constraint determination unit (138) is further adapted for determining the number of interaction values (156) depending on a number of operating values (146, 148) representing an operation of a teacher interface (82), wherein the teacher interface (82) is attached to the end effector (24) and manually operated for initiating said training movement.
10. Control device according to claim 1, wherein at least one of said actuators (30, 46, 64) is driven for generating small noisy perturbations while imparting the training movement.
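Claim 10 drives the actuators with small noisy perturbations during the training movement, so that blocked directions betray themselves as force reactions. A one-function sketch, with an assumed noise amplitude:

```python
# Sketch of claim 10: overlay each actuator command with small zero-mean
# noise during training; the +/-0.05 N*m amplitude is an assumed tuning value.
import random

def perturbed_command(nominal_torque: float, amplitude: float = 0.05) -> float:
    """Return the nominal actuator torque plus small zero-mean noise."""
    return nominal_torque + random.uniform(-amplitude, amplitude)
```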
11. Teacher interface for manually imparting a training movement onto a robot arm (18), in particular onto an end effector (24) of a robot arm (18), having a handle section (84), a fastening section (86) for attaching the teacher interface (82) to the robot arm (18), in particular to the end effector (24) of a robot arm (18), and a teacher interface sensor (88) for sensing at least one of a force and a torque, wherein the teacher interface sensor (88) is arranged between the handle section (84) and the fastening section (86).
12. Robot having a base (16), a first arm section (26) connected to the base (16) by a first joint (28), a second arm section (42) connected to the first arm section (26) by a second joint (44) and an end effector (24) connected to the second arm section (42) by a third joint (58), wherein at least one of the joints (28, 44, 58) contains at least one actuator (30, 46, 64) and a position sensor (34, 50, 68) and a torque sensor (38, 54, 72), each of the sensors (34, 38, 54, 50, 68, 72) assigned to the actuator (30, 46, 64), wherein the end effector (24) has a fastening element (80) to which a teacher interface (82) can be attached, in particular a teacher interface (82) according to claim 11.
13. Robot system comprising a robot (12) having a robot arm (18) with a number of individual arm sections (26, 42), an end effector (24) connected to one of the arm sections (26, 42) and a number of actuators (30, 46, 64) for moving at least one of the end effector (24) and at least one of the arm sections (26, 42) and a control device (14) according to claim 1.
14. Method for controlling a robot (12) having a robot arm (18) with a number of individual arm sections (26, 42), an end effector (24) connected to one of the arm sections (26, 42) and a number of actuators (30, 46, 64) for moving at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the robot (12) can be operated in at least two different modes of operation, encompassing a working mode and a training mode, wherein in the working mode for controlling the robot (12) at least one of the actuators (30, 46, 64) is controlled depending on a number of set points (152) representing a working movement and in the training mode a training movement is imparted onto at least one of the end effector (24) and at least one of the arm sections (26, 42), wherein the training movement corresponds to the working movement, said method comprising the steps of: activating the training mode, imparting the training movement, determining a number of constraint values (140) representing a motion constraint imposed on at least one of the end effector (24) and at least one of the arm sections (26, 42) while imparting the training movement, determining a number of set points (152) depending on the number of constraint values (140).
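The four method steps of claim 14 can be strung together in one short routine. The threshold rule and the (pose, force) sample layout below are assumptions made for the sketch, not the claimed algorithm itself.

```python
# Sketch of the four method steps of claim 14. The threshold rule and the
# (pose, force) sample layout are assumptions, not the claimed algorithm.
def train_robot(demonstration, force_threshold=5.0):
    # Step 1 (activating the training mode) is assumed to have happened;
    # the demonstration below is what the training mode records.
    constraint_values = []
    for _pose, force in demonstration:            # step 2: training movement
        constraint_values.append(force > force_threshold)   # step 3
    set_points = []
    for (pose, force), constrained in zip(demonstration, constraint_values):
        # step 4: set points are determined depending on the constraint values
        set_points.append(("force", force) if constrained else ("position", pose))
    return set_points

# Toy demonstration: two free-motion samples, then one contact sample.
demo = [((0.10, 0.20), 0.0), ((0.10, 0.25), 0.0), ((0.10, 0.25), 8.0)]
print(train_robot(demo))  # [('position', ...), ('position', ...), ('force', 8.0)]
```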
15. Computer program comprising program code means for causing a computer to carry out the steps of the method as claimed in claim 14 when said computer program is carried out on a computer.
PCT/IB2010/052303 2009-05-29 2010-05-25 Control device and method for controlling a robot WO2010136961A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
EP09161468 2009-05-29
EP09161468.5 2009-05-29

Publications (1)

Publication Number Publication Date
WO2010136961A1 true WO2010136961A1 (en) 2010-12-02

Family

ID=42777924

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2010/052303 WO2010136961A1 (en) 2009-05-29 2010-05-25 Control device and method for controlling a robot

Country Status (1)

Country Link
WO (1) WO2010136961A1 (en)


Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2007096322A2 (en) * 2006-02-23 2007-08-30 Abb Ab A system for controlling the position and orientation of an object in dependence on received forces and torques from a user

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
DELSON, N. et al.: "Robot programming by human demonstration: adaptation and inconsistency in constrained motion", Proceedings of the 1996 IEEE International Conference on Robotics and Automation, Minneapolis, MN, USA, 22-28 April 1996, vol. 1, pp. 30-36, DOI: 10.1109/ROBOT.1996.503569, XP010162726, ISBN 978-0-7803-2988-1 *
DELSON, N. et al.: "Robot programming by human demonstration: Subtask compliance controller identification", Proceedings of the 1993 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '93), Yokohama, Japan, 26-30 July 1993, vol. 1, pp. 33-41, DOI: 10.1109/IROS.1993.583008, XP010219077, ISBN 978-0-7803-0823-7 *
DELSON, N. et al.: "Segmentation of task into subtasks for robot programming by human demonstration", Proceedings of the 1996 Japan-USA Symposium on Flexible Automation, vol. 1, 1 January 1996, pp. 41-47, XP008127781, ISBN 978-0-7918-1231-0 *
HIRZINGER et al.: "Teleoperating space robots. Impact for the design of industrial robots", Proceedings of the IEEE International Symposium on Industrial Electronics, vol. 1, 1997, pp. SS250ff
PIN, F. G. et al.: "Robotic learning from distributed sensory sources", IEEE Transactions on Systems, Man and Cybernetics, vol. 21, no. 5, 1 September 1991, pp. 1216-1223, DOI: 10.1109/21.120073, XP000277252, ISSN 0018-9472 *
SEKI, H. et al.: "Detection of kinematic constraint from search motion of a robot using link weights of a neural network", Proceedings of the 1995 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '95), Pittsburgh, 5-9 August 1995, pp. 498-503, XP000730954, ISBN 978-0-7803-3006-1 *

Cited By (50)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11831955B2 (en) 2010-07-12 2023-11-28 Time Warner Cable Enterprises Llc Apparatus and methods for content management and account linking across multiple content delivery networks
US9566710B2 (en) 2011-06-02 2017-02-14 Brain Corporation Apparatus and methods for operating robotic devices using selective state space training
US10545074B2 (en) 2012-08-31 2020-01-28 Gopro, Inc. Apparatus and methods for controlling attention of a robot
US10213921B2 (en) 2012-08-31 2019-02-26 Gopro, Inc. Apparatus and methods for controlling attention of a robot
US11867599B2 (en) 2012-08-31 2024-01-09 Gopro, Inc. Apparatus and methods for controlling attention of a robot
US9186793B1 (en) 2012-08-31 2015-11-17 Brain Corporation Apparatus and methods for controlling attention of a robot
US9446515B1 (en) 2012-08-31 2016-09-20 Brain Corporation Apparatus and methods for controlling attention of a robot
US11360003B2 (en) 2012-08-31 2022-06-14 Gopro, Inc. Apparatus and methods for controlling attention of a robot
US20140277744A1 (en) * 2013-03-15 2014-09-18 Olivier Coenen Robotic training apparatus and methods
US9764468B2 (en) 2013-03-15 2017-09-19 Brain Corporation Adaptive predictor apparatus and methods
US10155310B2 (en) 2013-03-15 2018-12-18 Brain Corporation Adaptive predictor apparatus and methods
US8996177B2 (en) * 2013-03-15 2015-03-31 Brain Corporation Robotic training apparatus and methods
US9242372B2 (en) 2013-05-31 2016-01-26 Brain Corporation Adaptive robotic interface apparatus and methods
US9821457B1 (en) 2013-05-31 2017-11-21 Brain Corporation Adaptive robotic interface apparatus and methods
US9384443B2 (en) 2013-06-14 2016-07-05 Brain Corporation Robotic training apparatus and methods
US9314924B1 (en) 2013-06-14 2016-04-19 Brain Corporation Predictive robotic controller apparatus and methods
US9950426B2 (en) 2013-06-14 2018-04-24 Brain Corporation Predictive robotic controller apparatus and methods
US9792546B2 (en) 2013-06-14 2017-10-17 Brain Corporation Hierarchical robotic controller apparatus and methods
US9436909B2 (en) 2013-06-19 2016-09-06 Brain Corporation Increased dynamic range artificial neuron network apparatus and methods
US9296101B2 (en) 2013-09-27 2016-03-29 Brain Corporation Robotic control arbitration apparatus and methods
US9579789B2 (en) 2013-09-27 2017-02-28 Brain Corporation Apparatus and methods for training of robotic control arbitration
US9844873B2 (en) 2013-11-01 2017-12-19 Brain Corporation Apparatus and methods for haptic training of robots
US9597797B2 (en) 2013-11-01 2017-03-21 Brain Corporation Apparatus and methods for haptic training of robots
US9463571B2 2013-11-01 2016-10-11 Brain Corporation Apparatus and methods for online training of robots
US9248569B2 (en) 2013-11-22 2016-02-02 Brain Corporation Discrepancy detection apparatus and methods for machine learning
US10322507B2 (en) 2014-02-03 2019-06-18 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US9789605B2 (en) 2014-02-03 2017-10-17 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US9358685B2 (en) 2014-02-03 2016-06-07 Brain Corporation Apparatus and methods for control of robot actions based on corrective user inputs
US10166675B2 (en) 2014-03-13 2019-01-01 Brain Corporation Trainable modular robotic apparatus
US9862092B2 (en) 2014-03-13 2018-01-09 Brain Corporation Interface for use with trainable modular robotic apparatus
US10391628B2 (en) 2014-03-13 2019-08-27 Brain Corporation Trainable modular robotic apparatus and methods
US9364950B2 (en) 2014-03-13 2016-06-14 Brain Corporation Trainable modular robotic methods
US9533413B2 (en) 2014-03-13 2017-01-03 Brain Corporation Trainable modular robotic apparatus and methods
US9987743B2 (en) 2014-03-13 2018-06-05 Brain Corporation Trainable modular robotic apparatus and methods
US9346167B2 (en) 2014-04-29 2016-05-24 Brain Corporation Trainable convolutional network apparatus and methods for operating a robotic vehicle
US10131052B1 (en) 2014-10-02 2018-11-20 Brain Corporation Persistent predictor apparatus and methods for task switching
US9630318B2 (en) 2014-10-02 2017-04-25 Brain Corporation Feature detection apparatus and methods for training of robotic navigation
US10105841B1 (en) 2014-10-02 2018-10-23 Brain Corporation Apparatus and methods for programming and training of robotic devices
US9687984B2 (en) 2014-10-02 2017-06-27 Brain Corporation Apparatus and methods for training of robots
US9902062B2 (en) 2014-10-02 2018-02-27 Brain Corporation Apparatus and methods for training path navigation by robots
US9604359B1 (en) 2014-10-02 2017-03-28 Brain Corporation Apparatus and methods for training path navigation by robots
US9426946B2 (en) 2014-12-02 2016-08-30 Brain Corporation Computerized learning landscaping apparatus and methods
US10376117B2 (en) 2015-02-26 2019-08-13 Brain Corporation Apparatus and methods for programming and training of robotic household appliances
US9505128B1 (en) 2015-06-24 2016-11-29 Boris Kesil Method of teaching robotic station for processing objects
US9873196B2 (en) 2015-06-24 2018-01-23 Brain Corporation Bistatic object detection apparatus and methods
US10807230B2 (en) 2015-06-24 2020-10-20 Brain Corporation Bistatic object detection apparatus and methods
US9840003B2 (en) 2015-06-24 2017-12-12 Brain Corporation Apparatus and methods for safe navigation of robotic devices
US9925662B1 (en) 2015-06-28 2018-03-27 X Development Llc Generating a trained robot path based on physical manipulation of the robot and based on training user interface input(s) associated with the physical manipulation
US20200301510A1 (en) * 2019-03-19 2020-09-24 Nvidia Corporation Force estimation using deep learning
WO2022140151A1 (en) * 2020-12-21 2022-06-30 Boston Dynamics, Inc. Constrained manipulation of objects

Similar Documents

Publication Title
WO2010136961A1 (en) Control device and method for controlling a robot
US9919416B1 (en) Methods and systems for providing feedback during teach mode
CN108883533B (en) Robot control
US9381642B2 (en) Wearable robot assisting manual tasks
JP5512048B2 (en) ROBOT ARM CONTROL DEVICE AND CONTROL METHOD, ROBOT, CONTROL PROGRAM, AND INTEGRATED ELECTRONIC CIRCUIT
CN108422420B (en) Robot system having learning control function and learning control method
US9919424B1 (en) Analog control switch for end-effector
KR20110041950A (en) Teaching and playback method using redundancy resolution control for manipulator
JPWO2011161765A1 (en) Robot controller
JP2008238396A (en) Apparatus and method for generating and controlling motion of robot
JP6831530B2 (en) Disturbance observer and robot control device
US20220105625A1 (en) Device and method for controlling a robotic device
JP7230128B2 (en) LEARNING METHOD FOR ROBOT WORK AND ROBOT SYSTEM
KR101086361B1 (en) robot pose controlling method and apparatus thereof
Lecours et al. Computed-torque control of a four-degree-of-freedom admittance controlled intelligent assist device
CN112894827B (en) Method, system and device for controlling motion of mechanical arm and readable storage medium
JP2008217260A (en) Force feedback apparatus
KR101474778B1 (en) Control device using motion recognition in artculated robot and method thereof
Ma et al. Unknown constrained mechanisms operation based on dynamic hybrid compliance control
JP3884249B2 (en) Teaching system for humanoid hand robot
Devie et al. Accurate force control and co-manipulation control using hybrid external command
JP2019512785A (en) System and method for spatially moving an object using a manipulator
Marayong et al. Control methods for guidance virtual fixtures in compliant human-machine interfaces
JP7333197B2 (en) Control system, machine system and control method
Qi et al. A lead-through robot programming approach using a 6-DOF wire-based motion tracking device

Legal Events

Code Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application (ref document number: 10727908; country of ref document: EP; kind code of ref document: A1)
NENP Non-entry into the national phase (ref country code: DE)
122 EP: PCT application non-entry in European phase (ref document number: 10727908; country of ref document: EP; kind code of ref document: A1)