WO2023165807A1 - Robot and method for controlling a robot - Google Patents

Robot and method for controlling a robot

Info

Publication number
WO2023165807A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
camera
robot
effector
neural network
Prior art date
Application number
PCT/EP2023/053646
Other languages
French (fr)
Inventor
Oren Spector
Dotan Di Castro
Vladimir TCHUIEV
Original Assignee
Robert Bosch Gmbh
Priority date
Filing date
Publication date
Application filed by Robert Bosch Gmbh
Publication of WO2023165807A1

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems
    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1628Programme controls characterised by the control loop
    • B25J9/163Programme controls characterised by the control loop learning, adaptive, model based, rule based expert control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/10Terrestrial scenes
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39271Ann artificial neural network, ffw-nn, feedforward neural network
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/39Robotics, robotics to robotics hand
    • G05B2219/39397Map image error directly to robot movement, position with relation to world, base not needed, image based visual servoing
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40032Peg and hole insertion, mating and joining, remote center compliance
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40532Ann for vision processing
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40584Camera, non-contact sensor mounted on wrist, indep from gripper
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05BCONTROL OR REGULATING SYSTEMS IN GENERAL; FUNCTIONAL ELEMENTS OF SUCH SYSTEMS; MONITORING OR TESTING ARRANGEMENTS FOR SUCH SYSTEMS OR ELEMENTS
    • G05B2219/00Program-control systems
    • G05B2219/30Nc systems
    • G05B2219/40Robotics, robotics mapping to robotics vision
    • G05B2219/40609Camera to monitor end effector as well as object to be handled
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/06Recognition of objects for industrial automation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Evolutionary Computation (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Manipulator (AREA)

Abstract

According to various embodiments, a robot is described comprising an end-effector having a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them, at least a first camera and a second camera attached to the end effector at opposite sides of the gripper plane and a controller configured to receive, for a position of the end-effector, a first image from the first camera and a second image from the second camera, to process the first image and the second image by a neural network wherein the neural network is configured to output a movement vector for the end-effector for an insertion task and to control the robot to move as specified by the movement vector.

Description

Description
Title
Robot and method for controlling a robot
Prior Art
The present disclosure relates to robots and methods for controlling a robot.
Assembly, such as electrical wiring assembly, is one of the most common manual labour jobs in industry. Examples are electrical panel assembly and in-house switchgear assembly. Complicated assembly processes can typically be described as a sequence of two main activities: grasping and insertion. Similar tasks occur for example in cable manufacturing which typically includes cable insertion for validation and verification.
While suitable robot control schemes for grasping tasks are typically available in industry, robotic insertion or “peg-in-hole” tasks are typically still applicable only to small subsets of problems, mainly ones involving simple shapes in fixed locations and in which variations are not taken into consideration.
Moreover, existing visual techniques are slow, typically about three times slower than human operators.
Therefore, efficient methods for training a controller of a robot to perform tasks such as insertion are desirable.
Disclosure of the Invention
According to various embodiments, a robot is provided comprising an end-effector having a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them, at least a first camera and a second camera attached to the end effector at opposite sides of the gripper plane and a controller configured to receive, for a position of the end-effector, a first image from the first camera and a second image from the second camera, to process the first image and the second image by a neural network wherein the neural network is configured to output a movement vector for the end-effector for an insertion task and to control the robot to move as specified by the movement vector.
Determining a movement vector on the basis of two images taken from cameras arranged at opposite sides of the gripper plane ensures that the neural network has sufficient information for deriving a movement vector for an insertion task. Even if one finger obstructs the view of the insertion, for example, for one of the cameras, the other camera will likely have an unobstructed view of the insertion. It should be noted that for an insertion task, in particular, a good view is of importance. Furthermore, having two cameras allows the neural network to derive depth information.
So, two cameras (e.g. arranged symmetrically with respect to the gripper plane) allow bypassing the one-image ambiguity problem and extracting depth information while avoiding occlusion during the entire insertion trajectory (if one camera’s view is occluded, the other one has a clear view). For example, each camera is placed at a 45-degree angle with respect to its respective finger opening, resulting in a good view of the scene as well as of the object between the fingers.
The gripper plane may be understood as the plane in which a flat object is oriented when being gripped by the gripper.
Various Examples are given in the following.
Example 1 is a robot as described above.
Example 2 is the robot of Example 1, wherein the first camera and the second camera are arranged symmetrically to each other with respect to the gripper plane. This improves the chances that at least one of the cameras has an unobstructed view of the insertion.
Example 3 is the robot of Example 1 or 2, wherein the positions of the first camera and the second camera are rotated with respect to the gripper plane.
This further improves the chances that at least one of the cameras has an unobstructed view of the insertion.
Example 4 is the robot of Example 3, wherein the positions of the first camera and the second camera are rotated with respect to the gripper plane by between 30 and 60 degrees, preferably between 40 and 50 degrees.
Angles in that range provide good views of the relevant parts (tip of objects to be inserted, insertion) in typical cases.
Example 5 is the robot of any one of Examples 1 to 4, wherein the controller is configured to process the first image and the second image by the neural network by generating an input image for the neural network having a first number of channels equal to the number of channels of the first image, which hold the image data of the first image, and a second number of channels equal to the number of channels of the second image, which hold the image data of the second image, and supplying the input image to the neural network.
Thus, the image data of both images is combined in a single image and the neural network may process it in the manner of a single image (having an increased number of channels, e.g. six channels for two sets of RGB channels). The neural network may for example be or comprise a convolutional neural network.
Example 6 is a method for controlling a robot, comprising receiving, for a position of an end-effector of the robot, a first image from a first camera and a second image from a second camera, wherein the end-effector has a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them and wherein the first camera and the second camera are attached to the end effector at opposite sides of the gripper plane, processing the first image and the second image by a neural network configured to output a movement vector for the end-effector for an insertion task and controlling the robot to move as specified by the movement vector.
Example 7 is the method of Example 6, comprising training the neural network to derive movement vectors for an insertion task from input data elements comprising image data taken from two cameras.
Example 8 is a computer program comprising instructions which, when executed by a processor, make the processor perform a method according to any one of Examples 6 to 7.
Example 9 is a computer-readable medium storing instructions which, when executed by a processor, make the processor perform a method according to any one of Examples 6 to 7.
It should be noted that embodiments and examples described in context of the robot are analogously valid for the method for controlling a robot and vice versa.
In the drawings, similar reference characters generally refer to the same parts throughout the different views. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the following description, various aspects are described with reference to the following drawings, in which:
Figure 1 shows a robot.
Figure 2 shows a robot end-effector in more detail.
Figure 3 illustrates the training of an encoder network according to an embodiment.
Figure 4 shows the determination of a delta movement from an image data element and a force input.
Figure 5 shows the determination of a delta movement from two image data elements.
Figure 6 illustrates an example of a multi-step insertion task.
Figure 7 shows a flow diagram illustrating a method for controlling a robot.
The following detailed description refers to the accompanying drawings that show, by way of illustration, specific details and aspects of this disclosure in which the invention may be practiced. Other aspects may be utilized and structural, logical, and electrical changes may be made without departing from the scope of the invention. The various aspects of this disclosure are not necessarily mutually exclusive, as some aspects of this disclosure can be combined with one or more other aspects of this disclosure to form new aspects.
In the following, various examples will be described in more detail.
Figure 1 shows a robot 100.
The robot 100 includes a robot arm 101 , for example an industrial robot arm for handling or assembling a work piece (or one or more other objects). The robot arm 101 includes manipulators 102, 103, 104 and a base (or support) 105 by which the manipulators 102, 103, 104 are supported. The term “manipulator” refers to the movable members of the robot arm 101 , the actuation of which enables physical interaction with the environment, e.g. to carry out a task. For control, the robot 100 includes a (robot) controller 106 configured to implement the interaction with the environment according to a control program. The last member 104 (furthest from the support 105) of the manipulators 102, 103, 104 is also referred to as the end-effector 104 and may include one or more tools such as a welding torch, gripping instrument, painting equipment, or the like.
The other manipulators 102, 103 (closer to the support 105) may form a positioning device such that, together with the end-effector 104, the robot arm 101 with the end-effector 104 at its end is provided. The robot arm 101 is a mechanical arm that can provide similar functions as a human arm (possibly with a tool at its end). The robot arm 101 may include joint elements 107, 108, 109 interconnecting the manipulators 102, 103, 104 with each other and with the support 105. A joint element 107, 108, 109 may have one or more joints, each of which may provide rotatable motion (i.e. rotational motion) and/or translatory motion (i.e. displacement) to associated manipulators relative to each other. The movement of the manipulators 102, 103, 104 may be initiated by means of actuators controlled by the controller 106.
The term "actuator" may be understood as a component adapted to affect a mechanism or process in response to be driven. The actuator can implement instructions issued by the controller 106 (the so-called activation) into mechanical movements. The actuator, e.g. an electromechanical converter, may be configured to convert electrical energy into mechanical energy in response to driving.
The term "controller" may be understood as any type of logic implementing entity, which may include, for example, a circuit and/or a processor capable of executing software stored in a storage medium, firmware, or a combination thereof, and which can issue instructions, e.g. to an actuator in the present example. The controller may be configured, for example, by program code (e.g., software) to control the operation of a system, a robot in the present example.
In the present example, the controller 106 includes one or more processors 110 and a memory 111 storing code and data according to which the processor 110 controls the robot arm 101. According to various embodiments, the controller 106 controls the robot arm 101 on the basis of a machine learning model 112 stored in the memory 111.
According to various embodiments, the machine learning model 112 is configured and trained to allow the robot 100 to perform an inserting (e.g. peg-in-hole) task, for example inserting a plug 113 in a corresponding socket 114. For this, the controller 106 takes pictures of the plug 113 and socket 114 by means of cameras 117, 119. The plug 113 is for example a USB (Universal Serial Bus) plug or may also be a power plug. It should be noted that if the plug has multiple pegs like a power plug, then each peg may be regarded as an object to be inserted (wherein the insertion is a corresponding hole). Alternatively, the whole plug may be seen as the object to be inserted (wherein the insertion is a power socket). It should be noted that (depending on what is regarded as the object) the object 113 is not necessarily completely inserted in the insertion. In the case of the USB plug, for example, it is considered to be inserted if the metal contact part 116 is inserted in the socket 114.
Robot control to perform a peg-in-hole task typically involves two main phases: searching and inserting. During searching, the socket 114 is identified and localized to provide the essential information required for inserting the plug 113.
The search for the insertion may be based on vision or on blind strategies involving, for example, spiral paths. Visual techniques depend greatly on the locations of the cameras 117, 119 and of the board 118 (in whose surface 115 the socket 114 is placed) and on obstructions, and are typically about three times slower than human operators. Due to the limitations of visual methods, the controller 106 may take into account force-torque and haptic feedback, either exclusively or in combination with vision.
Constructing a robot that reliably inserts diverse objects (e.g., plugs, engine gears) is a grand challenge in the design of manufacturing, inspection, and home-service robots. Minimizing action time, maximizing reliability, and minimizing contact between the grasped object and the target component is difficult due to the inherent uncertainty concerning sensing, control, sensitivity to applied forces, and occlusions.
According to various embodiments, a data-efficient, safe and supervised approach to acquire a robot policy is provided. It allows learning a control policy, in particular for a multi-step insertion task, with few data points by utilizing contrastive methodologies and one-shot learning techniques.
According to various embodiments, the training and/or robot control comprises one or more of the following: 1) Usage of two cameras in order to avoid the one image ambiguity problem and to extract depth information. This in particular allows eliminating the requirement to touch the socket’s surface 115 when inserting the object 113.
2) Integration of contrastive learning in order to reduce the amount of labelled data.
3) A relation network that enables one-shot learning and multi-step insertion.
4) Multi-step insertion using this relation network.
Figure 2 shows a robot end-effector 201 in more detail.
The end-effector 201 for example corresponds to the end-effector 104 of the robot arm 101, e.g. a robot arm with six degrees of freedom (DoF). According to various embodiments, the robot has two sensory inputs which the controller 106 may use for controlling the robot arm 101. The first is stereoscopic perception, provided by two (see item 1 above) wrist cameras 202 tilted at, for example, a 45° angle and focused on a point between the end-effector fingers (EEF) 203.
Images 205, 206 are examples of images taken by the first camera and the second camera respectively for a position of the end-effector 201 .
In general, each camera 202 is directed such that, for example, pictures taken show a part (here a pin 207) of an object 204 gripped by the end-effector 201 which is to be inserted and a region around it such that, for example, the insertion 208 is visible. It should be noted that here, the insertion refers to the hole for the pin, but it may also comprise the holes for the pins as well as the opening for the cylindrical part of the plug. Inserting an object into an insertion thus does not necessarily mean that the object is completely inserted into the insertion, but possibly only one or more parts of it.
Assuming a height H, width W and three channels for each camera image, an image data element for a current (or origin) position of the robot arm is denoted by Img ∈ ℝ^(6×H×W) (having six channels since it includes the images from both cameras 202). The second sensory input is a force input, i.e. measurements of a force sensor 120 which measures a moment and force experienced by the end-effector 104, 201 when it presses the object 113 on a plane (e.g. the surface 115 of the board 118). The force measurement can be taken by the robot or by an external force and torque sensor. The force input for example comprises a force indication F and a moment indication M.
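As a concrete illustration of how such an observation could be assembled in software, the following sketch stacks the two RGB wrist-camera frames into a single six-channel image and bundles it with the force and moment readings. The Observation container, array shapes and normalization are assumptions made for illustration, not part of the described embodiments.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Observation:
    """Obs = (Img; F; M): stacked camera images plus force and moment readings."""
    img: np.ndarray     # shape (6, H, W): channels of camera 1 followed by camera 2
    force: np.ndarray   # shape (3,): measured force F
    moment: np.ndarray  # shape (3,): measured moment M

def make_observation(img_cam1: np.ndarray, img_cam2: np.ndarray,
                     force, moment) -> Observation:
    """Stack two H x W x 3 RGB images into a single 6 x H x W image data element."""
    assert img_cam1.shape == img_cam2.shape and img_cam1.shape[-1] == 3
    # Move channels first and concatenate: (3, H, W) + (3, H, W) -> (6, H, W)
    img = np.concatenate([img_cam1.transpose(2, 0, 1),
                          img_cam2.transpose(2, 0, 1)], axis=0)
    return Observation(img=img.astype(np.float32) / 255.0,
                       force=np.asarray(force, dtype=np.float32),
                       moment=np.asarray(moment, dtype=np.float32))

# Usage with dummy data (real frames would come from the two wrist cameras 202):
obs = make_observation(np.zeros((128, 128, 3), dtype=np.uint8),
                       np.zeros((128, 128, 3), dtype=np.uint8),
                       force=[0.0, 0.0, 1.0], moment=[0.0, 0.0, 0.0])
print(obs.img.shape)  # (6, 128, 128)
```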
For the following explanations, the robot’s observation for a current position is denoted by Obs = (Img; F; M). To accurately capture the contact forces and generate smooth movements, high-frequency communication (Real Time Data Exchange) may be used between the sensor devices (cameras 202 and force sensor 120) and the controller 106. For example, force and torque (moment) measurements are sampled at 500 Hz and commands are sent to the actuators at 125 Hz. The end-effector fingers 203 form a gripper whose pose is denoted by L; specifically, L comprises the gripper’s location (x, y, z) and its orientation. The robot’s action in Cartesian space is defined by a correction vector whose components Δx, Δy and Δz are the desired corrections needed for the EEF in Cartesian space with respect to the current location. This robot action specifies the robot’s movement from a current (or origin) pose (in particular a current position) to a target pose (in particular to a target position). The two-camera scheme, i.e. having for each robot position considered an image data element which comprises two images, allows recovering the distance between two points shown in the images, i.e. allows avoiding the vision ambiguity problem which arises when trying to recover the distance between two points in world coordinates without depth information using a single image. According to various embodiments, backward learning is used (e.g. by the controller 106), wherein images of the two cameras 202 are used to collect training data (specifically images) not only after touching the surface 115 but also along the moving trajectory. This means that, for collecting a training data element, the controller 106 places the robotic arm in its final (target) position L_final, i.e. when the plug 116 is inserted into the socket 114 (or similarly for any intermediate target for which the machine learning model 112 should be trained). (It should be noted that, using the cameras 202, a target image data element may be collected in this position, which is used according to various embodiments as described below.) Then, for each training data element, two points are sampled from a probability (e.g. normal) distribution: one is T_high, which is positioned in a random location above the socket, and the second one is T_low, which is randomly positioned around the socket’s height.
A correction for this training data element is defined as the delta movement from the sampled position back to the final position L_final (i.e. the difference between L_final and the sampled position) and serves as the ground truth movement label. (Algorithm 1: backward collection of the observations Obs and the corresponding corrections into a training data set D.)
According to algorithm 1 , force sensor data is gathered. That is not necessary according to various embodiments, in particular those which operate with target image data elements as described further below with reference to figure 5.
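Algorithm 1 itself is reproduced only as a figure in the original; the following sketch illustrates the backward data-collection idea described above. The robot interface (move_to, get_observation) and the sampling parameters are hypothetical placeholders.

```python
import numpy as np

def collect_backward_data(robot, l_final: np.ndarray, n_samples: int = 100,
                          sigma_high=(0.02, 0.02, 0.03), sigma_low=(0.01, 0.01, 0.002)):
    """Backward data collection: record (observation, correction) pairs.

    robot.move_to(pose) and robot.get_observation() are assumed helpers;
    poses are 3-D positions here for simplicity (orientation omitted).
    """
    dataset = []
    robot.move_to(l_final)                   # start from the inserted (target) pose
    target_obs = robot.get_observation()     # target image data element (used by the relation policy)
    for _ in range(n_samples):
        # T_high: random location above the socket; T_low: random location around socket height
        t_high = l_final + np.random.normal(0.0, sigma_high) + np.array([0.0, 0.0, 0.05])
        t_low = l_final + np.random.normal(0.0, sigma_low)
        for t in (t_high, t_low):
            robot.move_to(t)
            obs = robot.get_observation()    # Obs = (Img; F; M) at the sampled pose
            correction = l_final - t         # ground-truth delta movement label
            dataset.append((obs, correction))
    return dataset, target_obs
```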
According to various embodiments, the machine learning model 112 comprises multiple components, one of them being an encoder network which the controller 106 uses to determine an encoding for each image data element Img.
Figure 3 illustrates the training of an encoder network 301 according to an embodiment. The encoder network 301 is for example a convolutional neural network, e.g. having a ResNet18 architecture.
The encoder network 301 (realizing the function φ) is trained using a contrastive loss and one or both of a delta policy loss and a relation loss, so for example according to the following loss: loss = l_contrastive + l_delta + l_relation.
These loss components are described in the following.
Contrastive learning, i.e. training based on a contrastive loss, is a framework of learning representations that obey similarity or dissimilarity constraints in a dataset that map onto positive or negative labels, respectively. A possible contrastive learning approach is Instance Discrimination, where two examples (e.g. two images) are a positive pair if they are data augmentations of the same instance and a negative pair otherwise. A key challenge in contrastive learning is the choice of negative samples, as it may influence the quality of the underlying representations learned.
According to various embodiments, the encoder network 301 is trained (e.g. by the controller 106 or by an external device, to be later stored in the controller 106) using a contrastive technique such that it learns relevant features for the task at hand without any specific labels. An example is the InfoNCE loss (NCE: Noise-Contrastive Estimation). Thereby, by stacking the two images from the two cameras 202 into an image data element 302, a depth registration of the plug 113 and socket 114 is obtained. This depth information is used to augment the image data element in various different ways. From the original (i.e. non-augmented) image data element and one or more augmentations obtained in this manner, pairs of image data elements are sampled, wherein one element of the pair is supplied to the encoder network 301 and the other is supplied to another version 303 of the encoder network. The other version 303 realizes the function φ′, which has for example the same parameters as the encoder network 301 and is updated using Polyak averaging according to φ′ = m·φ′ + (1 − m)·φ with m = 0.999 (where φ and φ′ have been used to represent the weights of the two encoder network versions 301, 303). The two encoder network versions 301, 303 each output a representation (i.e. embedding) of size L for the input data element 302 (i.e. a 2 × L output for the pair). Doing this for a batch of N image data elements (i.e. forming a pair of augmentations, or of original and augmentation, for each image data element), i.e. for training input image data of size N × 6 × H × W, gives N pairs of representations output by the two encoder network versions 301, 303 (i.e. representation output data of size 2 × N × L). Using these N pairs of representations, the contrastive loss of the encoder network 301 is calculated by forming positive pairs and negative pairs from the representations included in the pairs. Here, two representations are a positive pair if they were generated from the same input data element 302 and a negative pair if they were generated from different input data elements 302. This means that a positive pair holds two augmentations of the same original image data element or holds an original image data element and an augmentation thereof. All other pairs are negative pairs.
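The Polyak (momentum) update of the second encoder version can be expressed compactly as below; a minimal sketch assuming PyTorch modules, with a small stand-in encoder instead of the ResNet18 mentioned above.

```python
import copy
import torch
import torch.nn as nn

@torch.no_grad()
def polyak_update(encoder: nn.Module, encoder_momentum: nn.Module, m: float = 0.999):
    """phi' = m * phi' + (1 - m) * phi, applied parameter-wise."""
    for p, p_m in zip(encoder.parameters(), encoder_momentum.parameters()):
        p_m.data.mul_(m).add_(p.data, alpha=1.0 - m)

# Setup: the momentum copy starts as an exact clone of the trained encoder.
encoder = nn.Sequential(nn.Conv2d(6, 16, 3, padding=1), nn.ReLU(),
                        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                        nn.Linear(16, 32))        # stand-in for e.g. a ResNet18-style encoder
encoder_momentum = copy.deepcopy(encoder)
polyak_update(encoder, encoder_momentum)          # called after every optimizer step
```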
For determining the contrastive loss 304, a similarity function sim(·) which measures the likeness (or distance) between two embeddings is used. It may use the Euclidean distance (in latent space, i.e. the space of the embeddings), but more complicated functions may be used, e.g. using a kernel. The contrastive loss is then for example given by the sum over i, j of

l_(i,j) = −log( exp(sim(z_i, z_j)/τ) / Σ_(k≠i) exp(sim(z_i, z_k)/τ) )

where the z_i are the embeddings and τ is here a temperature normalization factor (not to be confused with the task τ used above).
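A minimal sketch of such an InfoNCE-style contrastive loss over the 2N embeddings of a batch is shown below; the use of cosine similarity as sim(·) and the temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """z1, z2: (N, L) embeddings of the two views of the same N image data elements."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, L), unit norm
    sim = z @ z.t() / tau                                # pairwise cosine similarities / temperature
    sim.fill_diagonal_(float('-inf'))                    # exclude self-similarity k = i
    # The positive of sample i is its other view at index (i + N) mod 2N;
    # all remaining samples in the batch act as negatives.
    targets = torch.arange(2 * n, device=z.device).roll(n)
    return F.cross_entropy(sim, targets)

# Example with random embeddings:
loss = info_nce_loss(torch.randn(8, 32), torch.randn(8, 32))
```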
Figure 4 shows the determination of a delta movement 405 from an image data element 401 and a force input 402. In this case, the delta movement 405 need not comprise the movement in z direction, since, according to various embodiments, the controller 106 controls the movement Δz in z direction independently, using other information, e.g. the height of the table where the socket 114 is placed, known from prior knowledge or from a depth camera.
In this case, the encoder network 403 (corresponding to encoder network 301) generates an embedding for the image data element 401. The embedding is passed, together with the force input 402, to a neural network 404 which provides the delta movement 405. The neural network 404 is for example a convolutional neural network and is referred to as the delta network; it is said to implement a delta (control) policy.
The delta loss l_delta for training the encoder network 403 (as well as the delta network 404) is determined by having ground truth delta movement labels for training input data elements (comprising an image data element 401 and a force input 402, i.e. what was denoted above, in particular in algorithm 1, by Obs = (Img; F; M)). The training data set D generated by algorithm 1 includes the training input data elements Obs and the ground truth labels for the delta loss.
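A sketch of this delta architecture and its loss is given below, assuming a two-dimensional output (Δx, Δy) with Δz handled separately as described above; the layer sizes and the L1 regression loss are illustrative choices.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeltaPolicy(nn.Module):
    """Delta network: embedding of Img plus force input -> delta movement."""
    def __init__(self, encoder: nn.Module, emb_dim: int = 32, out_dim: int = 2):
        super().__init__()
        self.encoder = encoder                  # CNN on the 6-channel image data element
        self.head = nn.Sequential(nn.Linear(emb_dim + 6, 64), nn.ReLU(),
                                  nn.Linear(64, out_dim))

    def forward(self, img, force, moment):
        z = self.encoder(img)                                # (B, emb_dim)
        f = F.normalize(force, dim=1)                        # direction of the force only
        m = F.normalize(moment, dim=1)                       # direction of the moment only
        return self.head(torch.cat([z, f, m], dim=1))        # (B, out_dim) = (Δx, Δy)

def delta_loss(policy, img, force, moment, target_delta):
    """l_delta: regression of the predicted correction against the ground truth label."""
    return F.l1_loss(policy(img, force, moment), target_delta)
```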
Figure 5 shows the determination of a delta movement 505 from two image data elements 501 , 502.
In this case, the encoder network 503 (corresponding to encoder network 301) generates an embedding for each image data element 501 , 502. The embeddings are passed to a neural network 504 which provides the delta movement 505. The neural network 504 is for example a convolutional neural network and is referred to as relation network and is said to implement a relation (control) policy.
The relation loss l_relation for training the encoder network 503 (as well as the relation network 504) is determined by having ground truth delta movement labels for pairs of image data elements 501, 502. The ground truth delta movement label for a pair of image data elements 501, 502 may for example be generated by taking the difference between the ground truth delta movement labels (i.e. the actions) included for the image data elements in the data set D generated by algorithm 1. This means that, for training with the relation loss, the data set D is used to calculate the delta movement between two images of the same plugging task, Img_i and Img_j, by calculating the ground truth label as the difference between their respective corrections. In case Img_i and Img_j are augmented for this training, augmentations are used which are consistent.
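Correspondingly, a sketch of the relation architecture and of how its ground-truth label could be formed from the corrections stored in the data set D; the pairing of samples and the layer sizes are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RelationPolicy(nn.Module):
    """Relation network: embeddings of a current and a target image data element -> delta movement."""
    def __init__(self, encoder: nn.Module, emb_dim: int = 32, out_dim: int = 2):
        super().__init__()
        self.encoder = encoder
        self.head = nn.Sequential(nn.Linear(2 * emb_dim, 64), nn.ReLU(),
                                  nn.Linear(64, out_dim))

    def forward(self, img_current, img_target):
        z_i = self.encoder(img_current)
        z_j = self.encoder(img_target)
        return self.head(torch.cat([z_i, z_j], dim=1))

def relation_label(delta_i: torch.Tensor, delta_j: torch.Tensor) -> torch.Tensor:
    """Ground-truth label for a pair (Img_i, Img_j) of the same plugging task:
    the difference of the corrections stored with them in the data set D."""
    return delta_i - delta_j

def relation_loss(policy, img_i, img_j, delta_i, delta_j):
    return F.l1_loss(policy(img_i, img_j), relation_label(delta_i, delta_j))
```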
The relation loss l_relation facilitates one-shot learning, enables multi-step insertion and improves the exploitation of the collected data.
When trained, the encoder network 403 and the delta network 404, used as described with reference to figure 4 to derive a delta movement from an image data element 401 and a force input 402, implement what is referred to as delta (control) policy. Similarly, the trained encoder network 503 and the relation network 504, used as described with reference to figure 5 to derive a delta movement from an image data element 501 (for a current position) and an image data element 502 (for a target position) implement what is referred to as relation (control) policy.
The controller may use the delta policy or the relation policy as a residual policy in combination with a main policy. So, for inference, the controller 106 uses the encoder network 403, 503 for the delta policy or the relation policy. This may be decided depending on the use-case. For example, for one-shot or multi-step insertion tasks, the relation policy (and the relation architecture according to figure 5) is used since it can generalize better in these tasks. For other tasks, the delta policy (and the delta architecture of figure 4) is used.
Following the main policy, the controller 106 approximates the location of the hole, i.e. localizes the holes, sockets, threads, etc. in the scene, for example from images, and uses, for example, a PD controller to follow a path calculated from the approximation.
It then activates the residual policy, e.g. at a certain distance R of the plug 113 from the surface 115, and does the actual insertion according to the residual policy. An action of the residual policy is a delta movement as defined above.
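The interplay of main and residual policy at inference time can be sketched as a simple control loop; the main-policy interface, the activation distance R, the planar residual correction and the termination check below are hypothetical placeholders, not the described implementation.

```python
import numpy as np

def insert_with_residual(robot, main_policy, residual_policy,
                         r_activate: float = 0.03, max_steps: int = 200):
    """Follow the main policy towards the approximated hole location, then add
    residual delta movements from the learned policy for the actual insertion."""
    for _ in range(max_steps):
        obs = robot.get_observation()
        target = main_policy.approximate_hole_location(obs)   # e.g. vision-based localization
        step = main_policy.step_towards(target)               # coarse movement (e.g. PD controller)
        if robot.distance_to_surface() < r_activate:
            delta = residual_policy(obs)                       # learned (Δx, Δy) correction
            step = step + np.array([delta[0], delta[1], 0.0])  # Δz handled separately
        robot.move_by(step)
        if robot.is_inserted():
            return True
    return False
```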
According to various embodiments, in order to improve robustness as well as generalization over colour and shape, various augmentations of training data elements may be used. The order as well as the properties of each augmentation have a large effect on generalization. For visual augmentation (i.e. augmentation of training image data), this may include, for training based on the delta loss and the relation loss: resize, random crop, colour jitter, translation, rotation, erase and random convolution. For the contrastive loss, exemplary augmentations are random resized crop, strong translation, strong rotation and erase. Similar augmentations may be used for training data elements within the same batch. Regarding force augmentation (i.e. augmentation of training force input data), the direction of the vectors (F, M), rather than their magnitude, is typically the more important factor. Therefore, according to various embodiments, the force input 402 to the delta network is the direction of the force and moment vectors. These may be augmented for training (e.g. by jittering).
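A sketch of such an augmentation pipeline using torchvision transforms; the concrete parameters and the force-jittering scale are assumptions, and colour jitter or random convolution (which operate on 3-channel images) would be applied per camera image before stacking, so they are omitted here.

```python
import torch
import torch.nn.functional as F
from torchvision import transforms

# Augmentations for delta/relation training, applied to the (6, H, W) float image tensor.
delta_relation_aug = transforms.Compose([
    transforms.RandomResizedCrop(128, scale=(0.8, 1.0)),
    transforms.RandomAffine(degrees=10, translate=(0.1, 0.1)),
    transforms.RandomErasing(p=0.5),
])

# Stronger augmentations for the contrastive loss.
contrastive_aug = transforms.Compose([
    transforms.RandomResizedCrop(128, scale=(0.5, 1.0)),
    transforms.RandomAffine(degrees=30, translate=(0.3, 0.3)),
    transforms.RandomErasing(p=0.7),
])

def augment_force(force: torch.Tensor, moment: torch.Tensor, sigma: float = 0.05):
    """Jitter the force/moment directions; only the direction is fed to the delta network."""
    f = F.normalize(force + sigma * torch.randn_like(force), dim=-1)
    m = F.normalize(moment + sigma * torch.randn_like(moment), dim=-1)
    return f, m
```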
As mentioned above, the relation policy may in particular be used for a multi-step task, e.g. multi-step insertion. It should be noted that in a multi-step insertion task, like locking a door, it is typically harder to collect training data and to verify that each step can be completed.
According to various embodiments, for a multi-step task, images are pre-saved for each target including one or more intermediate targets and a final target. Then, for each target that is currently to be achieved (depending on the current step, i.e. according to a sequence of intermediate targets and, as last element, the final target), an image data element is taken for the current position (e.g. by taking images from both cameras 202) and fed, together with the image (or images) for the target, to the encoder network 503 and a delta movement is derived by the relation network 504 as described with reference to figure 5.
Figure 6 illustrates an example of a multi-step insertion task.
In the example of figure 6, a task to lock a door, the task is composed of three steps starting from an initial position 601: inserting the key 602, turning the lock 603, and then turning back 604. For each step, an image of the state is taken with the two (e.g. 45-degree) cameras 202 and pre-saved. The execution of the task follows algorithm 2 with the relation policy and a similarity function to switch between steps. For example, an intermediate target is deemed to be reached when the distance between the images currently taken and the pre-saved target images for the current step (according to the similarity function) is below a threshold.
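The switching logic could, for example, look as follows (a sketch; `relation_policy`, `image_distance`, the threshold and the robot/camera interfaces are hypothetical placeholders):

```python
def run_multi_step_task(robot, cameras, relation_policy, target_images,
                        image_distance, threshold=0.1, max_steps_per_target=200):
    """Execute a multi-step task with the relation policy, switching between
    pre-saved intermediate/final targets once the current view is close enough
    to the pre-saved target images for the current step."""
    for target in target_images:              # intermediate targets, then the final target
        for _ in range(max_steps_per_target):
            current = cameras.capture()       # images from both end-effector cameras
            if image_distance(current, target) < threshold:
                break                         # target of the current step deemed reached
            robot.move_delta(relation_policy(current, target))
```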
Even if regular backward data collection (e.g. according to algorithm 1) is used and the locking and insertion states are themselves not visited during training (only states above or touching the hole surface are visited), the controller 106 can successfully perform the task in this manner.
In summary, according to various embodiments, a method is provided as illustrated in figure 7.
Figure 7 shows a flow diagram 700 illustrating a method for controlling a robot.
In 701, for a position of an end-effector of the robot, a first image is received from a first camera and a second image is received from a second camera, wherein the end-effector has a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them and wherein the first camera and the second camera are attached to the end effector at opposite sides of the gripper plane.
In 702, the first image and the second image are processed by a neural network configured (i.e. in particular trained) to output a movement vector for the end-effector for an insertion task.
In 703, the robot is controlled to move as specified by the movement vector.
The method of figure 7 may be performed by one or more computers including one or more data processing units. The term "data processing unit" can be understood as any type of entity that allows the processing of data or signals. For example, the data or signals may be treated according to at least one (i.e., one or more than one) specific function performed by the data processing unit. A data processing unit may include an analogue circuit, a digital circuit, a composite signal circuit, a logic circuit, a microprocessor, a microcontroller, a central processing unit (CPU), a graphics processing unit (GPU), a digital signal processor (DSP), a field programmable gate array (FPGA) integrated circuit or any combination thereof, or be formed from it. Any other way of implementing the respective functions may also be understood as a data processing unit or logic circuitry. It will be understood that one or more of the method steps described in detail herein may be executed (e.g., implemented) by a data processing unit through one or more specific functions performed by the data processing unit.
Various embodiments may receive and use image data from various visual sensors (cameras) such as video, radar, LiDAR, ultrasonic, thermal imaging etc. Embodiments may be used for training a machine learning system and controlling a robot, e.g. a robotic manipulator, autonomously to achieve various insertion tasks under different scenarios. It should be noted that after training for an insertion task, the neural network may be trained for a new insertion task, which reduces training time compared to training from scratch (transfer learning capability). Embodiments are in particular applicable to the control and monitoring of the execution of manipulation tasks, e.g. in assembly lines.
According to one embodiment, the method is computer-implemented.
Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations may be substituted for the specific embodiments shown and described without departing from the scope of the present invention. This application is intended to cover any adaptations or variations of the specific embodiments discussed herein. Therefore, it is intended that this invention be limited only by the claims and the equivalents thereof.

Claims
1. A robot comprising:
an end-effector having a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them; at least a first camera and a second camera attached to the end effector at opposite sides of the gripper plane; and a controller configured to receive, for a position of the end-effector, a first image from the first camera and a second image from the second camera, to process the first image and the second image by a neural network, wherein the neural network is configured to output a movement vector for the end-effector for an insertion task, and to control the robot to move as specified by the movement vector.
2. The robot of claim 1, wherein the first camera and the second camera are arranged symmetrically to each other with respect to the gripper plane.
3. The robot of claim 1 or 2, wherein the positions of the first camera and the second camera are rotated with respect to the gripper plane.
4. The robot of claim 3, wherein the positions of the first camera and the second camera are rotated with respect to the gripper plane by between 30 and 60 degrees, preferably between 40 and 50 degrees.
5. The robot of any one of claims 1 to 4, wherein the controller is configured to process the first image and the second image by the neural network by generating an input image for the neural network having a first number of channels equal to the number of channels of the first image which hold the image data of the first image and having a second number of channels equal to the number of channels of the second image which hold the image data of the second image, and supplying the input image to the neural network.
6. A method for controlling a robot, comprising: receiving, for a position of an end-effector of the robot, a first image from a first camera and a second image from a second camera, wherein the end-effector has a gripper with at least a first finger and a second finger, wherein the two fingers are arranged opposite with respect to each other such that they define a gripper plane between them and wherein the first camera and the second camera are attached to the end effector at opposite sides of the gripper plane; processing the first image and the second image by a neural network configured to output a movement vector for the end-effector for an insertion task; and controlling the robot to move as specified by the movement vector.
7. The method of claim 6, comprising training the neural network to derive movement vectors for an insertion task from input data elements comprising image data taken from two cameras.
8. A computer program comprising instructions which, when executed by a processor, make the processor perform a method according to any one of claims 6 to 7.
9. A computer readable medium storing instructions which, when executed by a processor, make the processor perform a method according to any one of claims 6 to 7.
PCT/EP2023/053646 2022-03-02 2023-02-14 Robot and method for controlling a robot WO2023165807A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102022202145.2A DE102022202145A1 (en) 2022-03-02 2022-03-02 Robot and method for controlling a robot
DE102022202145.2 2022-03-02

Publications (1)

Publication Number Publication Date
WO2023165807A1 true WO2023165807A1 (en) 2023-09-07

Family

ID=85251746

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/EP2023/053646 WO2023165807A1 (en) 2022-03-02 2023-02-14 Robot and method for controlling a robot

Country Status (2)

Country Link
DE (1) DE102022202145A1 (en)
WO (1) WO2023165807A1 (en)

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB9803364D0 (en) 1998-02-18 1998-04-15 Armstrong Healthcare Ltd Improvements in or relating to a method of an apparatus for registering a robot
JP6380828B2 (en) 2014-03-07 2018-08-29 セイコーエプソン株式会社 Robot, robot system, control device, and control method
JP6587761B2 (en) 2017-02-09 2019-10-09 三菱電機株式会社 Position control device and position control method
EP3693138B1 (en) 2017-06-19 2022-08-03 Google LLC Robotic grasping prediction using neural networks and geometry aware object representation
JP6810087B2 (en) 2018-03-29 2021-01-06 ファナック株式会社 Machine learning device, robot control device and robot vision system using machine learning device, and machine learning method
DE102019122790B4 (en) 2018-08-24 2021-03-25 Nvidia Corp. Robot control system
DE102019106458A1 (en) 2019-03-13 2020-09-17 ese-robotics GmbH Method for controlling an industrial robot
US11679508B2 (en) 2019-08-01 2023-06-20 Fanuc Corporation Robot device controller for controlling position of robot
DE102021109332B4 (en) 2021-04-14 2023-07-06 Robert Bosch Gesellschaft mit beschränkter Haftung Apparatus and method for controlling a robot to insert an object into an insertion site
DE102021109334B4 (en) 2021-04-14 2023-05-25 Robert Bosch Gesellschaft mit beschränkter Haftung Device and method for training a neural network for controlling a robot for an insertion task

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190381670A1 (en) * 2018-06-17 2019-12-19 Robotic Materials, Inc. Systems, Devices, Components, and Methods for a Compact Robotic Gripper with Palm-Mounted Sensing, Grasping, and Computing Devices and Components
WO2020142296A1 (en) * 2019-01-01 2020-07-09 Giant.Ai, Inc. Software compensated robotics
EP3772786A1 (en) * 2019-08-09 2021-02-10 The Boeing Company Method and system for alignment of wire contact with wire contact insertion holes of a connector
US20210114209A1 (en) * 2019-10-21 2021-04-22 Canon Kabushiki Kaisha Robot control device, and method and non-transitory computer-readable storage medium for controlling the same

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
SPECTOR OREN ET AL: "InsertionNet - A Scalable Solution for Insertion", IEEE ROBOTICS AND AUTOMATION LETTERS, IEEE, vol. 6, no. 3, 30 April 2021 (2021-04-30), pages 5509 - 5516, XP011856902, DOI: 10.1109/LRA.2021.3076971 *

Also Published As

Publication number Publication date
DE102022202145A1 (en) 2023-09-07

Similar Documents

Publication Publication Date Title
Li et al. Survey on mapping human hand motion to robotic hands for teleoperation
Ekvall et al. Learning and evaluation of the approach vector for automatic grasp generation and planning
US20220331964A1 (en) Device and method for controlling a robot to insert an object into an insertion
US20220335622A1 (en) Device and method for training a neural network for controlling a robot for an inserting task
Sanz et al. Vision-guided grasping of unknown objects for service robots
Lin et al. Peg-in-hole assembly under uncertain pose estimation
Hoffmann et al. Adaptive robotic tool use under variable grasps
JP6322949B2 (en) Robot control apparatus, robot system, robot, robot control method, and robot control program
Ma et al. Modeling and evaluation of robust whole-hand caging manipulation
Jha et al. Generalizable human-robot collaborative assembly using imitation learning and force control
Wang et al. Learning robotic insertion tasks from human demonstration
US20220335710A1 (en) Device and method for training a neural network for controlling a robot for an inserting task
Schiebener et al. Discovery, segmentation and reactive grasping of unknown objects
Ranjan et al. Identification and control of NAO humanoid robot to grasp an object using monocular vision
Yang et al. Fast programming of peg-in-hole actions by human demonstration
Du et al. Robot teleoperation using a vision-based manipulation method
US20230311331A1 (en) Device and method for controlling a robot to perform a task
Haugaard et al. Self-supervised deep visual servoing for high precision peg-in-hole insertion
WO2023165807A1 (en) Robot and method for controlling a robot
US20230278204A1 (en) Device and method for controlling a robot to perform a task
US20230278227A1 (en) Device and method for training a machine learning model to derive a movement vector for a robot from image data
JP2022142773A (en) Device and method for localizing location of object from camera image of object
US20220335295A1 (en) Device and method for training a neural network for controlling a robot for an inserting task
Ota et al. Tactile Pose Feedback for Closed-loop Manipulation Tasks
US20220301209A1 (en) Device and method for training a neural network for controlling a robot

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23705380

Country of ref document: EP

Kind code of ref document: A1