US20150375396A1 - Automatic in-situ registration and calibration of robotic arm/sensor/workspace system - Google Patents
- Publication number
- US20150375396A1 (application US14/314,970)
- Authority
- United States
- Prior art keywords
- point
- sensor
- arm
- workspace
- calibration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G01B21/00—Measuring arrangements or details thereof, where the measuring technique is not covered by the other groups of this subclass, unspecified or not relevant
- B25J9/1679—Programme controls characterised by the tasks executed
- B25J9/1692—Calibration of manipulator
- B25J19/00—Accessories fitted to manipulators, e.g. for monitoring, for viewing; safety devices combined with or specially adapted for use in connection with manipulators
- B25J9/1674—Programme controls characterised by safety, monitoring, diagnostic
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- G05B2219/39016—Simultaneous calibration of manipulator and camera
- G05B2219/39021—With probe, touch reference positions
- G05B2219/39022—Transform between measuring and manipulator coordinate system
- G05B2219/39367—Using a motion map, association between visual position and joint position
- G05B2219/40607—Fixed camera to observe workspace, object, workpiece, global
- Y10S901/02—Arm motion controller
- Y10S901/09—Closed loop, sensor feedback controls arm movement
Definitions
- Manipulation of objects can be a function of a wide class of robotic systems (e.g., manipulation systems).
- A typical closed-loop manipulation system can include at least one optical sensor (e.g., depth sensor) that perceives and interprets real-world scenes and a physical manipulator (e.g., actuator, robotic arm, etc.) that can reach into the scenes and effect change (e.g., pick up and move a physical object).
- An ability to perform useful tasks with a physical object can depend on the ability of the robotic arm to manipulate the object with sufficient accuracy for the task. The accuracy can depend on a mapping between a coordinate system of the depth sensor and a coordinate system of the robotic arm.
- A mapping can be created between coordinates extracted from a depth image generated by a depth sensor and Cartesian positions of a robotic arm in a workspace. Such a mapping between coordinate systems can also be referred to as registration. Precise and accurate mapping between depth sensors and robotic arms is a long-standing challenge in robotics and is relevant for substantially any robotic system that has one or more depth sensors. Errors in the mapping can result in inaccuracies of the overall manipulation system.
- A Cartesian coordinate system is oftentimes chosen to represent the real world, and coordinates of the robotic arm and the depth sensor in that system are determined by measurements.
- A linear function (e.g., commonly represented as a transformation matrix) is then conventionally used to map coordinates between the sensor and arm frames.
- Such a typical model can have a number of potential error sources.
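The conventional parametric model described above can be illustrated with a short sketch: a single homogeneous transformation matrix maps a sensor-frame point into the arm frame. The matrix values and point below are hypothetical; the fragility of the model is that one global linear map cannot absorb biases that vary across the workspace.

```python
# Conventional parametric registration: a single 4x4 affine transform is
# assumed to map sensor coordinates to arm coordinates everywhere in the
# workspace. Values are hypothetical, for illustration only.

def apply_affine(T, p):
    """Apply a 4x4 homogeneous transform T (list of rows) to 3D point p."""
    x, y, z = p
    out = []
    for row in T[:3]:
        out.append(row[0] * x + row[1] * y + row[2] * z + row[3])
    return out

# Pure translation by (10, 0, -5) as a toy transform.
T = [
    [1, 0, 0, 10],
    [0, 1, 0, 0],
    [0, 0, 1, -5],
    [0, 0, 0, 1],
]
print(apply_affine(T, (1.0, 2.0, 3.0)))  # → [11.0, 2.0, -2.0]
```

Any position-dependent sensor bias shows up as residual error under such a model, which motivates the non-parametric approach described next.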
- Errors can result from measuring exact placement of the depth sensor and the robotic arm in a common coordinate frame, which can be difficult at best. Further, the overall system may be prone to falling out of calibration due to mechanical movement of parts within the system.
- A depth sensor may have a bias that varies non-linearly with distance or across sensor areas.
- A robotic arm can have less than ideal ability to achieve exact placement of an end effector of such robotic arm at a desired Cartesian coordinate.
- Depth sensors or robotic arms, even those from the same vendor, may have slightly different biases, which can further complicate typical approaches for registration.
- The robotic arm can include an end effector.
- A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented.
- The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration).
- A point cloud can be formed over the calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame.
- Such a technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
- The depth sensor and the robotic arm can be controlled (e.g., during registration, by a control system, etc.).
- An input point from the depth sensor can be received.
- The input point can include coordinates indicative of a location in a sensor coordinate frame, where the location is in the workspace.
- Sensor calibration points within proximity of the input point can be identified.
- A sensor calibration point can include first coordinates of the end effector in the sensor coordinate frame, where the first coordinates are previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace.
- Arm calibration points that respectively correspond to the sensor calibration points can be identified.
- An arm calibration point that corresponds to a sensor calibration point can include second coordinates of the end effector in an arm coordinate frame, where the second coordinates are previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace.
- A processor can be employed to compute an estimated point that maps to the input point.
- The estimated point can include coordinates indicative of the location in the arm coordinate frame.
- The estimated point can be computed based upon the sensor calibration points and the arm calibration points as identified. According to various embodiments, it is contemplated that the techniques described herein can similarly enable computing an estimated point, which includes coordinates in the sensor coordinate frame, that maps to an input point, which includes coordinates in the arm coordinate frame.
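The interpolation step just described can be sketched as follows, assuming the tetrahedron-based scheme depicted in the later figures: barycentric weights of the input point are computed with respect to four nearby sensor calibration points, and the same weights are applied to the corresponding arm calibration points. All point values here are hypothetical.

```python
def det3(m):
    """Determinant of a 3x3 matrix given as three rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def barycentric_map(p, sensor_pts, arm_pts):
    """Interpolate within a tetrahedron of four (sensor, arm) calibration
    pairs: find barycentric weights of p among sensor_pts, then apply the
    identical weights to arm_pts."""
    v0, v1, v2, v3 = sensor_pts
    # Rows of the 3x3 system whose columns are the tetrahedron edges.
    rows = [[v[i] - v0[i] for v in (v1, v2, v3)] for i in range(3)]
    rhs = [p[i] - v0[i] for i in range(3)]
    d = det3(rows)
    w = []
    for j in range(3):  # Cramer's rule for the three edge weights
        mj = [row[:] for row in rows]
        for i in range(3):
            mj[i][j] = rhs[i]
        w.append(det3(mj) / d)
    weights = [1.0 - sum(w)] + w
    return [sum(wk * q[i] for wk, q in zip(weights, arm_pts))
            for i in range(3)]

# Unit tetrahedron in the sensor frame; arm frame offset by +10 in x.
sensor = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1)]
arm = [(10, 0, 0), (11, 0, 0), (10, 1, 0), (10, 0, 1)]
print(barycentric_map((0.25, 0.25, 0.25), sensor, arm))  # → [10.25, 0.25, 0.25]
```

Because the weights are recomputed from local calibration pairs for every query, sensor bias that varies across the workspace is absorbed automatically, with no explicit distortion model.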
- Calibration (e.g., recalibration) of the depth sensor and the robotic arm can be performed.
- The end effector can be caused to non-continuously traverse through the workspace based upon a pattern, where the end effector is stopped at positions within the workspace according to the pattern.
- At each such position, a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor can be collected, and an arm calibration point for the position of the end effector within the workspace detected by the robotic arm can be collected.
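The collection loop above can be sketched as follows; `move_end_effector_to`, `read_sensor_point`, and `read_arm_point` are hypothetical stand-ins, as the disclosure does not specify a control API.

```python
# Hedged sketch of the calibration collection loop: the end effector is
# stopped at each position of the pattern and a (sensor, arm) point pair
# is recorded. The three callables are hypothetical interface stand-ins.

def collect_calibration_pairs(pattern, move_end_effector_to,
                              read_sensor_point, read_arm_point):
    """Stop the end effector at each pattern position and record the
    (sensor frame, arm frame) calibration point pair."""
    sensor_points, arm_points = [], []
    for position in pattern:
        move_end_effector_to(position)             # stop at this position
        sensor_points.append(read_sensor_point())  # sensor coordinate frame
        arm_points.append(read_arm_point())        # arm coordinate frame
    return sensor_points, arm_points
```

The two returned lists stay index-aligned, which is what lets the later interpolation step look up the arm point that corresponds to any given sensor point.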
- The depth sensor and the robotic arm can be recalibrated, for instance, responsive to movement of the depth sensor in the workspace, movement of the robotic arm in the workspace, a temperature change in the workspace that exceeds a threshold temperature value, a mapping error detected that exceeds a threshold error value, or the like.
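The recalibration triggers listed above can be expressed as a simple predicate. The default thresholds below are hypothetical placeholders, not values from the disclosure.

```python
# Hedged sketch of the recalibration triggers described above. Default
# threshold values are hypothetical, for illustration only.

def needs_recalibration(sensor_moved, arm_moved, temp_change_c,
                        mapping_error_m, temp_threshold_c=2.0,
                        error_threshold_m=0.005):
    """Return True if any trigger fires: sensor or arm movement, a
    temperature change over threshold, or a mapping error over threshold."""
    return bool(sensor_moved or arm_moved
                or temp_change_c > temp_threshold_c
                or mapping_error_m > error_threshold_m)
```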
- FIG. 1 illustrates a functional block diagram of an exemplary system that controls a depth sensor and a robotic arm that operate in a workspace.
- FIG. 2 illustrates an example of the workspace of FIG. 1 .
- FIG. 3 illustrates an exemplary pattern used for calibration (e.g., recalibration) that specifies positions within the workspace of FIG. 2 .
- FIG. 4 illustrates a functional block diagram of an exemplary system that includes a control system that controls the depth sensor and the robotic arm during calibration and registration.
- FIG. 5 illustrates an example of tetrahedrons that can be formed using Delaunay triangulation on sensor calibration points in the workspace.
- FIG. 6 illustrates an example where a preset number of sensor calibration points nearest to an input point are identified and used to form tetrahedrons that include the input point.
- FIG. 7 illustrates a functional block diagram of another exemplary system that includes the control system that controls the depth sensor and the robotic arm during calibration and registration.
- FIG. 8 is a flow diagram that illustrates an exemplary methodology of controlling a depth sensor and a robotic arm that operate in the workspace.
- FIG. 9 is a flow diagram that illustrates an exemplary methodology of controlling the depth sensor and the robotic arm that operate in the workspace.
- FIG. 10 illustrates an exemplary computing device.
- The term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B.
- The articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
- A non-parametric technique for registration between a depth sensor and a robotic arm can be implemented.
- The registration technique can utilize a sparse sampling of a workspace (e.g., with a marker placed on an end effector of the robotic arm).
- A point cloud can be formed over the calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame.
- Such techniques can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
- A result of the registration described herein can be used to generate reaching motions with the robotic arm towards targets sensed by the depth sensor. For instance, the Euclidean error in the resulting transformation from coordinates of the depth sensor to coordinates of the robotic arm can be on the order of sub-millimeters or millimeters; however, the claimed subject matter is not so limited.
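Accuracy claims of this kind are checked by measuring the Euclidean error between a transformed point and the position actually reached. A small helper, with hypothetical inputs:

```python
import math

def euclidean_error(estimated_point, actual_point):
    """Euclidean distance between an estimated arm-frame point and the
    actual point, in the same (hypothetical) units."""
    return math.sqrt(sum((e - a) ** 2
                         for e, a in zip(estimated_point, actual_point)))

print(euclidean_error((1.0, 2.0, 3.0), (1.0, 2.0, 3.5)))  # → 0.5
```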
- The transformation function can be a non-linear function that can account for non-linear characteristics of the depth sensor and/or the robotic arm.
- The transformation function can be a closed form function, a collection of closed form functions, described using a lookup table or a neural net, or the like.
- FIG. 1 illustrates a system 100 that controls a depth sensor 102 and a robotic arm 104 that operate in a workspace 106 .
- The robotic arm 104 can include an end effector.
- The system 100 includes a control system 108.
- The control system 108 can control the depth sensor 102 and the robotic arm 104; more particularly, the control system 108 can automatically control in-situ calibration and registration of the depth sensor 102 and the robotic arm 104 in the workspace 106.
- The control system 108 can create a mapping between coordinates extracted from a depth image generated by the depth sensor 102 and a corresponding Cartesian position of the robotic arm 104 (e.g., a position of the end effector of the robotic arm 104) in the workspace 106.
- Coordinates extracted from a depth image generated by the depth sensor 102 can be referred to herein as coordinates in a sensor coordinate frame (e.g., sensor coordinates in the sensor coordinate frame).
- A Cartesian position of the robotic arm 104 in the workspace 106 can be referred to herein as coordinates in an arm coordinate frame (e.g., arm coordinates in the arm coordinate frame).
- The control system 108 can compute a transformation between sensor coordinates in the sensor coordinate frame from the depth sensor 102 and arm coordinates in the arm coordinate frame from the robotic arm 104, while simultaneously compensating for distortions in a depth field of the depth sensor 102.
- The transformation of coordinates in the sensor coordinate frame to coordinates in the arm coordinate frame allows for generation of Cartesian position goals for the robotic arm 104 to perform a reaching motion towards a target identified in the depth image by the depth sensor 102.
- Coordinates in the arm coordinate frame can likewise be transformed to coordinates in the sensor coordinate frame by the control system 108.
- The control system 108 simultaneously calibrates the depth sensor 102 (e.g., compensates for distortions in the depth field) and computes the transformation between sensor coordinates from the depth sensor 102 and arm coordinates from the robotic arm 104.
- Such simultaneous calibration and computation of the transformation enables accurate coordination between the depth sensor 102 and the robotic arm 104 when performing subsequent tasks.
- The control system 108 can transmit data to and receive data from the depth sensor 102 and the robotic arm 104 over a network (or networks).
- The depth sensor 102, the robotic arm 104, and the control system 108 can each be connected to a local network (over which data can be transmitted).
- The control system 108 can be executed by one or more processors of one or more server computing devices (e.g., one or more datacenters can include the control system 108, etc.).
- The control system 108 can be part of the depth sensor 102 and/or the robotic arm 104.
- The control system 108 can enable automatic discovery of a coordinate transformation function that compensates for non-linear biases of the depth sensor 102 and the robotic arm 104. Such a coordinate transformation function can be discovered by the control system 108 without separate pre-calibration of the depth sensor 102.
- The coordinate transformation function can account for non-linear biases of the robotic arm 104 (e.g., characteristics of the robotic arm 104 can change over time, when an end effector is placed at differing locations in the workspace 106, due to mechanical deformation of the robotic arm 104, etc.).
- The depth sensor 102 can be substantially any type of depth sensor.
- The depth sensor 102 can be a structured light three-dimensional (3D) scanner, a time-of-flight scanner, a modulated light 3D scanner, or the like.
- A scene captured by the depth sensor 102 (e.g., a time-of-flight scanner) can be sensed using infrared (IR) light.
- Mean depth readings at a particular pixel provided by the depth sensor 102 can be stable in time, displaying small variance.
- Depth readings from the depth sensor 102 can be stable over a number of readings and can have a high degree of precision over much of the scene, provided sampling occurs over enough frames.
- A systematic distortion of the depth field may be present for the depth sensor 102.
- The low-frequency bias patterns can reach several centimeters at sides of the depth image and several millimeters in the middle of the depth image.
- Biases and placement errors are commonly repeatable for a given device, and thus, can potentially be minimized with a non-uniform mapping function.
- The distortion can change shape at different distances, which suggests a relatively complex non-linear bias model.
- The control system 108 can account for the distortion to achieve millimeter or sub-millimeter levels of accuracy. Rather than trying to separately model the distortion, the control system 108 automatically compensates for the distortion during a transformation of coordinates between the sensor coordinate frame and the arm coordinate frame.
- The control system 108 can include a data repository 110.
- The data repository 110 can include sensor calibration points 112 and arm calibration points 114.
- The sensor calibration points 112 can include a sensor calibration point 1, . . . , and a sensor calibration point n, where n can be substantially any integer.
- The arm calibration points 114 can include an arm calibration point 1, . . . , and an arm calibration point n.
- The arm calibration points 114 respectively correspond to the sensor calibration points 112.
- The control system 108 can include a calibration component 122 that can collect and store (e.g., in the data repository 110) the sensor calibration points 112 and the arm calibration points 114.
- A sensor calibration point (e.g., from the sensor calibration points 112) includes first coordinates of the end effector of the robotic arm 104 in the sensor coordinate frame. The first coordinates are previously collected (e.g., by the calibration component 122) during calibration with the end effector at a given position within the workspace 106.
- An arm calibration point (e.g., from the arm calibration points 114) that corresponds to the sensor calibration point includes second coordinates of the end effector of the robotic arm 104 in the arm coordinate frame. The second coordinates are previously collected during the calibration (e.g., by the calibration component 122) with the end effector at the given position within the workspace 106.
- The sensor calibration points 112 and the corresponding arm calibration points 114 are collected for a plurality of positions throughout the workspace 106.
- A number and placement of the positions throughout the workspace 106 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected).
- The control system 108 can include an interface component 116.
- The interface component 116 can receive an input point from the depth sensor 102.
- The input point can include coordinates indicative of a location in the sensor coordinate frame, where the location is in the workspace 106.
- The interface component 116 can additionally or alternatively receive an input point from the robotic arm 104.
- The input point from the robotic arm 104 can include coordinates indicative of a location in the arm coordinate frame, where such location is in the workspace 106.
- The control system 108 can further include a sample selection component 118.
- The sample selection component 118 can identify sensor calibration points within proximity of the input point from the data repository 110.
- The sample selection component 118 can identify a subset (less than n) of the sensor calibration points 112 from the data repository 110 as being within proximity of the input point.
- The sample selection component 118 can identify arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point from the data repository 110.
- The sample selection component 118 can identify a subset (less than n) of the arm calibration points 114 from the data repository 110 that respectively correspond to the sensor calibration points within proximity of the input point.
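One plausible realization of the sample selection component's proximity test, consistent with the preset-number-of-nearest-points variant depicted in FIG. 6, is a k-nearest search over the stored sensor calibration points. Indices are returned so the corresponding arm calibration points can be looked up by the same indices. Point values are hypothetical.

```python
import math

def k_nearest(input_point, sensor_calibration_points, k=4):
    """Return indices of the k sensor calibration points nearest to the
    input point; the matching arm calibration points share these indices."""
    order = sorted(range(len(sensor_calibration_points)),
                   key=lambda i: math.dist(input_point,
                                           sensor_calibration_points[i]))
    return order[:k]

pts = [(0, 0, 0), (5, 5, 5), (1, 0, 0), (0.5, 0, 0)]
print(k_nearest((0.4, 0, 0), pts, k=2))  # → [3, 0]
```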
- The control system 108 can include an interpolation component 120 that computes an estimated point that maps to the input point.
- The estimated point can include coordinates indicative of the location in the arm coordinate frame.
- The input point received by the interface component 116 can include coordinates indicative of the location in the sensor coordinate frame, and the estimated point computed by the interpolation component 120 can include coordinates indicative of the location in the arm coordinate frame, where the location is in the workspace 106.
- The estimated point can be computed by the interpolation component 120 based upon the sensor calibration points within proximity of the input point and the arm calibration points that respectively correspond to those sensor calibration points.
- The control system 108 can include the calibration component 122, which performs the calibration of the depth sensor 102 and the robotic arm 104.
- The calibration component 122 can perform in-situ calibration or recalibration of the depth sensor 102 and the robotic arm 104 in the workspace 106.
- The calibration component 122 can cause the end effector of the robotic arm 104 to non-continuously traverse through the workspace 106 based on a pattern. The end effector can be stopped at positions within the workspace 106 according to the pattern.
- At each position within the workspace 106 at which the end effector is stopped, the calibration component 122 can collect a sensor calibration point for the position of the end effector within the workspace 106 detected by the depth sensor 102 and an arm calibration point for the position of the end effector within the workspace 106 detected by the robotic arm 104.
- The sensor calibration point for the position can include coordinates of the end effector at the position within the workspace 106 in the sensor coordinate frame.
- The coordinates of the end effector included as part of the sensor calibration point can be coordinates of a centroid (e.g., of a given portion of the end effector, of an object mechanically attached to the end effector, etc.), where the centroid can be computed based on image moments of a standard deviation image from the depth sensor 102.
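The centroid-from-image-moments computation can be sketched with the raw moments M00, M10, and M01; the tiny image below is a hypothetical stand-in for a standard deviation image of the marker region.

```python
# Centroid of a 2D image via raw image moments: cx = M10/M00, cy = M01/M00.
# The image values here are hypothetical, for illustration only.

def centroid_from_moments(img):
    """Return (cx, cy), the intensity centroid of a 2D image (list of rows)."""
    m00 = m10 = m01 = 0.0
    for y, row in enumerate(img):
        for x, val in enumerate(row):
            m00 += val          # zeroth moment: total intensity
            m10 += x * val      # first moment in x
            m01 += y * val      # first moment in y
    return m10 / m00, m01 / m00

# A single bright pixel at (x=2, y=1) yields that pixel as the centroid.
img = [[0, 0, 0, 0],
       [0, 0, 5, 0],
       [0, 0, 0, 0]]
print(centroid_from_moments(img))  # → (2.0, 1.0)
```

Because the moments weight every pixel, the centroid is a sub-pixel estimate for extended blobs, which is what makes sub-millimeter marker localization plausible.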
- The arm calibration point for the position can include coordinates of the end effector at the position within the workspace 106 in the arm coordinate frame.
- The calibration component 122 can store the sensor calibration point and the arm calibration point for the position in the data repository 110 (e.g., as part of the sensor calibration points 112 and the arm calibration points 114, respectively).
- Turning to FIG. 2, illustrated is an example of the workspace 106.
- The depth sensor 102 and the robotic arm 104 operate in the workspace 106.
- The robotic arm 104 can include an end effector 202.
- The end effector 202 can be caused to non-continuously traverse through the workspace 106 based on a pattern, where the end effector 202 is stopped at positions within the workspace 106 according to the pattern.
- The end effector 202 of the robotic arm 104 can be placed at regular intervals in the workspace 106.
- Interval size can be a function of measured mapping error for a given volume in the workspace 106; moreover, differing preset intervals can be set in the pattern for a given type of depth sensor, etc.
- The depth sensor 102 can detect coordinates of a position of the end effector 202 (e.g., a calibration target on the end effector 202) in the workspace 106 in the sensor coordinate frame, while the robotic arm 104 can detect coordinates of the position of the end effector 202 (e.g., the calibration target) in the workspace 106 in the arm coordinate frame.
- Pairs of corresponding points in the sensor coordinate frame and the arm coordinate frame can thereby be captured when the depth sensor 102 and the robotic arm 104 are calibrated (e.g., recalibrated).
- FIG. 3 shows an exemplary pattern used for calibration (e.g., recalibration) that specifies positions within the workspace 106 .
- The end effector 202 of the robotic arm 104 can be caused to traverse through the workspace 106 based upon the pattern, stopping at positions 302-316 shown in FIG. 3.
- The end effector 202 can follow substantially any path between the positions 302-316 specified by the pattern when traversing through the workspace 106.
- The robotic arm 104 can be caused to stop the end effector 202 at eight positions 302-316 in the volume of the workspace 106; however, it is contemplated that the claimed subject matter is not limited to the depicted example, as the pattern shown is provided for illustration purposes.
- The end effector 202 of the robotic arm 104 can be stopped at substantially any number of positions within the volume of the workspace 106, the positions within the volume of the workspace 106 need not be equally spaced, and so forth.
- The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled. The foregoing can reduce an amount of time for performing calibration, while enhancing accuracy of a resulting transformation function.
- The depth sensor 102 (represented by circle 318) and the robotic arm 104 (represented by circle 320) can detect respective coordinates of the end effector 202 of the robotic arm 104 at each of the positions 302-316.
- For example, the depth sensor 102 can detect a sensor calibration point for the position 314 of the end effector 202 within the workspace 106 and the robotic arm 104 can detect an arm calibration point for the position 314 of the end effector 202 within the workspace 106.
- The sensor calibration point for the position 314 includes first coordinates of the end effector 202 in the sensor coordinate frame, and the arm calibration point for the position 314 includes second coordinates of the end effector 202 in the arm coordinate frame.
- The calibration component 122 can cause the end effector of the robotic arm 104 to be placed at the positions as specified by the pattern.
- a pair of corresponding points (e.g., a sensor calibration point and an arm calibration point) can be represented as (p_c, p_a) = ({x_c, y_c, z_c}, {x_a, y_a, z_a}).
- let P_c = (p_c1, . . . , p_cn) and P_a = (p_a1, . . . , p_an) be the sets of coordinates collected in the sensor coordinate frame and the arm coordinate frame, respectively (e.g., P_c can be the sensor calibration points 112 and P_a can be the arm calibration points 114).
- the depth sensor 102 can be placed in a fixed location with a relevant portion of the workspace 106 (e.g., the portion in which the robotic arm 104 operates) in view.
- the depth sensor 102 can have an unobstructed view of the workspace 106 within the range of the depth sensor 102 . It is to be appreciated that other restrictions on the placement of the depth sensor 102 need not be employed, though a marker on the end effector can be oriented in a general direction of the depth sensor 102 .
- the robotic arm 104 can be equipped with a calibration marker attached to the end effector that allows precise localization of the end effector in 3D space of the depth sensor 102 .
- substantially any technique can be used by the depth sensor 102 to estimate coordinates of the end effector in the sensor coordinate frame, as long as such technique provides sufficiently precise results.
- An example of such a technique employed by the depth sensor 102 to estimate coordinates of the end effector in the sensor coordinate frame, which provides sub-millimeter precision, is described below; yet, it is contemplated that the claimed subject matter is not so limited, as other techniques can additionally or alternatively be utilized.
- a plurality of markers can be attached to the end effector of the robotic arm 104 .
- several calibration point pairs can be collected at each position in the workspace 106 .
- a fork that is attachable to the end effector of the robotic arm 104 can include spatially separated teeth, and an end of each tooth can include or be mechanically attached to a marker.
- the plurality of markers can be oriented in a row, a rectangle, or substantially any pattern so long as parameters of the pattern are known. Use of a plurality of markers can reduce an amount of time for sampling the volume of the workspace 106 .
- the calibration component 122 can cause the robotic arm 104 to move the end effector through the workspace 106 in a regular pattern, stopping at each of the positions specified by the pattern for a duration of time to allow the depth sensor 102 to collect depth samples (e.g., the sensor calibration points 112 ).
- p ci can be computed based on segmentation of the end effector tip in the sensor coordinate frame
- p ai can be computed based on a forward kinematic model of the robotic arm 104 .
- a desired workspace (e.g., the workspace 106 of the robotic arm 104) can be contained within a convex hull of P_a.
- a denser set of points can provide a more accurate transformation.
- an amount of time it takes to traverse the sample points is inversely proportional to the cube of the distance between points.
- a larger number of sample points can be collected (e.g., during calibration or recalibration) in a volume of the workspace 106 that has a larger mapping error relative to a disparate volume of the workspace 106 ; yet, the claimed subject matter is not so limited.
- the control system 108 can further include an initialization component 124 that can initialize the depth sensor 102 and the robotic arm 104 by employing global Procrustes analysis.
- the initialization component 124 can match two shapes in different coordinate frames, possibly with different scales, utilizing Procrustes analysis.
- the initialization component 124 can utilize two sets of points X and Y as input for the Procrustes analysis. Further, the initialization component 124 can determine a translation, scaling, and rotation operation that, when applied to the points in Y, minimizes a sum of the squared distances to the points in X.
- the initialization component 124 enables mapping between the sensor coordinate frame and the arm coordinate frame by performing Procrustes analysis using the entire sets P c and P a . Once the transformation is computed, the initialization component 124 can estimate coordinates in the arm coordinate space for any 3D point in the sensor coordinate space, or vice versa. This technique can be simple to implement, and once the transformation is computed, it can be applied to any point in P c . However, such approach may not account for local distortions in the depth field.
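The Procrustes step described above (finding the translation, scaling, and rotation of Y that best matches X in the least-squares sense) can be sketched with a standard SVD-based solution; the function names and API shape below are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def global_procrustes(X, Y):
    """Find scale s, rotation R, and translation t minimizing
    sum_i ||X_i - (s * R @ Y_i + t)||^2, i.e., mapping Y onto X."""
    X, Y = np.asarray(X, float), np.asarray(Y, float)
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    # The SVD of the cross-covariance yields the optimal rotation.
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
    D = np.diag([1.0] * (X.shape[1] - 1) + [d])
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Yc ** 2).sum()
    t = mx - s * R @ my
    return s, R, t

def apply_transform(s, R, t, p):
    """Apply the recovered similarity transform to a single point."""
    return s * R @ np.asarray(p, float) + t
```

Once `s`, `R`, and `t` are computed from the full sets P_c and P_a, `apply_transform` can map any sensor-frame point into the arm frame, matching the global mapping the initialization component provides.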
- the initialization component 124 can provide a rough approximation of registration configuration for the depth sensor 102 and the robotic arm 104 .
- the initialization component 124 can generate the rough approximation of the registration configuration.
- the calibration component 122 can thereafter cause additional sampling of the workspace 106 as described herein (e.g., the rough approximation of the registration configuration can be used during at least part of the additional sampling of the workspace 106 ); yet, the claimed subject matter is not so limited.
- the control system 108 can further include a monitor component 126 that monitors conditions of the depth sensor 102 , the robotic arm 104 , and the workspace 106 .
- the calibration component 122 can selectively initiate recalibration based upon a condition as detected by the monitor component 126 . Examples of conditions that can be monitored by the monitor component 126 include movement of the depth sensor 102 in the workspace 106 , movement of the robotic arm 104 in the workspace 106 , a temperature change in the workspace 106 exceeding a threshold temperature value, a mapping error exceeding a threshold error value, a combination thereof, or the like.
- the calibration component 122 can recalibrate the depth sensor 102 and the robotic arm 104 . It is contemplated that the monitor component 126 can monitor a subset of the above-noted conditions and/or other conditions can be monitored by the monitor component 126 .
- a vision-guided manipulation system (e.g., the depth sensor 102 and the robotic arm 104 deployed in the workspace 106) may fall out of initial calibration due to a number of factors (e.g., intentional and unintentional moving of equipment, temperature changes, etc.).
- the monitor component 126 and the calibration component 122 allow for recalibration to be performed on-demand and on-site.
- the monitor component 126 can verify the calibration (e.g., periodically, prior to a task to be performed by the depth sensor 102 and the robotic arm 104 , etc.).
- Verification can be performed by placing a known object at a few known locations in the workspace 106 with the robotic arm 104 , and observing the known object with the depth sensor 102 . If the monitor component 126 detects a mapping error above the threshold error value, the monitor component 126 can cause the calibration component 122 to recalibrate (e.g., until the mapping error is below the threshold error value, etc.).
- Recalibration performed by the calibration component 122 can include causing the end effector to non-continuously traverse through the workspace 106 based upon a pattern, where the end effector is stopped at positions within the workspace 106 according to the pattern.
- the pattern used for recalibration can be substantially similar to or differ from a previously used pattern (e.g., a pattern used for calibration, a pattern used for prior recalibration, etc.).
- a pattern used for recalibration can allow for sampling a portion of the workspace 106 , whereas a previously used pattern allowed for sampling across the workspace 106 .
- a pattern used for recalibration can allow for more densely sampling a given volume of the workspace 106 . Similar to above, at each position within the workspace 106 at which the end effector is stopped, a sensor calibration point for the position detected by the depth sensor 102 can be collected and an arm calibration point for the position detected by the robotic arm 104 can be collected.
- the calibration component 122 can recalibrate the depth sensor 102 and the robotic arm 104 subsequent to computation of the estimated point that maps to the input point.
- the monitor component 126 can receive a measured point from the robotic arm 104 .
- the measured point can include coordinates indicative of the location in the arm coordinate frame detected by the robotic arm 104 (e.g., the location specified by the coordinates of the input point and the coordinates of the estimated point).
- the monitor component 126 can compute a mapping error based at least in part upon the measured point and the estimated point computed by the interpolation component 120 . Further, the monitor component 126 can compare the mapping error to a threshold error value.
- the monitor component 126 can cause the calibration component 122 to recalibrate the depth sensor 102 and the robotic arm 104 .
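A minimal sketch of this monitor check, assuming a Euclidean mapping error and a hypothetical millimeter threshold (the names and the 2.0 mm default are illustrative, not from the disclosure):

```python
import numpy as np

def mapping_error(measured_point, estimated_point):
    """Euclidean distance between the arm's measured point and the
    interpolated estimate, both expressed in the arm coordinate frame."""
    return float(np.linalg.norm(np.asarray(measured_point, float) -
                                np.asarray(estimated_point, float)))

def needs_recalibration(measured_point, estimated_point, threshold_mm=2.0):
    """True when the mapping error exceeds the threshold error value,
    signaling that recalibration should be initiated."""
    return mapping_error(measured_point, estimated_point) > threshold_mm
```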
- the calibration component 122 can cause a volume of the workspace 106 that includes the location to be resampled or more densely sampled responsive to the mapping error exceeding the threshold error value.
- the calibration component 122 can cause the entire workspace 106 to be resampled responsive to the mapping error exceeding the threshold error value.
- a number of positions within a given volume of the workspace 106 specified by the pattern can be a function of a mapping error for the given volume.
- a volume with a larger mapping error (e.g., larger relative bias) can be sampled more densely than a volume with a lower mapping error (e.g., smaller relative bias).
- the sample selection component 118 can locate a tetrahedron τ with vertices from P_c that includes w_c.
- the vertices can further be used, along with the known correspondences in P_a, by the interpolation component 120 to generate the estimated point w′_a.
- the interpolation component 120 can generate the estimated point w′_a using local Procrustes analysis or linear interpolation based upon the barycentric coordinates of w_c in τ.
- With reference to FIG. 4, illustrated is another system 400 that includes the control system 108 that controls the depth sensor 102 and the robotic arm 104 during calibration and registration.
- the control system 108 can include the interface component 116 , the sample selection component 118 , the interpolation component 120 , the calibration component 122 , the initialization component 124 , the monitor component 126 , and the data repository 110 as described herein.
- the control system 108 can include a segmentation component 402 that forms tetrahedrons using a Delaunay triangulation on the sensor calibration points 112 throughout the workspace 106 .
- the segmentation component 402 can find the Delaunay triangulation I of P c to form tetrahedrons 404 .
- the tetrahedrons 404 can be retained in the data repository 110 .
- FIG. 5 illustrates an example of tetrahedrons 502 - 504 that can be formed using Delaunay triangulation on sensor calibration points 506 - 514 (e.g., the sensor calibration points 112 ) in the workspace 106 .
- the tetrahedron 502 can include the sensor calibration point 508 , the sensor calibration point 510 , the sensor calibration point 512 , and the sensor calibration point 514 as vertices.
- the tetrahedron 504 can include the sensor calibration point 506 , the sensor calibration point 508 , the sensor calibration point 510 , and the sensor calibration point 512 as vertices.
- the sample selection component 118 can identify sensor calibration points that are within proximity of an input point 516 by identifying a particular tetrahedron that comprises the input point 516 . Thus, as depicted, the sample selection component 118 can identify that the input point 516 is within the tetrahedron 502 . Further, vertices of the tetrahedron 502 can be identified as being the sensor calibration points within proximity of the input point 516 .
- control system 108 can employ local Procrustes analysis to compute estimated points that map to input points received by the interface component 116 .
- Local distortions in the depth field can be mitigated by performing Procrustes analysis using data from a neighborhood around the input points to be transformed in the sensor space.
- the segmentation component 402 can find the Delaunay triangulation I of P c .
- the sample selection component 118 can identify a particular tetrahedron from the tetrahedrons 404 formed by the segmentation component 402 .
- the particular tetrahedron identified by the sample selection component 118 includes the input point.
- the sensor calibration points within proximity of the input point identified by the sample selection component 118 are vertices of the particular tetrahedron.
- the sample selection component 118 can identify arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
- the interpolation component 120 can compute a transformation using Procrustes analysis based upon the vertices of the particular tetrahedron and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
- the transformation can be computed by the interpolation component 120 responsive to receipt of the input point, for example; however, according to other examples, it is contemplated that transformations can be computed prior to receipt of the input point (e.g., responsive to creation of the tetrahedrons 404 by the segmentation component 402 , etc.).
- the interpolation component 120 can further apply the transformation to the input point to compute the estimated point.
- the tetrahedron τ ∈ I that includes w_c can be found by the sample selection component 118, and the interpolation component 120 can perform the Procrustes analysis using the vertices of τ, {ν_1, ν_2, ν_3, ν_4} ⊂ P_c, along with the corresponding points in P_a.
- the local Procrustes analysis can be performed by locating a particular tetrahedron in I, and then performing the Procrustes analysis utilizing vertices of such tetrahedron.
- the local Procrustes analysis can better handle local distortions in the depth field as compared to global Procrustes analysis. Further, such approach can generally provide a more accurate transformation from a sensor coordinate frame to the arm coordinate frame as compared to the global Procrustes analysis.
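The local Procrustes mapping described above can be sketched as follows, with SciPy's Delaunay triangulation standing in for the segmentation component; all names are illustrative assumptions rather than the disclosed implementation:

```python
import numpy as np
from scipy.spatial import Delaunay

def similarity_transform(X, Y):
    """Least-squares scale s, rotation R, translation t mapping Y -> X."""
    mx, my = X.mean(axis=0), Y.mean(axis=0)
    Xc, Yc = X - mx, Y - my
    U, S, Vt = np.linalg.svd(Xc.T @ Yc)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # no reflections
    R = U @ D @ Vt
    s = np.trace(np.diag(S) @ D) / (Yc ** 2).sum()
    return s, R, mx - s * R @ my

def local_procrustes_map(P_c, P_a, w_c):
    """Map sensor-frame point w_c to the arm frame using only the four
    calibration pairs whose sensor-space Delaunay tetrahedron contains w_c."""
    tri = Delaunay(P_c)
    idx = int(tri.find_simplex(np.asarray(w_c, float)[None, :])[0])
    if idx < 0:
        raise ValueError("w_c lies outside the convex hull of P_c")
    verts = tri.simplices[idx]       # indices of the tetrahedron's vertices
    s, R, t = similarity_transform(P_a[verts], P_c[verts])
    return s * R @ np.asarray(w_c, float) + t
```

Because only the four enclosing vertex pairs drive the fit, the transform adapts to local distortions of the depth field, consistent with the comparison to global Procrustes analysis above.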
- the control system 108 can employ a Delaunay barycentric technique to compute estimated points that map to input points received by the interface component 116 .
- the segmentation component 402 can find the Delaunay triangulation I of P c .
- the sample selection component 118 can identify a particular tetrahedron from the tetrahedrons 404 formed by the segmentation component 402 .
- the particular tetrahedron identified by the sample selection component 118 includes the input point.
- the sensor calibration points within proximity of the input point identified by the sample selection component 118 are vertices of the particular tetrahedron.
- the sample selection component 118 can identify arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
- the interpolation component 120 can compute barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron.
- the interpolation component 120 can interpolate the estimated point based upon the barycentric coordinates and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
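The Delaunay barycentric steps above can be sketched with SciPy, which caches the affine map needed to obtain barycentric coordinates for each simplex; the function name and error handling are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_barycentric_map(P_c, P_a, w_c):
    """Interpolate an arm-frame estimate for sensor-frame point w_c: locate
    the Delaunay tetrahedron of P_c containing w_c, compute w_c's barycentric
    coordinates with respect to its vertices, and apply the same weights to
    the corresponding arm calibration points."""
    w = np.asarray(w_c, float)
    tri = Delaunay(P_c)
    idx = int(tri.find_simplex(w[None, :])[0])
    if idx < 0:
        raise ValueError("w_c lies outside the convex hull of P_c")
    # tri.transform stores the affine map yielding barycentric coordinates.
    T = tri.transform[idx]
    b = T[:3] @ (w - T[3])
    bary = np.append(b, 1.0 - b.sum())   # four weights summing to 1
    return bary @ P_a[tri.simplices[idx]]
```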
- a combined barycentric interpolation technique can be employed by the control system 108 .
- the sample selection component 118 can identify a preset number of the sensor calibration points 112 that are nearest to the input point.
- the interpolation component 120 can compute the estimated point that maps to the input point by forming one or more tetrahedrons that include the input point. The tetrahedrons can be created with vertices being from the preset number of sensor calibration points nearest to the input point.
- the interpolation component 120 can compute barycentric coordinates of the input point with respect to vertices of the tetrahedron.
- the interpolation component 120 can further interpolate a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron.
- the interpolation component 120 can combine values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point.
- FIG. 6 shows an example where a preset number of sensor calibration points nearest to an input point are identified and used to form tetrahedrons that include the input point.
- six sensor calibration points 602 - 612 nearest to an input point 614 are identified by the sample selection component 118 .
- the interpolation component 120 can form a tetrahedron 616 and a tetrahedron 618 .
- the tetrahedron 616 and the tetrahedron 618 can each include the input point 614 .
- the interpolation component 120 can compute barycentric coordinates of the input point 614 with respect to the vertices of the tetrahedron 616 (e.g., the sensor calibration point 604 , the sensor calibration point 608 , the sensor calibration point 610 , and the sensor calibration point 612 ). The interpolation component 120 can further interpolate a value of the estimated point from the tetrahedron 616 based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of tetrahedron 616 .
- the interpolation component 120 can similarly compute a value of the estimated point from the tetrahedron 618 (and any disparate tetrahedrons formed from the k nearest sensor calibration points 602 - 612 that includes the input point 614 ). Moreover, the interpolation component 120 can combine values of the estimated point from the one or more tetrahedrons (e.g., the tetrahedron 616 , the tetrahedron 618 , any disparate tetrahedron, etc.) to compute the estimated point that maps to the input point 614 .
- Barycentric coordinates can locate a point on an interior of a simplex (e.g., a triangle, a tetrahedron, etc.) in relation to vertices of that simplex. Homogeneous barycentric coordinates can be normalized so that coordinates inside a simplex sum to 1, and can be used to interpolate function values for points inside a simplex if values of the function are known at the vertices.
- B_a = D(w_c, ν_2, ν_3, ν_4) / D(ν_1, ν_2, ν_3, ν_4)
- B_b = D(ν_1, w_c, ν_3, ν_4) / D(ν_1, ν_2, ν_3, ν_4)
- B_c = D(ν_1, ν_2, w_c, ν_4) / D(ν_1, ν_2, ν_3, ν_4)
- B_d = D(ν_1, ν_2, ν_3, w_c) / D(ν_1, ν_2, ν_3, ν_4)
- D(p_1, p_2, p_3, p_4) is defined as the determinant of the 4×4 matrix whose i-th row contains the coordinates of the i-th argument augmented with a 1 (i.e., row i is [x_i y_i z_i 1]).
- the interpolation component 120 can interpolate in the arm frame according to w′_a = B_a ν̂_1 + B_b ν̂_2 + B_c ν̂_3 + B_d ν̂_4, where ν̂_n is the coordinate in the arm coordinate frame that corresponds to ν_n in the calibration data.
- a set K, comprising the k nearest neighbors of w_c in P_c, can be constructed. This yields a set of candidate vertices from P_c paired with the corresponding points in P_a.
- a set I can be generated, which can include the unique tetrahedrons whose vertices are members of K and which include w_c.
- the final estimate of w_c in the arm coordinate frame, w′_a, can be computed by averaging the barycentric interpolation results over the tetrahedrons in I, where m is the number of members in I.
- the foregoing can be referred to as combined barycentric interpolation.
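A sketch of combined barycentric interpolation as described above (k nearest sensor points, every containing tetrahedron, averaged estimates); the barycentric computation mirrors the determinant-ratio formulas for B_a through B_d, while the tolerance values and function names are assumptions:

```python
import numpy as np
from itertools import combinations

def bary_coords(v, w):
    """Barycentric coordinates of point w in the tetrahedron whose vertex
    rows form the 4x3 array v, via the determinant ratios B_a..B_d."""
    M = np.hstack([v, np.ones((4, 1))])
    D = np.linalg.det(M)
    b = []
    for i in range(4):
        Mi = M.copy()
        Mi[i, :3] = w                    # replace vertex i with the point
        b.append(np.linalg.det(Mi) / D)
    return np.array(b)

def combined_barycentric_map(P_c, P_a, w_c, k=6):
    """Average the barycentric estimates over every tetrahedron, drawn from
    the k sensor calibration points nearest w_c, that contains w_c."""
    w = np.asarray(w_c, float)
    nearest = np.argsort(np.linalg.norm(P_c - w, axis=1))[:k]
    estimates = []
    for quad in combinations(nearest, 4):
        v = P_c[list(quad)]
        if abs(np.linalg.det(np.hstack([v, np.ones((4, 1))]))) < 1e-12:
            continue                     # skip degenerate (coplanar) picks
        b = bary_coords(v, w)
        if (b >= -1e-9).all():           # w_c lies inside this tetrahedron
            estimates.append(b @ P_a[list(quad)])
    if not estimates:
        raise ValueError("no containing tetrahedron among the k neighbors")
    return np.mean(estimates, axis=0)
```

Averaging across all containing tetrahedra smooths the estimate relative to committing to a single tetrahedron, which matches the motivation given above for the combined technique.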
- barycentric interpolation inside a single tetrahedron obtained using a Delaunay triangulation of P a can also be utilized.
- barycentric interpolation techniques can enable a computed transformation to be adjusted based on relative proximity to neighboring points, whereas local Procrustes techniques can yield a common transformation for any location in a given tetrahedron in camera space.
- the barycentric techniques can compute three dimensional positions; however, such techniques may not provide an estimate of rotations between sensor and arm frames. Yet, it is contemplated that barycentric interpolation and Procrustes analysis can be implemented simultaneously by the control system 108 .
- With reference to FIG. 7, illustrated is another system 700 that includes the control system 108 that controls the depth sensor 102 and the robotic arm 104 during calibration and registration.
- the control system 108 can further include an extrapolation component 702 that can use the transformation generated by the initialization component 124 for a given input point, where the initialization component 124 employed the global Procrustes analysis to generate the transformation.
- the given input point can include coordinates indicative of a location in the sensor coordinate frame that is outside a convex hull of the sensor calibration points 112 .
- the extrapolation component 702 can extrapolate a first estimated point that maps to a first input point using the transformation generated from the global Procrustes analysis if the first input point is outside the convex hull of the sensor calibration points 112 , while the interpolation component 120 can interpolate a second estimated point that maps to a second input point using one or more of the techniques described herein if the second input point is within the convex hull of the sensor calibration points 112 .
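The interpolate-versus-extrapolate decision above reduces to a convex hull membership test in the sensor frame; a sketch, assuming SciPy's Delaunay triangulation and an illustrative function name:

```python
import numpy as np
from scipy.spatial import Delaunay

def inside_convex_hull(P_c, w_c):
    """True if sensor point w_c lies within the convex hull of the sensor
    calibration points, in which case local interpolation applies; False
    signals that the global-Procrustes extrapolation path should be used."""
    w = np.asarray(w_c, float)[None, :]
    return bool(Delaunay(P_c).find_simplex(w)[0] >= 0)
```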
- a centroid can be computed based on image moments of a standard deviation image from the depth sensor 102 .
- the coordinates of the end effector can be coordinates of the centroid. It is to be appreciated, however, that the claimed subject matter is not limited to the exemplary technique, and other techniques can additionally or alternatively be used by the depth sensor 102 .
- at least a portion of the below technique described as being performed by the depth sensor 102 can be performed by the calibration component 122 (or the control system 108 in general).
- the depth sensor 102 can employ marker segmentation.
- a marker can be mounted to a tool flange of the robotic arm 104 .
- the marker can allow precise and accurate segmentation of the end effector tip.
- a number of frames can be accumulated by the depth sensor 102 before doing segmentation.
- image segmentation can be performed and Euclidean coordinates of a centroid can be calculated by the depth sensor 102 in a standard deviation image, as opposed to a depth frame.
- the standard deviation image can be a two-dimensional (2D) array of the same dimensions as the depth image (e.g., 512×424 pixels, etc.), with S_ij being a standard deviation estimate of sensor depth readings for a pixel (i,j) computed over N samples.
- the depth sensor 102 can filter out depth values that lie outside of a work envelope of interest (e.g., 1.2 m, etc.). Further, the scene can be cleared of objects that may be confused for the marker.
- the depth sensor 102 can scan the image top-down, left-to-right, looking for a first stable pixel in the scene (e.g., based upon a preset standard deviation threshold, 1.5-2 mm, etc.). Once such a pixel is found, the depth sensor 102 can begin scanning its neighborhood, and if enough stable pixels are seen, it can be assumed that the marker has been discovered. Thus, the depth sensor 102 can compute the centroid using image moments:
- {x̄, ȳ} = {M_10/M_00, M_01/M_00}
- M_ij is the image moment of order (i,j) computed on the standard deviation frame.
- T is a cutoff threshold for standard deviation of individual pixels; above the cutoff threshold, a pixel is not included in calculating the image moments. Negating values of standard deviation can have the effect of assigning higher weights to pixels with stable depth readings. Accordingly, X and Y coordinates for the centroid of the marker can be computed.
- the depth sensor 102 can average a region of pixels (e.g., 3 ⁇ 3 or 5 ⁇ 5) around X and Y and record an average depth.
- an estimated 3D coordinate for the tip of the end effector can be computed.
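The standard-deviation-image centroid technique above can be sketched in Python. The weighting scheme (threshold minus standard deviation, zero above the cutoff T) is one plausible reading of "negating values of standard deviation," and the window size and threshold are assumptions:

```python
import numpy as np

def marker_centroid(depth_frames, std_threshold=2.0, win=1):
    """Locate the marker as the weighted centroid of stable pixels in a
    standard-deviation image built from N accumulated depth frames, then
    average a small window of depth values around the centroid."""
    frames = np.asarray(depth_frames, float)
    S = frames.std(axis=0)                 # per-pixel std over N samples
    # Pixels above the cutoff T are excluded; below it, weight grows as
    # the std shrinks, so stable pixels dominate the image moments.
    W = np.where(S < std_threshold, std_threshold - S, 0.0)
    M00 = W.sum()
    if M00 == 0.0:
        return None                        # no stable pixels found
    ys, xs = np.mgrid[0:W.shape[0], 0:W.shape[1]]
    x_bar = (xs * W).sum() / M00           # M10 / M00
    y_bar = (ys * W).sum() / M00           # M01 / M00
    # Average a (2*win+1)^2 region around the centroid for the depth
    # estimate (assumes the marker is not at the image border).
    i, j = int(round(y_bar)), int(round(x_bar))
    depth = frames[:, i - win:i + win + 1, j - win:j + win + 1].mean()
    return x_bar, y_bar, depth
```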
- Cartesian coordinates of the common point in the arm coordinate frame can be obtained based on the kinematic model of the robotic arm 104 .
- Both sets of coordinates can be recorded as a linked pair in the data repository 110 . Repeating this process n times, two sets of n matching 3D points can be collected and retained in the data repository 110 (one from the depth sensor 102 and one from the robotic arm 104 ).
- V(n) = {{C_k1, C_a1}, {C_k2, C_a2}, . . . , {C_kn, C_an}}
- FIGS. 8-9 illustrate exemplary methodologies relating to controlling a depth sensor and a robotic arm that operate in the workspace. While the methodologies are shown and described as being a series of acts that are performed in a sequence, it is to be understood and appreciated that the methodologies are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a methodology described herein.
- the acts described herein may be computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media.
- the computer-executable instructions can include a routine, a sub-routine, programs, a thread of execution, and/or the like.
- results of acts of the methodologies can be stored in a computer-readable medium, displayed on a display device, and/or the like.
- FIG. 8 illustrates a methodology 800 of controlling a depth sensor and a robotic arm that operate in the workspace.
- the robotic arm can include an end effector.
- an input point can be received from the depth sensor.
- the input point includes coordinates indicative of a location in a sensor coordinate frame, where the location is in the workspace.
- sensor calibration points within proximity of the input point can be identified.
- a sensor calibration point includes first coordinates of the end effector in the sensor coordinate frame, the first coordinates being previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace.
- arm calibration points that respectively correspond to the sensor calibration points can be identified.
- An arm calibration point that corresponds to the sensor calibration point includes second coordinates of the end effector in an arm coordinate frame. Further, the second coordinates are previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace.
- a processor can be employed to compute an estimated point that maps to the input point.
- the estimated point can include coordinates indicative of the location in the arm coordinate frame.
- the estimated point can be computed based upon the sensor calibration points (e.g., the sensor calibration points within proximity of the input point) and the arm calibration points (e.g., the arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point).
- the robotic arm can include an end effector.
- an input point can be received from the robotic arm.
- the input point can include coordinates indicative of a location in an arm coordinate frame, where the location is in the workspace.
- arm calibration points within proximity of the input point can be identified.
- An arm calibration point can include first coordinates of the end effector in the arm coordinate frame. The first coordinates are previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace.
- sensor calibration points that respectively correspond to the arm calibration points can be identified.
- a sensor calibration point that corresponds to the arm calibration point can include second coordinates of the end effector in a sensor coordinate frame, the second coordinates being previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace.
- a processor can be employed to compute an estimated point that maps to the input point.
- the estimated point can include coordinates indicative of the location in the sensor coordinate frame.
- the estimated point can be computed based upon the arm calibration points (e.g., the arm calibration points within proximity of the input point) and the sensor calibration points (e.g., the sensor calibration points that respectively correspond to the arm calibration points within proximity of the input point).
- the method according to Example 1, further comprising performing the calibration, performance of the calibration comprises: causing the end effector to non-continuously traverse through the workspace based on a pattern, wherein the end effector is stopped at positions within the workspace according to the pattern; and at each position from the positions within the workspace at which the end effector is stopped: collecting a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor, the sensor calibration point for the position comprises coordinates of the end effector at the position within the workspace in the sensor coordinate frame; and collecting an arm calibration point for the position of the end effector within the workspace detected by the robotic arm, the arm calibration point for the position comprises coordinates of the end effector at the position within the workspace in the arm coordinate frame.
- The method according to Example 2, further comprising computing a centroid based on image moments of a standard deviation image from the depth sensor, the coordinates of the sensor calibration point being coordinates of the centroid.
- The method according to any of Examples 1-3, further comprising recalibrating the depth sensor and the robotic arm, wherein the recalibration comprises: causing the end effector to non-continuously traverse through the workspace based on a pattern, wherein the end effector is stopped at positions within the workspace according to the pattern; and at each position from the positions within the workspace at which the end effector is stopped: collecting a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor, the sensor calibration point for the position comprises coordinates of the end effector at the position within the workspace in the sensor coordinate frame; and collecting an arm calibration point for the position of the end effector within the workspace detected by the robotic arm, the arm calibration point for the position comprises coordinates of the end effector at the position within the workspace in the arm coordinate frame.
- The method according to Example 4, further comprising: receiving a measured point from the robotic arm, the measured point comprises coordinates indicative of the location in the arm coordinate frame detected by the robotic arm; computing a mapping error based at least in part upon the measured point and the estimated point; comparing the mapping error to a threshold error value; and responsive to the mapping error being greater than the threshold error value, recalibrating the depth sensor and the robotic arm.
- the method according to Example 4, wherein a number of positions within a given volume of the workspace specified by the pattern is a function of a mapping error for the given volume.
- the method according to any of Examples 1-6 further comprising: creating tetrahedrons using a Delaunay triangulation on sensor calibration points collected throughout the workspace, the sensor calibration points collected throughout the workspace comprise: the sensor calibration points within proximity of the input point; and at least one disparate sensor calibration point outside proximity of the input point.
- identifying the sensor calibration points within proximity of the input point further comprises: identifying a particular tetrahedron from the tetrahedrons, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron.
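One way to realize this lookup (a sketch, not the patent's implementation) is to test the input point's barycentric weights against each candidate tetrahedron: the containing tetrahedron is the one for which all four weights are non-negative. In a real system the tetrahedra would come from the Delaunay triangulation of the sensor calibration points (e.g., `scipy.spatial.Delaunay` and its `find_simplex` method); here the containment test itself is shown in plain NumPy:

```python
import numpy as np

def bary_weights(p, tet):
    """Barycentric weights of point p w.r.t. the 4 vertices of tet (4x3)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    w = np.linalg.solve(T, np.asarray(p, float) - tet[0])
    return np.concatenate([[1.0 - w.sum()], w])

def find_containing_tetrahedron(p, tetrahedra, tol=1e-12):
    """Index of the first tetrahedron whose barycentric weights for p are
    all non-negative, i.e. the tetrahedron that comprises the input point."""
    for i, tet in enumerate(tetrahedra):
        if np.all(bary_weights(p, tet) >= -tol):
            return i
    return None  # point lies outside the triangulated workspace

# Two tetrahedra sharing a face; the query point lies in the second one.
t0 = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
t1 = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]], float)
print(find_containing_tetrahedron([0.7, 0.7, 0.7], [t0, t1]))  # 1
```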
- computing the estimated point that maps to the input point further comprises: computing a transformation using Procrustes analysis based upon: the vertices of the particular tetrahedron; and the arm calibration points, wherein the arm calibration points respectively correspond to the vertices of the particular tetrahedron; and applying the transformation to the input point to compute the estimated point.
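The Procrustes step can be sketched with the standard SVD-based least-squares rigid transform (the orthogonal Procrustes / Kabsch solution); the point sets below are illustrative, with four non-coplanar "sensor" vertices and the same vertices rotated and shifted into a hypothetical "arm" frame:

```python
import numpy as np

def procrustes_transform(src, dst):
    """Least-squares rigid transform (rotation R, translation t) mapping
    point set src onto dst, via SVD of the cross-covariance matrix."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)            # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))         # guard against reflection
    D = np.diag([1.0] * (src.shape[1] - 1) + [d])
    R = Vt.T @ D @ U.T                             # proper rotation, det = +1
    t = dst_c - R @ src_c
    return R, t

sensor_pts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
R_true = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)  # 90 deg about z
arm_pts = sensor_pts @ R_true.T + np.array([0.5, 0.2, 0.1])

R, t = procrustes_transform(sensor_pts, arm_pts)
estimated = R @ np.array([0.25, 0.25, 0.25]) + t   # apply to an input point
```

In the patent's scheme the `src` points would be the vertices of the particular tetrahedron in the sensor frame and `dst` the corresponding arm calibration points, so each tetrahedron gets its own local rigid transform.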
- computing the estimated point that maps to the input point further comprises: computing barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron; and interpolating the estimated point based upon the barycentric coordinates and the arm calibration points, wherein the arm calibration points respectively correspond to the vertices of the particular tetrahedron.
- identifying the sensor calibration points within proximity of the input point further comprises: identifying a preset number of sensor calibration points nearest to the input point; and computing the estimated point that maps to the input point further comprises: creating one or more tetrahedrons that comprise the input point, the tetrahedrons created with vertices being from the preset number of sensor calibration points nearest to the input point; for each tetrahedron of the one or more tetrahedrons: computing barycentric coordinates of the input point with respect to vertices of the tetrahedron; and interpolating a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron; and combining values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point.
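The barycentric interpolation described in these examples can be sketched as follows. The tetrahedron and the constant sensor-to-arm offset are illustrative assumptions; in a real system the tetrahedron's vertices would be sensor calibration points from the triangulation, and the weights would be applied to the corresponding arm calibration points:

```python
import numpy as np

def barycentric_coords(p, tet):
    """Barycentric coordinates of p w.r.t. the 4 vertices of tet (4x3)."""
    T = np.column_stack([tet[1] - tet[0], tet[2] - tet[0], tet[3] - tet[0]])
    w = np.linalg.solve(T, np.asarray(p, float) - tet[0])
    return np.concatenate([[1.0 - w.sum()], w])    # four weights summing to 1

def interpolate(p, sensor_tet, arm_tet):
    """Map a sensor-frame point into the arm frame by applying its
    barycentric weights (w.r.t. the sensor tetrahedron) to the
    corresponding arm calibration points."""
    return barycentric_coords(p, sensor_tet) @ arm_tet

sensor_tet = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
arm_tet = sensor_tet + np.array([0.1, 0.2, 0.3])   # constant offset for demo

est = interpolate([0.25, 0.25, 0.25], sensor_tet, arm_tet)
print(np.round(est, 2))  # [0.35 0.45 0.55]
```

When several tetrahedra comprise the input point, as in the last example above, the per-tetrahedron values of `est` would simply be combined (e.g., averaged) to produce the final estimated point.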
- the method according to any of Examples 1-11, further comprising: receiving a disparate input point from the robotic arm, the disparate input point comprises coordinates indicative of a disparate location in the arm coordinate frame, the disparate location being in the workspace; identifying disparate arm calibration points within proximity of the disparate input point; identifying disparate sensor calibration points that respectively correspond to the disparate arm calibration points; and employing the processor to compute a disparate estimated point that maps to the disparate input point, the disparate estimated point comprises coordinates indicative of the disparate location in the sensor coordinate frame, the disparate estimated point computed based upon: the disparate arm calibration points; and the disparate sensor calibration points.
- a system that controls a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector
- the system comprises: a data repository, the data repository retains: sensor calibration points throughout the workspace, a sensor calibration point comprises first coordinates of the end effector in a sensor coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; and arm calibration points throughout the workspace, the arm calibration points respectively correspond to the sensor calibration points, an arm calibration point that corresponds to the sensor calibration point comprises second coordinates of the end effector in an arm coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; an interface component that receives an input point from the depth sensor, the input point comprises coordinates indicative of a location in the sensor coordinate frame, the location being in the workspace; a sample selection component that: identifies sensor calibration points within proximity of the input point from the data repository; and identifies arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point from the data repository; and an interpolation component that computes an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the arm coordinate frame, the estimated point computed based upon: the sensor calibration points within proximity of the input point; and the arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point.
- the system according to Example 14, further comprising a monitor component that monitors conditions of the depth sensor, the robotic arm, and the workspace, wherein the calibration component selectively initiates recalibration based upon the conditions.
- the sample selection component identifies a particular tetrahedron from the tetrahedrons formed by the segmentation component, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron; and identifies arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and the interpolation component: computes a transformation using Procrustes analysis based upon: the vertices of the particular tetrahedron; and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and applies the transformation to the input point to compute the estimated point.
- the sample selection component identifies a particular tetrahedron from the tetrahedrons formed by the segmentation component, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron; and identifies arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and the interpolation component: computes barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron; and interpolates the estimated point based upon the barycentric coordinates and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
- the sample selection component identifies a preset number of sensor calibration points nearest to the input point
- the interpolation component forms one or more tetrahedrons that comprise the input point, the tetrahedrons created with vertices being from the preset number of sensor calibration points nearest to the input point; for each tetrahedron of the one or more tetrahedrons: computes barycentric coordinates of the input point with respect to vertices of the tetrahedron; and interpolates a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron; and combines values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point.
- a method of controlling a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the method comprising: receiving an input point from the robotic arm, the input point comprises coordinates indicative of a location in an arm coordinate frame, the location being in the workspace; identifying arm calibration points within proximity of the input point, an arm calibration point comprises first coordinates of the end effector in the arm coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; identifying sensor calibration points that respectively correspond to the arm calibration points, a sensor calibration point that corresponds to the arm calibration point comprises second coordinates of the end effector in a sensor coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; and employing a processor to compute an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the sensor coordinate frame, the estimated point computed based upon: the arm calibration points; and the sensor calibration points.
- a system of controlling a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the system comprising: means for receiving an input point from the depth sensor, the input point comprises coordinates indicative of a location in a sensor coordinate frame, the location being in the workspace; means for identifying sensor calibration points within proximity of the input point, a sensor calibration point comprises first coordinates of the end effector in the sensor coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; means for identifying arm calibration points that respectively correspond to the sensor calibration points, an arm calibration point that corresponds to the sensor calibration point comprises second coordinates of the end effector in an arm coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; and means for computing an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the arm coordinate frame, the estimated point computed based upon: the sensor calibration points; and the arm calibration points.
- the computing device 1000 may be used in a system that controls calibration and/or registration of a depth sensor and a robotic arm operating in a workspace.
- the computing device 1000 includes at least one processor 1002 that executes instructions that are stored in a memory 1004 .
- the instructions may be, for instance, instructions for implementing functionality described as being carried out by one or more components discussed above or instructions for implementing one or more of the methods described above.
- the processor 1002 may access the memory 1004 by way of a system bus 1006 .
- the memory 1004 may also store sensor calibration points, arm calibration points, transformation functions, tetrahedrons, and so forth.
- the computing device 1000 additionally includes a data store 1008 that is accessible by the processor 1002 by way of the system bus 1006 .
- the data store 1008 may include executable instructions, sensor calibration points, arm calibration points, transformation functions, tetrahedrons, etc.
- the computing device 1000 also includes an input interface 1010 that allows external devices to communicate with the computing device 1000 .
- the input interface 1010 may be used to receive instructions from an external computer device, from a user, etc.
- the computing device 1000 also includes an output interface 1012 that interfaces the computing device 1000 with one or more external devices.
- the computing device 1000 may display text, images, etc. by way of the output interface 1012 .
- the external devices that communicate with the computing device 1000 via the input interface 1010 and the output interface 1012 can be included in an environment that provides substantially any type of user interface with which a user can interact.
- user interface types include graphical user interfaces, natural user interfaces, and so forth.
- a graphical user interface may accept input from a user employing input device(s) such as a keyboard, mouse, remote control, or the like and provide output on an output device such as a display.
- a natural user interface may enable a user to interact with the computing device 1000 in a manner free from constraints imposed by input devices such as keyboards, mice, remote controls, and the like. Rather, a natural user interface can rely on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, machine intelligence, and so forth.
- the computing device 1000 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 1000 .
- the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor.
- the computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.
- Computer-readable media includes computer-readable storage media.
- computer-readable storage media can be any available storage media that can be accessed by a computer.
- such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer.
- Disk and disc include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media.
- Computer-readable media also includes communication media including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium.
- if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium.
- the functionality described herein can be performed, at least in part, by one or more hardware logic components.
- illustrative types of hardware logic components include Field-programmable Gate Arrays (FPGAs), Application-specific Integrated Circuits (ASICs), Application-specific Standard Products (ASSPs), System-on-a-chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
Description
- Manipulation of objects can be a function of a wide class of robotic systems (e.g., manipulation systems). For instance, a typical closed-loop manipulation system can include at least one optical sensor (e.g., depth sensor) that perceives and interprets real-world scenes and a physical manipulator (e.g., actuator, robotic arm, etc.) that can reach into the scenes and effect change (e.g., pick up and move a physical object). An ability to perform useful tasks with a physical object can depend on the ability of the robotic arm to manipulate the object with sufficient accuracy for the task. The accuracy can depend on a mapping between a coordinate system of the depth sensor and a coordinate system of the robotic arm. For instance, a mapping can be created between coordinates extracted from a depth image generated by a depth sensor and Cartesian positions of a robotic arm in a workspace. Such mapping between coordinate systems can also be referred to as registration. Precise and accurate mapping between depth sensors and robotic arms is a long-standing challenge in robotics and is relevant for substantially any robotic system that has one or more depth sensors. Errors in the mapping can result in inaccuracies of the overall manipulation system.
- Several independent sources of registration errors can each contribute to overall system inaccuracy. In a typical naïve setup, a Cartesian coordinate system oftentimes is chosen to represent the real world, and coordinates of the robotic arm and the depth sensor in that system are determined by measurements. A linear function (e.g., commonly represented as a transformation matrix) can then transform coordinates of objects as seen by the depth sensor to coordinates of the world (or coordinates of the robotic arm). Such a typical model can have a number of potential error sources. According to an example, errors can result from measuring exact placement of the depth sensor and the robotic arm in a common coordinate frame, which can be difficult at best. Further, the overall system may be prone to falling out of calibration due to mechanical movement of parts within the system. According to another example, while coordinate origins may be determined automatically, linear mapping between reference frames of the robotic arm, the depth sensor, and the world oftentimes can be incorrect. For instance, a depth sensor may have a bias that varies non-linearly with distance or across sensor areas. According to yet another example, a robotic arm can have less than ideal ability to achieve exact placement of an end effector of such robotic arm at a desired Cartesian coordinate. Further, depth sensors or robotic arms, even those from the same vendor, may have slightly different biases, which can further complicate typical approaches for registration.
- Described herein are various technologies that pertain to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace. The robotic arm can include an end effector. A non-parametric technique for registration between the depth sensor and the robotic arm can be implemented. The registration technique can utilize a sparse sampling of the workspace (e.g., collected during calibration or recalibration). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such technique can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized.
- In accordance with various embodiments, the depth sensor and the robotic arm can be controlled (e.g., during registration, by a control system, etc.). For instance, an input point from the depth sensor can be received. The input point can include coordinates indicative of a location in a sensor coordinate frame, where the location is in the workspace. Sensor calibration points within proximity of the input point can be identified. A sensor calibration point can include first coordinates of the end effector in the sensor coordinate frame, where the first coordinates are previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace. Moreover, arm calibration points that respectively correspond to the sensor calibration points can be identified. An arm calibration point that corresponds to the sensor calibration point can include second coordinates of the end effector in an arm coordinate frame, where the second coordinates are previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace. Further, a processor can be employed to compute an estimated point that maps to the input point. The estimated point can include coordinates indicative of the location in the arm coordinate frame. The estimated point can be computed based upon the sensor calibration points and the arm calibration points as identified. According to various embodiments, it is contemplated that the techniques described herein can similarly enable computing an estimated point, which includes coordinates in the sensor coordinate frame, that maps to an input point, which includes coordinates in the arm coordinate frame.
- In accordance with various embodiments, calibration (e.g., recalibration) of the depth sensor and the robotic arm can be performed. During calibration (e.g., recalibration), the end effector can be caused to non-continuously traverse through the workspace based upon a pattern, where the end effector is stopped at positions within the workspace according to the pattern. At each position from the positions within the workspace at which the end effector is stopped, a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor can be collected and an arm calibration point for the position of the end effector within the workspace detected by the robotic arm can be collected. The depth sensor and the robotic arm can be recalibrated, for instance, responsive to movement of the depth sensor in the workspace, movement of the robotic arm in the workspace, a temperature change in the workspace that exceeds a threshold temperature value, a mapping error detected that exceeds a threshold error value, or the like.
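The calibration loop described above might be sketched like this. The callbacks `move_to`, `read_sensor`, and `read_arm` are hypothetical stand-ins for the real robot-arm and depth-sensor APIs, and the regular grid is just one simple choice of traversal pattern:

```python
import itertools

def grid_pattern(xs, ys, zs):
    """Yield stop positions on a regular grid pattern over the workspace."""
    return itertools.product(xs, ys, zs)

def collect_calibration_points(pattern, move_to, read_sensor, read_arm):
    """Step the end effector non-continuously through the pattern, stopping
    at each position and recording a (sensor, arm) calibration-point pair."""
    sensor_points, arm_points = [], []
    for pos in pattern:
        move_to(pos)                          # stop the end effector here
        sensor_points.append(read_sensor())   # coords in the sensor frame
        arm_points.append(read_arm())         # coords in the arm frame
    return sensor_points, arm_points

# Demo with stand-in callbacks (a real system would command the arm and
# segment the end effector out of the depth image):
state = {}
pairs = collect_calibration_points(
    grid_pattern([0.0, 0.1], [0.0, 0.1], [0.0, 0.1]),
    move_to=lambda p: state.update(pos=p),
    read_sensor=lambda: state["pos"],
    read_arm=lambda: state["pos"],
)
print(len(pairs[0]))  # 8 stop positions on the 2x2x2 grid
```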
- The above summary presents a simplified summary in order to provide a basic understanding of some aspects of the systems and/or methods discussed herein. This summary is not an extensive overview of the systems and/or methods discussed herein. It is not intended to identify key/critical elements or to delineate the scope of such systems and/or methods. Its sole purpose is to present some concepts in a simplified form as a prelude to the more detailed description that is presented later.
- FIG. 1 illustrates a functional block diagram of an exemplary system that controls a depth sensor and a robotic arm that operate in a workspace.
- FIG. 2 illustrates an example of the workspace of FIG. 1.
- FIG. 3 illustrates an exemplary pattern used for calibration (e.g., recalibration) that specifies positions within the workspace of FIG. 2.
- FIG. 4 illustrates a functional block diagram of an exemplary system that includes a control system that controls the depth sensor and the robotic arm during calibration and registration.
- FIG. 5 illustrates an example of tetrahedrons that can be formed using Delaunay triangulation on sensor calibration points in the workspace.
- FIG. 6 illustrates an example where a preset number of sensor calibration points nearest to an input point are identified and used to form tetrahedrons that include the input point.
- FIG. 7 illustrates a functional block diagram of another exemplary system that includes the control system that controls the depth sensor and the robotic arm during calibration and registration.
- FIG. 8 is a flow diagram that illustrates an exemplary methodology of controlling a depth sensor and a robotic arm that operate in the workspace.
- FIG. 9 is a flow diagram that illustrates an exemplary methodology of controlling the depth sensor and the robotic arm that operate in the workspace.
- FIG. 10 illustrates an exemplary computing device.
- Various technologies pertaining to automatic in-situ calibration and registration of a depth sensor and a robotic arm, where the depth sensor and the robotic arm operate in a workspace, are now described with reference to the drawings, wherein like reference numerals are used to refer to like elements throughout. In the following description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of one or more aspects. It may be evident, however, that such aspect(s) may be practiced without these specific details. In other instances, well-known structures and devices are shown in block diagram form in order to facilitate describing one or more aspects. Further, it is to be understood that functionality that is described as being carried out by certain system components may be performed by multiple components. Similarly, for instance, a component may be configured to perform functionality that is described as being carried out by multiple components.
- Moreover, the term “or” is intended to mean an inclusive “or” rather than an exclusive “or.” That is, unless specified otherwise, or clear from the context, the phrase “X employs A or B” is intended to mean any of the natural inclusive permutations. That is, the phrase “X employs A or B” is satisfied by any of the following instances: X employs A; X employs B; or X employs both A and B. In addition, the articles “a” and “an” as used in this application and the appended claims should generally be construed to mean “one or more” unless specified otherwise or clear from the context to be directed to a singular form.
- As set forth herein, a non-parametric technique for registration between a depth sensor and a robotic arm can be implemented. The registration technique can utilize a sparse sampling of a workspace (e.g., with a marker placed on an end effector of the robotic arm). A point cloud can be formed over calibration points and interpolation can be performed within the point cloud to map coordinates in a sensor coordinate frame to coordinates in an arm coordinate frame. Such techniques can automatically incorporate intrinsic sensor parameters into transformations between the depth sensor and the robotic arm. Accordingly, an explicit model of intrinsics or biases of the depth sensor need not be utilized. A result of the registration described herein can be used to generate reaching motions with the robotic arm towards targets sensed by the depth sensor. For instance, the Euclidean error in the resulting transformation from coordinates of the depth sensor to coordinates of the robotic arm can be on the order of sub-millimeters or millimeters; however, the claimed subject matter is not so limited.
- Techniques set forth herein can automatically generate a coordinate frame transformation function for mapping coordinates between a sensor coordinate frame and an arm coordinate frame. The transformation function can be a non-linear function that can account for non-linear characteristics of the depth sensor and/or the robotic arm. For instance, the transformation function can be a closed form function, a collection of closed form functions, described using a lookup table or a neural net, or the like.
- Referring now to the drawings, FIG. 1 illustrates a system 100 that controls a depth sensor 102 and a robotic arm 104 that operate in a workspace 106. The robotic arm 104 can include an end effector. Moreover, the system 100 includes a control system 108. The control system 108 can control the depth sensor 102 and the robotic arm 104; more particularly, the control system 108 can automatically control in-situ calibration and registration of the depth sensor 102 and the robotic arm 104 in the workspace 106. - The
control system 108 can create a mapping between coordinates extracted from a depth image generated by the depth sensor 102 and a corresponding Cartesian position of the robotic arm 104 (e.g., a position of the end effector of the robotic arm 104) in the workspace 106. Coordinates extracted from a depth image generated by the depth sensor 102 can be referred to herein as coordinates in a sensor coordinate frame (e.g., sensor coordinates in the sensor coordinate frame). Moreover, a Cartesian position of the robotic arm 104 in the workspace 106 can be referred to herein as coordinates in an arm coordinate frame (e.g., arm coordinates in the arm coordinate frame). - The
control system 108 can compute a transformation between sensor coordinates in the sensor coordinate frame from the depth sensor 102 and arm coordinates in an arm coordinate frame from the robotic arm 104, while simultaneously compensating for distortions in a depth field of the depth sensor 102. The transformation of coordinates in the sensor coordinate frame to coordinates in the arm coordinate frame allows for generation of Cartesian position goals for the robotic arm 104 to perform a reaching motion towards a target identified in the depth image by the depth sensor 102. Moreover, it is contemplated that coordinates in the arm coordinate frame can be transformed to coordinates in the sensor coordinate frame by the control system 108. Thus, the control system 108 simultaneously calibrates the depth sensor 102 (e.g., compensates for distortions in the depth field) and computes the transformation between sensor coordinates from the depth sensor 102 and arm coordinates from the robotic arm 104. Such simultaneous calibration and computation of the transformation enables accurate coordination between the depth sensor 102 and the robotic arm 104 when performing subsequent tasks. - Pursuant to various embodiments, the
control system 108 can transmit data to and receive data from the depth sensor 102 and the robotic arm 104 over a network (or networks). For example, the depth sensor 102, the robotic arm 104, and the control system 108 can each be connected to a local network (over which data can be transmitted). According to another example, it is contemplated that the control system 108 can be executed by one or more processors of one or more server computing devices (e.g., one or more datacenters can include the control system 108, etc.). In accordance with yet another example, it is to be appreciated that the control system 108 can be part of the depth sensor 102 and/or the robotic arm 104. - The
control system 108 can enable automatic discovery of a coordinate transformation function that compensates for non-linear biases of the depth sensor 102 and the robotic arm 104. Such coordinate transformation function can be discovered by the control system 108 without separate pre-calibration of the depth sensor 102. The coordinate transformation function can account for non-linear biases of the robotic arm 104 (e.g., characteristics of the robotic arm 104 can change over time, when an end effector is placed at differing locations in the workspace 106 due to mechanical deformation of the robotic arm 104, etc.). - The
depth sensor 102 can be substantially any type of depth sensor. For example, the depth sensor 102 can be a structured light three-dimensional (3D) scanner, a time-of-flight scanner, a modulated light 3D scanner, or the like. - According to an example, if a scene captured by the depth sensor 102 (e.g., a time-of-flight scanner) lacks highly infrared (IR) reflective, shiny, or absorbent surfaces, then mean depth readings at a particular pixel provided by the
depth sensor 102 can be stable in time, displaying small variance. Thus, depth readings from the depth sensor 102 can be stable over a number of readings and can have a high degree of precision over much of the scene, provided sampling occurs over enough frames. By way of another example, a systematic distortion of the depth field may be present for the depth sensor 102. The low-frequency bias patterns can reach several centimeters at sides of the depth image and several millimeters in the middle of the depth image. Biases and placement errors are commonly repeatable for a given device, and thus, can potentially be minimized with a non-uniform mapping function. The distortion can change shape at different distances, which suggests a relatively complex non-linear bias model. The control system 108 can account for the distortion to achieve millimeter or sub-millimeter levels of accuracy. Rather than trying to separately model the distortion, the control system 108 automatically compensates for the distortion during a transformation of coordinates between the sensor coordinate frame and the arm coordinate frame. - The
control system 108 can include a data repository 110. The data repository 110 can include sensor calibration points 112 and arm calibration points 114. The sensor calibration points 112 can include a sensor calibration point 1, . . . , and a sensor calibration point n, where n can be substantially any integer. Moreover, the arm calibration points 114 can include an arm calibration point 1, . . . , and an arm calibration point n. The arm calibration points 114 respectively correspond to the sensor calibration points 112. As described in greater detail below, the control system 108 can include a calibration component 122 that can collect and store (e.g., in the data repository 110) the sensor calibration points 112 and the arm calibration points 114. - A sensor calibration point (e.g., from the sensor calibration points 112) includes first coordinates of the end effector of the
robotic arm 104 in the sensor coordinate frame. The first coordinates are previously collected (e.g., by the calibration component 122) during calibration with the end effector at a given position within the workspace 106. Moreover, an arm calibration point (e.g., from the arm calibration points 114) that corresponds to the sensor calibration point includes second coordinates of the end effector of the robotic arm 104 in the arm coordinate frame. The second coordinates are previously collected during the calibration (e.g., by the calibration component 122) with the end effector at the given position within the workspace 106. Thus, the sensor calibration points 112 and the corresponding arm calibration points 114 are collected for a plurality of positions throughout the workspace 106. A number and placement of the positions throughout the workspace 106 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). - The
control system 108 can include an interface component 116. The interface component 116, for example, can receive an input point from the depth sensor 102. The input point can include coordinates indicative of a location in the sensor coordinate frame, where the location is in the workspace 106. Continued reference is made below to the example where the interface component 116 receives the input point from the depth sensor 102, and such input point is mapped to an estimated point in the arm coordinate frame. However, it is contemplated that the interface component 116 can additionally or alternatively receive an input point from the robotic arm 104. Accordingly, the input point from the robotic arm 104 can include coordinates indicative of a location in the arm coordinate frame, where such location is in the workspace 106. Thus, much of the below discussion can be extended to the example where the interface component 116 receives the input point from the robotic arm 104, where such input point can be mapped to an estimated point in the sensor coordinate frame. - The
control system 108 can further include a sample selection component 118. The sample selection component 118 can identify sensor calibration points within proximity of the input point from the data repository 110. Thus, the sample selection component 118 can identify a subset (less than n) of the sensor calibration points 112 from the data repository 110 as being within proximity of the input point. Moreover, the sample selection component 118 can identify arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point from the data repository 110. Accordingly, the sample selection component 118 can identify a subset (less than n) of the arm calibration points 114 from the data repository 110 that respectively correspond to the sensor calibration points within proximity of the input point. - Moreover, the
control system 108 can include an interpolation component 120 that computes an estimated point that maps to the input point. The estimated point can include coordinates indicative of the location in the arm coordinate frame. Thus, the input point received by the interface component 116 can include coordinates indicative of the location in the sensor coordinate frame and the estimated point computed by the interpolation component 120 can include coordinates indicative of the location in the arm coordinate frame, where the location is in the workspace 106. The estimated point can be computed by the interpolation component 120 based upon the sensor calibration points within proximity of the input point and the arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point. - As noted above, the
control system 108 can include the calibration component 122 that performs the calibration of the depth sensor 102 and the robotic arm 104. The calibration component 122 can perform in-situ calibration or recalibration of the depth sensor 102 and the robotic arm 104 in the workspace 106. The calibration component 122 can cause the end effector of the robotic arm 104 to non-continuously traverse through the workspace 106 based on a pattern. The end effector can be stopped at positions within the workspace 106 according to the pattern. - The
calibration component 122, at each position from the positions within the workspace 106 at which the end effector is stopped, can collect a sensor calibration point for the position of the end effector within the workspace 106 detected by the depth sensor 102 and an arm calibration point for the position of the end effector within the workspace 106 detected by the robotic arm 104. The sensor calibration point for the position can include coordinates of the end effector at the position within the workspace 106 in the sensor coordinate frame. According to an example, the coordinates of the end effector included as part of the sensor calibration point can be coordinates of a centroid (e.g., of a given portion of the end effector, of an object mechanically attached to the end effector, etc.), where the centroid can be computed based on image moments of a standard deviation image from the depth sensor 102. Moreover, the arm calibration point for the position can include coordinates of the end effector at the position within the workspace 106 in the arm coordinate frame. Further, the calibration component 122 can store the sensor calibration point and the arm calibration point for the position in the data repository 110 (e.g., as part of the sensor calibration points 112 and the arm calibration points 114, respectively). - Turning to
FIG. 2, illustrated is an example of the workspace 106. The depth sensor 102 and the robotic arm 104 operate in the workspace 106. The robotic arm 104 can include an end effector 202. - During calibration (e.g., recalibration) of the
depth sensor 102 and the robotic arm 104, the end effector 202 can be caused to non-continuously traverse through the workspace 106 based on a pattern, where the end effector 202 is stopped at positions within the workspace 106 according to the pattern. For example, the end effector 202 of the robotic arm 104 can be placed at regular intervals in the workspace 106. However, other patterns are intended to fall within the scope of the hereto appended claims (e.g., interval size can be a function of measured mapping error for a given volume in the workspace 106, differing preset intervals can be set in the pattern for a given type of depth sensor, etc.). Further, the depth sensor 102 can detect coordinates of a position of the end effector 202 (e.g., a calibration target on the end effector 202) in the workspace 106 in the sensor coordinate frame, while the robotic arm 104 can detect coordinates of the position of the end effector 202 (e.g., the calibration target) in the workspace 106 in the arm coordinate frame. Thus, pairs of corresponding points in the sensor coordinate frame and the arm coordinate frame can be captured when the depth sensor 102 and the robotic arm 104 are calibrated (e.g., recalibrated). -
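Localization of the calibration target in the sensor frame, via the image moments of a standard-deviation image mentioned earlier, can be sketched roughly as follows (a minimal illustration, assuming a stack of depth frames in which the marker is the dominant temporally varying region; the function name is not from the patent):

```python
import numpy as np

def marker_centroid(depth_frames):
    """Sub-pixel centroid of a temporally varying marker, computed from image
    moments of the per-pixel standard-deviation image of a frame stack."""
    std_img = np.asarray(depth_frames, dtype=float).std(axis=0)
    m00 = std_img.sum()                      # zeroth image moment
    ys, xs = np.indices(std_img.shape)
    cx = (xs * std_img).sum() / m00          # first moments / zeroth moment
    cy = (ys * std_img).sum() / m00
    return cx, cy
```

Pairing each such sensor-frame detection with the arm-reported pose at the same stop yields one (pc, pa) calibration pair.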
FIG. 3 shows an exemplary pattern used for calibration (e.g., recalibration) that specifies positions within the workspace 106. The end effector 202 of the robotic arm 104 can be caused to traverse through the workspace 106 based upon the pattern, stopping at positions 302-316 shown in FIG. 3. The end effector 202 can follow substantially any path between the positions 302-316 specified by the pattern when traversing through the workspace 106. - As depicted, the
robotic arm 104 can be caused to stop the end effector 202 of the robotic arm 104 at eight positions 302-316 in the volume of the workspace 106; however, it is contemplated that the claimed subject matter is not limited to the depicted example, as the pattern shown is provided for illustration purposes. For instance, the end effector 202 of the robotic arm 104 can be stopped at substantially any number of positions within the volume of the workspace 106, the positions within the volume of the workspace 106 need not be equally spaced, and so forth. - The number and placement of the positions 302-316 can be predetermined (e.g., based upon a pre-determined placement grid) or actively identified (e.g., based upon where a larger mapping error is measured or expected). For instance, volumes within the
workspace 106 that have (or are expected to have) lower mapping errors can be sparsely sampled, while volumes within the workspace 106 that have (or are expected to have) higher mapping errors can be more densely sampled. The foregoing can reduce an amount of time for performing calibration, while enhancing accuracy of a resulting transformation function. - The depth sensor 102 (represented by circle 318) and the robotic arm 104 (represented by circle 320) can detect respective coordinates of the
end effector 202 of the robotic arm 104 at each of the positions 302-316. For instance, when the end effector 202 is stopped at the position 314, the depth sensor 102 can detect a sensor calibration point for the position 314 of the end effector 202 within the workspace 106 and the robotic arm 104 can detect an arm calibration point for the position 314 of the end effector 202 within the workspace 106. The sensor calibration point for the position 314 includes first coordinates of the end effector 202 in the sensor coordinate frame, and the arm calibration point for the position 314 includes second coordinates of the end effector 202 in the arm coordinate frame. - Again, reference is made to
FIG. 1. The calibration component 122 can cause the end effector of the robotic arm 104 to be placed at the positions as specified by the pattern. At each of the positions in the workspace 106, a pair of corresponding points (e.g., a sensor calibration point and an arm calibration point) in the sensor coordinate frame and the arm coordinate frame can be collected, (pc, pa)=({xc, yc, zc}, {xa, ya, za}). Let n be the number of point pairs collected and Pc=(pc1, . . . , pcn) and Pa=(pa1, . . . , pan) be the sets of coordinates collected in the sensor coordinate frame and the arm coordinate frame respectively (e.g., Pc can be the sensor calibration points 112 and Pa can be the arm calibration points 114). - To collect the calibration data (Pc, Pa), the
depth sensor 102 can be placed in a fixed location with a relevant portion of the workspace 106 (e.g., the portion in which the robotic arm 104 operates) in view. The depth sensor 102 can have an unobstructed view of the workspace 106 within the range of the depth sensor 102. It is to be appreciated that other restrictions on the placement of the depth sensor 102 need not be employed, though a marker on the end effector can be oriented in a general direction of the depth sensor 102. The robotic arm 104 can be equipped with a calibration marker attached to the end effector that allows precise localization of the end effector in 3D space of the depth sensor 102. Moreover, it is contemplated that substantially any technique can be used by the depth sensor 102 to estimate coordinates of the end effector in the sensor coordinate frame, as long as such technique provides sufficiently precise results. An example of such a technique employed by the depth sensor 102 to estimate coordinates of the end effector in the sensor coordinate frame which provides sub-millimeter precision is described below; yet, it is contemplated that the claimed subject matter is not so limited, as other techniques can additionally or alternatively be utilized. - According to an example, a plurality of markers can be attached to the end effector of the
robotic arm 104. Following this example, several calibration point pairs can be collected at each position in the workspace 106. By way of illustration, a fork that is attachable to the end effector of the robotic arm 104 can include spatially separated teeth, and an end of each tooth can include or be mechanically attached to a marker. The plurality of markers can be oriented in a row, a rectangle, or substantially any pattern so long as parameters of the pattern are known. Use of a plurality of markers can reduce an amount of time for sampling the volume of the workspace 106. - During calibration, the
calibration component 122 can cause the robotic arm 104 to move the end effector through the workspace 106 in a regular pattern, stopping at each of the positions specified by the pattern for a duration of time to allow the depth sensor 102 to collect depth samples (e.g., the sensor calibration points 112). At each point i, pci can be computed based on segmentation of the end effector tip in the sensor coordinate frame, and pai can be computed based on a forward kinematic model of the robotic arm 104. - According to an example, to construct (Pc, Pa), a desired workspace (e.g., the workspace 106) of the
robotic arm 104 can be contained within a convex hull of Pa. In general, a denser set of points can provide a more accurate transformation. However, given a fixed volume, an amount of time it takes to traverse the sample points (e.g., the positions specified by the pattern) is inversely proportional to the cube of the distance between points. According to an example, a larger number of sample points can be collected (e.g., during calibration or recalibration) in a volume of the workspace 106 that has a larger mapping error relative to a disparate volume of the workspace 106; yet, the claimed subject matter is not so limited. Once calibration data has been captured by the calibration component 122, the target can be removed from the end effector (if utilized) and replaced with a task-appropriate tool. - The
control system 108 can further include an initialization component 124 that can initialize the depth sensor 102 and the robotic arm 104 by employing global Procrustes analysis. The initialization component 124 can match two shapes in different coordinate frames, possibly with different scales, utilizing Procrustes analysis. The initialization component 124 can utilize two sets of points X and Y as input for the Procrustes analysis. Further, the initialization component 124 can determine a translation, scaling, and rotation operation that, when applied to the points in Y, minimizes a sum of the squared distances to the points in X. - The
initialization component 124 enables mapping between the sensor coordinate frame and the arm coordinate frame by performing Procrustes analysis using the entire sets Pc and Pa. Once the transformation is computed, the initialization component 124 can estimate coordinates in the arm coordinate space for any 3D point in the sensor coordinate space, or vice versa. This technique can be simple to implement, and once the transformation is computed, it can be applied to any point in Pc. However, such approach may not account for local distortions in the depth field. - The
initialization component 124 can provide a rough approximation of registration configuration for the depth sensor 102 and the robotic arm 104. By way of example, when the depth sensor 102 and the robotic arm 104 are initialized in the workspace 106, the initialization component 124 can generate the rough approximation of the registration configuration. Following this example, the calibration component 122 can thereafter cause additional sampling of the workspace 106 as described herein (e.g., the rough approximation of the registration configuration can be used during at least part of the additional sampling of the workspace 106); yet, the claimed subject matter is not so limited. - The
control system 108 can further include a monitor component 126 that monitors conditions of the depth sensor 102, the robotic arm 104, and the workspace 106. The calibration component 122 can selectively initiate recalibration based upon a condition as detected by the monitor component 126. Examples of conditions that can be monitored by the monitor component 126 include movement of the depth sensor 102 in the workspace 106, movement of the robotic arm 104 in the workspace 106, a temperature change in the workspace 106 exceeding a threshold temperature value, a mapping error exceeding a threshold error value, a combination thereof, or the like. For example, responsive to the monitor component 126 detecting the movement of the depth sensor 102 in the workspace 106, the movement of the robotic arm 104 in the workspace 106, the temperature change in the workspace that exceeds the threshold temperature value, the mapping error that exceeds the threshold error value, etc., the calibration component 122 can recalibrate the depth sensor 102 and the robotic arm 104. It is contemplated that the monitor component 126 can monitor a subset of the above-noted conditions and/or other conditions can be monitored by the monitor component 126. - By way of illustration, a vision-guided manipulation system (e.g., the
depth sensor 102 and the robotic arm 104) deployed in the workspace 106 may fall out of initial calibration due to a number of factors (e.g., intentional and unintentional moving of equipment, temperature changes, etc.). The monitor component 126 and the calibration component 122 allow for recalibration to be performed on-demand and on-site. Moreover, the monitor component 126 can verify the calibration (e.g., periodically, prior to a task to be performed by the depth sensor 102 and the robotic arm 104, etc.). Verification can be performed by placing a known object at a few known locations in the workspace 106 with the robotic arm 104, and observing the known object with the depth sensor 102. If the monitor component 126 detects a mapping error above the threshold error value, the monitor component 126 can cause the calibration component 122 to recalibrate (e.g., until the mapping error is below the threshold error value, etc.). - Recalibration performed by the
calibration component 122, for instance, can include causing the end effector to non-continuously traverse through the workspace 106 based upon a pattern, where the end effector is stopped at positions within the workspace 106 according to the pattern. It is to be appreciated that the pattern used for recalibration can be substantially similar to or differ from a previously used pattern (e.g., a pattern used for calibration, a pattern used for prior recalibration, etc.). According to an example, a pattern used for recalibration can allow for sampling a portion of the workspace 106, whereas a previously used pattern allowed for sampling across the workspace 106. By way of another example, a pattern used for recalibration can allow for more densely sampling a given volume of the workspace 106. Similar to above, at each position within the workspace 106 at which the end effector is stopped, a sensor calibration point for the position detected by the depth sensor 102 can be collected and an arm calibration point for the position detected by the robotic arm 104 can be collected. - It is contemplated that the
calibration component 122 can recalibrate the depth sensor 102 and the robotic arm 104 subsequent to computation of the estimated point that maps to the input point. According to an illustration, the monitor component 126 can receive a measured point from the robotic arm 104. The measured point can include coordinates indicative of the location in the arm coordinate frame detected by the robotic arm 104 (e.g., the location specified by the coordinates of the input point and the coordinates of the estimated point). The monitor component 126 can compute a mapping error based at least in part upon the measured point and the estimated point computed by the interpolation component 120. Further, the monitor component 126 can compare the mapping error to a threshold error value. Responsive to the mapping error being greater than the threshold error value, the monitor component 126 can cause the calibration component 122 to recalibrate the depth sensor 102 and the robotic arm 104. For example, the calibration component 122 can cause a volume of the workspace 106 that includes the location to be resampled or more densely sampled responsive to the mapping error exceeding the threshold error value. Additionally or alternatively, the calibration component 122 can cause the entire workspace 106 to be resampled responsive to the mapping error exceeding the threshold error value. - Pursuant to an example, a number of positions within a given volume of the
workspace 106 specified by the pattern can be a function of a mapping error for the given volume. Thus, a volume with a larger mapping error (e.g., larger relative bias) can have more positions at which the pattern causes the end effector to be stopped for collection of sensor calibration points and corresponding arm calibration points as compared to a volume with a lower mapping error (e.g., smaller relative bias); however, the claimed subject matter is not so limited. - To map an input point wc from the sensor coordinate frame to an estimated point w′a in the arm coordinate frame, the
sample selection component 118 can locate a tetrahedron τ with vertices from Pc that includes wc. The vertices can further be used, along with known correspondences in Pa, by the interpolation component 120 to generate the estimated point w′a. As described in greater detail below, the interpolation component 120 can generate the estimated point w′a using local Procrustes analysis or linear interpolation using barycentric coordinates of wc in τ. - Turning to
FIG. 4, illustrated is another system 400 that includes the control system 108 that controls the depth sensor 102 and the robotic arm 104 during calibration and registration. The control system 108 can include the interface component 116, the sample selection component 118, the interpolation component 120, the calibration component 122, the initialization component 124, the monitor component 126, and the data repository 110 as described herein. Moreover, the control system 108 can include a segmentation component 402 that forms tetrahedrons using a Delaunay triangulation on the sensor calibration points 112 throughout the workspace 106. Thus, the segmentation component 402 can find the Delaunay triangulation ℑ of Pc to form tetrahedrons 404. The tetrahedrons 404 can be retained in the data repository 110. -
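The triangulation performed by the segmentation component, together with the later lookup of the tetrahedron that encloses an input point, can be sketched with SciPy's Delaunay triangulation (an assumed stand-in; the patent does not name a library, and the function name is illustrative):

```python
import numpy as np
from scipy.spatial import Delaunay

def find_containing_tetrahedron(P_c, w_c):
    """Triangulate the sensor-frame calibration points and return the indices
    of the tetrahedron vertices that enclose input point w_c (or None)."""
    tri = Delaunay(np.asarray(P_c, dtype=float))   # tetrahedralization of P_c
    idx = tri.find_simplex(np.asarray(w_c, dtype=float))
    if idx < 0:
        return None                                # outside the convex hull
    return tri.simplices[idx]                      # four vertex indices into P_c
```

In practice the triangulation would be computed once (e.g., when the tetrahedrons 404 are stored) and only the lookup repeated per input point.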
FIG. 5 illustrates an example of tetrahedrons 502-504 that can be formed using Delaunay triangulation on sensor calibration points 506-514 (e.g., the sensor calibration points 112) in the workspace 106. The tetrahedron 502 can include the sensor calibration point 508, the sensor calibration point 510, the sensor calibration point 512, and the sensor calibration point 514 as vertices. The tetrahedron 504 can include the sensor calibration point 506, the sensor calibration point 508, the sensor calibration point 510, and the sensor calibration point 512 as vertices. - As described in greater detail below, the
sample selection component 118 can identify sensor calibration points that are within proximity of an input point 516 by identifying a particular tetrahedron that comprises the input point 516. Thus, as depicted, the sample selection component 118 can identify that the input point 516 is within the tetrahedron 502. Further, vertices of the tetrahedron 502 can be identified as being the sensor calibration points within proximity of the input point 516. - Again, reference is made to
FIG. 4. According to various embodiments, the control system 108 can employ local Procrustes analysis to compute estimated points that map to input points received by the interface component 116. Local distortions in the depth field can be mitigated by performing Procrustes analysis using data from a neighborhood around the input points to be transformed in the sensor space. - As noted above, the
segmentation component 402 can find the Delaunay triangulation ℑ of Pc. Responsive to the interface component 116 receiving the input point from the depth sensor 102, the sample selection component 118 can identify a particular tetrahedron from the tetrahedrons 404 formed by the segmentation component 402. The particular tetrahedron identified by the sample selection component 118 includes the input point. Accordingly, the sensor calibration points within proximity of the input point identified by the sample selection component 118 are vertices of the particular tetrahedron. Moreover, the sample selection component 118 can identify arm calibration points that respectively correspond to the vertices of the particular tetrahedron. The interpolation component 120 can compute a transformation using Procrustes analysis based upon the vertices of the particular tetrahedron and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron. The transformation can be computed by the interpolation component 120 responsive to receipt of the input point, for example; however, according to other examples, it is contemplated that transformations can be computed prior to receipt of the input point (e.g., responsive to creation of the tetrahedrons 404 by the segmentation component 402, etc.). The interpolation component 120 can further apply the transformation to the input point to compute the estimated point. Thus, to derive the estimate for a point wc in the arm coordinate frame, the tetrahedron τ ∈ ℑ that includes wc can be found by the sample selection component 118, and the interpolation component 120 can perform the Procrustes analysis using the vertices of τ, {τ1, τ2, τ3, τ4} ∈ Pc, along with corresponding points in Pa. - The local Procrustes analysis can be performed by locating a particular tetrahedron in ℑ, and then performing the Procrustes analysis utilizing vertices of such tetrahedron.
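Assuming SciPy's Delaunay triangulation for the tetrahedron lookup and the standard closed-form similarity fit (the usual Procrustes/Umeyama solution) for the transformation, the local variant can be sketched as follows; applying the same similarity fit to the entire sets Pc and Pa, rather than to four local vertices, would give the global analysis described earlier. Names are illustrative, not from the patent:

```python
import numpy as np
from scipy.spatial import Delaunay

def local_procrustes_map(w_c, P_c, P_a):
    """Map sensor-frame point w_c into the arm frame using a similarity fit
    to only the four calibration pairs whose tetrahedron contains w_c."""
    w_c = np.asarray(w_c, dtype=float)
    tri = Delaunay(P_c)
    idx = tri.find_simplex(w_c)
    if idx < 0:
        raise ValueError("w_c lies outside the convex hull of P_c")
    A = P_a[tri.simplices[idx]]                          # arm-frame vertices
    C = P_c[tri.simplices[idx]]                          # sensor-frame vertices
    mu_a, mu_c = A.mean(axis=0), C.mean(axis=0)
    U, S, Vt = np.linalg.svd((A - mu_a).T @ (C - mu_c))  # cross-covariance SVD
    d = np.sign(np.linalg.det(U @ Vt))                   # keep R a proper rotation
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    s = (S * np.array([1.0, 1.0, d])).sum() / ((C - mu_c) ** 2).sum()
    t = mu_a - s * R @ mu_c
    return s * R @ w_c + t                               # estimated arm-frame point
```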
The local Procrustes analysis can better handle local distortions in the depth field as compared to global Procrustes analysis. Further, such approach can generally provide a more accurate transformation from a sensor coordinate frame to the arm coordinate frame as compared to the global Procrustes analysis.
- According to other embodiments, the
control system 108 can employ a Delaunay barycentric technique to compute estimated points that map to input points received by the interface component 116. Again, as set forth above, the segmentation component 402 can find the Delaunay triangulation ℑ of Pc. Responsive to the interface component 116 receiving the input point from the depth sensor 102, the sample selection component 118 can identify a particular tetrahedron from the tetrahedrons 404 formed by the segmentation component 402. The particular tetrahedron identified by the sample selection component 118 includes the input point. Further, the sensor calibration points within proximity of the input point identified by the sample selection component 118 are vertices of the particular tetrahedron. Moreover, the sample selection component 118 can identify arm calibration points that respectively correspond to the vertices of the particular tetrahedron. The interpolation component 120 can compute barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron. Moreover, the interpolation component 120 can interpolate the estimated point based upon the barycentric coordinates and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron. - Again, reference is made to
FIG. 1. According to other embodiments, a combined barycentric interpolation technique can be employed by the control system 108. In accordance with such embodiments, responsive to the interface component 116 receiving the input point from the depth sensor 102, the sample selection component 118 can identify a preset number of the sensor calibration points 112 that are nearest to the input point. Further, the interpolation component 120 can compute the estimated point that maps to the input point by forming one or more tetrahedrons that include the input point. The tetrahedrons can be created with vertices being from the preset number of sensor calibration points nearest to the input point. For each tetrahedron, the interpolation component 120 can compute barycentric coordinates of the input point with respect to vertices of the tetrahedron. The interpolation component 120 can further interpolate a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron. Moreover, the interpolation component 120 can combine values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point. -
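The Delaunay barycentric technique described above can be sketched using SciPy's stored per-simplex affine transform to obtain the barycentric coordinates (the library choice and function name are assumptions; interpolation of an affine mapping with this scheme is exact inside the hull):

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_barycentric_map(w_c, P_c, P_a):
    """Arm-frame estimate of sensor-frame point w_c, interpolated inside the
    Delaunay tetrahedron of P_c that contains it."""
    w_c = np.asarray(w_c, dtype=float)
    tri = Delaunay(P_c)
    idx = tri.find_simplex(w_c)
    if idx < 0:
        raise ValueError("w_c lies outside the convex hull of P_c")
    T = tri.transform[idx]              # per-simplex affine map kept by SciPy
    b = T[:3] @ (w_c - T[3])            # coords of the first three vertices
    bary = np.append(b, 1.0 - b.sum())  # fourth coordinate closes the sum to 1
    return bary @ P_a[tri.simplices[idx]]   # weighted arm-frame vertices
```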
FIG. 6 shows an example where a preset number of sensor calibration points nearest to an input point are identified and used to form tetrahedrons that include the input point. As illustrated, six sensor calibration points 602-612 nearest to an input point 614 are identified by the sample selection component 118. The interpolation component 120 can form a tetrahedron 616 and a tetrahedron 618. The tetrahedron 616 and the tetrahedron 618 can each include the input point 614. For the tetrahedron 616, the interpolation component 120 can compute barycentric coordinates of the input point 614 with respect to the vertices of the tetrahedron 616 (e.g., the sensor calibration point 604, the sensor calibration point 608, the sensor calibration point 610, and the sensor calibration point 612). The interpolation component 120 can further interpolate a value of the estimated point from the tetrahedron 616 based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron 616. The interpolation component 120 can similarly compute a value of the estimated point from the tetrahedron 618 (and any disparate tetrahedrons formed from the k nearest sensor calibration points 602-612 that include the input point 614). Moreover, the interpolation component 120 can combine values of the estimated point from the one or more tetrahedrons (e.g., the tetrahedron 616, the tetrahedron 618, any disparate tetrahedron, etc.) to compute the estimated point that maps to the input point 614. - Reference is again made to
FIG. 1 . Barycentric coordinates can locate a point on an interior of a simplex (e.g., a triangle, a tetrahedron, etc.) in relation to vertices of that simplex. Homogeneous barycentric coordinates can be normalized so that coordinates inside a simplex sum to 1, and can be used to interpolate function values for points inside a simplex if values of the function are known at the vertices. - As employed herein, homogeneous barycentric coordinates can be used to approximate arm coordinates throughout a convex hull of the sampled volume. Specifically, given a tetrahedron τ={τ1, τ2, τ3, τ4} in the sensor coordinate frame, barycentric coordinates of wc can be found with respect to vertices τ1, τ2, τ3 and τ4 using the following formula:
-
- In the foregoing, D is defined as the determinant of the following matrix:
-
- If all four barycentric coordinates are positive, then wc is included within τ, and using the arm coordinate values wa associated with the vertices of τ, the
interpolation component 120 can interpolate in the arm frame according to: -
w′a·x = Ba·τ̂1·x + Bb·τ̂2·x + Bc·τ̂3·x + Bd·τ̂4·x -
w′a·y = Ba·τ̂1·y + Bb·τ̂2·y + Bc·τ̂3·y + Bd·τ̂4·y -
w′a·z = Ba·τ̂1·z + Bb·τ̂2·z + Bc·τ̂3·z + Bd·τ̂4·z - In the foregoing, τ̂n is the coordinate in the arm coordinate frame that corresponds to τn in the calibration data.
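Assuming the formulas above, a minimal pure-Python sketch of barycentric interpolation inside a single tetrahedron might look as follows. The helper names are illustrative rather than from the patent, and the determinant-ratio form Di/D is replaced by an equivalent 3×3 solve via Cramer's rule:

```python
def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def barycentric(p, tet):
    """Barycentric coordinates (Ba, Bb, Bc, Bd) of 3D point p with respect
    to the tetrahedron tet, given as four (x, y, z) vertices."""
    t1, t2, t3, t4 = tet
    # Solve [t1-t4 | t2-t4 | t3-t4] * b = p - t4 by Cramer's rule;
    # equivalent to the determinant ratios Di/D in the text.
    m = [[t1[i] - t4[i], t2[i] - t4[i], t3[i] - t4[i]] for i in range(3)]
    rhs = [p[i] - t4[i] for i in range(3)]
    d = det3(m)
    b = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]
        b.append(det3(mj) / d)
    b.append(1.0 - sum(b))  # homogeneous coordinates sum to 1
    return b

def interpolate(bary, arm_vertices):
    """w'a: weighted sum of the arm-frame coordinates that correspond to
    the tetrahedron's sensor-frame vertices."""
    return tuple(sum(w * v[i] for w, v in zip(bary, arm_vertices))
                 for i in range(3))
```

If all four coordinates returned by `barycentric` are positive, the point lies inside the tetrahedron, and `interpolate` yields the arm-frame estimate from the paired arm calibration points.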
- To compensate for possible noise in a training set Pa, multiple tetrahedrons can be used. Specifically, given a point wc in the sensor coordinate frame, Ω can be constructed, which is a set of k nearest neighbors of wc in Pc. This yields a set of candidate vertices from Pc paired with the corresponding points in Pa:
-
Ω = {{pc1, pa1}, {pc2, pa2}, . . . , {pck, pak}} - Moreover, ℑ can be generated, which can include unique tetrahedrons whose vertices are members of Ω and include wc. Let w′ai be the estimate of wc in the arm coordinate frame obtained by using the barycentric interpolation of the ith member of ℑ. The final estimate of wc can be computed in the arm coordinate frame as w′a using:
- w′a = (1/m)·Σi=1..m w′ai
- In the above, m is the number of members in ℑ. The foregoing can be referred to as combined barycentric interpolation.
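The combined barycentric interpolation just described can be sketched in pure Python as below, under the assumption that calibration data is available as (sensor point, arm point) pairs; the function and parameter names are illustrative:

```python
from itertools import combinations

def det3(m):
    """Determinant of a 3x3 matrix given as a list of rows."""
    a, b, c = m
    return (a[0] * (b[1] * c[2] - b[2] * c[1])
            - a[1] * (b[0] * c[2] - b[2] * c[0])
            + a[2] * (b[0] * c[1] - b[1] * c[0]))

def barycentric(p, tet):
    """Barycentric coordinates of p w.r.t. a tetrahedron of 4 vertices."""
    t1, t2, t3, t4 = tet
    m = [[t1[i] - t4[i], t2[i] - t4[i], t3[i] - t4[i]] for i in range(3)]
    rhs = [p[i] - t4[i] for i in range(3)]
    d = det3(m)  # zero for a degenerate (coplanar) vertex set
    b = []
    for j in range(3):
        mj = [row[:] for row in m]
        for i in range(3):
            mj[i][j] = rhs[i]
        b.append(det3(mj) / d)
    b.append(1.0 - sum(b))
    return b

def combined_barycentric(wc, pairs, k=6):
    """Average the estimates w'ai over every tetrahedron, formed from the
    k nearest sensor calibration points, that contains wc.
    pairs: list of (sensor_point, arm_point) calibration pairs."""
    omega = sorted(pairs, key=lambda pr: sum((pr[0][i] - wc[i]) ** 2
                                             for i in range(3)))[:k]
    estimates = []
    for combo in combinations(omega, 4):
        tet = [pr[0] for pr in combo]
        try:
            b = barycentric(wc, tet)
        except ZeroDivisionError:   # coplanar candidate vertices: skip
            continue
        if all(x >= 0 for x in b):  # wc lies inside this tetrahedron
            arm = [pr[1] for pr in combo]
            estimates.append(tuple(sum(w * v[i] for w, v in zip(b, arm))
                                   for i in range(3)))
    if not estimates:
        return None                 # wc outside every candidate tetrahedron
    m = len(estimates)
    return tuple(sum(e[i] for e in estimates) / m for i in range(3))
```

Because barycentric interpolation reproduces affine maps exactly, each member of ℑ returns the same value when the sensor-to-arm mapping happens to be a pure translation; averaging only matters once the calibration pairs carry noise.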
- Further, as described above in connection with the Delaunay barycentric technique, barycentric interpolation inside a single tetrahedron obtained using a Delaunay triangulation of Pa can also be utilized.
- In general, barycentric interpolation techniques can enable a computed transformation to be adjusted based on relative proximity to neighboring points, whereas local Procrustes techniques can yield a common transformation for any location in a given tetrahedron in camera space. The barycentric techniques can compute three-dimensional positions; however, such techniques may not provide an estimate of rotations between sensor and arm frames. Yet, it is contemplated that barycentric interpolation and Procrustes analysis can be implemented simultaneously by the
control system 108. - Turning to
FIG. 7, illustrated is another system 700 that includes the control system 108 that controls the depth sensor 102 and the robotic arm 104 during calibration and registration. The control system 108 can further include an extrapolation component 702 that can use the transformation generated by the initialization component 124 for a given input point, where the initialization component 124 employed the global Procrustes analysis to generate the transformation. More particularly, the given input point can include coordinates indicative of a location in the sensor coordinate frame that is outside a convex hull of the sensor calibration points 112. Thus, the extrapolation component 702 can extrapolate a first estimated point that maps to a first input point using the transformation generated from the global Procrustes analysis if the first input point is outside the convex hull of the sensor calibration points 112, while the interpolation component 120 can interpolate a second estimated point that maps to a second input point using one or more of the techniques described herein if the second input point is within the convex hull of the sensor calibration points 112. - More generally, below is an example of a technique that can be employed by the
depth sensor 102 to estimate coordinates of the end effector of the robotic arm 104 in the sensor coordinate frame. As described below, a centroid can be computed based on image moments of a standard deviation image from the depth sensor 102. Moreover, the coordinates of the end effector can be coordinates of the centroid. It is to be appreciated, however, that the claimed subject matter is not limited to the exemplary technique, and other techniques can additionally or alternatively be used by the depth sensor 102. Moreover, it is to be appreciated that at least a portion of the below technique described as being performed by the depth sensor 102 can be performed by the calibration component 122 (or the control system 108 in general). - The
depth sensor 102 can employ marker segmentation. To automate the process of calibrating the depth sensor 102 with the robotic arm 104, a marker can be mounted to a tool flange of the robotic arm 104. The marker can allow precise and accurate segmentation of the end effector tip. To ensure that a Cartesian coordinate of the tip is precise, a number of frames can be accumulated by the depth sensor 102 before doing segmentation. Further, since depth readings at edges of objects are usually noisy, image segmentation can be performed and Euclidean coordinates of a centroid can be calculated by the depth sensor 102 in a standard deviation image as opposed to a depth frame. The standard deviation image can be a two-dimensional (2D) array of the same dimensions as the depth image (e.g., 512×424 pixels, etc.):
-
StdDevImg(n, m) = {S01, S02, . . . , Snm}
- with Sij being a standard deviation estimate of sensor depth readings for a pixel (i, j) computed over N samples.
-
- Once the standard deviation image is generated, the
depth sensor 102 can filter out depth values that lie outside of a work envelope of interest (e.g., 1.2 m, etc.). Further, the scene can be cleared of objects that may be confused for the marker. The depth sensor 102 can scan the image top down, left to right, looking for a first stable pixel on the scene (e.g., based upon a preset standard deviation threshold, 1.5-2 mm, etc.). Once such a pixel is found, the depth sensor 102 can begin scanning its neighborhood, and if enough stable pixels are seen, it can be assumed that the marker has been discovered. Thus, the depth sensor 102 can compute the centroid using image moments:
- X̄ = M10/M00, Ȳ = M01/M00
- In the foregoing, Mij is the first order image moment on the standard deviation frame:
- Mij = Σx Σy x^i·y^j·w(x, y), with w(x, y) = T − Sxy when Sxy < T and 0 otherwise
- As set forth above, T (e.g., between 2 mm and 5 mm, etc.) is a cutoff threshold for standard deviation of individual pixels; above the cutoff threshold, a pixel is not included in calculating the image moments. Negating values of standard deviation can have the effect of assigning higher weights to pixels with stable depth readings. Accordingly, X and Y coordinates for the centroid of the marker can be computed.
- To compute a Z coordinate (depth), the
depth sensor 102 can average a region of pixels (e.g., 3×3 or 5×5) around X and Y and record an average depth. Thus, an estimated 3D coordinate for the tip of the end effector can be computed. -
Ck = {Xk, Yk, Zk}
- Moreover, since the marker dimensions can be known, Cartesian coordinates of a common point in the arm coordinate frame can be obtained based on its kinematic model:
-
Ca = {Xa, Ya, Za}
- Both sets of coordinates can be recorded as a linked pair in the
data repository 110. Repeating this process n times, two sets of n matching 3D points can be collected and retained in the data repository 110 (one from thedepth sensor 102 and one from the robotic arm 104). -
V(n) = {{Ck1, Ca1}, {Ck2, Ca2}, . . . , {Ckn, Can}} - Conventional approaches for calibrating optical sensors (e.g., depth sensors) oftentimes include calibrating both intrinsic parameters, such as scale and skew of image plane axes as well as lens distortion, and extrinsic parameters, such as a spatial transformation between frames. Traditional approaches may be performed by decomposing estimation of intrinsics and extrinsics, or the calibrations may be combined. In contrast, the techniques set forth herein enable automatic discovery of a translation function between a sensor coordinate frame and an arm coordinate frame without separate pre-calibration of the depth sensor.
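The marker-segmentation and centroid step described above can be sketched as follows. This is a simplified, pure-Python illustration: for brevity the Z value averages all stable marker pixels rather than a 3×3 or 5×5 window around the centroid, and the threshold values are placeholders rather than values fixed by the patent:

```python
from statistics import pstdev

def marker_centroid(frames, T=0.003, envelope=1.2):
    """Estimate the marker centroid (X, Y, Z) from N accumulated depth
    frames (2-D lists of depth readings in meters, identical dimensions).
    T: std-dev cutoff in meters; envelope: work-envelope cutoff in meters."""
    rows, cols = len(frames[0]), len(frames[0][0])
    m00 = m10 = m01 = 0.0   # image moments on the standard deviation image
    depth_sum, depth_n = 0.0, 0
    for i in range(rows):
        for j in range(cols):
            samples = [f[i][j] for f in frames]
            mean_d = sum(samples) / len(samples)
            if mean_d > envelope:     # outside the work envelope of interest
                continue
            s = pstdev(samples)       # Sij computed over the N samples
            if s >= T:                # noisy pixel: excluded from moments
                continue
            w = T - s                 # stable pixels get the higher weight
            m00 += w
            m10 += w * i
            m01 += w * j
            depth_sum += mean_d
            depth_n += 1
    if m00 == 0.0:
        return None                   # no stable marker pixels found
    x, y = m10 / m00, m01 / m00       # centroid via first-order moments
    z = depth_sum / depth_n           # simplified depth average (see note)
    return (x, y, z)
```

Accumulating several frames per pixel is what makes the standard-deviation weighting meaningful: edge pixels flicker between foreground and background depths and are rejected by the cutoff, while the flat marker surface stays stable.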
-
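One way to obtain the global transformation used for initialization and extrapolation is an orthogonal Procrustes (Kabsch) fit over all collected calibration pairs. The sketch below assumes a rigid rotation-plus-translation model and uses NumPy for the SVD; the scale factor that some Procrustes formulations also estimate is omitted, and the function names are illustrative:

```python
import numpy as np

def global_procrustes(sensor_pts, arm_pts):
    """Least-squares rigid transform (R, t) mapping sensor-frame points onto
    arm-frame points via the Kabsch algorithm. Inputs are (n, 3) arrays of
    paired calibration points."""
    P = np.asarray(sensor_pts, dtype=float)
    Q = np.asarray(arm_pts, dtype=float)
    p_bar, q_bar = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_bar).T @ (Q - q_bar)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_bar - R @ p_bar
    return R, t

def apply_transform(R, t, point):
    """Map a sensor-frame point, inside or outside the sampled convex hull."""
    return R @ np.asarray(point, dtype=float) + t
```

Because the fit is global, `apply_transform` remains usable for points outside the convex hull of the sensor calibration points, which is precisely where the tetrahedron-based interpolation techniques cannot be applied.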
FIGS. 8-9 illustrate exemplary methodologies relating to controlling a depth sensor and a robotic arm that operate in the workspace. While the methodologies are shown and described as being a series of acts that are performed in a sequence, it is to be understood and appreciated that the methodologies are not limited by the order of the sequence. For example, some acts can occur in a different order than what is described herein. In addition, an act can occur concurrently with another act. Further, in some instances, not all acts may be required to implement a methodology described herein. - Moreover, the acts described herein may be computer-executable instructions that can be implemented by one or more processors and/or stored on a computer-readable medium or media. The computer-executable instructions can include a routine, a sub-routine, programs, a thread of execution, and/or the like. Still further, results of acts of the methodologies can be stored in a computer-readable medium, displayed on a display device, and/or the like.
-
FIG. 8 illustrates a methodology 800 of controlling a depth sensor and a robotic arm that operate in the workspace. The robotic arm can include an end effector. At 802, an input point can be received from the depth sensor. The input point includes coordinates indicative of a location in a sensor coordinate frame, where the location is in the workspace. At 804, sensor calibration points within proximity of the input point can be identified. A sensor calibration point includes first coordinates of the end effector in the sensor coordinate frame, the first coordinates being previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace. At 806, arm calibration points that respectively correspond to the sensor calibration points can be identified. An arm calibration point that corresponds to the sensor calibration point includes second coordinates of the end effector in an arm coordinate frame. Further, the second coordinates are previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace. At 808, a processor can be employed to compute an estimated point that maps to the input point. The estimated point can include coordinates indicative of the location in the arm coordinate frame. The estimated point can be computed based upon the sensor calibration points (e.g., the sensor calibration points within proximity of the input point) and the arm calibration points (e.g., the arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point). - With reference to
FIG. 9, illustrated is another methodology 900 of controlling the depth sensor and the robotic arm that operate in the workspace. Again, the robotic arm can include an end effector. At 902, an input point can be received from the robotic arm. The input point can include coordinates indicative of the location in an arm coordinate frame, where the location is in the workspace. At 904, arm calibration points within proximity of the input point can be identified. An arm calibration point can include first coordinates of the end effector in the arm coordinate frame. The first coordinates are previously collected during calibration (e.g., recalibration) with the end effector at a given position within the workspace. At 906, sensor calibration points that respectively correspond to the arm calibration points can be identified. A sensor calibration point that corresponds to the arm calibration point can include second coordinates of the end effector in a sensor coordinate frame, the second coordinates being previously collected during the calibration (e.g., recalibration) with the end effector at the given position within the workspace. At 908, a processor can be employed to compute an estimated point that maps to the input point. The estimated point can include coordinates indicative of the location in the sensor coordinate frame. The estimated point can be computed based upon the arm calibration points (e.g., the arm calibration points within proximity of the input point) and the sensor calibration points (e.g., the sensor calibration points that respectively correspond to the arm calibration points within proximity of the input point). - Various examples are now set forth.
- A method of controlling a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the method comprising: receiving an input point from the depth sensor, the input point comprises coordinates indicative of a location in a sensor coordinate frame, the location being in the workspace; identifying sensor calibration points within proximity of the input point, a sensor calibration point comprises first coordinates of the end effector in the sensor coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; identifying arm calibration points that respectively correspond to the sensor calibration points, an arm calibration point that corresponds to the sensor calibration point comprises second coordinates of the end effector in an arm coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; and employing a processor to compute an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the arm coordinate frame, the estimated point computed based upon: the sensor calibration points; and the arm calibration points.
- The method according to Example 1, further comprising performing the calibration, performance of the calibration comprises: causing the end effector to non-continuously traverse through the workspace based on a pattern, wherein the end effector is stopped at positions within the workspace according to the pattern; and at each position from the positions within the workspace at which the end effector is stopped: collecting a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor, the sensor calibration point for the position comprises coordinates of the end effector at the position within the workspace in the sensor coordinate frame; and collecting an arm calibration point for the position of the end effector within the workspace detected by the robotic arm, the arm calibration point for the position comprises coordinates of the end effector at the position within the workspace in the arm coordinate frame.
- The method according to Example 2, further comprising computing a centroid based on image moments of a standard deviation image from the depth sensor, the coordinates of the sensor calibration point being coordinates of the centroid.
- The method according to any of Examples 1-3, further comprising recalibrating the depth sensor and the robotic arm subsequent to computing the estimated point that maps to the input point, recalibration further comprises: causing the end effector to non-continuously traverse through the workspace based on a pattern, wherein the end effector is stopped at positions within the workspace according to the pattern; and at each position from the positions within the workspace at which the end effector is stopped: collecting a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor, the sensor calibration point for the position comprises coordinates of the end effector at the position within the workspace in the sensor coordinate frame; and collecting an arm calibration point for the position of the end effector within the workspace detected by the robotic arm, the arm calibration point for the position comprises coordinates of the end effector at the position within the workspace in the arm coordinate frame.
- The method according to Example 4, further comprising: receiving a measured point from the robotic arm, the measured point comprises coordinates indicative of the location in the arm coordinate frame detected by the robotic arm; computing a mapping error based at least in part upon the measured point and the estimated point; comparing the mapping error to a threshold error value; and responsive to the mapping error being greater than the threshold error value, recalibrating the depth sensor and the robotic arm.
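The error-triggered recalibration check of Example 5 can be sketched as follows; the default threshold value is illustrative, since the examples do not fix one:

```python
import math

def mapping_error(measured, estimated):
    """Euclidean distance between the arm-reported point and the point
    estimated from the sensor-to-arm mapping."""
    return math.dist(measured, estimated)

def should_recalibrate(measured, estimated, threshold=0.005):
    """True when the mapping error exceeds the threshold error value,
    signaling that the depth sensor and robotic arm should be recalibrated."""
    return mapping_error(measured, estimated) > threshold
```

Per Example 6, the same error signal can also drive where to recalibrate: volumes of the workspace with larger mapping error warrant denser sampling patterns.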
- The method according to Example 4, wherein a number of positions within a given volume of the workspace specified by the pattern is a function of a mapping error for the given volume.
- The method according to any of Examples 1-6, further comprising: creating tetrahedrons using a Delaunay triangulation on sensor calibration points collected throughout the workspace, the sensor calibration points collected throughout the workspace comprise: the sensor calibration points within proximity of the input point; and at least one disparate sensor calibration point outside proximity of the input point.
- The method according to any of Examples 1-7, wherein identifying the sensor calibration points within proximity of the input point further comprises: identifying a particular tetrahedron from the tetrahedrons, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron.
- The method according to any of Examples 1-8, wherein computing the estimated point that maps to the input point further comprises: computing a transformation using Procrustes analysis based upon: the vertices of the particular tetrahedron; and the arm calibration points, wherein the arm calibration points respectively correspond to the vertices of the particular tetrahedron; and applying the transformation to the input point to compute the estimated point.
- The method according to any of Examples 1-8, wherein computing the estimated point that maps to the input point further comprises: computing barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron; and interpolating the estimated point based upon the barycentric coordinates and the arm calibration points, wherein the arm calibration points respectively correspond to the vertices of the particular tetrahedron.
- The method according to any of Examples 1-10, wherein: identifying the sensor calibration points within proximity of the input point further comprises: identifying a preset number of sensor calibration points nearest to the input point; and computing the estimated point that maps to the input point further comprises: creating one or more tetrahedrons that comprise the input point, the tetrahedrons created with vertices being from the preset number of sensor calibration points nearest to the input point; for each tetrahedron of the one or more tetrahedrons: computing barycentric coordinates of the input point with respect to vertices of the tetrahedron; and interpolating a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron; and combining values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point.
- The method according to any of Examples 1-11, further comprising: receiving a disparate input point from the robotic arm, the disparate input point comprises coordinates indicative of a disparate location in the arm coordinate frame, the disparate location being in the workspace; identifying disparate arm calibration points within proximity of the disparate input point; identifying disparate sensor calibration points that respectively correspond to the disparate arm calibration points; and employing the processor to compute a disparate estimated point that maps to the disparate input point, the disparate estimated point comprises coordinates indicative of the disparate location in the sensor coordinate frame, the disparate estimated point computed based upon: the disparate arm calibration points; and the disparate sensor calibration points.
- A system that controls a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the system comprises: a data repository, the data repository retains: sensor calibration points throughout the workspace, a sensor calibration point comprises first coordinates of the end effector in a sensor coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; and arm calibration points throughout the workspace, the arm calibration points respectively correspond to the sensor calibration points, an arm calibration point that corresponds to the sensor calibration point comprises second coordinates of the end effector in an arm coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; an interface component that receives an input point from the depth sensor, the input point comprises coordinates indicative of a location in the sensor coordinate frame, the location being in the workspace; a sample selection component that: identifies sensor calibration points within proximity of the input point from the data repository; and identifies arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point from the data repository; and an interpolation component that computes an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the arm coordinate frame, the estimated point computed based upon: the sensor calibration points within proximity of the input point; and the arm calibration points that respectively correspond to the sensor calibration points within proximity of the input point.
- The system according to Example 13, further comprising a calibration component that performs the calibration, the calibration component: causes the end effector to non-continuously traverse through the workspace based on a pattern, wherein the end effector is stopped at positions within the workspace according to the pattern; and at each position from the positions within the workspace at which the end effector is stopped: collects a sensor calibration point for the position of the end effector within the workspace detected by the depth sensor, the sensor calibration point for the position comprises coordinates of the end effector at the position within the workspace in the sensor coordinate frame; stores the sensor calibration point for the position in the data repository; collects an arm calibration point for the position of the end effector within the workspace detected by the robotic arm, the arm calibration point for the position comprises coordinates of the end effector at the position within the workspace in the arm coordinate frame; and stores the arm calibration point for the position in the data repository.
- The system according to Example 14, further comprising a monitor component that monitors conditions of the depth sensor, the robotic arm, and the workspace, wherein the calibration component selectively initiates recalibration based upon the conditions.
- The system according to any of Examples 13-15, further comprising a segmentation component that forms tetrahedrons using a Delaunay triangulation on the sensor calibration points throughout the workspace.
- The system according to Example 16, wherein: the sample selection component: identifies a particular tetrahedron from the tetrahedrons formed by the segmentation component, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron; and identifies arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and the interpolation component: computes a transformation using Procrustes analysis based upon: the vertices of the particular tetrahedron; and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and applies the transformation to the input point to compute the estimated point.
- The system according to Example 16, wherein: the sample selection component: identifies a particular tetrahedron from the tetrahedrons formed by the segmentation component, the particular tetrahedron comprises the input point, the sensor calibration points within proximity of the input point being vertices of the particular tetrahedron; and identifies arm calibration points that respectively correspond to the vertices of the particular tetrahedron; and the interpolation component: computes barycentric coordinates of the input point with respect to the vertices of the particular tetrahedron; and interpolates the estimated point based upon the barycentric coordinates and the arm calibration points that respectively correspond to the vertices of the particular tetrahedron.
- The system according to any of Examples 13-18, wherein: the sample selection component identifies a preset number of sensor calibration points nearest to the input point; and the interpolation component: forms one or more tetrahedrons that comprise the input point, the tetrahedrons created with vertices being from the preset number of sensor calibration points nearest to the input point; for each tetrahedron of the one or more tetrahedrons: computes barycentric coordinates of the input point with respect to vertices of the tetrahedron; and interpolates a value of the estimated point from the tetrahedron based upon the barycentric coordinates and arm calibration points that respectively correspond to the vertices of the tetrahedron; and combines values of the estimated point from the one or more tetrahedrons to compute the estimated point that maps to the input point.
- A method of controlling a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the method comprising: receiving an input point from the robotic arm, the input point comprises coordinates indicative of a location in an arm coordinate frame, the location being in the workspace; identifying arm calibration points within proximity of the input point, an arm calibration point comprises first coordinates of the end effector in the arm coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; identifying sensor calibration points that respectively correspond to the arm calibration points, a sensor calibration point that corresponds to the arm calibration point comprises second coordinates of the end effector in a sensor coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; and employing a processor to compute an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the sensor coordinate frame, the estimated point computed based upon: the arm calibration points; and the sensor calibration points.
- A system of controlling a depth sensor and a robotic arm that operate in a workspace, the robotic arm comprises an end effector, the system comprising: means for receiving an input point from the depth sensor, the input point comprises coordinates indicative of a location in a sensor coordinate frame, the location being in the workspace; means for identifying sensor calibration points within proximity of the input point, a sensor calibration point comprises first coordinates of the end effector in the sensor coordinate frame, the first coordinates previously collected during calibration with the end effector at a given position within the workspace; means for identifying arm calibration points that respectively correspond to the sensor calibration points, an arm calibration point that corresponds to the sensor calibration point comprises second coordinates of the end effector in an arm coordinate frame, the second coordinates previously collected during the calibration with the end effector at the given position within the workspace; and means for computing an estimated point that maps to the input point, the estimated point comprises coordinates indicative of the location in the arm coordinate frame, the estimated point computed based upon: the sensor calibration points; and the arm calibration points.
- Referring now to
FIG. 10, a high-level illustration of an exemplary computing device 1000 that can be used in accordance with the systems and methodologies disclosed herein is illustrated. For instance, the computing device 1000 may be used in a system that controls calibration and/or registration of a depth sensor and a robotic arm operating in a workspace. The computing device 1000 includes at least one processor 1002 that executes instructions that are stored in a memory 1004. The instructions may be, for instance, instructions for implementing functionality described as being carried out by one or more components discussed above or instructions for implementing one or more of the methods described above. The processor 1002 may access the memory 1004 by way of a system bus 1006. In addition to storing executable instructions, the memory 1004 may also store sensor calibration points, arm calibration points, transformation functions, tetrahedrons, and so forth. - The
computing device 1000 additionally includes a data store 1008 that is accessible by the processor 1002 by way of the system bus 1006. The data store 1008 may include executable instructions, sensor calibration points, arm calibration points, transformation functions, tetrahedrons, etc. The computing device 1000 also includes an input interface 1010 that allows external devices to communicate with the computing device 1000. For instance, the input interface 1010 may be used to receive instructions from an external computer device, from a user, etc. The computing device 1000 also includes an output interface 1012 that interfaces the computing device 1000 with one or more external devices. For example, the computing device 1000 may display text, images, etc. by way of the output interface 1012. - It is contemplated that the external devices that communicate with the
computing device 1000 via the input interface 1010 and the output interface 1012 can be included in an environment that provides substantially any type of user interface with which a user can interact. Examples of user interface types include graphical user interfaces, natural user interfaces, and so forth. For instance, a graphical user interface may accept input from a user employing input device(s) such as a keyboard, mouse, remote control, or the like and provide output on an output device such as a display. Further, a natural user interface may enable a user to interact with the computing device 1000 in a manner free from constraints imposed by input devices such as keyboards, mice, remote controls, and the like. Rather, a natural user interface can rely on speech recognition, touch and stylus recognition, gesture recognition both on screen and adjacent to the screen, air gestures, head and eye tracking, voice and speech, vision, touch, gestures, machine intelligence, and so forth. - Additionally, while illustrated as a single system, it is to be understood that the
computing device 1000 may be a distributed system. Thus, for instance, several devices may be in communication by way of a network connection and may collectively perform tasks described as being performed by the computing device 1000. - As used herein, the terms “component” and “system” are intended to encompass computer-readable data storage that is configured with computer-executable instructions that cause certain functionality to be performed when executed by a processor. The computer-executable instructions may include a routine, a function, or the like. It is also to be understood that a component or system may be localized on a single device or distributed across several devices.
- Further, as used herein, the term “exemplary” is intended to mean “serving as an illustration or example of something.”
- Various functions described herein can be implemented in hardware, software, or any combination thereof. If implemented in software, the functions can be stored on, or transmitted over, a computer-readable medium as one or more instructions or code. Computer-readable media includes computer-readable storage media. A computer-readable storage medium can be any available storage medium that can be accessed by a computer. By way of example, and not limitation, such computer-readable storage media can comprise RAM, ROM, EEPROM, CD-ROM or other optical disk storage, magnetic disk storage or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. Disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc (BD), where disks usually reproduce data magnetically and discs usually reproduce data optically with lasers. Further, a propagated signal is not included within the scope of computer-readable storage media. Computer-readable media also includes communication media, including any medium that facilitates transfer of a computer program from one place to another. A connection, for instance, can be a communication medium. For example, if the software is transmitted from a website, server, or other remote source using a coaxial cable, fiber optic cable, twisted pair, digital subscriber line (DSL), or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of communication medium. Combinations of the above should also be included within the scope of computer-readable media.
- Alternatively, or in addition, the functionality described herein can be performed, at least in part, by one or more hardware logic components. For example, and without limitation, illustrative types of hardware logic components that can be used include Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), System-on-a-Chip systems (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
- What has been described above includes examples of one or more embodiments. It is, of course, not possible to describe every conceivable modification and alteration of the above devices or methodologies for purposes of describing the aforementioned aspects, but one of ordinary skill in the art can recognize that many further modifications and permutations of various aspects are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims. Furthermore, to the extent that the term “includes” is used in either the detailed description or the claims, such term is intended to be inclusive in a manner similar to the term “comprising” as “comprising” is interpreted when employed as a transitional word in a claim.
Claims (20)
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/314,970 US9211643B1 (en) | 2014-06-25 | 2014-06-25 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
PCT/US2015/036857 WO2015200152A1 (en) | 2014-06-25 | 2015-06-22 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
CN201580034128.2A CN106461383B (en) | 2015-06-22 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
EP15736346.6A EP3160690B1 (en) | 2014-06-25 | 2015-06-22 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
US14/937,061 US10052766B2 (en) | 2014-06-25 | 2015-11-10 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US14/314,970 US9211643B1 (en) | 2014-06-25 | 2014-06-25 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/937,061 Continuation US10052766B2 (en) | 2014-06-25 | 2015-11-10 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
Publications (2)
Publication Number | Publication Date |
---|---|
US9211643B1 US9211643B1 (en) | 2015-12-15 |
US20150375396A1 true US20150375396A1 (en) | 2015-12-31 |
Family
ID=53540852
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/314,970 Expired - Fee Related US9211643B1 (en) | 2014-06-25 | 2014-06-25 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
US14/937,061 Active US10052766B2 (en) | 2014-06-25 | 2015-11-10 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US14/937,061 Active US10052766B2 (en) | 2014-06-25 | 2015-11-10 | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
Country Status (4)
Country | Link |
---|---|
US (2) | US9211643B1 (en) |
EP (1) | EP3160690B1 (en) |
CN (1) | CN106461383B (en) |
WO (1) | WO2015200152A1 (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106839985A (en) * | 2017-03-22 | 2017-06-13 | 常熟理工学院 | The automatic identification localization method of unmanned overhead traveling crane coil of strip crawl |
CN108225180A (en) * | 2017-12-31 | 2018-06-29 | 芜湖哈特机器人产业技术研究院有限公司 | A kind of application alignment system and method |
US20180189565A1 (en) * | 2015-08-28 | 2018-07-05 | Imperial College Of Science, Technology And Medicine | Mapping a space using a multi-directional camera |
US10290118B2 (en) | 2015-08-06 | 2019-05-14 | Cognex Corporation | System and method for tying together machine vision coordinate spaces in a guided assembly environment |
WO2019152360A1 (en) * | 2018-01-30 | 2019-08-08 | Brooks Automation, Inc. | Automatic wafer centering method and apparatus |
WO2019202482A1 (en) * | 2018-04-18 | 2019-10-24 | Pirelli Tyre S.P.A. | Method for controlling a robotized arm |
WO2020233777A1 (en) * | 2019-05-17 | 2020-11-26 | Telefonaktiebolaget Lm Ericsson (Publ) | Technique for parameter conversion between a robotic device and a controller for the robotic device |
US11433542B2 (en) * | 2019-06-05 | 2022-09-06 | Kabushiki Kaisha Toshiba | Calibration detecting apparatus, method, and program |
US20230300319A1 (en) * | 2021-07-23 | 2023-09-21 | Phillip James Haeusler | Automated real-time calibration |
Families Citing this family (46)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9211643B1 (en) * | 2014-06-25 | 2015-12-15 | Microsoft Technology Licensing, Llc | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
US10179407B2 (en) * | 2014-11-16 | 2019-01-15 | Robologics Ltd. | Dynamic multi-sensor and multi-robot interface system |
US10564031B1 (en) * | 2015-08-24 | 2020-02-18 | X Development Llc | Methods and systems for determining errors based on detected sounds during operation of a robotic device |
US11267125B2 (en) | 2016-04-08 | 2022-03-08 | Delta Electronics, Inc. | Mechanism-parameter-calibration method for robotic arm system |
TWI601611B (en) * | 2016-04-08 | 2017-10-11 | 台達電子工業股份有限公司 | Mechanism parametric calibration method for robotic arm system |
US10596706B2 (en) * | 2016-04-08 | 2020-03-24 | Delta Electronics, Inc. | Mechanism-parameter-calibration method for robotic arm system |
CN109996653B (en) * | 2016-11-17 | 2022-09-02 | 株式会社富士 | Working position correction method and working robot |
JP6484287B2 (en) * | 2017-05-19 | 2019-03-13 | ファナック株式会社 | Damage detection device and damage detection method for linear guide |
CN107153380A (en) * | 2017-06-07 | 2017-09-12 | 合肥汇之新机械科技有限公司 | A kind of automation control system of industrial robot |
CN110914703A (en) * | 2017-07-31 | 2020-03-24 | 深圳市大疆创新科技有限公司 | Correction of motion-based inaccuracies in point clouds |
US11648678B2 (en) | 2017-11-20 | 2023-05-16 | Kindred Systems Inc. | Systems, devices, articles, and methods for calibration of rangefinders and robots |
JP6911777B2 (en) * | 2018-01-23 | 2021-07-28 | トヨタ自動車株式会社 | Motion trajectory generator |
US10689831B2 (en) * | 2018-03-27 | 2020-06-23 | Deere & Company | Converting mobile machines into high precision robots |
JP6888580B2 (en) * | 2018-04-05 | 2021-06-16 | オムロン株式会社 | Information processing equipment, information processing methods, and programs |
CN109146979B (en) * | 2018-08-01 | 2022-02-01 | 苏州乐佰图信息技术有限公司 | Method for compensating for deviation of mechanical arm from walking position |
WO2020150929A1 (en) * | 2019-01-23 | 2020-07-30 | Abb Schweiz Ag | Method and apparatus for managing robot arm |
US10369698B1 (en) * | 2019-03-07 | 2019-08-06 | Mujin, Inc. | Method and system for performing automatic camera calibration for robot control |
GB2582139B (en) * | 2019-03-11 | 2021-07-21 | Arrival Ltd | A method for determining positional error within a robotic cell environment |
US11583350B2 (en) | 2019-03-15 | 2023-02-21 | Cilag Gmbh International | Jaw coordination of robotic surgical controls |
US11284957B2 (en) | 2019-03-15 | 2022-03-29 | Cilag Gmbh International | Robotic surgical controls with force feedback |
US11690690B2 (en) | 2019-03-15 | 2023-07-04 | Cilag Gmbh International | Segmented control inputs for surgical robotic systems |
US11992282B2 (en) | 2019-03-15 | 2024-05-28 | Cilag Gmbh International | Motion capture controls for robotic surgery |
US11666401B2 (en) | 2019-03-15 | 2023-06-06 | Cilag Gmbh International | Input controls for robotic surgery |
US11701190B2 (en) | 2019-03-15 | 2023-07-18 | Cilag Gmbh International | Selectable variable response of shaft motion of surgical robotic systems |
US20200289228A1 (en) * | 2019-03-15 | 2020-09-17 | Ethicon Llc | Dual mode controls for robotic surgery |
US11471229B2 (en) * | 2019-03-15 | 2022-10-18 | Cilag Gmbh International | Robotic surgical systems with selectively lockable end effectors |
US11490981B2 (en) | 2019-03-15 | 2022-11-08 | Cilag Gmbh International | Robotic surgical controls having feedback capabilities |
FR3094101B1 (en) * | 2019-03-21 | 2021-09-03 | Saint Gobain | Method of timing synchronization between an automatic displacement means and a non-contact sensing means disposed on said automatic displacement means |
US10906184B2 (en) | 2019-03-29 | 2021-02-02 | Mujin, Inc. | Method and control system for verifying and updating camera calibration for robot control |
US10399227B1 (en) | 2019-03-29 | 2019-09-03 | Mujin, Inc. | Method and control system for verifying and updating camera calibration for robot control |
GB2582931B (en) * | 2019-04-08 | 2021-09-01 | Arrival Ltd | A method for determining camera placement within a robotic cell environment |
EP3745310A1 (en) * | 2019-05-28 | 2020-12-02 | Robert Bosch GmbH | Method for calibrating a multi-sensor system using an artificial neural network |
KR102361219B1 (en) * | 2019-09-09 | 2022-02-11 | (주)미래컴퍼니 | Method and apparatus for obtaining surgical data in units of sub blocks |
CN110682293A (en) * | 2019-10-24 | 2020-01-14 | 广东拓斯达科技股份有限公司 | Robot arm correction method, robot arm correction device, robot arm controller and storage medium |
CN110640745B (en) * | 2019-11-01 | 2021-06-22 | 苏州大学 | Vision-based robot automatic calibration method, equipment and storage medium |
CN115552476A (en) * | 2020-02-06 | 2022-12-30 | 伯克希尔格雷营业股份有限公司 | System and method for camera calibration using a reference on an articulated arm of a programmable motion device whose position is unknown |
CN111452048B (en) * | 2020-04-09 | 2023-06-02 | 亚新科国际铸造(山西)有限公司 | Calibration method and device for relative spatial position relation of multiple robots |
JP2022100627A (en) * | 2020-12-24 | 2022-07-06 | セイコーエプソン株式会社 | Method of determining control position of robot, and robot system |
US12070287B2 (en) | 2020-12-30 | 2024-08-27 | Cilag Gmbh International | Robotic surgical tools having dual articulation drives |
US12059170B2 (en) | 2020-12-30 | 2024-08-13 | Cilag Gmbh International | Surgical tool with tool-based translation and lock for the same |
US11813746B2 (en) | 2020-12-30 | 2023-11-14 | Cilag Gmbh International | Dual driving pinion crosscheck |
EP4108390B1 (en) * | 2021-06-25 | 2023-08-02 | Sick Ag | Method for secure operation of a movable machine part |
CN114310881B (en) * | 2021-12-23 | 2024-09-13 | 中国科学院自动化研究所 | Calibration method and system of mechanical arm quick-change device and electronic equipment |
WO2023133254A1 (en) * | 2022-01-06 | 2023-07-13 | Liberty Reach Inc. | Method and system for registering a 3d sensor with an autonomous manipulator |
CN117310200B (en) * | 2023-11-28 | 2024-02-06 | 成都瀚辰光翼生物工程有限公司 | Pipetting point calibration method and device, pipetting control equipment and readable storage medium |
CN118219265A (en) * | 2024-04-09 | 2024-06-21 | 北京纳通医用机器人科技有限公司 | Method, device, equipment and storage medium for determining working space |
Family Cites Families (20)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1586421B1 (en) | 2004-04-16 | 2008-03-05 | Honda Research Institute Europe GmbH | Self-calibrating orienting system for a manipulating device |
JP3946716B2 (en) * | 2004-07-28 | 2007-07-18 | ファナック株式会社 | Method and apparatus for recalibrating a three-dimensional visual sensor in a robot system |
US9316743B2 (en) * | 2004-11-09 | 2016-04-19 | Biosensors International Group, Ltd. | System and method for radioactive emission measurement |
CN101390027A (en) * | 2006-02-23 | 2009-03-18 | Abb公司 | A system for controlling the position and orientation of an object in dependence on received forces and torques from a user |
US7457686B2 (en) * | 2007-03-14 | 2008-11-25 | Ortho—Clinical Diagnostics, Inc. | Robotic arm alignment |
JP5417343B2 (en) * | 2007-12-27 | 2014-02-12 | ラム リサーチ コーポレーション | System and method for calibrating an end effector alignment using at least one light source |
EP2269783A1 (en) * | 2009-06-30 | 2011-01-05 | Leica Geosystems AG | Calibration method for a measuring system |
US8543240B2 (en) * | 2009-11-13 | 2013-09-24 | Intuitive Surgical Operations, Inc. | Master finger tracking device and method of use in a minimally invasive surgical system |
US8935003B2 (en) * | 2010-09-21 | 2015-01-13 | Intuitive Surgical Operations | Method and system for hand presence detection in a minimally invasive surgical system |
US8996173B2 (en) * | 2010-09-21 | 2015-03-31 | Intuitive Surgical Operations, Inc. | Method and apparatus for hand gesture control in a minimally invasive surgical system |
US8630314B2 (en) * | 2010-01-11 | 2014-01-14 | Faro Technologies, Inc. | Method and apparatus for synchronizing measurements taken by multiple metrology devices |
US9393694B2 (en) * | 2010-05-14 | 2016-07-19 | Cognex Corporation | System and method for robust calibration between a machine vision system and a robot |
CN102087096B (en) | 2010-11-12 | 2012-07-25 | 浙江大学 | Automatic calibration apparatus for robot tool coordinate system based on laser tracking measurement and method thereof |
US8958911B2 (en) * | 2012-02-29 | 2015-02-17 | Irobot Corporation | Mobile robot |
US9605952B2 (en) * | 2012-03-08 | 2017-03-28 | Quality Manufacturing Inc. | Touch sensitive robotic gripper |
CN102922521B (en) * | 2012-08-07 | 2015-09-09 | 中国科学技术大学 | A kind of mechanical arm system based on stereoscopic vision servo and real-time calibration method thereof |
CN103115615B (en) | 2013-01-28 | 2015-01-21 | 山东科技大学 | Fully-automatic calibration method for hand-eye robot based on exponential product model |
US9789462B2 (en) * | 2013-06-25 | 2017-10-17 | The Boeing Company | Apparatuses and methods for accurate structure marking and marking-assisted structure locating |
US20150065916A1 (en) * | 2013-08-29 | 2015-03-05 | Vasculogic, Llc | Fully automated vascular imaging and access system |
US9211643B1 (en) * | 2014-06-25 | 2015-12-15 | Microsoft Technology Licensing, Llc | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system |
-
2014
- 2014-06-25 US US14/314,970 patent/US9211643B1/en not_active Expired - Fee Related
-
2015
- 2015-06-22 WO PCT/US2015/036857 patent/WO2015200152A1/en active Application Filing
- 2015-06-22 CN CN201580034128.2A patent/CN106461383B/en active Active
- 2015-06-22 EP EP15736346.6A patent/EP3160690B1/en active Active
- 2015-11-10 US US14/937,061 patent/US10052766B2/en active Active
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10290118B2 (en) | 2015-08-06 | 2019-05-14 | Cognex Corporation | System and method for tying together machine vision coordinate spaces in a guided assembly environment |
US11049280B2 (en) | 2015-08-06 | 2021-06-29 | Cognex Corporation | System and method for tying together machine vision coordinate spaces in a guided assembly environment |
US10796151B2 (en) * | 2015-08-28 | 2020-10-06 | Imperial College Of Science, Technology And Medicine | Mapping a space using a multi-directional camera |
US20180189565A1 (en) * | 2015-08-28 | 2018-07-05 | Imperial College Of Science, Technology And Medicine | Mapping a space using a multi-directional camera |
CN106839985A (en) * | 2017-03-22 | 2017-06-13 | 常熟理工学院 | The automatic identification localization method of unmanned overhead traveling crane coil of strip crawl |
CN108225180A (en) * | 2017-12-31 | 2018-06-29 | 芜湖哈特机器人产业技术研究院有限公司 | A kind of application alignment system and method |
WO2019152360A1 (en) * | 2018-01-30 | 2019-08-08 | Brooks Automation, Inc. | Automatic wafer centering method and apparatus |
US11088004B2 (en) | 2018-01-30 | 2021-08-10 | Brooks Automation, Inc. | Automatic wafer centering method and apparatus |
US11764093B2 (en) | 2018-01-30 | 2023-09-19 | Brooks Automation Us, Llc | Automatic wafer centering method and apparatus |
WO2019202482A1 (en) * | 2018-04-18 | 2019-10-24 | Pirelli Tyre S.P.A. | Method for controlling a robotized arm |
WO2020233777A1 (en) * | 2019-05-17 | 2020-11-26 | Telefonaktiebolaget Lm Ericsson (Publ) | Technique for parameter conversion between a robotic device and a controller for the robotic device |
US20220219322A1 (en) * | 2019-05-17 | 2022-07-14 | Telefonaktiebolaget Lm Ericsson (Publ) | Technique for Parameter Conversion Between a Robotic Device and a Controller for the Robotic Device |
US11433542B2 (en) * | 2019-06-05 | 2022-09-06 | Kabushiki Kaisha Toshiba | Calibration detecting apparatus, method, and program |
US20230300319A1 (en) * | 2021-07-23 | 2023-09-21 | Phillip James Haeusler | Automated real-time calibration |
Also Published As
Publication number | Publication date |
---|---|
EP3160690B1 (en) | 2023-12-13 |
US10052766B2 (en) | 2018-08-21 |
EP3160690A1 (en) | 2017-05-03 |
US20160059417A1 (en) | 2016-03-03 |
CN106461383A (en) | 2017-02-22 |
WO2015200152A1 (en) | 2015-12-30 |
US9211643B1 (en) | 2015-12-15 |
CN106461383B (en) | 2019-07-16 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10052766B2 (en) | Automatic in-situ registration and calibration of robotic arm/sensor/workspace system | |
US9279662B2 (en) | Laser scanner | |
CN110136208B (en) | Joint automatic calibration method and device for robot vision servo system | |
Chen et al. | Active sensor planning for multiview vision tasks | |
EP3134866B1 (en) | Depth sensor calibration and per-pixel correction | |
KR102276259B1 (en) | Calibration and operation of vision-based manipulation systems | |
JP5567908B2 (en) | Three-dimensional measuring apparatus, measuring method and program | |
CN103959012B (en) | 6DOF position and orientation determine | |
US9322646B2 (en) | Adaptive mechanism control and scanner positioning for improved three-dimensional laser scanning | |
CN103020952A (en) | Information processing apparatus and information processing method | |
EP3435028B1 (en) | Live metrology of an object during manufacturing or other operations | |
Forouher et al. | Sensor fusion of depth camera and ultrasound data for obstacle detection and robot navigation | |
da Silva Neto et al. | Comparison of RGB-D sensors for 3D reconstruction | |
Wang et al. | Modelling and calibration of the laser beam-scanning triangulation measurement system | |
CN101013065A (en) | Pixel frequency based star sensor high accuracy calibration method | |
Cheng et al. | 3D radar and camera co-calibration: A flexible and accurate method for target-based extrinsic calibration | |
Xu et al. | A flexible 3D point reconstruction with homologous laser point array and monocular vision | |
Axelrod et al. | Improving hand-eye calibration for robotic grasping and manipulation | |
CN110866951A (en) | Correction method for inclination of optical axis of monocular camera | |
Kita et al. | Robot and 3D-sensor calibration using a planar part of a robot hand | |
WO2024069886A1 (en) | Calculation device, calculation system, robot system, calculation method and computer program | |
Zhang et al. | An efficient method for dynamic calibration and 3D reconstruction using homographic transformation | |
Li et al. | Research on dynamic stability precision test of artillery based on dual-target and CCD | |
Ahmadabadian | Photogrammetric multi-view stereo and imaging network design | |
Bai et al. | A comparison of two different approaches to camera calibration in LSDM photogrammetric systems |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: MICROSOFT CORPORATION, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:SHIRAKYAN, GRIGOR;REVOW, MICHAEL;JALOBEANU, MIHAI;AND OTHERS;SIGNING DATES FROM 20140620 TO 20140625;REEL/FRAME:033179/0491 |
|
AS | Assignment |
Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:034747/0417 Effective date: 20141014 Owner name: MICROSOFT TECHNOLOGY LICENSING, LLC, WASHINGTON Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MICROSOFT CORPORATION;REEL/FRAME:039025/0454 Effective date: 20141014 |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 4TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1551); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 4 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20231215 |