WO2022061673A1 - Calibration method and device for robot - Google Patents

Calibration method and device for robot

Info

Publication number
WO2022061673A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
robot
calibration
coordinate system
coordinates
Prior art date
Application number
PCT/CN2020/117538
Other languages
French (fr)
Chinese (zh)
Inventor
华韬
李浩
席宝时
吴剑强
Original Assignee
西门子(中国)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 西门子(中国)有限公司 filed Critical 西门子(中国)有限公司
Priority to PCT/CN2020/117538 priority Critical patent/WO2022061673A1/en
Priority to CN202080105042.5A priority patent/CN116157837A/en
Publication of WO2022061673A1 publication Critical patent/WO2022061673A1/en

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present disclosure relates to the field of machine vision, and more particularly, to a calibration method, apparatus, computing device, computer-readable storage medium, and program product for a robot.
  • Robots can be flexibly combined with different equipment to meet demanding production process requirements. They make it easy to build multi-machine linked automated production lines and digital factory layouts, saving manpower and improving enterprise efficiency.
  • Robot calibration is one of the key technologies in the process of robot operation.
  • the existing calibration methods require users to have considerable professional knowledge and to spend a lot of time and energy calibrating the robot in order to determine whether the robot operates as intended, which raises the user's operating threshold.
  • the existing calibration methods require many manual operations; calibration efficiency is low, the operation process is complicated, and the result depends heavily on the operator's subjective judgment and experience.
  • a first embodiment of the present disclosure proposes a calibration method for a robot. The calibration method includes performing a camera calibration process, which includes the steps of: capturing an image of an environment and forming a three-dimensional virtual object of the environment; placing a calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment; capturing an image of the calibration object using a first camera through relative movement between the first camera, which is associated with the robot, and the calibration object, wherein the first camera is a 2D camera; and determining parameters of the first camera used for calibration based on the image of the calibration object captured by the first camera.
  • in this embodiment, augmented reality (AR) technology can be used to guide the user through camera calibration, making the operation intuitive and convenient. This advantageously reduces operational complexity and improves accuracy, effectively lowering the user's operating threshold so that users without professional knowledge can also easily perform camera calibration.
  • a second embodiment of the present disclosure provides a calibration device for a robot. The calibration device includes a camera calibration unit, which includes: an environment capture module configured to capture an image of an environment and form a three-dimensional virtual object of the environment; a placement module configured to place a calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment; a first camera capture module configured to capture an image of the calibration object using a first camera through relative movement between the first camera, which is associated with the robot, and the calibration object, wherein the first camera is a 2D camera; and a parameter determination module configured to determine parameters of the first camera for calibration based on the image of the calibration object captured by the first camera.
  • a third embodiment of the present disclosure provides a computing device comprising: a processor; and a memory for storing computer-executable instructions that, when executed, cause the processor to perform the method described in the first embodiment.
  • a fourth embodiment of the present disclosure proposes a computer-readable storage medium having stored thereon computer-executable instructions for executing the method described in the first embodiment.
  • a fifth embodiment of the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to execute the method described in the first embodiment.
  • FIG. 1 illustrates an exemplary scenario in which embodiments of the present disclosure may be applied.
  • FIG. 2 illustrates another exemplary scenario in which embodiments of the present disclosure may be applied.
  • FIG. 3 shows a flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
  • FIG. 4 illustrates an exemplary arrangement of a camera calibration process that may be used with a robot, according to embodiments of the present disclosure.
  • FIG. 5 shows another flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
  • FIG. 6 illustrates an exemplary calibration artifact for TCP calibration according to embodiments of the present disclosure.
  • FIG. 7 illustrates an exemplary arrangement of a TCP calibration process that may be used with a robot, according to embodiments of the present disclosure.
  • FIG. 8 shows another flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
  • FIG. 9 illustrates an exemplary arrangement of a hand-eye calibration process that may be used in a robot, according to embodiments of the present disclosure.
  • FIG. 10 shows a block diagram of an exemplary calibration apparatus for a robot according to an embodiment of the present disclosure.
  • FIG. 11 shows a block diagram of an exemplary computing device for robotic calibration according to an embodiment of the present disclosure.
  • the terms “comprising”, “including” and similar terms are open-ended terms, i.e., “including but not limited to”, meaning that other content may also be included.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment” and so on.
  • the base coordinate system of the robot is based on the robot installation base and is used to describe the movement of the robot body.
  • the tool coordinate system (TCS) is a coordinate system established with the tool center point (TCP) as its origin. Before a tool is assembled, the default TCP is the center point of the robot end flange; after the tool is assembled, the robot TCP moves to the end of the tool.
  • the camera coordinate system is based on the camera and is used to describe the movement of objects.
  • FIG. 1 illustrates an exemplary scenario 100 in which embodiments of the present disclosure may be applied.
  • the scene 100 includes a robot 101 and an associated first camera 102, which is affixed to the robot 101.
  • the robot 101 may be an industrial-oriented multi-joint manipulator or a multi-degree-of-freedom robotic device.
  • the robot 101 includes a movable end 103 on which the first camera 102 can be fixed so that it moves together with the end 103 of the robot 101. That is, in this scene 100, since the end of the robot 101 and the first camera 102 are fixed together, the configuration is called eye-in-hand.
  • the end 103 of the robot 101 may also have a tool portion 104 for assembling (e.g., by suction, insertion, etc.) tools or components (e.g., welding guns, nozzles, bolts, etc.) for processing the target workpiece.
  • a target object 105 is also arranged in the scene 100.
  • the target object 105 may be an actual object to be processed (e.g., a target workpiece) or a calibration object (e.g., a calibration plate).
  • the first camera 102 is a 2D camera, and a second camera 106 is also arranged in the scene 100 .
  • the second camera 106 is a 3D camera, which may be a binocular camera, a structured light camera, or any camera capable of returning depth information.
  • the second camera 106 may be disposed above the robot 101 to capture and track the movement of the robot 101 (e.g., of its end).
  • the user 107 may carry (e.g., hand-held, worn on the head, etc.) an AR device 108, which may include, but is not limited to, a smartphone, a tablet, a teach pendant, or a head-mounted device.
  • through a plurality of cameras arranged on or outside the AR device 108, the AR device 108 can photograph the real environment, or objects in the environment, from different positions and different angles within its field of view 109 to form a three-dimensional virtual object.
  • the AR device 108 may have a display to show, in the virtual environment, three-dimensional virtual objects corresponding to the real environment or to objects in the environment.
  • FIG. 2 illustrates an exemplary scenario 200 in which embodiments of the present disclosure may be applied.
  • Scene 200 is similar to scene 100 except that the associated first camera 102 is fixed outside the robot 101. That is, in this scene 200, the first camera 102 is located outside the robot 101 and does not move with the end 103 of the robot 101; this configuration is referred to as eye-to-hand.
  • FIG. 3 shows a flowchart of a calibration method 300 for a robot according to an embodiment of the present disclosure, and FIG. 4 shows an exemplary arrangement 400 of a camera calibration process that may be used for a robot according to an embodiment of the present disclosure.
  • the method 300 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and the exemplary scenario 200 shown in FIG. 2 (eye-to-hand). The method 300 is described below in conjunction with FIGS. 3 and 4.
  • method 300 includes steps 301-304 for performing a camera calibration process. Additionally, the method 300 may further include steps for performing a TCP calibration process (see FIG. 5) and for performing a hand-eye calibration process (see FIG. 8).
  • in the process of image measurement and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of a space object and its corresponding point in the image, a geometric model of camera imaging must be established. The parameters of this geometric model are the camera parameters (for example, the camera's intrinsic and extrinsic parameters and distortion parameters). These parameters are usually obtained through experiments and calculations, and this process of solving for the parameters is called camera calibration.
  • the method 300 begins at step 301 by capturing an image of the environment and forming a three-dimensional virtual object of the environment.
  • step 301 may include: photographing the environment from different positions and different angles, thereby obtaining at least two images for each scene; determining the depth information of each pixel in the images; and forming the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images.
  • an AR device held by the user 107 may be equipped with or connected to multiple cameras that capture images of the real environment from different locations and at different angles. Taking two cameras as an example, when the two cameras photograph the same scene from different positions in the real environment, the different shooting angles cause the same scene point to land at different pixel positions in the two images, and this disparity can be used to triangulate the depth of each pixel in a two-dimensional image. Therefore, although the image captured by a single camera is two-dimensional, the three-dimensional information required to form a three-dimensional virtual object of the real environment can be obtained by supplementing it with the depth information obtained by triangulation.
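  • As an illustrative, non-authoritative sketch of this triangulation step, the following Python code recovers the depth of one matched pixel pair with OpenCV; the two 3×4 projection matrices, the baseline, and the pixel coordinates are placeholder assumptions, not values from the disclosure:

```python
import cv2
import numpy as np

# Hedged sketch: recover a 3D point from two views of the same scene point,
# assuming the two cameras' 3x4 projection matrices are already known
# (both matrices below are placeholders).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                   # first camera
P2 = np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])   # second camera, 10 cm baseline

# Matched coordinates of one scene point in the two images (2xN arrays,
# here in normalized image coordinates).
pts1 = np.array([[0.12], [0.05]])
pts2 = np.array([[0.02], [0.05]])

pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1 result
xyz = (pts4d[:3] / pts4d[3]).ravel()
print(xyz)  # the z component is the triangulated depth of the pixel
```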
  • in step 302, a calibration object is placed in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment.
  • the target area can be a specific area on the workbench.
  • step 302 may include: detecting visual markers disposed in the environment from captured images of the environment; determining a target area based on the detected visual markers; capturing an image of the calibration object and forming a 3D virtual object of the calibration object; superimposing the 3D virtual object of the calibration object on the 3D virtual object of the environment for display as an augmented reality image; and moving the calibration object so that the 3D virtual object of the calibration object is located in the 3D virtual object of the target area.
  • a target area in the environment can be defined by a number of identifiable visual markers, and by detecting the visual markers, the target area can be quickly determined.
  • a plurality of visual markers may be arranged so that they fall within the field of view of the first camera and can be captured by the first camera.
  • the AR device 108 held by the user 107 can capture images of the calibration object through multiple cameras and form a three-dimensional virtual object of the calibration object.
  • the 3D virtual object of the calibration object is superimposed on the 3D virtual object of the environment to be displayed as an augmented reality image (e.g., a real-time tracking display), and the user 107 or the robot 101 can be guided to move the calibration object by, for example, voice, text, or images. The AR device 108 may prompt the user 107 or guide the robot 101 to continue moving the calibration object until the 3D virtual object of the calibration object is located in the 3D virtual object of the target area.
  • an exemplary arrangement 400 includes a plurality of visual markers 402 (e.g., four are illustrated) that define a target area 401.
  • the visual marker 402 may have a characteristic color (e.g., a color different from the rest of the environment) and/or a characteristic pattern (e.g., a specific graphic code such as a barcode or QR code).
  • the AR device 108 may capture an image of the environment through a camera and identify the visual marker 402 from the image, thereby determining the target area 401 defined by the visual marker 402 .
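  • As a hedged sketch of such marker detection, a QR-code style marker of the kind mentioned above can be located with OpenCV's built-in detector (the image path is a placeholder); the corner pixels of several detected markers can then delimit the target area 401:

```python
import cv2

# Minimal sketch: locate a QR-code visual marker in a captured environment image.
img = cv2.imread("environment.png")  # placeholder path
detector = cv2.QRCodeDetector()
data, points, _ = detector.detectAndDecode(img)
if points is not None:
    # points contains the four corner pixels of the detected code; the target
    # area can be taken as the region spanned by several such markers.
    print("payload:", data)
    print("corners:", points.reshape(-1, 2))
```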
  • the example arrangement 400 also includes a calibration object 403 (e.g., a calibration plate) that has been placed in the target area 401.
  • the calibration object 403 may have a reference pattern attached to the surface for camera calibration, the reference pattern including, but not limited to, a checkerboard pattern, a circular array pattern, a non-circular array pattern, and the like.
  • in step 303, an image of the calibration object is captured using the first camera through relative movement between the first camera associated with the robot and the calibration object, wherein the first camera is a 2D camera.
  • step 303 may include: obtaining a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the three-dimensional virtual object of the environment; moving the calibration object so that the three-dimensional virtual object of the calibration object is located at each of the plurality of virtual positions in turn; and using the first camera to capture images of the calibration object at the plurality of actual positions, respectively.
  • the calibration object 403 needs to be placed at suitable positions in the target area 401 so that the corner points on the calibration object 403 cover the field of view of the first camera 102; the correlation between the camera parameters and the different poses of the calibration object 403 affects the determination of the camera parameters.
  • multiple actual positions for placing the calibration object 403 in the target area 401 can therefore be calculated, and the AR device 108 can guide the user 107 or the robot 101 to place the calibration object 403 at the calculated actual positions.
  • whether the calibration object 403 has been placed at each of the actual locations can be determined by checking whether its three-dimensional virtual object is placed at the corresponding virtual location, and the first camera 102 can then capture images of the calibration object 403 at each of the actual locations.
  • when the first camera is fixed on the robot, step 303 may instead be performed by moving the first camera around the calibration object to a plurality of actual positions and using the first camera to capture images of the calibration object at those positions, respectively.
  • in step 304, parameters of the first camera for calibration are determined based on the images of the calibration object captured by the first camera.
  • the parameters of the first camera may include at least one of intrinsic parameters, extrinsic parameters, and distortion parameters.
  • the classical Zhang Zhengyou (Zhang's) calibration method can be used to determine the parameters of the first camera. With a checkerboard having p × q grid points placed in n different poses, each placement contributes its own extrinsic parameters, and in total n × p × q constraint equations are obtained, from which the camera parameters can be solved.
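  • The following is a minimal sketch of this checkerboard procedure using OpenCV's implementation of Zhang's method; the image path pattern, the inner-corner counts p and q, and the square size are illustrative assumptions:

```python
import glob
import cv2
import numpy as np

# Hedged sketch of Zhang's method with OpenCV; the image path pattern,
# inner-corner counts (p x q) and square size are illustrative assumptions.
p, q, square = 9, 6, 0.025  # inner corners per row/column, square size in meters

# 3D corner positions in the calibration-board frame (the board is the z = 0 plane).
objp = np.zeros((p * q, 3), np.float32)
objp[:, :2] = np.mgrid[0:p, 0:q].T.reshape(-1, 2) * square

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, (p, q))
    if found:
        obj_points.append(objp)
        img_points.append(corners)
        image_size = gray.shape[::-1]  # (width, height)

# Each of the n placements adds p*q corner constraints; solve for the
# intrinsic matrix K, distortion coefficients, and per-view extrinsics.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("reprojection RMS error:", rms)
```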
  • the parameters of the first camera may also be determined according to any applicable existing camera calibration method, which will not be described in detail.
  • the camera calibration process according to the method 300 can provide guidance to the user based on AR technology during camera calibration, so that the operation is intuitive and convenient. This advantageously reduces operational complexity and improves accuracy, effectively lowering the user's operating threshold so that users without professional knowledge can easily calibrate the camera.
  • FIG. 5 shows a flowchart of a calibration method 500 for a robot according to an embodiment of the present disclosure, FIG. 6 shows an exemplary calibration workpiece 600 for TCP calibration according to an embodiment of the present disclosure, and FIG. 7 shows an exemplary arrangement 700 of a TCP calibration process that may be used with a robot in accordance with embodiments of the present disclosure.
  • the method 500 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and the exemplary scenario 200 shown in FIG. 2 (eye-to-hand).
  • the method 500 is described below in conjunction with FIGS. 5, 6 and 7.
  • method 500 includes steps 501-504 for performing a TCP calibration process.
  • Method 500 may be included in method 300 or performed independently of method 300.
  • the tool portion 104 on the end 103 of the robot 101 can be equipped with tools to handle the target workpiece.
  • a coordinate system is bound (defined) to the tool, namely the tool coordinate system TCS, and its origin is the TCP.
  • before a tool is assembled, the TCP is located on the end 103 of the robot 101, e.g., at the center point of the flange of the tool portion 104.
  • the coordinates of the center point of the robot end flange can be obtained from the robot controller.
  • the coordinates of the center point of the flange at the end of the robot can be obtained from the teach pendant, where the teach pendant is a handheld device for manual manipulation, programming, parameter configuration and monitoring of the robot.
  • after the tool is assembled, the robot TCP (e.g., at the tool end) needs to be calibrated, and the origin of the tool coordinate system TCS is moved from the default TCP to the robot TCP.
  • the method 500 begins at step 501 by using a second camera to separately capture a plurality of images of a first workpiece placed in a plurality of poses, wherein the plurality of poses have different spatial inclination angles and the first workpiece is mounted on the end of the robot.
  • the second camera is a 3D camera and is fixed outside the robot.
  • the first workpiece may be, for example, a calibration workpiece 600 as shown in FIGS. 6 and 7 for simulating a tool in actual use.
  • FIG. 6 shows a side view 601 and a top view 602 of a calibration workpiece 600, which has a top end 611 and an end 612.
  • the calibration workpiece 600 may be mounted to the tool portion 104 on the distal end 103 of the robot 101 through the top end 611.
  • an exemplary arrangement 700 includes the second camera 106 and the calibration workpiece 600 placed in a number of different poses (e.g., the four poses 701, 702, 703, 704 in FIG. 7), each pose having a different spatial inclination angle.
  • in step 502, based on the captured images, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system of the second camera under the plurality of poses are determined.
  • the default TCP is the center point of the robot end flange, while after assembling the tool/workpiece, the robot TCP (e.g., on the end of the tool/workpiece) needs to be calibrated.
  • the position of the workpiece end in the 3D camera coordinate system is determined based on the 3D camera.
  • the number of poses may be at least four.
  • step 502 may include: detecting marker points arranged on the first workpiece from the captured plurality of images to obtain a plurality of marker point coordinates of the marker points in the 3D camera coordinate system; and determining, based on the plurality of marker point coordinates, the coordinates of the end of the first workpiece in the 3D camera coordinate system.
  • the calibration workpiece 600 may have one or more marking points 613 (e.g., four in FIG. 6), which may be arranged, for example, so that the calibration workpiece 600, or at least a portion of it (e.g., including the marking points 613 and the end 612), is covered by the field of view of the second camera 106.
  • a plurality of marker point coordinates in the 3D camera coordinate system under the different poses can thereby be obtained, and from them the corresponding workpiece end coordinates of the end 612 in the 3D camera coordinate system can be derived.
  • the marker points may have characteristic colors and/or characteristic patterns.
  • marker points 613 may have a characteristic color (e.g., a color different from the rest of the calibration workpiece 600) and/or a characteristic pattern (e.g., a specific graphic code such as a barcode or QR code) for identification and detection.
  • in step 503, a plurality of robot end coordinates of the end of the robot in the base coordinate system of the robot under the plurality of poses are obtained.
  • a plurality of robot end coordinates in the base coordinate system of the robot under different poses can be obtained from the robot controller or the teach pendant.
  • in step 504, based on the multiple workpiece end coordinates and the multiple robot end coordinates, the coordinates of the robot TCP to be calibrated in the robot's base coordinate system under one pose of the robot are determined, as is the relative relationship between the robot TCP and the default TCP located on the end of the robot.
  • An exemplary TCP calibration process is described below in conjunction with FIG. 7 .
  • the robot 101 assumes four different poses; the second camera 106 recognizes the marker points 613 on the calibration workpiece 600, obtains the coordinates of the marker points 613 in the 3D camera coordinate system, and from them deduces the coordinates of the end 612 of the calibration workpiece 600 in the 3D camera coordinate system, while the coordinates of the center point of the end flange of the robot 101 are obtained from, for example, the robot controller.
  • the following coordinates can be obtained:
  • Pose 1: robot end coordinates in the robot base coordinate system (x1, y1, z1); workpiece end coordinates in the 3D camera coordinate system (mx1, my1, mz1).
  • Pose 2: robot end coordinates (x2, y2, z2); workpiece end coordinates (mx2, my2, mz2).
  • Pose 3: robot end coordinates (x3, y3, z3); workpiece end coordinates (mx3, my3, mz3).
  • Pose 4: robot end coordinates (x4, y4, z4); workpiece end coordinates (mx4, my4, mz4).
  • Translating poses 2-4 so that the workpiece end coincides with its position (mx1, my1, mz1) in the first pose gives the equivalent coordinates:
  • Pose 1: robot end coordinates (x1, y1, z1); workpiece end coordinates (mx1, my1, mz1).
  • Pose 2: robot end coordinates (x2-mx2+mx1, y2-my2+my1, z2-mz2+mz1); workpiece end coordinates (mx1, my1, mz1).
  • Pose 3: robot end coordinates (x3-mx3+mx1, y3-my3+my1, z3-mz3+mz1); workpiece end coordinates (mx1, my1, mz1).
  • Pose 4: robot end coordinates (x4-mx4+mx1, y4-my4+my1, z4-mz4+mz1); workpiece end coordinates (mx1, my1, mz1).
  • the coordinates (x0, y0, z0) of the robot TCP in the robot base coordinate system lie at the center of a sphere, and the default TCP points (the coordinates of the center of the end flange, obtained from the teach pendant) lie on its surface, so the coordinates (x0, y0, z0) of the robot TCP can be solved from the following equations (1)-(4), where r is the unknown sphere radius:

    (x1 − x0)² + (y1 − y0)² + (z1 − z0)² = r²  (1)
    (x2 − mx2 + mx1 − x0)² + (y2 − my2 + my1 − y0)² + (z2 − mz2 + mz1 − z0)² = r²  (2)
    (x3 − mx3 + mx1 − x0)² + (y3 − my3 + my1 − y0)² + (z3 − mz3 + mz1 − z0)² = r²  (3)
    (x4 − mx4 + mx1 − x0)² + (y4 − my4 + my1 − y0)² + (z4 − mz4 + mz1 − z0)² = r²  (4)
  • from (x0, y0, z0) and the current pose of the robot end, the TCP value (x_tcp, y_tcp, z_tcp, rx_tcp, ry_tcp, rz_tcp) can be obtained to determine the relative relationship between the robot TCP and the default TCP located on the end of the robot.
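  • A minimal numerical sketch of solving equations (1)-(4) by linear least squares follows; the four input points are placeholders standing in for the translated flange-center coordinates above, not measured data:

```python
import numpy as np

def fit_sphere_center(pts):
    """Least-squares sphere center through the given 3D points.

    Expanding (xi-x0)^2 + (yi-y0)^2 + (zi-z0)^2 = r^2 gives the linear system
    2*xi*x0 + 2*yi*y0 + 2*zi*z0 - c = xi^2 + yi^2 + zi^2,
    with c = x0^2 + y0^2 + z0^2 - r^2.
    """
    pts = np.asarray(pts, dtype=float)
    A = np.hstack([2.0 * pts, -np.ones((len(pts), 1))])
    b = (pts ** 2).sum(axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    return sol[:3]  # (x0, y0, z0): the robot TCP in the base coordinate system

# Placeholder data standing in for the four translated robot end coordinates,
# i.e. (x1, y1, z1), (x2-mx2+mx1, ...), ..., in the base coordinate system.
default_tcp_pts = [[0.50, 0.10, 0.40],
                   [0.52, 0.13, 0.38],
                   [0.47, 0.12, 0.39],
                   [0.49, 0.08, 0.41]]
print(fit_sphere_center(default_tcp_pts))
```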
  • the traditional TCP calibration process usually uses a 3-, 4- or 5-point method, in which the user must move the TCP to a reference point (for example, a fixed point placed within the robot's workspace) 3, 4 or 5 times using different poses so that the TCP coincides with the reference point.
  • Such traditional TCP calibration methods require the manual participation of users, require users to be familiar with operations and master professional knowledge, and have disadvantages such as slow calibration speed and insufficient calibration accuracy.
  • in contrast, the TCP calibration here can be performed automatically without manual participation: the operational complexity is effectively reduced, the TCP calibration can be completed quickly, and errors caused by manual participation are avoided, which improves the calibration accuracy.
  • FIG. 8 shows another flowchart of a calibration method 800 for a robot according to an embodiment of the present disclosure, and FIG. 9 shows an exemplary arrangement 900 of a hand-eye calibration process that may be used for a robot according to an embodiment of the present disclosure.
  • the method 800 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and the exemplary scenario 200 shown in FIG. 2 (eye-to-hand).
  • the method 800 is described below in conjunction with FIGS. 8 and 9.
  • method 800 includes steps 801-805 for performing a hand-eye calibration process.
  • Method 800 may be included in method 300 or performed independently of method 300.
  • the purpose of hand-eye calibration is to obtain the relationship between the robot coordinate system and the camera coordinate system, and finally transfer the result of visual recognition to the robot coordinate system.
  • hand-eye calibration can take two forms: if the camera and the robot end are fixed together, the configuration is called eye-in-hand; if the camera is fixed outside the robot, it is called eye-to-hand.
  • the method 800 begins at step 801 by capturing a first target image of a second workpiece to be processed by the robot using a first camera, and capturing a second target image of the second workpiece using a second camera.
  • the second workpiece may be, for example, a target workpiece 901 to be processed by the robot 101 as shown in FIG. 9 .
  • a first target image of the target workpiece 901 may be captured by the first camera 102 and a second target image of the target workpiece 901 may be captured by the second camera 106 .
  • in step 802, a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera is determined based on the first target image and the second target image.
  • in this way, the transformation relationship between the two camera coordinate systems can be determined, that is, the mapping between the coordinates of points in the first target image and the coordinates of the corresponding points in the second target image.
  • step 802 may include: detecting a plurality of feature points arranged on the second workpiece from the first target image and the second target image, so as to obtain a first plurality of feature coordinates of the feature points in the 2D camera coordinate system and a second plurality of feature coordinates in the 3D camera coordinate system; and determining the first transformation relationship using the first plurality of feature coordinates and the second plurality of feature coordinates.
  • the number of feature points may be at least four.
  • the coordinates in the 2D camera coordinate system of the first camera 102 may be calculated from the 2D information in the target image based on the camera parameters obtained after camera calibration of the first camera 102 (the camera calibration may be, for example, the camera calibration process described herein or any other suitable camera calibration process).
  • a plurality of feature points 902 (e.g., four in the figure) may be arranged on the target workpiece 901, and the feature points 902 may have, for example, a specific shape, size, or mark so that they can be recognized and detected from the captured images.
  • the target workpiece 901 may be a PCB, and the feature points 902 may be CAD-marked screw holes arranged on the PCB.
  • the first camera 102 and the second camera 106 can each identify the feature points 902 from their captured target images, yielding the following coordinates of the feature points 902 in the 2D camera coordinate system of the first camera 102 and the 3D camera coordinate system of the second camera 106:
  • First feature point 902: 3D camera coordinate system coordinates (x1, y1, z1); 2D camera coordinate system coordinates (X1, Y1, Z1).
  • Second feature point 902: 3D camera coordinate system coordinates (x2, y2, z2); 2D camera coordinate system coordinates (X2, Y2, Z2).
  • Third feature point 902: 3D camera coordinate system coordinates (x3, y3, z3); 2D camera coordinate system coordinates (X3, Y3, Z3).
  • Fourth feature point 902: 3D camera coordinate system coordinates (x4, y4, z4); 2D camera coordinate system coordinates (X4, Y4, Z4).
  • from these point pairs, the homography matrix H1 between the two camera coordinate systems can be obtained, that is, the first transformation relationship between the 2D camera coordinate system of the first camera 102 and the 3D camera coordinate system of the second camera 106 is determined.
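  • As one possible realization, under the assumption that H1 is modeled as a 4×4 affine homography and at least four non-coplanar feature points are available (all coordinates below are placeholders), H1 can be estimated by least squares:

```python
import numpy as np

def fit_affine_3d(src, dst):
    """Least-squares 4x4 affine transform T with dst ~ T @ src (homogeneous).

    A sketch only: four non-coplanar point pairs determine T exactly;
    more pairs over-determine it and average out measurement noise.
    """
    src = np.asarray(src, dtype=float)
    dst = np.asarray(dst, dtype=float)
    src_h = np.hstack([src, np.ones((len(src), 1))])   # N x 4
    M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)    # 4 x 3
    T = np.eye(4)
    T[:3, :] = M.T
    return T

# Hypothetical feature-point coordinates from the two cameras (meters).
pts_2dcam = [[0.10, 0.20, 0.90], [0.30, 0.22, 0.91],
             [0.12, 0.40, 0.88], [0.31, 0.41, 0.92]]
pts_3dcam = [[0.60, 0.15, 1.10], [0.80, 0.18, 1.12],
             [0.62, 0.35, 1.08], [0.81, 0.37, 1.13]]
H1 = fit_affine_3d(pts_2dcam, pts_3dcam)  # maps 2D-camera coords to 3D-camera coords
```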
  • in step 803, the end of the robot is moved to a plurality of positions, and a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot and a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system are determined.
  • for example, the first plurality of tool coordinates of the robot TCP in the robot's base coordinate system and the second plurality of tool coordinates in the 3D camera coordinate system may be determined by capturing images of the end of the robot with the second camera (e.g., using the TCP calibration process described herein or any other applicable TCP calibration process).
  • the number of locations may be at least four.
  • for example, by moving (e.g., translating, or posing in different poses) the end 103 of the robot to multiple positions (e.g., four), the coordinates of the robot TCP in the base coordinate system of the robot 101 and in the camera coordinate system of the second camera 106 are determined at each position.
  • step 803 may include: for at least one of the plurality of positions, translating the end of the robot to the at least one position while keeping the robot in the one pose, and determining at least one tool coordinate of the robot TCP in the base coordinate system of the robot based on the relative relationship between the robot TCP and the default TCP. For example, to simplify the calculation, given the relative relationship between the robot TCP and the default TCP obtained in a certain pose of the robot, the end of the robot can be translated to the position in that pose, and the coordinates of the robot TCP in the robot base coordinate system can then be determined from the coordinates of the default TCP in the robot base coordinate system obtained from the robot controller.
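  • A small sketch of this translation-only shortcut with placeholder numbers: while the orientation is unchanged, the robot TCP is simply the default TCP read from the controller plus the constant offset found during TCP calibration.

```python
import numpy as np

# While the robot keeps one fixed orientation, the TCP offset in the base
# frame is constant; all values are placeholders, not data from the disclosure.
tcp_offset = np.array([0.02, 0.00, 0.15])   # robot TCP minus default TCP
default_tcp = np.array([0.48, 0.11, 0.42])  # flange center from the controller
robot_tcp = default_tcp + tcp_offset        # robot TCP in the base frame
print(robot_tcp)
```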
  • in step 804, a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system is determined using the first plurality of tool coordinates and the second plurality of tool coordinates.
  • if P_3D and P_robot are the coordinates of the same point in the 3D camera coordinate system and the robot base coordinate system, respectively, the homography matrix H2 between the 3D camera coordinate system and the robot base coordinate system can be obtained from these point pairs, that is, the second transformation relationship between the robot's base coordinate system and the 3D camera coordinate system is determined.
  • in step 805, a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system is determined based on the first transformation relationship and the second transformation relationship.
  • if P_robot and P_2D are the coordinates of the same point in the robot base coordinate system and the 2D camera coordinate system, respectively, then (for example, with the conventions P_3D = H1·P_2D and P_3D = H2·P_robot) the homography matrix H2⁻¹·H1 between the robot base coordinate system and the 2D camera coordinate system can be obtained, that is, the third transformation relationship (the hand-eye relationship) between the robot's base coordinate system and the 2D camera coordinate system is determined.
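  • Continuing the earlier sketch (reusing the hypothetical fit_affine_3d helper and H1, with placeholder tool coordinates), H2 can be estimated the same way and composed with H1 to map visual detections into the robot base coordinate system:

```python
# Placeholder tool coordinates of the robot TCP at four end positions.
tcp_in_base  = [[0.40, 0.10, 0.30], [0.60, 0.12, 0.32],
                [0.42, 0.30, 0.28], [0.61, 0.32, 0.33]]
tcp_in_3dcam = [[0.55, 0.20, 1.00], [0.75, 0.22, 1.02],
                [0.57, 0.40, 0.98], [0.76, 0.42, 1.03]]

H2 = fit_affine_3d(tcp_in_base, tcp_in_3dcam)  # base -> 3D camera
H3 = np.linalg.inv(H2) @ H1                    # 2D camera -> base (hand-eye)

# Map a visually detected point (2D camera frame, homogeneous) into the base frame.
p_2d = np.array([0.15, 0.25, 0.90, 1.0])
p_robot = H3 @ p_2d
print(p_robot[:3])
```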
  • in some embodiments, the at least one location may be at or near at least one identifiable feature point.
  • AR techniques as previously described herein may be employed to intuitively and conveniently guide the end of the robot to, or near, at least one recognizable feature point, such as inserting the end 612 of the calibration workpiece 600 into the feature point 902, to further reduce the hand-eye calibration error.
  • in this way, the hand-eye calibration can be performed automatically without manual participation, which effectively reduces the operational complexity; the calculation is simple, so the hand-eye calibration can be completed quickly, and errors caused by manual participation are avoided, improving the calibration accuracy.
  • as shown in FIG. 10, the apparatus 1000 includes a camera calibration unit 1010, a TCP calibration unit 1020, and a hand-eye calibration unit 1030.
  • the apparatus 1000 also includes a communication unit (not shown) to communicate with other external devices (e.g., to receive instructions and data from, and/or transmit them to, external devices).
  • the camera calibration unit 1010 includes an environment capture module 1011, a placement module 1012, a first camera capture module 1013 and a parameter determination module 1014.
  • the environment capture module 1011 is configured to capture images of the environment and form three-dimensional virtual objects of the environment.
  • the environment capture module 1011 may be further configured to: photograph the environment from different positions and different angles so as to obtain at least two images for each scene; determine the depth information of each pixel in the images; and form the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images.
  • the placement module 1012 is configured to place the calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment.
  • the placement module 1012 may be further configured to: detect visual markers disposed in the environment from the captured image of the environment; determine a target area based on the detected visual markers; capture an image of the calibration object and form a 3D virtual object of the calibration object; superimpose the 3D virtual object of the calibration object on the 3D virtual object of the environment for display as an augmented reality image; and move the calibration object so that the 3D virtual object of the calibration object is located in the 3D virtual object of the target area.
  • the visual marker has a characteristic color and/or pattern.
  • the first camera capture module 1013 is configured to capture an image of the calibration object using the first camera through relative movement between the first camera associated with the robot and the calibration object, wherein the first camera is a 2D camera.
  • the first camera capture module 1013 may be further configured to: obtain a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the 3D virtual object of the environment; move the calibration object so that the 3D virtual object of the calibration object is located at each of the plurality of virtual positions; and use the first camera to capture images of the calibration object at the plurality of actual positions, respectively.
  • the parameter determination module 1014 is configured to determine parameters of the first camera for calibration based on the images of the calibration object captured by the first camera.
  • the TCP calibration unit 1020 includes a first workpiece capture module 1021, a workpiece coordinate determination module 1022, a robot coordinate acquisition module 1023 and a TCP coordinate determination module 1024.
  • the first workpiece capture module 1021 is configured to use a second camera to separately capture a plurality of images of a first workpiece placed in a plurality of poses, wherein the first workpiece is mounted on the end of the robot, and the second camera is a 3D camera fixed outside the robot.
  • the workpiece coordinate determination module 1022 is configured to determine, based on the plurality of captured images, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system of the second camera under the plurality of poses.
  • the workpiece coordinate determination module 1022 may be configured to: detect marker points arranged on the first workpiece from the captured plurality of images to obtain a plurality of marker point coordinates of the marker points in the 3D camera coordinate system; and, based on the plurality of marker point coordinates, determine a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system.
  • the marker points have characteristic colors and/or characteristic patterns.
  • the robot coordinate obtaining module 1023 is configured to obtain a plurality of robot end coordinates of the end of the robot in the base coordinate system of the robot under a plurality of poses.
  • the TCP coordinate determination module 1024 is configured to: based on the multiple workpiece end coordinates and the multiple robot end coordinates, determine the coordinates of the robot TCP to be calibrated in the base coordinate system of the robot under one pose of the robot, and determine the relative relationship between the robot TCP and the default TCP located on the end of the robot.
  • the hand-eye calibration unit 1030 includes a second workpiece capture module 1031, a first transformation determination module 1032, a tool coordinate determination module 1033, a second transformation determination module 1034 and a third transformation determination module 1035.
  • the second workpiece capture module 1031 is configured to capture a first target image of a second workpiece to be processed by the robot using the first camera, and to capture a second target image of the second workpiece using the second camera.
  • the first transformation determination module 1032 is configured to determine a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera based on the first target image and the second target image.
  • the first transformation determination module 1032 may be further configured to: detect a plurality of feature points arranged on the second workpiece from the first target image and the second target image to obtain a first plurality of feature coordinates of the feature points in the 2D camera coordinate system and a second plurality of feature coordinates in the 3D camera coordinate system; and determine the first transformation relationship using the first plurality of feature coordinates and the second plurality of feature coordinates.
  • the tool coordinate determination module 1033 is configured to: move the end of the robot to a plurality of positions, determine a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot, and determine a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system.
  • the tool coordinate determination module 1033 may be further configured to: for at least one of the plurality of positions, translate the end of the robot to the at least one position in one pose of the robot, and determine at least one tool coordinate of the robot TCP in the base coordinate system of the robot based on the relative relationship between the robot TCP and the default TCP.
  • the at least one location is at or near at least one feature point.
  • the second transformation determination module 1034 is configured to determine a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system using the first plurality of tool coordinates and the second plurality of tool coordinates.
  • the third transformation determining module 1035 is configured to: determine a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system based on the first transformation relationship and the second transformation relationship.
  • the apparatus 1000 may include at least one of these units to implement the corresponding calibration process separately.
  • as shown in FIG. 11, the computing device 1100 includes a processor 1101 and a memory 1102 coupled to the processor 1101.
  • the memory 1102 is used to store computer-executable instructions that, when executed, cause the processor 1101 to perform the methods in the above embodiments (e.g., any one or more steps of the aforementioned methods 300, 500 or 800).
  • the above-described method can be implemented by a computer-readable storage medium.
  • the computer-readable storage medium carries computer-readable program instructions for carrying out various embodiments of the present disclosure.
  • a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
  • the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
  • A non-exhaustive list of computer-readable storage media includes: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, mechanically encoded devices such as punch cards or raised structures in grooves with instructions stored thereon, and any suitable combination of the above.
  • Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through electrical wires.
  • the present disclosure also presents a computer-readable storage medium having computer-executable instructions stored thereon for performing the methods in the various embodiments of the present disclosure.
  • the present disclosure also presents a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause at least one processor to execute the methods in the various embodiments of the present disclosure.
  • the various example embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executed by a controller, microprocessor or other computing device. While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or using some other graphical representation, it is to be understood that the blocks, apparatus, systems, techniques, or methods described herein may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
  • Computer-readable program instructions or computer program products for executing the various embodiments of the present disclosure can also be stored in the cloud; when they are invoked, a user can access them for execution through the mobile Internet, a fixed network or another network, thereby implementing the technical solutions disclosed in accordance with the various embodiments of the present disclosure.

Abstract

A calibration method for a robot, comprising executing a camera calibration process, wherein executing a camera calibration process comprises the following steps: capturing an image of an environment and forming a three-dimensional virtual object of the environment (301); placing a calibration object (403) in a target area of the environment on the basis of the captured image of the environment and the three-dimensional virtual object of the environment (302); using a first camera (102) to capture an image of the calibration object (403) by means of relative movement between the first camera (102) associated with a robot (101) and the calibration object (403) (303), wherein the first camera (102) is a 2D camera; and determining, on the basis of the image of the calibration object (403) captured by the first camera (102), a parameter of the first camera (102) used for calibration (304). According to the calibration method, a user (107) can be guided to perform camera calibration by means of an AR technology, such that operation is intuitive and convenient, operation complexity is reduced, and accuracy is improved. The present invention further relates to a calibration device for a robot.

Description

用于机器人的标定方法和装置Calibration method and device for robot 技术领域technical field
本公开涉及机器视觉领域,更具体地说,涉及用于机器人的标定方法、装置、计算设备、计算机可读存储介质和程序产品。The present disclosure relates to the field of machine vision, and more particularly, to a calibration method, apparatus, computing device, computer-readable storage medium, and program product for a robot.
背景技术Background technique
随着机器人技术的发展,越来越多的机器人工作站被用于工业应用中,例如自动装卸、焊接、冲压、喷涂、以及其他各种处理。机器人可以灵活地与不同的设备组合,以满足艰苦的生产过程要求。它可以轻松实现多机联动自动化生产线和数字化工厂布局,最大程度地节省人力,提高企业效率。With the development of robotics, more and more robotic workstations are used in industrial applications, such as automatic loading and unloading, welding, stamping, spraying, and various other processes. Robots can be flexibly combined with different equipment to meet demanding production process requirements. It can easily realize multi-machine linkage automatic production line and digital factory layout, save manpower to the greatest extent and improve enterprise efficiency.
尽管机器人在工业中的应用具有许多优势,但是对机器人应用中的集成、操作和维护的高技术和专业要求限制了它的使用。尤其是,包括视觉系统标定(相机标定)、TCP(工具中心点)标定、和手眼标定在内的标定已成为用户使用机器人的主要难题。Although the application of robots in industry has many advantages, the high technical and professional requirements for integration, operation and maintenance in robotic applications limit its use. In particular, calibration including vision system calibration (camera calibration), TCP (tool center point) calibration, and hand-eye calibration has become a major problem for users using robots.
发明内容SUMMARY OF THE INVENTION
机器人的标定是机器人作业过程中的关键技术之一,然而现有的标定方法对于一般的用户而言,需要用户具备相当的专业知识和花费大量时间和精力实现机器人的标定,以确定机器人的操作是否理想,这增加了用户的操作门槛。例如,现有的标定方法需要大量手动操作,标定效率较低且操作过程复杂,并高度依赖于操作者的主观判断和经验。Robot calibration is one of the key technologies in the process of robot operation. However, for ordinary users, the existing calibration methods require users to have considerable professional knowledge and spend a lot of time and energy to calibrate the robot to determine the operation of the robot. Whether it is ideal or not, this increases the user's operating threshold. For example, the existing calibration methods require a lot of manual operations, the calibration efficiency is low, and the operation process is complicated, and is highly dependent on the operator's subjective judgment and experience.
本公开的第一实施例提出了一种用于机器人的标定方法,所述标定方法包括执行相机标定过程,其中,执行所述相机标定个过程包括以下步骤:捕捉环境的图像并形成所述环境的三维虚拟对象;基于所捕捉的所述环境的图像和所述环境的三维虚拟对象,将标定物体放置在所述环境中的目标区域中;通过与机器人相关联的第一相机和所述标定物体之间的相对移动来使用所 述第一相机捕捉所述标定物体的图像,其中,所述第一相机为2D相机;以及基于所述第一相机所捕捉的所述标定物体的图像,确定用于标定的所述第一相机的参数。A first embodiment of the present disclosure proposes a calibration method for a robot, the calibration method includes performing a camera calibration process, wherein performing the camera calibration process includes the steps of capturing an image of an environment and forming the environment based on the captured image of the environment and the three-dimensional virtual object of the environment, placing a calibration object in a target area in the environment; through the first camera associated with the robot and the calibration relative movement between objects to capture an image of the calibration object using the first camera, wherein the first camera is a 2D camera; and determining based on the image of the calibration object captured by the first camera Parameters of the first camera used for calibration.
在该实施例中,可以通过增强现实(AR)技术来引导用户进行相机标定,使得操作直观且便捷,有利地降低了操作复杂度并提高了准确性,从而有效降低了用户的操作门槛,使得不具备专业知识的用户也可容易实现相机标定。In this embodiment, the augmented reality (AR) technology can be used to guide the user to perform camera calibration, so that the operation is intuitive and convenient, which advantageously reduces the operation complexity and improves the accuracy, thereby effectively lowering the user's operation threshold, making Users without professional knowledge can also easily achieve camera calibration.
本公开的第二实施例提供了一种用于机器人的标定装置,所述标定装置包括相机标定单元,所述相机标定单元包括:环境捕捉模块,被配置为捕捉环境的图像并形成所述环境的三维虚拟对象;放置模块,被配置为基于所捕捉的所述环境的图像和所述环境的三维虚拟对象,将标定物体放置在所述环境中的目标区域中;第一相机捕捉模块,被配置为通过与机器人相关联的第一相机和所述标定物体之间的相对移动来使用所述第一相机捕捉所述标定物体的图像,其中,所述第一相机为2D相机;参数确定模块,被配置为基于所述第一相机所捕捉的所述标定物体的图像,确定用于标定的所述第一相机的参数。A second embodiment of the present disclosure provides a calibration device for a robot, the calibration device includes a camera calibration unit, the camera calibration unit includes an environment capture module configured to capture an image of an environment and form the environment the three-dimensional virtual object; the placement module is configured to place the calibration object in the target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment; the first camera capture module is configured to capture an image of the calibration object using the first camera through relative movement between the first camera associated with the robot and the calibration object, wherein the first camera is a 2D camera; parameter determination module , configured to determine parameters of the first camera for calibration based on the image of the calibration object captured by the first camera.
本公开的第三实施例提供了一种计算设备,所述计算设备包括:处理器;以及存储器,其用于存储计算机可执行指令,当所述计算机可执行指令被执行时使得所述处理器执行第一实施例中所述的方法。A third embodiment of the present disclosure provides a computing device comprising: a processor; and a memory for storing computer-executable instructions that, when executed, cause the processor to The method described in the first embodiment is performed.
本公开的第四实施例提出了一种计算机可读存储介质,所述计算机可读存储介质具有存储在其上的计算机可执行指令,所述计算机可执行指令用于执行第一实施例中所述的方法。A fourth embodiment of the present disclosure proposes a computer-readable storage medium having computer-executable instructions stored thereon for executing the steps described in the first embodiment. method described.
本公开的第五实施例提出了一种计算机程序产品,所述计算机程序产品被有形地存储在计算机可读存储介质上,并且包括计算机可执行指令,所述计算机可执行指令在被执行时使至少一个处理器执行第一实施例中所述的方法。A fifth embodiment of the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions that, when executed, cause At least one processor executes the method described in the first embodiment.
附图说明Description of drawings
The features, advantages, and other aspects of the embodiments of the present disclosure will become more apparent from the following detailed description taken in conjunction with the accompanying drawings, which show, by way of illustration and not limitation, several embodiments of the present disclosure. In the drawings:
FIG. 1 illustrates an exemplary scenario in which embodiments of the present disclosure may be applied.
FIG. 2 illustrates another exemplary scenario in which embodiments of the present disclosure may be applied.
FIG. 3 shows a flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
FIG. 4 shows an exemplary arrangement for a camera calibration process usable with a robot according to an embodiment of the present disclosure.
FIG. 5 shows another flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
FIG. 6 shows an exemplary calibration workpiece for TCP calibration according to an embodiment of the present disclosure.
FIG. 7 shows an exemplary arrangement for a TCP calibration process usable with a robot according to an embodiment of the present disclosure.
FIG. 8 shows another flowchart of a calibration method for a robot according to an embodiment of the present disclosure.
FIG. 9 shows an exemplary arrangement for a hand-eye calibration process usable with a robot according to an embodiment of the present disclosure.
FIG. 10 shows a block diagram of an exemplary calibration apparatus for a robot according to an embodiment of the present disclosure.
FIG. 11 shows a block diagram of an exemplary computing device for robot calibration according to an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments of the present disclosure are described in detail below with reference to the accompanying drawings. Although the exemplary methods and apparatuses described below include software and/or firmware executed on hardware among other components, it should be noted that these examples are merely illustrative and should not be regarded as limiting. For example, it is contemplated that any or all of the hardware, software, and firmware components could be implemented exclusively in hardware, exclusively in software, or in any combination of hardware and software. Accordingly, although exemplary methods and apparatuses are described below, those skilled in the art will readily appreciate that the examples provided are not intended to limit the manner in which these methods and apparatuses may be implemented.
Furthermore, the flowcharts and block diagrams in the accompanying drawings illustrate the architecture, functionality, and operation of possible implementations of methods and systems according to various embodiments of the present disclosure. It should be noted that the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved. It should also be noted that each block of the flowcharts and/or block diagrams, and combinations of blocks in the flowcharts and/or block diagrams, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions. In the different drawings, the same reference numerals denote the same or similar elements.
As used herein, the terms "comprise," "include," and similar terms are open-ended terms, i.e., "including but not limited to," meaning that other content may also be included. The term "based on" means "based at least in part on." The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; and so on.
This document involves several coordinate systems of a robot system, such as the base coordinate system of the robot, the tool coordinate system, and the camera coordinate system. The base coordinate system of the robot takes the robot mounting base as its reference and is used to describe the motion of the robot body. The tool coordinate system TCS is a coordinate system established with the tool center point (TCP) as its origin; before a tool is mounted, the default TCP is the center point of the robot's end flange, and after a tool is mounted, the robot TCP moves to the tool tip. The camera coordinate system takes the camera as its reference and is used to describe the motion of objects.
FIG. 1 illustrates an exemplary scenario 100 in which embodiments of the present disclosure may be applied. The scenario 100 includes a robot 101 and an associated first camera 102, where the first camera 102 is fixed on the robot 101. For example, the robot 101 may be a multi-joint manipulator or a multi-degree-of-freedom machine for industrial applications. The robot 101 includes a movable end 103, and the first camera 102 may be fixed on the end 103 and move together with the end 103 of the robot 101. That is, in the scenario 100, since the end of the robot 101 and the first camera 102 are fixed together, this configuration is referred to as eye-in-hand. The end 103 of the robot 101 may further have a tool portion 104 for mounting (for example, by suction, insertion, etc.) a tool or component (for example, a welding gun, a nozzle, a bolt, etc.) for processing a target workpiece. A target object 105 is also arranged in the scenario 100. The target object 105 may be an actual object to be processed (for example, a target workpiece) or a calibration object (for example, a calibration plate). The first camera 102 is a 2D camera, and a second camera 106 is also arranged in the scenario 100. The second camera 106 is a 3D camera, which may be a binocular camera, a structured-light camera, or any camera capable of returning depth information. In the scenario 100, the second camera 106 may be arranged above the robot 101 so as to capture and track the movement of the robot 101 (for example, of its end). In the scenario 100, a user 107 may carry (for example, hold in the hand, wear on the head, etc.) an AR device 108, which may include, but is not limited to, a smartphone, a tablet computer, a teach pendant, or a head-mounted device. Through a plurality of cameras arranged on or outside the AR device 108, the AR device 108 can image the real environment or objects in the environment from different positions and at different angles within its field of view 109 to form three-dimensional virtual objects. The AR device 108 may have a display to show, in a virtual environment, the corresponding three-dimensional virtual objects of the real environment or of objects in the environment.
FIG. 2 illustrates another exemplary scenario 200 in which embodiments of the present disclosure may be applied. The scenario 200 is similar to the scenario 100, except that the associated first camera 102 is fixed outside the robot 101. That is, in the scenario 200, the first camera 102 is located outside the robot 101 and does not move with the end 103 of the robot 101; this configuration is referred to as eye-to-hand.
FIG. 3 shows a flowchart of a calibration method 300 for a robot according to an embodiment of the present disclosure, and FIG. 4 shows an exemplary arrangement 400 for a camera calibration process usable with a robot according to an embodiment of the present disclosure. The method 300 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and to the exemplary scenario 200 shown in FIG. 2 (eye-to-hand). The method 300 is described below in conjunction with FIG. 3 and FIG. 4.
Referring to FIG. 3, the method 300 includes steps 301-304 for performing a camera calibration process. In addition, the method 300 may further include steps for performing a TCP calibration process (see FIG. 5) and steps for performing a hand-eye calibration process (see FIG. 8).
In image measurement processes and machine vision applications, in order to determine the relationship between the three-dimensional geometric position of a point on the surface of an object in space and its corresponding point in an image, a geometric model of camera imaging must be established; the parameters of this geometric model are the camera parameters (for example, the camera's intrinsic parameters, extrinsic parameters, and distortion parameters). These parameters can usually only be obtained through experiment and computation, and this process of solving for the parameters is called camera calibration. By determining the camera parameters through camera calibration, lens distortion can be corrected, a rectified image can be generated, and a three-dimensional scene can be reconstructed from the obtained images.
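By way of a concrete illustration, the following minimal Python/OpenCV sketch shows how, once the camera parameters have been determined, lens distortion can be removed from a captured frame; the intrinsic matrix, distortion coefficients, and file name are illustrative assumptions rather than values prescribed by the present disclosure.

```python
import cv2
import numpy as np

# Assumed inputs: K and dist would come from a prior calibration run
# (see the calibration sketch later in this section); the values here
# are placeholders only.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.array([0.05, -0.12, 0.001, 0.0005, 0.0])  # k1, k2, p1, p2, k3

img = cv2.imread("frame.png")
h, w = img.shape[:2]

# Refine the camera matrix for the undistorted view, then remove the
# lens distortion and crop to the valid region of interest.
new_K, roi = cv2.getOptimalNewCameraMatrix(K, dist, (w, h), alpha=0)
undistorted = cv2.undistort(img, K, dist, None, new_K)
x, y, rw, rh = roi
undistorted = undistorted[y:y + rh, x:x + rw]
```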
The method 300 begins at step 301, in which an image of the environment is captured and a three-dimensional virtual object of the environment is formed.
In some embodiments, step 301 may include: imaging the environment from different positions and at different angles so as to obtain at least two images for each scene; measuring the depth information of each pixel in the images by a triangulation method based on stereo images; and forming the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images. For example, the AR device held by the user 107 may be equipped with or connected to a plurality of cameras that image the real environment from different positions and at different angles. Taking two cameras as an example, when the two cameras photograph the same scene of the real environment from different positions, the depth of each pixel in the two-dimensional images can be triangulated from the different positions of corresponding pixels in the two images of that scene, since the two cameras have different viewing angles. Thus, although the image captured by a single camera is itself two-dimensional, supplementing it with the depth information obtained by triangulation yields the three-dimensional information needed to form a three-dimensional virtual object of the real environment.
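The two-view triangulation described above can be sketched as follows with Python and OpenCV; the intrinsics, the 10 cm baseline, and the matched pixel coordinates are illustrative assumptions.

```python
import cv2
import numpy as np

# Assumed intrinsics shared by both views; real values would come from
# the AR device's camera calibration.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])

# 3x4 projection matrices: first camera at the origin, second camera
# shifted by an assumed 10 cm horizontal baseline.
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-0.1], [0.0], [0.0]])])

# Matched pixel coordinates of the same scene points in the two views,
# as 2 x N arrays (row 0: x, row 1: y); values are placeholders.
pts1 = np.array([[320.0, 400.0], [200.0, 250.0]])
pts2 = np.array([[310.0, 388.0], [200.0, 249.0]])

# Triangulate to homogeneous 3D points, then normalize; the z column is
# the per-pixel depth used to build the 3D virtual object.
pts4d = cv2.triangulatePoints(P1, P2, pts1, pts2)   # 4 x N
pts3d = (pts4d[:3] / pts4d[3]).T                    # N x 3
```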
Next, the method 300 proceeds to step 302. In step 302, a calibration object is placed in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment. For example, the target area may be a specific area on a workbench.
In some embodiments, step 302 may include: detecting, from the captured image of the environment, visual markers arranged in the environment; determining the target area based on the detected visual markers; capturing an image of the calibration object and forming a three-dimensional virtual object of the calibration object; superimposing the three-dimensional virtual object of the calibration object on the three-dimensional virtual object of the environment for display as an augmented reality image; and moving the calibration object so that the three-dimensional virtual object of the calibration object is located within the three-dimensional virtual object of the target area. For example, the target area in the environment may be defined by a plurality of recognizable visual markers, and the target area can be determined quickly by detecting the visual markers. The plurality of visual markers may be arranged to lie within the field of view of the first camera so as to be captured by the first camera. Similarly, the AR device 108 held by the user 107 may capture images of the calibration object through a plurality of cameras and form the three-dimensional virtual object of the calibration object. After the three-dimensional virtual object of the calibration object is superimposed on the three-dimensional virtual object of the environment and displayed as an augmented reality image (for example, with real-time tracking), the user 107 or the robot 101 can be guided, for example by voice, text, or images, to move the calibration object. For example, when the three-dimensional virtual object of the calibration object lies at least partially outside the three-dimensional virtual object of the target area, the AR device 108 may prompt the user 107, or guide the robot 101, to keep moving the calibration object until the three-dimensional virtual object of the calibration object is located within the three-dimensional virtual object of the target area.
Referring to FIG. 4, the exemplary arrangement 400 includes a plurality of visual markers 402 (for example, four are illustrated) defining a target area 401. The visual markers 402 may have a characteristic color (for example, a color different from that of the rest of the environment) and/or a characteristic pattern (for example, a specific graphic code such as a barcode or QR code). For example, the AR device 108 may capture an image of the environment through its cameras and recognize the visual markers 402 in the image, thereby determining the target area 401 defined by the visual markers 402. The exemplary arrangement 400 also includes a calibration object 403 (for example, a calibration plate) that has been placed in the target area 401. The calibration object 403 may have a reference pattern attached to its surface for camera calibration, the reference pattern including, but not limited to, a checkerboard pattern, a circle-array pattern, a non-circle-array pattern, and the like.
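As one possible realization (not mandated by the present disclosure), the visual markers 402 could be ArUco tags, in which case the target area 401 could be located roughly as in the following sketch; the OpenCV (>= 4.7) ArUco API, the file name, and the dictionary choice are assumptions.

```python
import cv2
import numpy as np

# A sketch assuming the four visual markers are ArUco tags and that an
# OpenCV build (>= 4.7) with the aruco module is available.
frame = cv2.imread("ar_view.png")
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(dictionary, cv2.aruco.DetectorParameters())
corners, ids, _rejected = detector.detectMarkers(gray)

if ids is not None and len(ids) == 4:
    # Take the centre of each detected tag; the quadrilateral spanned by
    # the four centres delimits the target area in image coordinates.
    centres = np.array([c.reshape(-1, 2).mean(axis=0) for c in corners])
    target_area = cv2.convexHull(centres.astype(np.float32))
```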
Next, the method 300 proceeds to step 303. In step 303, the first camera associated with the robot is used to capture images of the calibration object by means of relative movement between the first camera and the calibration object, wherein the first camera is a 2D camera.
In some embodiments, when the first camera is fixed outside the robot, step 303 may include: obtaining a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the three-dimensional virtual object of the environment; moving the calibration object so that the three-dimensional virtual object of the calibration object is located at each of the plurality of virtual positions in turn; and using the first camera to capture images of the calibration object at each of the plurality of actual positions. For example, when the first camera 102 is fixed outside the robot 101 (i.e., eye-to-hand), the calibration object 403 needs to be placed at suitable positions in the target area 401 so that the corner points on the calibration object 403 fall within the field of view of the first camera 102; moreover, the correlations between camera parameters and the different poses of the calibration object 403 both affect how well particular camera parameters can be determined. In one example, the plurality of actual positions for placing the calibration object 403 in the target area 401 can be computed based on factors such as whether the corner points of the calibration object 403 are covered by the field of view of the first camera 102, the correlations between camera parameters, and the diversity of poses; the AR device 108 can then direct the user 107 or the robot 101 to place the calibration object 403 at the computed actual positions. Similarly to step 302, the calibration object 403 can be placed at the plurality of actual positions by determining whether its three-dimensional virtual object is placed at the plurality of virtual positions corresponding to those actual positions, and the first camera 102 is used to capture images of the calibration object 403 at each of the actual positions.
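One plausible ingredient of such a placement computation is a visibility check: for a candidate board pose, project the board corners through the camera model and confirm that they all land inside the image. The following sketch illustrates this idea under assumed intrinsics, board geometry, and image size; it is an illustration, not the disclosure's prescribed computation.

```python
import cv2
import numpy as np

# Assumed intrinsics, distortion, and a candidate board pose; all
# values are placeholders.
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 360.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
rvec = np.array([0.1, -0.2, 0.05])   # candidate board orientation (Rodrigues)
tvec = np.array([0.0, 0.05, 0.6])    # candidate board position in metres

# 9x6 chessboard with 25 mm squares, corner points in the board frame.
objp = np.zeros((9 * 6, 3), np.float32)
objp[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.025

# Project the corners and check that every one lies inside a 1280x720
# image; only candidate poses passing this test would be suggested.
img_pts, _ = cv2.projectPoints(objp, rvec, tvec, K, dist)
img_pts = img_pts.reshape(-1, 2)
inside = np.all((img_pts >= 0) & (img_pts < [1280, 720]))
print("candidate pose keeps all corners in view:", inside)
```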
In some embodiments, when the first camera is fixed on the robot, step 303 may be performed by moving the first camera around the calibration object to a plurality of actual positions and using the first camera to capture images of the calibration object at each of those positions.
Next, the method 300 proceeds to step 304. In step 304, the parameters of the first camera used for calibration are determined based on the images of the calibration object captured by the first camera. As described above, the parameters of the first camera may include at least one of intrinsic parameters, extrinsic parameters, and distortion parameters. In one example, the classic Zhang Zhengyou calibration method (see "A Flexible New Technique for Camera Calibration," IEEE Transactions on Pattern Analysis and Machine Intelligence, Volume 22, Issue 11, 2000) may be used to determine the parameters of the first camera. With a checkerboard having p x q grid points, each placement of the calibration plate changes the corresponding camera parameters (in particular the extrinsics); detecting the feature points in the captured image of the calibration plate (for example, the corner points of the checkerboard pattern) yields p x q equations. After the calibration plate has been placed n times, n x p x q equations are formed, from which the intrinsic parameters and all extrinsic parameters are estimated under the ideal, distortion-free assumption; the least-squares method is then applied to estimate the distortion coefficients in the presence of actual radial distortion; and finally maximum-likelihood estimation and optimization are applied to improve the estimation accuracy. In other examples, the parameters of the first camera may also be determined according to any applicable existing camera calibration method, which will not be described in detail here.
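A minimal sketch of this chessboard-based procedure using OpenCV (whose calibrateCamera routine performs the closed-form estimate followed by the maximum-likelihood refinement mentioned above) might look as follows; the pattern size and file names are assumptions.

```python
import glob
import cv2
import numpy as np

# Assumed 9x6 inner-corner chessboard; the board frame uses unit squares,
# which fixes intrinsics up to the (irrelevant) board scale.
pattern = (9, 6)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_points, img_points, img_size = [], [], None
for path in glob.glob("calib_*.png"):       # hypothetical capture files
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)
        img_size = gray.shape[::-1]

# Returns the RMS reprojection error, intrinsic matrix K, distortion
# coefficients, and per-placement extrinsics (rvecs, tvecs).
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, img_size, None, None)
```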
Compared with a traditional camera calibration process, the camera calibration process according to the method 300 can provide guidance to the user based on AR technology during camera calibration, making the operation intuitive and convenient. This advantageously reduces operational complexity and improves accuracy, thereby effectively lowering the user's operating threshold so that even users without specialized knowledge can easily perform camera calibration.
FIG. 5 shows a flowchart of a calibration method 500 for a robot according to an embodiment of the present disclosure, FIG. 6 shows an exemplary calibration workpiece 600 for TCP calibration according to an embodiment of the present disclosure, and FIG. 7 shows an exemplary arrangement 700 for a TCP calibration process usable with a robot according to an embodiment of the present disclosure. The method 500 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and to the exemplary scenario 200 shown in FIG. 2 (eye-to-hand). The method 500 is described below in conjunction with FIG. 5, FIG. 6, and FIG. 7.
Referring to FIG. 5, the method 500 includes steps 501-504 for performing a TCP calibration process. The method 500 may be included in the method 300 or performed independently of the method 300.
As described above, the tool portion 104 on the end 103 of the robot 101 can be fitted with a tool for processing a target workpiece. To describe the pose of the tool in space, a coordinate system is bound to (defined on) the tool, namely the tool coordinate system TCS, whose origin is the TCP. Before a tool is mounted, the default TCP is located on the end 103 of the robot 101, for example at the flange center point of the tool portion 104. The coordinates of the robot's end-flange center point can be obtained, for example, from the robot controller. Alternatively, they can be obtained from a teach pendant, a handheld device used for manually operating, programming, configuring, and monitoring a robot. After a tool is mounted, the robot TCP (for example, located at the tool tip) needs to be calibrated, and the origin of the tool coordinate system TCS is changed from the default TCP to the robot TCP.
The method 500 begins at step 501, in which a second camera is used to capture a plurality of images of a first workpiece placed in a plurality of poses, wherein the plurality of poses have different spatial inclination angles, the first workpiece is mounted on the end of the robot, and the second camera is a 3D camera fixed outside the robot. For example, the first workpiece may be a calibration workpiece 600 as shown in FIG. 6 and FIG. 7, used to emulate a tool in actual use. FIG. 6 shows a side view 601 and a top view 602 of the calibration workpiece 600, which has a top end 611 and a tip 612. The calibration workpiece 600 can be mounted by its top end 611 to the tool portion 104 of the end 103 of the robot 101. Referring to FIG. 7, the exemplary arrangement 700 includes the second camera 106 and the calibration workpiece 600 placed in a plurality of different poses (for example, the four poses 701, 702, 703, and 704 in FIG. 7), each pose having a different spatial inclination angle.
Next, the method 500 proceeds to step 502. In step 502, based on the captured images, a plurality of workpiece-tip coordinates of the tip of the first workpiece in the 3D camera coordinate system of the second camera are determined for the plurality of poses. As described above, before a tool is mounted, the default TCP is the center point of the robot's end flange, whereas after a tool/workpiece is mounted, the robot TCP (for example, located at the tool/workpiece tip) needs to be calibrated. In this step, the position of the workpiece tip in the 3D camera coordinate system is determined with the 3D camera as the reference. In some embodiments, the number of poses may be at least four.
In some embodiments, step 502 may include: detecting, from the captured images, marker points arranged on the first workpiece to obtain a plurality of marker-point coordinates of the marker points in the 3D camera coordinate system; and determining, based on the plurality of marker-point coordinates, the plurality of workpiece-tip coordinates of the tip of the first workpiece in the 3D camera coordinate system. As shown in FIG. 6, the calibration workpiece 600 may have one or more marker points 613 (for example, four in FIG. 6), which may be used, for example, to ensure that the calibration workpiece 600 or at least a part of it (for example, including the marker points 613 and the tip 612) lies within the field of view of the second camera 106. After the marker points 613 are recognized in the captured images, the marker-point coordinates in the 3D camera coordinate system can be obtained for the different poses. Through the rigid relationship between the marker points 613 on the calibration workpiece 600 and the tip 612 (the marker points are rigidly connected to the robot TCP, so the distances between them are fixed), the corresponding workpiece-tip coordinates of the tip 612 in the 3D camera coordinate system can be obtained.
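As an illustration of how the tip coordinates can be derived from the marker coordinates via this rigid relationship, the sketch below fits a rigid transform from the markers' known layout to their measured 3D-camera coordinates (a standard Kabsch/SVD fit, named here for clarity and not a method prescribed by the disclosure) and then maps the known tip offset; all numeric values are placeholders.

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Kabsch fit of R, t such that dst ~ R @ src + t (both N x 3)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, dc - R @ sc

# Assumed, purely illustrative data: marker centres in the workpiece's
# own frame (known from its drawing), the tip position in that same
# frame, and the marker centres measured by the 3D camera.
markers_local = np.array([[0.00, 0.00, 0.0], [0.04, 0.00, 0.0],
                          [0.04, 0.03, 0.0], [0.00, 0.03, 0.0]])
tip_local = np.array([0.02, 0.015, -0.09])
markers_cam = np.array([[0.51, 0.12, 0.80], [0.55, 0.12, 0.80],
                        [0.55, 0.15, 0.80], [0.51, 0.15, 0.80]])

R, t = fit_rigid_transform(markers_local, markers_cam)
tip_cam = R @ tip_local + t   # workpiece tip in the 3D camera frame
```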
In some embodiments, the marker points may have a characteristic color and/or a characteristic pattern. For example, the marker points 613 may have a characteristic color (for example, a color different from that of the rest of the calibration workpiece 600) and/or a characteristic pattern (for example, a specific graphic code such as a barcode or QR code) to facilitate recognition/detection.
Next, the method 500 proceeds to step 503. In step 503, a plurality of robot-end coordinates of the end of the robot in the base coordinate system of the robot are obtained for the plurality of poses. For example, the robot-end coordinates in the base coordinate system can be obtained for the different poses from the robot controller or the teach pendant.
Next, the method 500 proceeds to step 504. In step 504, based on the plurality of workpiece-tip coordinates and the plurality of robot-end coordinates, the coordinates of the robot TCP to be calibrated in the base coordinate system of the robot are determined for one posture of the robot, and the relative relationship between the robot TCP and the default TCP located on the end of the robot is determined. An exemplary TCP calibration process is described below in conjunction with FIG. 7.
Referring to FIG. 7, the robot 101 assumes four different poses. The second camera 106 recognizes the marker points 613 on the calibration workpiece 600, obtains the coordinates of the marker points 613 in the 3D camera coordinate system, and derives the coordinates of the tip 612 of the calibration workpiece 600 in the 3D camera coordinate system; at the same time, the coordinates of the end-flange center point of the robot 101 are obtained, for example, from the robot controller. The following coordinates are obtained:
In the first pose 701: the coordinates of the robot end in the robot base coordinate system are (x1, y1, z1), and the coordinates of the calibration-workpiece tip in the 3D camera coordinate system are (mx1, my1, mz1).
In the second pose 702: the coordinates of the robot end in the robot base coordinate system are (x2, y2, z2), and the coordinates of the calibration-workpiece tip in the 3D camera coordinate system are (mx2, my2, mz2).
In the third pose 703: the coordinates of the robot end in the robot base coordinate system are (x3, y3, z3), and the coordinates of the calibration-workpiece tip in the 3D camera coordinate system are (mx3, my3, mz3).
In the fourth pose 704: the coordinates of the robot end in the robot base coordinate system are (x4, y4, z4), and the coordinates of the calibration-workpiece tip in the 3D camera coordinate system are (mx4, my4, mz4).
A mathematical translation transformation is applied to the above coordinates so that the coordinates of the calibration-workpiece tip in the 3D camera coordinate system become identical. The following coordinates are obtained:
In the first pose 701: the robot end is at (x1, y1, z1) in the robot base coordinate system, and the calibration-workpiece tip is at (mx1, my1, mz1) in the 3D camera coordinate system.
In the second pose 702: the robot end is at (x2-mx2+mx1, y2-my2+my1, z2-mz2+mz1) in the robot base coordinate system, and the calibration-workpiece tip is at (mx1, my1, mz1) in the 3D camera coordinate system.
In the third pose 703: the robot end is at (x3-mx3+mx1, y3-my3+my1, z3-mz3+mz1) in the robot base coordinate system, and the calibration-workpiece tip is at (mx1, my1, mz1) in the 3D camera coordinate system.
In the fourth pose 704: the robot end is at (x4-mx4+mx1, y4-my4+my1, z4-mz4+mz1) in the robot base coordinate system, and the calibration-workpiece tip is at (mx1, my1, mz1) in the 3D camera coordinate system.
After the translation transformation, the coordinates (x0, y0, z0) of the robot TCP in the robot base coordinate system lie at the center of a sphere, while the default TCP points (the end-flange center coordinates obtained from the teach pendant) lie on the sphere's surface. The coordinates (x0, y0, z0) of the robot TCP can therefore be solved from the following equations (1)-(4):
$$(x_1-x_0)^2+(y_1-y_0)^2+(z_1-z_0)^2=R^2 \qquad (1)$$
$$(x_2-mx_2+mx_1-x_0)^2+(y_2-my_2+my_1-y_0)^2+(z_2-mz_2+mz_1-z_0)^2=R^2 \qquad (2)$$
$$(x_3-mx_3+mx_1-x_0)^2+(y_3-my_3+my_1-y_0)^2+(z_3-mz_3+mz_1-z_0)^2=R^2 \qquad (3)$$
$$(x_4-mx_4+mx_1-x_0)^2+(y_4-my_4+my_1-y_0)^2+(z_4-mz_4+mz_1-z_0)^2=R^2 \qquad (4)$$
From the solved (x0, y0, z0) and the robot state (x1, y1, z1, rx1, ry1, rz1) at the first robot posture, the TCP value (x_tcp, y_tcp, z_tcp, rx_tcp, ry_tcp, rz_tcp) can be obtained; that is, the relative relationship between the robot TCP and the default TCP located on the end of the robot can be determined.
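Computationally, equations (1)-(4) can be reduced to a small linear system by subtracting pairs of sphere equations, as in the following sketch; the pose coordinates are placeholders, and the pairwise-subtraction approach is one standard way to solve such a system rather than the only one.

```python
import numpy as np

# The four (translated) flange points all lie on a sphere centred at the
# robot TCP. Subtracting pairs of sphere equations eliminates R^2 and
# leaves a linear system in the centre c = (x0, y0, z0).
p = np.array([[0.412, 0.105, 0.330],    # pose 1, already translated
              [0.398, 0.131, 0.327],    # pose 2
              [0.421, 0.118, 0.351],    # pose 3
              [0.405, 0.126, 0.344]])   # pose 4 (placeholder values)

# |p1 - c|^2 = |pi - c|^2  =>  2 (p1 - pi) . c = |p1|^2 - |pi|^2
A = 2.0 * (p[0] - p[1:])
b = (p[0] ** 2).sum() - (p[1:] ** 2).sum(axis=1)

center, *_ = np.linalg.lstsq(A, b, rcond=None)   # (x0, y0, z0)
radius = np.linalg.norm(p[0] - center)           # distance flange -> TCP
```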
A traditional TCP calibration process usually uses a 3-, 4-, or 5-point method: the user must move the TCP to a reference point (for example, a fixed point placed in the robot's workspace) 3, 4, or 5 times in different orientations so that the TCP coincides with the reference point each time. However, such traditional TCP calibration methods require manual participation by the user, demand familiarity with the operation and specialized knowledge, and suffer from drawbacks such as slow calibration speed and insufficient calibration accuracy. Compared with a traditional TCP calibration process, the TCP calibration process according to the method 500 can be carried out automatically without manual participation, effectively reducing operational complexity and enabling fast TCP calibration while avoiding the errors introduced by manual participation, thereby improving calibration accuracy.
FIG. 8 shows another flowchart of a calibration method 800 for a robot according to an embodiment of the present disclosure, and FIG. 9 shows an exemplary arrangement 900 for a hand-eye calibration process usable with a robot according to an embodiment of the present disclosure. The method 800 may be applied to the exemplary scenario 100 shown in FIG. 1 (eye-in-hand) and to the exemplary scenario 200 shown in FIG. 2 (eye-to-hand). The method 800 is described below in conjunction with FIG. 8 and FIG. 9.
Referring to FIG. 8, the method 800 includes steps 801-805 for performing a hand-eye calibration process. The method 800 may be included in the method 300 or performed independently of the method 300.
The purpose of hand-eye calibration is to obtain the relationship between the robot coordinate system and the camera coordinate system, so that the results of visual recognition can ultimately be transferred into the robot coordinate system. As described above, hand-eye calibration takes two forms depending on where the camera is fixed: if the camera is fixed together with the robot end, the configuration is called eye-in-hand; if the camera is fixed on a base outside the robot, it is called eye-to-hand.
The method 800 begins at step 801, in which the first camera is used to capture a first target image of a second workpiece to be processed by the robot, and the second camera is used to capture a second target image of the second workpiece. The second workpiece may be, for example, a target workpiece 901 to be processed by the robot 101 as shown in FIG. 9. The first target image of the target workpiece 901 can be captured by the first camera 102, and the second target image of the target workpiece 901 can be captured by the second camera 106.
The method 800 then proceeds to step 802. In step 802, a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera is determined based on the first target image and the second target image. In this step, the transformation relationship between the different camera coordinate systems can be determined, i.e., the mapping between the coordinates of points in the first target image and the coordinates of the corresponding points in the second target image.
In some embodiments, step 802 may include: detecting, from the first target image and the second target image, a plurality of feature points arranged on the second workpiece to obtain a first plurality of feature coordinates of the feature points in the 2D camera coordinate system and a second plurality of feature coordinates in the 3D camera coordinate system. In some embodiments, the number of feature points may be at least four. For example, the coordinates in the 2D camera coordinate system of the first camera 102 may be computed from the two-dimensional information in the target image using the camera parameters obtained by calibrating the first camera 102 (the camera calibration may be, for example, the camera calibration process described herein or any other applicable camera calibration process).
Referring to FIG. 9, a plurality of feature points 902 (for example, four in the figure) may be arranged on the target workpiece 901. The feature points 902 may have, for example, a specific shape, size, or marking so that they can be recognized/detected in the captured images. For example, the target workpiece 901 may be a PCB, and the feature points 902 may be CAD-marked screw holes arranged on the PCB. The first camera 102 and the second camera 106 can each recognize the feature points 902 in their captured target images, and the coordinates of the feature points 902 in the 2D camera coordinate system of the first camera 102 and in the 3D camera coordinate system of the second camera 106 can be obtained as follows:
First feature point 902: 3D camera coordinate system coordinates (x1, y1, z1), 2D camera coordinate system coordinates (X1, Y1, Z1).
Second feature point 902: 3D camera coordinate system coordinates (x2, y2, z2), 2D camera coordinate system coordinates (X2, Y2, Z2).
Third feature point 902: 3D camera coordinate system coordinates (x3, y3, z3), 2D camera coordinate system coordinates (X3, Y3, Z3).
Fourth feature point 902: 3D camera coordinate system coordinates (x4, y4, z4), 2D camera coordinate system coordinates (X4, Y4, Z4).
The coordinate transformation between the 2D camera coordinate system and the 3D camera coordinate system satisfies the following equation (5):
$$P_{3D}=\begin{pmatrix}x\\y\\z\\1\end{pmatrix}=H_1\begin{pmatrix}X\\Y\\Z\\1\end{pmatrix}=H_1\,P_{2D} \qquad (5)$$
According to P_3D = H1 · P_2D, where P_3D and P_2D are the coordinates of the same point in the two camera coordinate systems, the homography matrix H1 between the two camera coordinate systems can be solved; that is, the first transformation relationship between the 2D camera coordinate system of the first camera 102 and the 3D camera coordinate system of the second camera 106 is determined.
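A least-squares sketch of this step is given below: the homogeneous feature coordinates of both frames are stacked and H1 is solved for directly. The point values are placeholders (note that both frames carry three-dimensional coordinates, the 2D-camera values having been recovered via its calibration); in practice, more than four well-spread, non-coplanar points improve the conditioning.

```python
import numpy as np

# Illustrative corresponding feature coordinates; (Xi, Yi, Zi) in the
# 2D-camera frame and (xi, yi, zi) in the 3D-camera frame.
pts_2d = np.array([[0.10, 0.20, 0.50],
                   [0.30, 0.22, 0.52],
                   [0.28, 0.45, 0.49],
                   [0.12, 0.41, 0.51]])
pts_3d = np.array([[0.61, 0.05, 0.90],
                   [0.81, 0.06, 0.93],
                   [0.80, 0.29, 0.88],
                   [0.63, 0.26, 0.91]])

def homogeneous(p):
    # Append a 1 to each point: (N x 3) -> (N x 4).
    return np.hstack([p, np.ones((len(p), 1))])

# Solve src @ H1^T = dst in a least-squares sense so that, for column
# vectors, P_3D ~ H1 @ P_2D as in equation (5).
src = homogeneous(pts_2d)                       # N x 4
dst = homogeneous(pts_3d)                       # N x 4
H1_T, *_ = np.linalg.lstsq(src, dst, rcond=None)
H1 = H1_T.T                                     # 4 x 4 homography
```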
The method 800 then proceeds to step 803. In step 803, the end of the robot is moved to a plurality of positions, a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot are determined, and a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system are determined. In this step, the first plurality of tool coordinates of the robot TCP in the robot's base coordinate system (for example, using the TCP calibration process described herein or any other applicable TCP calibration process) and the second plurality of tool coordinates in the 3D camera coordinate system can be determined, for example, by capturing images of the end of the robot with the second camera. In some embodiments, the number of positions may be at least four.
For example, by moving (for example, translating, or assuming different poses with) the end 103 of the robot to a plurality of positions (for example, four), the coordinates of the robot TCP in the base coordinate system of the robot 101 and in the camera coordinate system of the second camera 106 are determined for each position.
In some embodiments, step 803 may include: for at least one of the plurality of positions, translating to that position while keeping the robot in the one posture, and determining at least one tool coordinate of the robot TCP in the base coordinate system of the robot based on the relative relationship between the robot TCP and the default TCP. For example, for ease of computation, based on the relative relationship between the robot TCP and the default TCP already obtained for a particular posture of the robot, the end of the robot can be translated to at least one position while keeping that posture, and the coordinates of the robot TCP in the robot base coordinate system can then be determined from the coordinates of the default TCP in the robot base coordinate system obtained from the robot controller.
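For illustration, the sketch below derives a robot-TCP coordinate from the controller's flange pose and the TCP offset obtained during TCP calibration. The pose values and the xyz Euler convention are assumptions, since rotation conventions differ between robot controllers.

```python
import numpy as np
from scipy.spatial.transform import Rotation

# Assumed flange pose reported by the controller: position (x, y, z) in
# metres and orientation (rx, ry, rz) in radians, interpreted here as
# intrinsic-free xyz Euler angles (a convention-dependent assumption).
flange_pos = np.array([0.512, 0.031, 0.420])
flange_rot = Rotation.from_euler("xyz", [0.10, -0.05, 1.57])

# TCP offset in the flange frame, as found by the TCP calibration.
tcp_offset = np.array([0.001, -0.002, 0.115])

# Rotate the offset into the base frame and add the flange position.
tcp_in_base = flange_pos + flange_rot.apply(tcp_offset)
```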
The method 800 then proceeds to step 804. In step 804, the first plurality of tool coordinates and the second plurality of tool coordinates are used to determine a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system.
The coordinate transformation between the base coordinate system of the robot 101 and the 3D camera coordinate system satisfies the following equation (6):
$$P_{3D} = H_2 \cdot P_{robot} \qquad (6)$$
Here, P_3D and P_robot are the coordinates of the same point in the 3D camera coordinate system and in the robot base coordinate system, respectively. From these, the homography matrix H2 between the 3D camera coordinate system and the robot base coordinate system can be solved; that is, the second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system is determined.
The method 800 then proceeds to step 805. In step 805, a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system is determined based on the first transformation relationship and the second transformation relationship.
Based on equations (5) and (6), the coordinate transformation between the robot base coordinate system and the 2D camera coordinate system satisfies the following equation (7):
$$P_{robot} = H_2^{-1} \cdot H_1 \cdot P_{2D} \qquad (7)$$
Here, P_robot and P_2D are the coordinates of the same point in the robot base coordinate system and in the 2D camera coordinate system, respectively. From these, the homography matrix H2^(-1) · H1 between the robot base coordinate system and the 2D camera coordinate system is obtained; that is, the third transformation relationship (the hand-eye relationship) between the base coordinate system of the robot and the 2D camera coordinate system is determined.
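Once H1 and H2 are available, equation (7) amounts to a single matrix composition, as the following sketch illustrates with placeholder transforms.

```python
import numpy as np

# Illustrative 4x4 homogeneous transforms standing in for the H1 and H2
# estimated above (pure translations here, purely for demonstration).
H1 = np.eye(4); H1[:3, 3] = [0.51, -0.15, 0.40]   # 3D camera <- 2D camera
H2 = np.eye(4); H2[:3, 3] = [0.20, 0.35, 1.10]    # 3D camera <- robot base

# Equation (7): robot base <- 2D camera (the hand-eye relationship).
H3 = np.linalg.inv(H2) @ H1

# Map a point detected by the 2D camera straight into the robot base.
p_2d = np.array([0.10, 0.20, 0.50, 1.0])          # homogeneous point
p_robot = (H3 @ p_2d)[:3]
```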
In some embodiments, the at least one position is at or near at least one of the feature points. For example, to reduce hand-eye calibration error, the at least one position may be at or near at least one recognizable feature point. In some embodiments, the AR techniques described earlier herein may be employed to guide the end of the robot intuitively and conveniently to or near at least one recognizable feature point, for example inserting the tip 612 of the calibration workpiece 600 into a feature point 902, so as to further reduce the hand-eye calibration error.
Compared with a traditional hand-eye calibration process, the hand-eye calibration process according to the method 800 can be carried out automatically without manual participation, effectively reducing operational complexity; moreover, the computation is simple, so hand-eye calibration can be achieved quickly while avoiding the errors introduced by manual participation, thereby improving calibration accuracy.
FIG. 10 shows a block diagram of an exemplary calibration apparatus 1000 for a robot according to an embodiment of the present disclosure. The apparatus 1000 includes a camera calibration unit 1010, a TCP calibration unit 1020, and a hand-eye calibration unit 1030. The apparatus 1000 may further include a communication unit (not shown) for communicating with other external devices (for example, receiving/sending instructions and data from/to the external devices).
The camera calibration unit 1010 includes an environment capture module 1011, a placement module 1012, a first camera capture module 1013, and a parameter determination module 1014.
The environment capture module 1011 is configured to capture an image of the environment and form a three-dimensional virtual object of the environment. In some embodiments, the environment capture module 1011 may be further configured to: image the environment from different positions and at different angles so as to obtain at least two images for each scene; measure the depth information of each pixel in the images by a triangulation method based on stereo images; and form the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images.
The placement module 1012 is configured to place a calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment. In some embodiments, the placement module 1012 may be further configured to: detect, from the captured image of the environment, visual markers arranged in the environment; determine the target area based on the detected visual markers; capture an image of the calibration object and form a three-dimensional virtual object of the calibration object; superimpose the three-dimensional virtual object of the calibration object on the three-dimensional virtual object of the environment for display as an augmented reality image; and move the calibration object so that the three-dimensional virtual object of the calibration object is located within the three-dimensional virtual object of the target area. In some embodiments, the visual markers have a characteristic color and/or pattern.
The first camera capture module 1013 is configured to capture images of the calibration object with a first camera associated with the robot by means of relative movement between the first camera and the calibration object, wherein the first camera is a 2D camera. In some embodiments, when the first camera is fixed outside the robot, the first camera capture module 1013 may be further configured to: obtain a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the three-dimensional virtual object of the environment; move the calibration object so that the three-dimensional virtual object of the calibration object is located at each of the plurality of virtual positions in turn; and use the first camera to capture images of the calibration object at each of the plurality of actual positions.
The parameter determination module 1014 is configured to determine, based on the images of the calibration object captured by the first camera, the parameters of the first camera used for calibration.
The TCP calibration unit 1020 includes a first workpiece capture module 1021, a workpiece coordinate determination module 1022, a robot coordinate acquisition module 1023, and a TCP coordinate determination module 1024.
The first workpiece capture module 1021 is configured to use a second camera to capture a plurality of images of a first workpiece placed in a plurality of poses, wherein the first workpiece is mounted on the end of the robot, and the second camera is a 3D camera fixed outside the robot.
The workpiece coordinate determination module 1022 is configured to determine, based on the captured images, a plurality of workpiece-tip coordinates of the tip of the first workpiece in the 3D camera coordinate system of the second camera for the plurality of poses. In some embodiments, the workpiece coordinate determination module 1022 may be configured to: detect, from the captured images, marker points arranged on the first workpiece to obtain a plurality of marker-point coordinates of the marker points in the 3D camera coordinate system; and determine, based on the plurality of marker-point coordinates, the plurality of workpiece-tip coordinates of the tip of the first workpiece in the 3D camera coordinate system. In some embodiments, the marker points have a characteristic color and/or a characteristic pattern.
The robot coordinate acquisition module 1023 is configured to obtain a plurality of robot-end coordinates of the end of the robot in the base coordinate system of the robot for the plurality of poses.
The TCP coordinate determination module 1024 is configured to determine, based on the plurality of workpiece-tip coordinates and the plurality of robot-end coordinates, the coordinates of the robot TCP to be calibrated in the base coordinate system of the robot for one posture of the robot, and to determine the relative relationship between the robot TCP and the default TCP located on the end of the robot.
手眼标定单元1030包括第二工件捕捉模块1031、第一变换确定模块1032、工具坐标确定模块1033、第二变换确定模块1034和第三变换确定模块1035。The hand-eye calibration unit 1030 includes a second workpiece capture module 1031 , a first transformation determination module 1032 , a tool coordinate determination module 1033 , a second transformation determination module 1034 and a third transformation determination module 1035 .
第二工件捕捉模块1031被配置为:使用第一相机来捕捉待由机器人处理的第二工件的第一目标图像,并使用第二相机来捕捉第二工件的第二目标图像。The second workpiece capture module 1031 is configured to capture a first target image of a second workpiece to be processed by the robot using the first camera, and to capture a second target image of the second workpiece using the second camera.
第一变换确定模块1032被配置为:基于第一目标图像和第二目标图像,确定第一相机的2D相机坐标系和第二相机的3D相机坐标系之间的第一变换关系。在一些实施例中,第一变换确定模块1032可以被进一步配置为:从第一目标图像和第二目标图像中检测布置在第二工件中的多个特征点,以获得多个特征点在2D相机坐标系中的第一多个特征坐标和在3D相机坐标系中的第二多个特征坐标;以及使用所述第一多个特征坐标和所述第二多个特征坐标来确定所述第一变换关系。The first transformation determination module 1032 is configured to determine a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera based on the first target image and the second target image. In some embodiments, the first transformation determination module 1032 may be further configured to: detect a plurality of feature points arranged in the second workpiece from the first target image and the second target image to obtain the plurality of feature points in 2D a first plurality of feature coordinates in a camera coordinate system and a second plurality of feature coordinates in a 3D camera coordinate system; and determining the first plurality of feature coordinates using the first plurality of feature coordinates and the second plurality of feature coordinates A transformation relationship.
The tool coordinate determination module 1033 is configured to: move the end of the robot to a plurality of positions, determine a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot, and determine a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system. In some embodiments, the tool coordinate determination module 1033 may be further configured to: for at least one of the plurality of positions, translate to the at least one position under the one pose of the robot, and determine at least one tool coordinate of the robot TCP in the base coordinate system of the robot based on the relative relationship between the robot TCP and the default TCP. In some embodiments, the at least one position is at or near at least one feature point.
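Because the robot only translates under the one pose, the TCP coordinate in the base coordinate system follows directly from the flange pose and the previously determined TCP offset. A minimal sketch, with illustrative names:

```python
import numpy as np

def tcp_in_base(flange_rotation, flange_origin, tcp_offset):
    """TCP position in the robot base frame for one translated position.

    flange_rotation: 3x3 rotation of the robot end in the base frame
                     (constant here, since the robot only translates)
    flange_origin:   3-vector, robot end position in the base frame
    tcp_offset:      3-vector, robot TCP relative to the default TCP,
                     as found during the TCP calibration process
    """
    return flange_rotation @ tcp_offset + flange_origin
```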
The second transformation determination module 1034 is configured to: determine a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system using the first plurality of tool coordinates and the second plurality of tool coordinates.
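A rigid transformation between two corresponding 3D point sets of this kind is commonly estimated in closed form with the SVD-based Kabsch/Umeyama method. The following is a sketch of one way the second transformation relationship could be computed, not a definitive implementation of the disclosure:

```python
import numpy as np

def rigid_transform(points_cam, points_base):
    """Best-fit R, t with points_base ≈ R @ points_cam + t.

    points_cam:  Nx3 tool coordinates in the 3D camera coordinate system
    points_base: Nx3 tool coordinates in the robot base coordinate system
    """
    P = np.asarray(points_cam, float)
    Q = np.asarray(points_base, float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)        # centroids
    H = (P - cp).T @ (Q - cq)                      # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t
```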
The third transformation determination module 1035 is configured to: determine a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system based on the first transformation relationship and the second transformation relationship.
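When both relationships are represented as homogeneous transforms, the third transformation relationship is simply their composition. A short sketch, continuing the illustrative conventions of the previous examples:

```python
import numpy as np

def third_transformation(T_3dcam_to_2dcam, R_3dcam_to_base, t_3dcam_to_base):
    """Chain the two calibrated transforms into base -> 2D camera.

    T_3dcam_to_2dcam: 4x4 first transformation (3D camera -> 2D camera frame)
    R_3dcam_to_base, t_3dcam_to_base: second transformation as returned by
        rigid_transform above (3D camera -> robot base frame)
    """
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R_3dcam_to_base, t_3dcam_to_base
    T_base_to_3dcam = np.linalg.inv(T)          # invert to go base -> 3D camera
    return T_3dcam_to_2dcam @ T_base_to_3dcam   # compose: base -> 2D camera
```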
Although the camera calibration unit 1010, the TCP calibration unit 1020 and the hand-eye calibration unit 1030 are shown as integrated in the apparatus 1000 in the example of FIG. 10, it should be understood that the apparatus 1000 may include at least one of these units to separately implement the corresponding calibration process.
FIG. 11 shows a block diagram of an exemplary computing device 1100 that can be used to implement the calibration methods described above, according to an embodiment of the present disclosure. The computing device 1100 includes a processor 1101 and a memory 1102 coupled to the processor 1101. The memory 1102 stores computer-executable instructions which, when executed, cause the processor 1101 to perform the methods in the above embodiments (e.g., any one or more steps of the aforementioned methods 300, 500 or 800).
In addition, or alternatively, the above methods can be implemented by means of a computer-readable storage medium. The computer-readable storage medium carries computer-readable program instructions for carrying out the various embodiments of the present disclosure. The computer-readable storage medium may be a tangible device that can hold and store instructions for use by an instruction execution device. It may be, for example, but is not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer-readable storage medium include: a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions recorded thereon, and any suitable combination of the foregoing. A computer-readable storage medium, as used herein, is not to be construed as being a transitory signal per se, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (e.g., a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
Accordingly, in another embodiment, the present disclosure proposes a computer-readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions being for performing the methods in the various embodiments of the present disclosure.
In another embodiment, the present disclosure proposes a computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the methods in the various embodiments of the present disclosure.
In general, the various example embodiments of the present disclosure may be implemented in hardware or special-purpose circuits, software, firmware, logic, or any combination thereof. Certain aspects may be implemented in hardware, while other aspects may be implemented in firmware or software executable by a controller, microprocessor or other computing device. While aspects of the embodiments of the present disclosure are illustrated or described as block diagrams, flowcharts, or using some other graphical representation, it will be understood that the blocks, apparatuses, systems, techniques or methods described herein may be implemented, as non-limiting examples, in hardware, software, firmware, special-purpose circuits or logic, general-purpose hardware or controllers or other computing devices, or some combination thereof.
The computer-readable program instructions or computer program products for carrying out the various embodiments of the present disclosure can also be stored in the cloud; when they need to be invoked, a user can access, via the mobile Internet, a fixed network or another network, the computer-readable program instructions stored in the cloud for carrying out an embodiment of the present disclosure, thereby implementing the technical solutions disclosed in accordance with the various embodiments of the present disclosure.
Although the embodiments of the present disclosure have been described with reference to several specific embodiments, it should be understood that the embodiments of the present disclosure are not limited to the specific embodiments disclosed. The embodiments of the present disclosure are intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims. The scope of the claims is to be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.

Claims (27)

  1. A calibration method for a robot, the calibration method comprising performing a camera calibration process, wherein performing the camera calibration process comprises the following steps:
    A, capturing an image of an environment and forming a three-dimensional virtual object of the environment,
    B, placing a calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment,
    C, capturing an image of the calibration object using a first camera associated with the robot through relative movement between the first camera and the calibration object, wherein the first camera is a 2D camera, and
    D, determining parameters of the first camera used for calibration based on the image of the calibration object captured by the first camera.
  2. The calibration method according to claim 1, wherein the step A comprises:
    A1, photographing the environment from different positions and at different angles, so as to obtain at least two images for each scene,
    A2, measuring depth information of each pixel in the images by a triangulation method based on the stereo images, and
    A3, forming the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images.
  3. The calibration method according to claim 1, wherein the step B comprises:
    B1, detecting, from the captured image of the environment, a visual marker arranged in the environment,
    B2, determining the target area based on the detected visual marker,
    B3, capturing an image of the calibration object and forming a three-dimensional virtual object of the calibration object,
    B4, superimposing the three-dimensional virtual object of the calibration object on the three-dimensional virtual object of the environment for display as an augmented reality image, and
    B5, moving the calibration object so that the three-dimensional virtual object of the calibration object is located in the three-dimensional virtual object of the target area.
  4. The calibration method according to claim 3, wherein the visual marker has a characteristic color and/or a characteristic pattern.
  5. The calibration method according to claim 3, wherein, when the first camera is fixed outside the robot, the step C comprises:
    C1, obtaining a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the three-dimensional virtual object of the environment,
    C2, moving the calibration object so that the three-dimensional virtual object of the calibration object is located at each of the plurality of virtual positions in turn, and
    C3, using the first camera to capture images of the calibration object at the plurality of actual positions respectively.
  6. The calibration method according to claim 1, wherein the calibration method further comprises performing a TCP calibration process, wherein performing the TCP calibration process comprises the following steps:
    E, using a second camera to capture a plurality of images of a first workpiece placed in a plurality of poses respectively, wherein the plurality of poses have different spatial inclination angles, the first workpiece is mounted on the end of the robot, and the second camera is a 3D camera fixed outside the robot,
    F, determining, based on the plurality of captured images, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system of the second camera under the plurality of poses,
    G, obtaining a plurality of robot end coordinates of the end of the robot in the base coordinate system of the robot under the plurality of poses, and
    H, determining, based on the plurality of workpiece end coordinates and the plurality of robot end coordinates, the coordinates of the robot TCP to be calibrated in the base coordinate system of the robot under one pose of the robot, and determining the relative relationship between the robot TCP and a default TCP located on the end of the robot.
  7. The calibration method according to claim 6, wherein the step F comprises:
    F1, detecting, from the plurality of captured images, a marker point arranged on the first workpiece to obtain a plurality of marker point coordinates of the marker point in the 3D camera coordinate system, and
    F2, determining, based on the plurality of marker point coordinates, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system.
  8. The calibration method according to claim 7, wherein the marker point has a characteristic color and/or a characteristic pattern.
  9. The calibration method according to claim 6, further comprising performing a hand-eye calibration process, wherein performing the hand-eye calibration process comprises the following steps:
    I, using the first camera to capture a first target image of a second workpiece to be processed by the robot, and using the second camera to capture a second target image of the second workpiece,
    J, determining, based on the first target image and the second target image, a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera,
    K, moving the end of the robot to a plurality of positions, determining a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot, and determining a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system,
    L, determining a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system using the first plurality of tool coordinates and the second plurality of tool coordinates, and
    M, determining a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system based on the first transformation relationship and the second transformation relationship.
  10. The calibration method according to claim 9, wherein the step J comprises:
    J1, detecting, from the first target image and the second target image, a plurality of feature points arranged on the second workpiece, to obtain a first plurality of feature coordinates of the plurality of feature points in the 2D camera coordinate system and a second plurality of feature coordinates in the 3D camera coordinate system, and
    J2, determining the first transformation relationship using the first plurality of feature coordinates and the second plurality of feature coordinates.
  11. The calibration method according to claim 10, wherein the step K comprises:
    for at least one position of the plurality of positions, translating to the at least one position under the one pose of the robot, and determining, based on the relative relationship between the robot TCP and the default TCP, at least one tool coordinate of the robot TCP in the base coordinate system of the robot.
  12. The calibration method according to claim 11, wherein the at least one position is at or near at least one of the plurality of feature points.
  13. A calibration device for a robot, the calibration device comprising a camera calibration unit, the camera calibration unit comprising:
    an environment capture module configured to capture an image of an environment and form a three-dimensional virtual object of the environment,
    a placement module configured to place a calibration object in a target area in the environment based on the captured image of the environment and the three-dimensional virtual object of the environment,
    a first camera capture module configured to capture an image of the calibration object using a first camera associated with the robot through relative movement between the first camera and the calibration object, wherein the first camera is a 2D camera, and
    a parameter determination module configured to determine parameters of the first camera used for calibration based on the image of the calibration object captured by the first camera.
  14. The calibration device according to claim 13, wherein the environment capture module is further configured to:
    photograph the environment from different positions and at different angles, so as to obtain at least two images for each scene,
    measure depth information of each pixel in the images by a triangulation method based on the stereo images, and
    form the three-dimensional virtual object of the environment based on the depth information and the two-dimensional information contained in the images.
  15. The calibration device according to claim 13, wherein the placement module is further configured to:
    detect, from the captured image of the environment, a visual marker arranged in the environment,
    determine the target area based on the detected visual marker,
    capture an image of the calibration object and form a three-dimensional virtual object of the calibration object,
    superimpose the three-dimensional virtual object of the calibration object on the three-dimensional virtual object of the environment for display as an augmented reality image, and
    move the calibration object so that the three-dimensional virtual object of the calibration object is located in the three-dimensional virtual object of the target area.
  16. The calibration device according to claim 15, wherein the visual marker has a characteristic color and/or a characteristic pattern.
  17. The calibration device according to claim 13, wherein, when the first camera is fixed outside the robot, the first camera capture module is further configured to:
    obtain a plurality of actual positions for placing the calibration object in the target area, the plurality of actual positions corresponding to a plurality of virtual positions in the three-dimensional virtual object of the environment,
    move the calibration object so that the three-dimensional virtual object of the calibration object is located at each of the plurality of virtual positions in turn, and
    use the first camera to capture images of the calibration object at the plurality of actual positions respectively.
  18. The calibration device according to claim 13, further comprising a TCP calibration unit, the TCP calibration unit comprising:
    a first workpiece capture module configured to use a second camera to capture a plurality of images of a first workpiece placed in a plurality of poses respectively, wherein the first workpiece is mounted on the end of the robot, and the second camera is a 3D camera fixed outside the robot,
    a workpiece coordinate determination module configured to determine, based on the plurality of captured images, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system of the second camera under the plurality of poses,
    a robot coordinate obtaining module configured to obtain a plurality of robot end coordinates of the end of the robot in the base coordinate system of the robot under the plurality of poses, and
    a TCP coordinate determination module configured to determine, based on the plurality of workpiece end coordinates and the plurality of robot end coordinates, the coordinates of the robot TCP to be calibrated in the base coordinate system of the robot under one pose of the robot, and to determine the relative relationship between the robot TCP and a default TCP located on the end of the robot.
  19. The calibration device according to claim 18, wherein the workpiece coordinate determination module is further configured to:
    detect, from the plurality of captured images, a marker point arranged on the first workpiece to obtain a plurality of marker point coordinates of the marker point in the 3D camera coordinate system, and
    determine, based on the plurality of marker point coordinates, a plurality of workpiece end coordinates of the end of the first workpiece in the 3D camera coordinate system.
  20. The calibration device according to claim 19, wherein the marker point has a characteristic color and/or a characteristic pattern.
  21. The calibration device according to claim 18, further comprising a hand-eye calibration unit, the hand-eye calibration unit comprising:
    a second workpiece capture module configured to use the first camera to capture a first target image of a second workpiece to be processed by the robot, and to use the second camera to capture a second target image of the second workpiece,
    a first transformation determination module configured to determine, based on the first target image and the second target image, a first transformation relationship between the 2D camera coordinate system of the first camera and the 3D camera coordinate system of the second camera,
    a tool coordinate determination module configured to move the end of the robot to a plurality of positions, determine a first plurality of tool coordinates of the robot TCP in the base coordinate system of the robot, and determine a second plurality of tool coordinates of the robot TCP in the 3D camera coordinate system,
    a second transformation determination module configured to determine a second transformation relationship between the base coordinate system of the robot and the 3D camera coordinate system using the first plurality of tool coordinates and the second plurality of tool coordinates, and
    a third transformation determination module configured to determine a third transformation relationship between the base coordinate system of the robot and the 2D camera coordinate system based on the first transformation relationship and the second transformation relationship.
  22. The calibration device according to claim 21, wherein the first transformation determination module is further configured to:
    detect, from the first target image and the second target image, a plurality of feature points arranged on the second workpiece, to obtain a first plurality of feature coordinates of the plurality of feature points in the 2D camera coordinate system and a second plurality of feature coordinates in the 3D camera coordinate system, and
    determine the first transformation relationship using the first plurality of feature coordinates and the second plurality of feature coordinates.
  23. The calibration device according to claim 22, wherein the tool coordinate determination module is further configured to:
    for at least one position of the plurality of positions, translate to the at least one position under the one pose of the robot, and determine, based on the relative relationship between the robot TCP and the default TCP, at least one tool coordinate of the robot TCP in the base coordinate system of the robot.
  24. The calibration device according to claim 23, wherein the at least one position is at or near at least one of the plurality of feature points.
  25. A computing device, the computing device comprising:
    a processor; and
    a memory for storing computer-executable instructions which, when executed, cause the processor to perform the method according to any one of claims 1-12.
  26. A computer-readable storage medium having computer-executable instructions stored thereon, the computer-executable instructions being for performing the method according to any one of claims 1-12.
  27. A computer program product tangibly stored on a computer-readable storage medium and comprising computer-executable instructions which, when executed, cause at least one processor to perform the method according to any one of claims 1-12.

Priority Applications (2)

- PCT/CN2020/117538 (WO2022061673A1), priority date 2020-09-24, filing date 2020-09-24: Calibration method and device for robot
- CN202080105042.5A (CN116157837A), priority date 2020-09-24, filing date 2020-09-24: Calibration method and device for robot

Applications Claiming Priority (1)

- PCT/CN2020/117538 (WO2022061673A1), priority date 2020-09-24, filing date 2020-09-24: Calibration method and device for robot

Publications (1)

- WO2022061673A1

Family ID: 80846046

Family Applications (1)

- PCT/CN2020/117538 (WO2022061673A1), priority date 2020-09-24, filing date 2020-09-24: Calibration method and device for robot

Country Status (2)

- CN (1): CN116157837A
- WO (1): WO2022061673A1

Patent Citations (6)

* Cited by examiner, † Cited by third party

- CN101186038A * (priority 2007-12-07, published 2008-05-28, 北京航空航天大学): Method for demarcating robot stretching hand and eye
- CN103400409A * (priority 2013-08-27, published 2013-11-20, 华中师范大学): 3D (three-dimensional) visualization method for coverage range based on quick estimation of attitude of camera
- CN104715479A * (priority 2015-03-06, published 2015-06-17, 上海交通大学): Scene reproduction detection method based on augmented virtuality
- CN107481288A * (priority 2017-03-31, published 2017-12-15, 触景无限科技(北京)有限公司): Method and apparatus for determining intrinsic and extrinsic parameters of a binocular camera
- JP2019052983A * (priority 2017-09-15, published 2019-04-04, キヤノン株式会社): Calibration method and calibrator
- CN108972559A * (priority 2018-08-20, published 2018-12-11, 上海嘉奥信息科技发展有限公司): Hand and eye calibrating method based on infrared stereoscopic vision positioning system and mechanical arm

Cited By (11)

* Cited by examiner, † Cited by third party

- CN114833822A * (priority 2022-03-31, published 2022-08-02, 西安航天精密机电研究所): Rapid hand-eye calibration method for robot
- CN114833822B * (priority 2022-03-31, published 2023-09-19, 西安航天时代精密机电有限公司): Rapid hand-eye calibration method for robot
- CN114770502A * (priority 2022-04-25, published 2022-07-22, 深圳市超准视觉科技有限公司): Quick calibration method for tail end pose of mechanical arm tool
- CN115284297A * (priority 2022-08-31, published 2022-11-04, 深圳前海瑞集科技有限公司): Workpiece positioning method, robot and robot operation method
- CN115284297B * (priority 2022-08-31, published 2023-12-12, 深圳前海瑞集科技有限公司): Workpiece positioning method, robot, and robot working method
- CN116524022A * (priority 2023-04-28, published 2023-08-01, 北京优酷科技有限公司): Offset data calculation method, image fusion device and electronic equipment
- CN116524022B * (priority 2023-04-28, published 2024-03-26, 神力视界(深圳)文化科技有限公司): Offset data calculation method, image fusion device and electronic equipment
- CN116222384A * (priority 2023-05-08, published 2023-06-06, 成都飞机工业(集团)有限责任公司): Omnidirectional measurement calibration method, system, equipment and medium
- CN116222384B * (priority 2023-05-08, published 2023-08-04, 成都飞机工业(集团)有限责任公司): Omnidirectional measurement calibration method, system, equipment and medium
- CN116423526A * (priority 2023-06-12, published 2023-07-14, 上海仙工智能科技有限公司): Automatic calibration method and system for mechanical arm tool coordinates and storage medium
- CN116423526B * (priority 2023-06-12, published 2023-09-19, 上海仙工智能科技有限公司): Automatic calibration method and system for mechanical arm tool coordinates and storage medium

Also Published As

- CN116157837A (published 2023-05-23)

Legal Events

- 121 (Ep: the EPO has been informed by WIPO that EP was designated in this application); ref document number: 20954525; country of ref document: EP; kind code of ref document: A1
- NENP (non-entry into the national phase); ref country code: DE
- 122 (Ep: PCT application non-entry in European phase); ref document number: 20954525; country of ref document: EP; kind code of ref document: A1