WO2020024178A1 - Hand-eye calibration method, system, and computer storage medium - Google Patents

Hand-eye calibration method, system, and computer storage medium Download PDF

Info

Publication number
WO2020024178A1
Authority
WO
WIPO (PCT)
Prior art keywords
coordinate system
camera device
robot
view
origin
Prior art date
Application number
PCT/CN2018/098119
Other languages
English (en)
French (fr)
Inventor
王少飞
Original Assignee
深圳配天智能技术研究院有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳配天智能技术研究院有限公司 filed Critical 深圳配天智能技术研究院有限公司
Priority to CN201880088580.0A priority Critical patent/CN111801198B/zh
Priority to PCT/CN2018/098119 priority patent/WO2020024178A1/zh
Publication of WO2020024178A1 publication Critical patent/WO2020024178A1/zh

Links

Images

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/80Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration

Definitions

  • the present application relates to the field of robotics, and in particular, to a hand-eye calibration method, system, and computer storage medium.
  • with the development of artificial intelligence technology and changing social needs, robots have been widely used in many industries. In industrial applications, robots can perform various functions through their own power and control capabilities. An important part supporting this technology is the robot's vision system: using images acquired by the vision system, the robot can control actuators to perform operations such as machining and installation.
  • Hand-eye calibration is the first step for all robots to cooperate with machine vision applications. Only when the robot and the vision system can convert their own coordinate system to the same world coordinate system can the robot and the vision system work together.
  • the existing method is generally to let the robot and the vision system each calibrate the transformation between its own coordinate system and the world coordinate system.
  • this method requires the operator to manually and precisely move the robotic arm to at least three specific positions for calibration; the accuracy is unstable, and reaching the specific positions precisely makes the calibration operation complicated.
  • the technical problem solved by this application is to provide a hand-eye calibration method, system, and computer storage medium.
  • a camera device mounted on the robot acquires pictures of a reference object along different directions to perform hand-eye calibration, which greatly simplifies the calibration process and improves calibration accuracy.
  • the present application provides a hand-eye calibration method, which includes the following steps: determining a reference object coordinate system, the origin of a robot rotation axis coordinate system, and the origin of a camera device coordinate system; and acquiring multiple first views of the reference object along a first direction through the camera device.
  • the camera device is mounted on the rotation axis of the robot, the movement of the camera device is controlled by the rotation axis, and the camera device moves in translation along the first direction, the first direction being parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, with at most 3 of the first views having shooting planes in the same plane;
  • multiple second views of the reference object are acquired along a second direction through the camera device, with the camera device translating along the second direction;
  • the second direction is non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; a hand-eye calibration result is obtained based on the first views, the second views, and the motion information of the camera device.
  • the present application also provides a hand-eye calibration system, including: a camera device having a camera device coordinate system; a robot equipped with a mechanical forearm, the forearm carrying a rotation axis to which the camera device is connected;
  • the rotation axis controls the camera device to perform rigid motion
  • the robot has a robot rotation axis coordinate system
  • the reference object the reference object has a reference object coordinate system
  • the processing unit is configured to acquire multiple first views of the reference object along a first direction through the camera device; the camera device moves in translation along the first direction and captures images of the reference object, the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, and at most 3 of the first views have shooting planes in the same plane;
  • multiple second views of the reference object are acquired along a second direction through the camera device; the camera device moves in translation along the second direction and captures images of the reference object, the second direction being non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; a hand-eye calibration result is obtained based on the first views, the second views, and the motion information of the camera device.
  • the present application also provides a computer storage medium storing program data which, when executed by a processor, performs the following steps: determining a reference object coordinate system, the origin of a robot rotation axis coordinate system, and the origin of a camera device coordinate system; acquiring multiple first views of the reference object along a first direction through the camera device, wherein the camera device is mounted on the robot's rotation axis, the movement of the camera device is controlled by the rotation axis, and the camera device moves in translation along the first direction.
  • the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, and at most 3 of the first views have shooting planes in the same plane.
  • the beneficial effect of the present application is that, unlike the prior art, the present application does not need calibration points to be set in advance or the camera device to be moved precisely to preset positions; it only needs to control the camera device to move around the reference object to acquire multiple first views and second views of the reference object,
  • and it obtains the transformation relationships among the camera device coordinate system, the reference object coordinate system, and the rotation axis coordinate system from the first and second views, reducing the workload and the calibration steps, achieving high calibration efficiency, and guaranteeing the accuracy of the hand-eye calibration.
  • FIG. 1 is a schematic flowchart of an embodiment of a hand-eye calibration method according to the present application.
  • FIG. 2 is a schematic diagram of coordinate systems in an embodiment of the hand-eye calibration method shown in FIG. 1;
  • FIG. 3 is a schematic diagram of a panning of a camera device in the hand-eye calibration method shown in FIG. 1;
  • FIG. 4 is a schematic diagram of translation and rotation of a camera device in the hand-eye calibration method shown in FIG. 1;
  • FIG. 5 is a schematic structural diagram of an embodiment of a hand-eye calibration system according to the present application.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer storage medium of the present application.
  • FIG. 1 is a schematic flowchart of an embodiment of a robot hand-eye calibration method according to the present application.
  • the robot hand-eye calibration method of this embodiment includes the following steps:
  • 101: Determine the reference object coordinate system, the origin of the robot rotation axis coordinate system, and the origin of the camera device coordinate system.
  • FIG. 2 is a schematic diagram of each coordinate system in an embodiment of the hand-eye calibration method shown in FIG. 1.
  • a robot is equipped with a mechanical forearm
  • a mechanical forearm is equipped with a rotation axis 203
  • a camera device 202 is installed on the rotation axis 203.
  • the rotation shaft 203 is used to control rigid movements of the imaging device 202, such as translation and rotation.
  • the reference object is a plane 201 including a calibration coordinate system, that is, the reference object coordinate system is a calibration coordinate system, and the calibration coordinate system may be a world coordinate system.
  • the plane 201 includes the X 1 axis and the Y 1 axis of the reference object coordinate system, and the Z 1 axis is preferably perpendicular to the plane 201.
  • the optical center of the imaging device 202 can be used as the origin of the imaging device coordinate system, and one of the joint points of the rotation axis 203 can be defined as the origin of the robot rotation axis coordinate system.
  • the intersection of X 2 axis, Y 2 axis, and Z 2 axis is the origin of the camera system
  • the intersection of X 3 axis, Y 3 axis, and Z 3 axis is the origin of the robot rotation axis coordinate system.
  • the rotation axis 203 of the robot drives the camera device 202 to move.
  • the camera device 202 acquires an image of a reference object during the movement.
  • the projection relationship between the coordinate U of the image in the camera device coordinate system and the coordinate P in the reference object coordinate system is:
  • U ≈ K[R t]·P
  • R is the rotation relationship matrix between the camera coordinate system and the reference object coordinate system
  • t is the motion vector (mainly the translation vector in this application)
  • K is the internal parameter matrix of the camera device, K = [f x, s, x 0; 0, f y, y 0; 0, 0, 1]
  • f x and f y are scale factors of the reference object on the X 2 axis and Y 2 axis of the camera device coordinate system
  • x 0 and y 0 are the coordinates of the intersection point of the imaging plane of the camera device with its optical axis
  • s is a preset distortion factor.
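  The projection relationship and the internal parameter matrix described above can be sketched numerically. The following is a minimal illustration; all numeric values for f x, f y, x 0, y 0, s, R, and t are made up for the example, not taken from the patent:

```python
import numpy as np

# Intrinsic parameter matrix K of the camera device, using the symbols
# defined above: fx, fy are scale factors, (x0, y0) is the intersection of
# the imaging plane with the optical axis, s is the distortion (skew) factor.
# All numeric values are illustrative.
fx, fy, x0, y0, s = 800.0, 800.0, 320.0, 240.0, 0.0
K = np.array([[fx, s, x0],
              [0., fy, y0],
              [0., 0., 1.]])

# Extrinsics [R t]: rotation R and translation t from the reference object
# coordinate system to the camera device coordinate system (also made up).
R = np.eye(3)                  # camera axes aligned with the reference plane
t = np.array([0.0, 0.0, 2.0])  # camera 2 units in front of the plane

def project(P):
    """Project a 3-D point P (reference object coordinates) to pixel
    coordinates U via U ~ K [R t] P (homogeneous, up to scale)."""
    Pc = R @ P + t             # point in camera device coordinates
    u = K @ Pc                 # homogeneous pixel coordinates
    return u[:2] / u[2]        # perspective division

U = project(np.array([0.1, 0.0, 0.0]))
```

  With these illustrative values, a point 0.1 units along the X 1 axis of the reference plane lands 40 pixels to the right of the principal point.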
  • 102: Obtain multiple first views of the reference object along a first direction through the camera device; the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, the translation is a rigid-body motion, and at most 3 of the first views have shooting planes in the same plane.
  • FIG. 3 is a schematic diagram of the panning of the camera device in the hand-eye calibration method shown in FIG. 1.
  • the camera device sequentially acquires five first views of the reference object at five photographing points (301, 302, 303, 304, and 305) along the first direction.
  • the photographing points do not need to be set in advance as in the prior art; it is only required that at most three of the five first views have shooting planes in the same plane, so that the coordinates of each photographing point can be expressed through the acquired first views,
  • and that at most 3 photographing points have repeated x-axis or y-axis coordinates, so that the first views can more fully indicate the spatial position of the reference object and improve the accuracy of the calibration.
  • to ensure that the reference object coordinates can be accurately expressed through the first views, the number of first views captured by the camera device in the present application is at least five, and to improve the accuracy of the result, at most 3 of the first views acquired by the camera device may have shooting planes in the same plane.
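  The constraints on the photographing points stated above (at least five first views, at most three coplanar shooting planes, at most three repeated x or y coordinates) can be encoded as a simple check. This sketch treats each photographing point as an (x, y, z) tuple and uses the z coordinate as a stand-in for the shooting plane, which is an assumption for illustration only:

```python
from collections import Counter

def valid_photo_points(points):
    """Check the first-direction shot constraints: at least 5 points,
    and along each axis at most 3 points may share a coordinate
    (z encodes the shooting plane for planes parallel to the
    reference plane -- an illustrative assumption)."""
    if len(points) < 5:
        return False                   # at least 5 first views are required
    for axis in range(3):
        counts = Counter(round(p[axis], 9) for p in points)
        if max(counts.values()) > 3:   # more than 3 repeats violates the rule
            return False
    return True

# Valid: only 3 points share z = 1; no axis repeats more than 3 times.
ok = valid_photo_points([(0, 0, 1), (1, 0, 1), (0, 1, 1), (2, 2, 2), (3, 3, 3)])
# Invalid: 4 shooting planes coincide (z = 1 appears four times).
bad = valid_photo_points([(0, 0, 1), (1, 0, 1), (0, 1, 1), (2, 2, 1), (3, 3, 3)])
```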
  • 103: Obtain multiple second views of the reference object along a second direction through the camera device; the second direction intersects the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, that is, it is non-parallel to that line.
  • At least two second views of the reference object need to be obtained through the camera device.
  • the following description is made by taking two second views of the reference object obtained by the imaging device as an example.
  • FIG. 4 is a schematic diagram of translation and rotation of the camera device in the hand-eye calibration method shown in FIG. 1.
  • the rotation axis is used to drive the camera device to pan and rotate.
  • the camera device sequentially acquires two second views of the reference object 403 at the two photographing points (401, 402) along the second direction.
  • to reduce computational error, the rotation angle can be controlled between 30° and 35°, that is, the angle between the second direction and the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system is 30° to 35°, where the second direction is a direction intersecting that line.
  • in the coordinate transformation process in three-dimensional space, the transformation relationship between the robot rotation axis coordinate system and the camera device coordinate system needs to be determined first.
  • to simplify the calculation, one of the joint points of the robot's rotation axis is determined to be the origin of the robot rotation axis coordinate system.
  • assuming that the hand-eye relationship between the robot rotation axis coordinate system and the camera device coordinate system is G, A is the transformation between the robot rotation axis coordinate systems of two adjacent motions, and B is the transformation between the camera device coordinate systems of two adjacent motions,
  • the following relationship is obtained: A·G = G·B
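  In the hand-eye calibration literature this constraint is usually written A·X = X·B over homogeneous 4×4 transforms. The sketch below only verifies the identity for made-up transforms; it constructs A from a hypothetical G and B rather than solving for G, which in practice requires dedicated methods such as Tsai–Lenz:

```python
import numpy as np

def rot_z(theta):
    """Rotation by theta radians about the z axis."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.], [s, c, 0.], [0., 0., 1.]])

def homogeneous(R, t):
    """Assemble a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical hand-eye transform G (rotation-axis frame -> camera frame).
G = homogeneous(rot_z(0.3), [0.05, 0.02, 0.10])

# B: motion of the camera device between two adjacent shots (made up).
B = homogeneous(rot_z(0.5), [0.1, 0.0, 0.0])

# The corresponding robot rotation-axis motion A must satisfy A G = G B,
# hence A = G B G^{-1}.
A = G @ B @ np.linalg.inv(G)
```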
  • the homography matrix is obtained through the first view; the scaling factors of the spatial points on the reference object in the X 2 axis and Y 2 axis of the camera coordinate system and the parameter matrix in the camera are obtained through the second view.
  • a first view is selected, and a spatial point S on the reference object in that first view is chosen at random; the coordinate U of the spatial point in the camera device coordinate system is expressed through any three non-collinear spatial points in the first view,
  • where a, b, and c are the coordinates of the three non-collinear spatial points in the camera device coordinate system, and the resulting expression represents the position of the spatial point S on the two-dimensional image plane of the camera device.
  • the coordinate P of the spatial point S in the reference object coordinate system is expressed correspondingly, where x 1 and y 1 are unit vectors along the X 1 axis and the Y 1 axis of the reference object coordinate system.
  • the homography matrix H can be obtained by transforming the above formula:
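  The text above does not reproduce the full formula for H. A standard, generic way to estimate a plane-to-image homography from point correspondences is the direct linear transform (DLT); the sketch below is that generic method run on synthetic data, not necessarily the exact construction used in the patent:

```python
import numpy as np

def estimate_homography(P_ref, U_img):
    """Estimate the 3x3 homography H mapping reference-plane points
    (x1, y1) to image points (u, v) via the DLT: each correspondence
    contributes two linear equations in the 9 entries of H."""
    A = []
    for (x, y), (u, v) in zip(P_ref, U_img):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The solution is the right singular vector with the smallest singular
    # value (at least 4 correspondences are required).
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]         # fix scale so H[2,2] = 1

# Synthetic check: points mapped by a known homography are recovered.
H_true = np.array([[1.2, 0.1, 5.0],
                   [0.0, 0.9, -3.0],
                   [0.01, 0.0, 1.0]])
P_ref = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 3)]
U_img = []
for x, y in P_ref:
    w = H_true @ np.array([x, y, 1.0])
    U_img.append((w[0] / w[2], w[1] / w[2]))
H_est = estimate_homography(P_ref, U_img)
```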
  • the distance from the optical center of the camera device to the reference object is D 1, the length of a coordinate segment on the reference object is D 2, and the corresponding pixel coordinate length in the camera image is D 3; from the optical geometric relationship of the camera device, where f is the focal length of the camera device:
  • f x = S x D 1 D 2 / D 3 and f y = S y D 1 D 2 / D 3, where S x and S y are scale factors
  • the internal parameter matrices K 1 and K 2 of the camera device are obtained by the above method.
  • this application does not need calibration points to be set in advance or the camera device to be moved precisely to preset positions;
  • it only needs to control the camera device to move around the reference object to obtain multiple first and second views of the reference object, and it obtains the transformation relationships among the camera device coordinate system, the reference object coordinate system, and the rotation axis coordinate system from the first and second views,
  • reducing the workload and the calibration steps, with high calibration efficiency and accuracy.
  • noise, movement accuracy, and visual calibration accuracy during the robot's operation will cause calibration errors; in particular, noise during hand-eye positioning causes large errors in the camera device parameters K and the translation vector t, but these errors have little effect on the camera device parameters K and the translation vector t derived from the image information.
  • since the method provided in the embodiment of the present application is mainly based on image-information processing, it has better robustness; all of these error factors can be treated as noise sources, and their effects can be minimized by modifying parameters.
  • the imaging device may include a viewfinder, an image analyzer, and an image processor.
  • the viewfinder is used to obtain an image of the reference object.
  • the image analyzer acquires the reference object's coordinate information in the camera device coordinate system.
  • the image processor can be used to complete the conversion calculation between the coordinate system of the rotation axis of the robot, the coordinate system of the camera device, and the coordinate system of the reference object according to the image of the reference object.
  • FIG. 5 is a schematic structural diagram of an embodiment of a hand-eye calibration system of the present application.
  • the hand-eye calibration system includes a camera device 501, a reference object 504, a processing unit 505, and a robot (not labeled in the figure).
  • the robot is equipped with a mechanical forearm 502, the mechanical forearm 502 is equipped with a rotation shaft 503, and the camera device 501 is mounted on the rotation shaft 503.
  • the camera device 501 is electrically connected to the robot, and the rotation shaft 503 is used to control the camera device 501 to perform rigid movements, such as translation and rotation.
  • the mechanical forearm 502 cooperates with the rotation shaft 503 to control the imaging device 501 to perform a rigid motion.
  • the imaging device 501 has an imaging device coordinate system, and the optical center of the imaging device 501 can be used as the origin of the imaging device coordinate system.
  • the robot has a robot rotation axis coordinate system, and a joint point in the rotation axis 503 can be defined as the origin of the robot rotation axis coordinate system.
  • the reference object 504 is a plane including a calibration coordinate system, that is, the reference object coordinate system is a calibration coordinate system, and the calibration coordinate system may be a world coordinate system.
  • the robot cooperates with the imaging device 501 to take a picture of the reference object 504 to implement hand-eye calibration. Specific steps are as follows:
  • the rotation axis 503 of the robot drives the camera device 501 to move.
  • the camera device 501 acquires an image of the reference object 504 during the movement.
  • the projection relationship between the coordinate U of the image in the camera device coordinate system and the coordinate P in the reference object coordinate system is: U ≈ K[R t]·P
  • R is a rotation relationship matrix between the coordinate system of the imaging device and the coordinate system of the reference object
  • t is a motion vector (mainly a translation vector in this application)
  • K is an internal parameter matrix of the imaging device
  • f x and f y are scale factors of the reference object on the X 2 axis and Y 2 axis of the coordinate system of the imaging device
  • x 0 and y 0 are the coordinates of the intersection point of the imaging plane of the camera device with its optical axis
  • s is a preset distortion factor.
  • the processing unit 505 is communicatively connected to the imaging device 501.
  • the processing unit 505 is integrated with the imaging device 501.
  • the processing unit 505 may alternatively be integrated with the robot, or be independent of the camera device 501 and the robot; it is only necessary to ensure that the processing unit 505 and the camera device 501 can communicate with each other, and there is no specific limitation on its location.
  • when the processing unit 505 is integrated with the camera device 501, it may specifically be integrated with the processor of the camera device 501; when the processing unit 505 is integrated with the robot, it may specifically be integrated with the processor of the robot.
  • the processing unit can simultaneously serve as the control unit of the robot rotation axis and the imaging device, and communicates with the robot and the imaging device to control the rotation axis to drive the movement of the imaging device, and control the imaging device to shoot.
  • alternatively, the processing unit may not serve as the control unit of the robot's rotation axis and camera device; instead, the robot's own control unit controls the movement of the rotation axis according to a preset program, thereby driving the movement of the camera device;
  • the predetermined program controls the shooting of the camera device, and the processing unit only acquires the first view and the second view obtained by the camera device and the motion information of the camera device to complete the hand-eye calibration calculation.
  • the robot can drive the camera device 501 to translate within the reference object coordinate system of the reference object 504, and the processing unit 505 obtains multiple first views of the reference object 504 along the first direction through the camera device 501; the camera device 501 moves in translation along the first direction and captures images of the reference object.
  • the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system.
  • the translation is a rigid-body motion, and at most 3 of the first views have shooting planes in the same plane.
  • the processing unit 505 sequentially obtains five first views of the reference object 504 at five photographing points of the camera device 501 along the first direction; the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system.
  • the photographing points do not need to be set in advance as in the prior art; it is only required that at most three of the five first views have shooting planes in the same plane, so that the coordinates of each photographing point can be expressed through the acquired first views,
  • and that at most 3 photographing points have repeated x-axis or y-axis coordinates, so that the first views can more fully indicate the spatial position of the reference object and improve the accuracy of the calibration.
  • the number of first views obtained by the processing unit 505 through the camera device 501 in the present application is at least five, and to improve the accuracy of the results,
  • at most three of the first views acquired by the camera device 501 during the panning process may have shooting planes located in the same plane.
  • the processing unit 505 is also required to acquire the second views of the multiple reference objects 504 through the camera device 501 in the second direction.
  • the camera device 501 moves in translation in a second direction and captures a reference object image.
  • the second direction is a direction that intersects the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, that is, it is non-parallel to that line.
  • the camera device 501 in order to accurately obtain the hand-eye relationship between the robot rotation axis coordinate system and the camera device coordinate system, at least two second views of the reference object need to be obtained through the camera device 501.
  • the following description uses the camera device 501 to obtain two second views of the reference object 504 as an example for description.
  • the robot drives the camera device 501 to pan and rotate through the rotation axis 503, and the camera device 501 sequentially acquires two second views of the reference object 504 at two photographing points along the second direction.
  • to reduce computational error, the rotation angle is controlled between 30° and 35°, that is, the angle between the second direction and the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system is 30° to 35°.
  • the second direction is a direction intersecting the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system.
  • the transformation relationship between the coordinate system of the rotation axis of the robot and the coordinate system of the camera device needs to be determined first.
  • it is determined that one of the joint points of the rotation axis 503 of the robot is the origin of the robot rotation axis coordinate system.
  • assuming that the hand-eye relationship between the robot rotation axis coordinate system and the camera device coordinate system is G, A is the transformation between the robot rotation axis coordinate systems of two adjacent motions, and B is the transformation between the camera device coordinate systems of two adjacent motions,
  • the following relationship is obtained: A·G = G·B
  • the processing unit 505 can obtain the hand-eye calibration result based on the first view, the second view, and the motion information of the imaging device 501.
  • the processing unit 505 obtains the homography matrix from the first views; obtains, from the second views, the scale factors of spatial points on the reference object 504 on the X-axis and Y-axis of the camera device coordinate system and the internal
  • parameter matrix of the camera device, to construct a measurement matrix; and combines the movement and rotation of the camera device 501 to obtain the hand-eye calibration result.
  • the processing unit 505 is integrated with the imaging device 501, and the imaging device 501 may include a viewfinder and an image analyzer.
  • the viewfinder is in communication with the image analyzer, and the image analyzer is in communication with the processing unit.
  • the viewfinder is used to obtain an image including a reference object, and the image includes the first view and the second view described above.
  • the image analyzer is used to obtain the reference object's coordinate information in the camera device coordinate system.
  • the image processor is communicatively linked with the viewfinder and the image analyzer, and uses the images collected by the viewfinder and the information obtained by the image analyzer to complete the conversion calculation between the robot rotation axis coordinate system, the camera device coordinate system, and the reference object coordinate system.
  • the application does not need calibration points to be set in advance or the camera device to be moved precisely to preset positions;
  • it only needs to control the camera device to move around the reference object to obtain multiple first and second views of the reference object, and it obtains the transformation relationships among the camera device coordinate system, the reference object coordinate system, and the rotation axis coordinate system from the first and second views,
  • reducing the workload and the calibration steps, achieving high calibration efficiency, and guaranteeing the accuracy of the hand-eye calibration.
  • noise, movement accuracy, and visual calibration accuracy during the robot's operation will cause calibration errors; in particular, noise during hand-eye positioning causes large errors in the internal parameter matrix K and the translation vector t of the camera device, but these errors have little effect on the internal parameter matrix K and translation vector t derived from the image information.
  • since the method provided in the embodiment of the present application is mainly based on image-information processing, it has better robustness; these error factors can all be treated as noise sources, and their effects can be minimized by modifying parameters.
  • FIG. 6 is a schematic structural diagram of an embodiment of a computer storage medium of the present application.
  • the computer storage medium 60 includes program data 61, and when the program data 61 is executed, the hand-eye calibration method described in the above embodiment is implemented.
  • the hand-eye calibration method, system, and computer storage medium of the present application only require the reference object to be in the field of view of the camera device when the camera device is at a photographing position; the camera device does not need to be precisely positioned and moved to designated calibration points, which greatly simplifies the calibration process with stable accuracy.

Abstract

A hand-eye calibration method, comprising the following steps: determining a reference object coordinate system, the origin of a robot rotation axis coordinate system, and the origin of a camera device coordinate system; acquiring, by the camera device, multiple first views of a reference object along a first direction, the first direction being parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; acquiring, by the camera device, multiple second views of the reference object along a second direction, the second direction being non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; and obtaining a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.

Description

Hand-eye calibration method, system, and computer storage medium 【Technical Field】
The present application relates to the field of robotics, and in particular to a hand-eye calibration method, system, and computer storage medium.
【Background】
With the development of artificial intelligence technology and changing social needs, robots have been widely used in many industries. In the field of industrial applications, robots can perform various functions through their own power and control capabilities; an important part supporting this technology is the robot's vision system. Using images acquired by the vision system, the robot can control actuators to perform operations such as machining and installation.
Hand-eye calibration is the first step for any robot working with machine vision: only when both the robot and the vision system can transform their own coordinate systems into the same world coordinate system can the robot and the vision system work together.
The existing practice is generally to let the robot and the vision system each calibrate the transformation of its own coordinate system relative to the world coordinate system. This approach requires an operator to manually and precisely move the robotic arm to at least three specific positions for calibration; its accuracy is unstable, and reaching the specific positions precisely makes the calibration operation complicated.
【Summary of the Invention】
The technical problem solved by the present application is to provide a hand-eye calibration method, system, and computer storage medium, in which a camera device mounted on the robot acquires pictures of a reference object along different directions to perform hand-eye calibration, greatly simplifying the calibration procedure and improving calibration accuracy.
To solve the above problem, the present application provides a hand-eye calibration method, including the following steps: determining a reference object coordinate system, the origin of a robot rotation axis coordinate system, and the origin of a camera device coordinate system; acquiring multiple first views of the reference object along a first direction through the camera device, wherein the camera device is mounted on the robot's rotation axis, the movement of the camera device is controlled by the rotation axis, the camera device moves in translation along the first direction, the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, and at most 3 of the first views have their shooting planes in the same plane; acquiring multiple second views of the reference object along a second direction through the camera device, wherein the camera device moves in translation along the second direction, and the second direction is non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; and obtaining a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
To solve the above problem, the present application also provides a hand-eye calibration system, including: a camera device having a camera device coordinate system; a robot equipped with a mechanical forearm, the forearm carrying a rotation axis to which the camera device is connected, the rotation axis controlling the camera device to perform rigid motion, the robot having a robot rotation axis coordinate system; a reference object having a reference object coordinate system; and a processing unit configured to acquire multiple first views of the reference object along a first direction through the camera device, wherein the camera device moves in translation along the first direction and captures images of the reference object, the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, and at most 3 of the first views have their shooting planes in the same plane; to acquire multiple second views of the reference object along a second direction through the camera device, wherein the camera device moves in translation along the second direction and captures images of the reference object, and the second direction is non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; and to obtain a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
To solve the above problem, the present application further provides a computer storage medium storing program data which, when executed by a processor, performs the following steps: determining a reference object coordinate system, the origin of a robot rotation axis coordinate system, and the origin of a camera device coordinate system; acquiring multiple first views of the reference object along a first direction through the camera device, wherein the camera device is mounted on the robot's rotation axis, the movement of the camera device is controlled by the rotation axis, the camera device moves in translation along the first direction, the first direction is parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system, and at most 3 of the first views have their shooting planes in the same plane; acquiring multiple second views of the reference object along a second direction through the camera device, wherein the camera device moves in translation along the second direction, and the second direction is non-parallel to the line connecting the origin of the robot rotation axis coordinate system to the origin of the camera device coordinate system; and obtaining a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
The beneficial effect of the present application is that, unlike the prior art, the present application does not need calibration points to be set in advance or the camera device to be moved precisely to preset positions; it only needs to control the camera device to move around the reference object to acquire multiple first views and second views of the reference object, and it obtains the transformation relationships among the camera device coordinate system, the reference object coordinate system, and the rotation axis coordinate system from those views, reducing the workload and the calibration steps, achieving high calibration efficiency, and guaranteeing the accuracy of the hand-eye calibration.
【Brief Description of the Drawings】
FIG. 1 is a schematic flowchart of an embodiment of the hand-eye calibration method of the present application;
FIG. 2 is a schematic diagram of the coordinate systems in an embodiment of the hand-eye calibration method shown in FIG. 1;
FIG. 3 is a schematic diagram of the translation of the camera device in the hand-eye calibration method shown in FIG. 1;
FIG. 4 is a schematic diagram of the translation and rotation of the camera device in the hand-eye calibration method shown in FIG. 1;
FIG. 5 is a schematic structural diagram of an embodiment of the hand-eye calibration system of the present application;
FIG. 6 is a schematic structural diagram of an embodiment of the computer storage medium of the present application.
[Detailed Description]
Please refer to FIG. 1, which is a schematic flowchart of an embodiment of the robot hand-eye calibration method of the present application. The robot hand-eye calibration method of this embodiment comprises the following steps:
101: Determine the reference-object coordinate system, the origin of the robot rotation-axis coordinate system, and the origin of the camera-device coordinate system.
FIG. 2 is a schematic diagram of the coordinate systems in the embodiment of the hand-eye calibration method shown in FIG. 1. In this embodiment, a mechanical forearm is mounted on the robot, a rotation axis 203 is mounted on the mechanical forearm, and a camera device 202 is mounted on the rotation axis 203. The rotation axis 203 is used to control rigid motion of the camera device 202, such as translation and rotation. The reference object is a plane 201 containing a calibration coordinate system; that is, the reference-object coordinate system is the calibration coordinate system, which may be the world coordinate system. The plane 201 contains the X1 and Y1 axes of the reference-object coordinate system, and the Z1 axis is preferably perpendicular to the plane 201. The optical center of the camera device 202 may be taken as the origin of the camera-device coordinate system, and one of the joint points of the rotation axis 203 may be defined as the origin of the robot rotation-axis coordinate system. For example, the intersection of the X2, Y2, and Z2 axes is the origin of the camera-device coordinate system, and the intersection of the X3, Y3, and Z3 axes is the origin of the robot rotation-axis coordinate system.
In the present application, the rotation axis 203 of the robot drives the camera device 202 to move, and the camera device 202 acquires images of the reference object during the motion. The projection relationship between the coordinates U of such an image in the camera-device coordinate system and the coordinates P in the reference-object coordinate system is:
U≈K[R t]·P
where R is the rotation matrix between the camera-device coordinate system and the reference-object coordinate system; t is the motion vector (mainly a translation vector in this application); and K is the intrinsic parameter matrix of the camera device,
    K = [ f_x   s   x_0
           0   f_y  y_0
           0    0    1  ]
where f_x and f_y are the scale factors of the reference object on the X2 and Y2 axes of the camera-device coordinate system, (x_0, y_0) is the intersection of the photographing plane of the camera device with its optical axis, and s is a preset distortion (skew) factor.
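The projection model above can be exercised numerically. A minimal sketch follows; the intrinsic and extrinsic values (f_x, f_y, x_0, y_0, s, R, t) are illustrative assumptions, not values from the patent:

```python
import numpy as np

# Illustrative intrinsic matrix K: f_x, f_y are the scale factors on the
# camera X2/Y2 axes, (x_0, y_0) the principal point, s the skew factor.
K = np.array([[800.0, 0.1, 320.0],
              [0.0, 810.0, 240.0],
              [0.0, 0.0, 1.0]])

# Illustrative extrinsics [R | t]: rotation from the reference-object
# frame to the camera frame, plus a translation t.
R = np.eye(3)
t = np.array([[0.0], [0.0], [2.0]])
Rt = np.hstack([R, t])

def project(P_world):
    """Project a homogeneous world point P via U ~ K [R t] P."""
    U = K @ Rt @ P_world        # 3-vector, defined only up to scale
    return U[:2] / U[2]         # normalize to pixel coordinates
```

With these numbers, the reference-object origin (0, 0, 0, 1) lands on the principal point (320, 240), since R is the identity and t is purely along the optical axis.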
102: Acquire, through the camera device, multiple first views of the reference object along a first direction, wherein the first direction is parallel to the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system, the translation is a rigid-body motion, and at most 3 of the first views have photographing planes lying in the same plane.
Specifically, as shown in FIG. 3, which is a schematic diagram of the translation of the camera device in the hand-eye calibration method shown in FIG. 1, the camera device acquires 5 first views of the reference object 306 in sequence at 5 photographing points (301, 302, 303, 304, and 305) along the first direction, where the first direction is parallel to the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system. Unlike the prior art, the photographing points need not be preset; it is only necessary that at most 3 of the 5 first views have photographing planes in the same plane, so that the coordinates of the photographing points can be expressed from the acquired first views, and that at most 3 photographing points share a repeated x-axis or y-axis coordinate, so that the first views indicate the spatial position of the reference object more comprehensively and improve calibration accuracy. In the above embodiment, to ensure that the reference-object coordinates can be accurately expressed from the first views, at least 5 first views are taken by the camera device, and to improve the accuracy of the result, at most 3 of the first views acquired during the translation may have photographing planes lying in the same plane.
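The "at most 3 photographing planes in one plane" condition can be checked numerically before calibration. A small sketch follows; the point sets and the tolerance handling below are invented for illustration:

```python
import numpy as np
from itertools import combinations

def max_coplanar(points, tol=1e-9):
    """Largest number of the given 3-D points lying in a single plane."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    best = min(n, 3)                     # any 3 points are trivially coplanar
    for triple in combinations(range(n), 3):
        a, b, c = pts[list(triple)]
        normal = np.cross(b - a, c - a)
        norm = np.linalg.norm(normal)
        if norm < tol:
            continue                     # degenerate (collinear) triple
        # distance of every point to the plane through a, b, c
        d = np.abs((pts - a) @ normal) / norm
        best = max(best, int(np.sum(d < tol)))
    return best

# Five photographing points with four in the z = 0 plane: violates the rule.
bad = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0), (0, 0, 1)]
# Five points with at most three in any one plane: acceptable.
good = [(0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 3), (2, 0, 7)]
```

A set of candidate photographing points would pass this check when `max_coplanar(points) <= 3`.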
103: Acquire, through the camera device, multiple second views of the reference object along a second direction, wherein the second direction is a direction intersecting, i.e., non-parallel to, the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system.
In this embodiment, to accurately obtain the hand-eye relationship between the robot rotation-axis coordinate system and the camera-device coordinate system, at least two second views of the reference object must be acquired by the camera device. The following description takes the acquisition of two second views of the reference object as an example.
Specifically, as shown in FIG. 4, which is a schematic diagram of the translation and rotation of the camera device in the hand-eye calibration method shown in FIG. 1, the rotation axis drives the camera device to translate and rotate, and the camera device acquires 2 second views of the reference object 403 in sequence at 2 photographing points (401, 402) along the second direction. To reduce computation error, the rotation angle may be kept between 30° and 35°; that is, the angle between the second direction and the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system is 30° to 35°, the second direction being a direction intersecting that line.
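The 30°–35° constraint on the second direction is a simple angle check between two direction vectors. A minimal sketch follows; the direction vectors below are made up for illustration:

```python
import numpy as np

def angle_deg(u, v):
    """Angle between two direction vectors, in degrees."""
    u = np.asarray(u, dtype=float)
    v = np.asarray(v, dtype=float)
    c = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return float(np.degrees(np.arccos(np.clip(c, -1.0, 1.0))))

# Illustrative line from the rotation-axis origin to the camera origin.
axis_to_camera = (0.0, 0.0, 1.0)
# A candidate second direction tilted 32 degrees away from that line.
theta = np.radians(32.0)
second_dir = (0.0, np.sin(theta), np.cos(theta))

ok = 30.0 <= angle_deg(axis_to_camera, second_dir) <= 35.0
```

Here `ok` is `True`, so this candidate second direction would satisfy the stated angular window.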
In coordinate transformation in three-dimensional space, the first thing to determine is the transformation between the robot rotation-axis coordinate system and the camera-device coordinate system. In this embodiment, to simplify computation, one of the joint points of the robot's rotation axis is taken as the origin of the robot rotation-axis coordinate system. Let G be the hand-eye relationship between the robot rotation-axis coordinate system and the camera-device coordinate system, A the transformation between the robot rotation-axis coordinate systems of two adjacent motions, and B the transformation between the camera-device coordinate systems of two adjacent motions; then:
A·G=B·G
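The text relates adjacent robot motions A to the camera motions B through the hand-eye transform G. In the widely used Tsai–Lenz style formulation this constraint is written A·G = G·B; the sketch below solves for the rotation part of G under that standard form, using a linear null-space method. All matrices here are synthetic, and the AX = XB form is an assumption carried over from the classical literature rather than from the patent:

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

# Ground-truth hand-eye rotation R_G (camera frame relative to axis frame).
Rg = rot_z(0.3) @ rot_x(0.2)

# Two robot motions R_A and the camera motions R_B they induce; the
# rotation part of A·G = G·B gives R_B = R_G^T R_A R_G.
RAs = [rot_x(0.5), rot_z(0.9) @ rot_x(0.1)]
RBs = [Rg.T @ RA @ Rg for RA in RAs]

# R_A R_G - R_G R_B = 0 is linear in vec(R_G) (column-major vec):
# (I kron R_A - R_B^T kron I) vec(R_G) = 0, solved via the SVD null space.
M = np.vstack([np.kron(np.eye(3), RA) - np.kron(RB.T, np.eye(3))
               for RA, RB in zip(RAs, RBs)])
_, _, Vt = np.linalg.svd(M)
Rg_est = Vt[-1].reshape(3, 3, order="F")

# The null vector is vec(R_G) up to scale and sign: project back onto SO(3).
U, _, V2t = np.linalg.svd(Rg_est)
Rg_est = U @ V2t
if np.linalg.det(Rg_est) < 0:
    Rg_est = -Rg_est
```

Two motions with non-parallel rotation axes are needed here; otherwise the null space is not one-dimensional and G's rotation is not uniquely determined.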
After the above requirements on the basic photographing positions for acquiring the multiple first and second views are satisfied, further photographing positions may be added to increase the robustness and accuracy of the hand-eye calibration result.
104: Obtain the hand-eye calibration result based on the first views, the second views, and the motion information of the camera device.
In this embodiment, a homography matrix is obtained from the first views; the scale factors of spatial points on the reference object on the X2 and Y2 axes of the camera-device coordinate system and the intrinsic parameter matrix of the camera device are obtained from the second views to construct a measurement matrix; and the hand-eye calibration result is obtained by combining the translation and rotation amounts of the camera device.
Specifically, a first view is selected, a spatial point S on the reference object in that first view is chosen at random, and its coordinates U in the camera-device coordinate system are expressed through any 3 non-collinear spatial points in the first view:
Figure PCTCN2018098119-appb-000002
where a, b, and c are the coordinates of the 3 non-collinear spatial points in the camera-device coordinate system, and
Figure PCTCN2018098119-appb-000003
denotes the position of the spatial point S on the two-dimensional image plane of the camera device. Referring again to FIG. 2, the corresponding coordinates P of the spatial point S in the reference-object coordinate system are:
Figure PCTCN2018098119-appb-000004
where x_1 and y_1 are the unit vectors of the spatial point S on the X1 and Y1 axes of the reference-object coordinate system.
From the projection relationship, we can obtain:
Figure PCTCN2018098119-appb-000005
Transforming the above formula yields the homography matrix H:
Figure PCTCN2018098119-appb-000006
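A homography between the reference plane and the image can be estimated from point correspondences. The sketch below uses the standard direct linear transform (DLT), which is one common way to compute such an H; the point sets are invented so the expected result is known in closed form:

```python
import numpy as np

def homography_dlt(src, dst):
    """Estimate H with dst ~ H.src from >= 4 point pairs (DLT)."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear rows in the 9
        # entries of H (cross-product form of dst x H.src = 0).
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(rows, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]              # fix the overall scale

# Five reference-plane points and their images under a known map
# (scale 2 plus shift (1, 1)), so the expected H is known exactly.
src = [(0, 0), (1, 0), (1, 1), (0, 1), (0.3, 0.6)]
dst = [(2 * x + 1, 2 * y + 1) for x, y in src]
H = homography_dlt(src, dst)
```

With exact correspondences the recovered H equals [[2, 0, 1], [0, 2, 1], [0, 0, 1]] up to numerical precision; with noisy image points the same SVD solve gives the least-squares estimate.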
Assume that the image captured by the camera device when taking the second views is of order m×n; then the coordinates (x_0, y_0) of the spatial point S in the second view can be expressed as x_0 = m/2, y_0 = n/2. Let D_1 be the distance from the optical center of the camera device to the reference object, D_2 the length of a coordinate line segment on the reference object, and D_3 the pixel-coordinate length in the camera image; from the optical geometry of the camera device:
Figure PCTCN2018098119-appb-000007
where f is the focal length of the camera device and L is the length of a pixel in the image, from which the scale factors of the spatial point S on the X2 and Y2 axes of the camera-device coordinate system are obtained as f_x = S_x·D_1·D_2/D_3 and f_y = S_y·D_1·D_2/D_3, where
Figure PCTCN2018098119-appb-000008
Figure PCTCN2018098119-appb-000009
The intrinsic parameter matrices K_1 and K_2 of the camera device are obtained by the above method:
Figure PCTCN2018098119-appb-000010
where i is the i-th photographing point of the camera device, j is the j-th position of the robot rotation-axis motion,
Figure PCTCN2018098119-appb-000011
is the scale factor, H is the homography matrix, and R_i·r_{1j}, R_i·r_{2j}, R_i·r_{3j} + t is the motion formula of the camera device.
Applying singular value decomposition (SVD) to the above formula then yields
Figure PCTCN2018098119-appb-000012
the scale factor, and the measurement matrix M is then obtained by computation. Finally, combining the computed scale factor, the measurement matrix, and the translation and rotation amounts of the robot, the transformation relationships among the robot rotation-axis coordinate system, the camera-device coordinate system, and the reference-object coordinate system (serving as the world coordinate system) can be obtained according to conventional coordinate-transformation formulas, completing the hand-eye calibration.
Compared with the prior art, in which calibration points must be specially set and the camera device must be moved precisely to those calibration points, the present application requires neither preset calibration points nor precise movement of the camera device to preset positions. It only needs to move the camera device around the reference object to acquire multiple first and second views, from which the transformation relationships among the camera-device coordinate system, the reference-object coordinate system, and the rotation-axis coordinate system are obtained, reducing the workload and the number of calibration steps while achieving high calibration efficiency and accuracy.
In addition, noise during robot operation, motion accuracy, and vision-calibration accuracy generally all introduce calibration errors; in particular, noise causes large errors in the camera-device parameters K and the translation vector t during hand-eye positioning, but such errors have little effect on the K and t derived from image information. Since the method provided by the embodiments of the present application is based mainly on image-information processing, it has better robustness, and all of these error factors can be treated as noise sources whose influence is minimized by correcting the parameters.
In the above embodiment, the camera device may include a viewfinder, an image analyzer, and an image processor. The viewfinder acquires images of the reference object; the image analyzer obtains the camera-device coordinate information of the reference object; and the image processor may complete, from the images of the reference object, the transformation computations among the robot rotation-axis coordinate system, the camera-device coordinate system, and the reference-object coordinate system.
Please refer to FIG. 5, which is a schematic structural diagram of an embodiment of the hand-eye calibration system of the present application. In this embodiment, the hand-eye calibration system includes a camera device 501, a reference object 504, a processing unit 505, and a robot (not labeled in the figure).
A mechanical forearm 502 is mounted on the robot, a rotation axis 503 is mounted on the mechanical forearm 502, and the camera device 501 is mounted on the rotation axis 503. The camera device 501 is electrically connected to the robot, and the rotation axis 503 controls rigid motion of the camera device 501, such as translation and rotation; alternatively, the mechanical forearm 502 cooperates with the rotation axis 503 to control the rigid motion of the camera device 501. The camera device 501 has a camera-device coordinate system, whose origin may be taken as the optical center of the camera device 501. The robot has a robot rotation-axis coordinate system, whose origin may be defined as one of the joint points of the rotation axis 503. The reference object 504 is a plane containing a calibration coordinate system; that is, the reference-object coordinate system is the calibration coordinate system, which may be the world coordinate system. The robot and the camera device 501 cooperate to photograph the reference object 504 so as to implement the hand-eye calibration method. The specific steps are as follows:
In the present application, the rotation axis 503 of the robot drives the camera device 501 to move, and the camera device 501 acquires images of the reference object 504 during the motion. The projection relationship between the coordinates U of such an image in the camera-device coordinate system and the coordinates P in the reference-object coordinate system is:
U≈K[R t]·P
where R is the rotation matrix between the camera-device coordinate system and the reference-object coordinate system; t is the motion vector (mainly a translation vector in this application); and K is the intrinsic parameter matrix of the camera device,
    K = [ f_x   s   x_0
           0   f_y  y_0
           0    0    1  ]
where f_x and f_y are the scale factors of the reference object on the X2 and Y2 axes of the camera-device coordinate system, (x_0, y_0) is the intersection of the photographing plane of the camera device with its optical axis, and s is a preset distortion (skew) factor.
The processing unit 505 is communicatively connected to the camera device 501; in this embodiment, the processing unit 505 is integrated with the camera device 501. In other embodiments, the processing unit 505 may instead be integrated with the robot, or be independent of the camera device 501 and the robot; it is only necessary that the processing unit 505 and the camera device 501 can exchange data, and the location of the processing unit is not specifically limited. When the processing unit 505 is integrated with the camera device 501, it may specifically be integrated with the processor of the camera device 501; when integrated with the robot, it may specifically be integrated with the robot's processor. The processing unit may also serve as the control unit of both the robot rotation axis and the camera device, communicatively linked to the robot and the camera device so as to control the rotation axis to drive the motion of the camera device and to control the shooting of the camera device. Alternatively, the processing unit may not serve as their control unit: the robot's own control unit controls the rotation-axis motion according to a preset program, thereby driving the camera device, and the camera device's own control unit controls the shooting according to a preset program, while the processing unit only obtains the first and second views captured by the camera device, together with the motion information of the camera device, to complete the hand-eye calibration computation.
The robot can drive the camera device 501 to translate within the reference-object coordinate system of the reference object 504, and the processing unit 505 acquires, through the camera device 501, multiple first views of the reference object 504 along a first direction, wherein the camera device 501 translates along the first direction and photographs images of the reference object, the first direction is parallel to the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system, the translation is a rigid-body motion, and at most 3 of the first views have photographing planes lying in the same plane.
Specifically, the processing unit 505 acquires, through the camera device 501, 5 first views of the reference object 504 in sequence at 5 photographing points along the first direction, where the first direction is parallel to the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system. The photographing points need not be preset as in the prior art; it is only necessary that at most 3 of the 5 first views have photographing planes in the same plane, so that the coordinates of the photographing points can be expressed from the acquired first views, and that at most 3 photographing points share a repeated x-axis or y-axis coordinate, so that the first views indicate the spatial position of the reference object more comprehensively and improve calibration accuracy. In the above embodiment, to ensure that the reference-object coordinates can be accurately expressed from the first views, the processing unit 505 acquires at least 5 first views through the camera device 501, and to improve the accuracy of the result, at most 3 of the first views acquired through the camera device 501 during the translation may have photographing planes lying in the same plane.
To accurately obtain the hand-eye relationship between the robot rotation-axis coordinate system and the camera-device coordinate system, the processing unit 505 further acquires, through the camera device 501, multiple second views of the reference object 504 along a second direction, wherein the camera device 501 translates along the second direction and photographs images of the reference object, and the second direction is a direction intersecting, i.e., non-parallel to, the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system. In this embodiment, at least two second views of the reference object must be acquired through the camera device 501 to accurately obtain this hand-eye relationship; the following description takes the acquisition of two second views of the reference object 504 as an example.
The robot drives the camera device 501 to translate and rotate through the rotation axis 503, and the camera device 501 acquires 2 second views of the reference object 504 in sequence at 2 photographing points along the second direction. To reduce computation error, the rotation angle is kept between 30° and 35°; that is, the angle between the second direction and the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system is 30°-35°, the second direction being a direction intersecting that line.
In coordinate transformation in three-dimensional space, the first thing to determine is the transformation between the robot rotation-axis coordinate system and the camera-device coordinate system. In this embodiment, to simplify computation, one of the joint points of the robot's rotation axis 503 is taken as the origin of the robot rotation-axis coordinate system. Let G be the hand-eye relationship between the robot rotation-axis coordinate system and the camera-device coordinate system, A the transformation between the robot rotation-axis coordinate systems of two adjacent motions, and B the transformation between the camera-device coordinate systems of two adjacent motions; then:
A·G=B·G
After the above requirements on the basic photographing positions for acquiring the multiple first and second views are satisfied, further photographing positions may be added to increase the robustness and accuracy of the hand-eye calibration result.
Based on the first views, the second views, and the motion information of the camera device 501, the processing unit 505 can obtain the hand-eye calibration result.
In a specific implementation, the processing unit 505 obtains a homography matrix from the first views; obtains, from the second views, the scale factors of spatial points on the reference object 504 on the X and Y axes of the camera-device coordinate system and the intrinsic parameter matrix of the camera device, and constructs a measurement matrix; and then obtains the hand-eye calibration result in combination with the translation and rotation amounts of the camera device 501. For the detailed computation, refer to the description of the method embodiment above, which is not repeated here.
In this embodiment, the processing unit 505 is integrated with the camera device 501, and the camera device 501 may include a viewfinder and an image analyzer. The viewfinder is communicatively connected to the image analyzer, and the image analyzer is communicatively connected to the processing unit. The viewfinder acquires images containing the reference object, these images including the first views and second views described above; the image analyzer obtains the camera-device coordinate information of the reference object; and the image processor, communicatively linked to the viewfinder and the image analyzer, obtains the images captured by the viewfinder and the information obtained by the image analyzer to complete the transformation computations among the robot rotation-axis coordinate system, the camera-device coordinate system, and the reference-object coordinate system.
Compared with the prior art, in which calibration points must be specially set and the camera device must be moved to the exact positions of those calibration points, the present application requires neither preset calibration points nor precise movement of the camera device to preset positions. It only needs to move the camera device around the reference object to acquire multiple first and second views, from which the transformation relationships among the camera-device coordinate system, the reference-object coordinate system, and the rotation-axis coordinate system are obtained, reducing the workload and the number of calibration steps, achieving high calibration efficiency, and ensuring the accuracy of the hand-eye calibration.
In addition, noise during robot operation, motion accuracy, and vision-calibration accuracy generally all introduce calibration errors; in particular, noise causes large errors in the camera device's intrinsic parameter matrix K and the translation vector t during hand-eye positioning, but such errors have little effect on the K and t derived from image information. Since the method provided by the embodiments of the present application is based mainly on image-information processing, it has better robustness, and all of these error factors can be treated as noise sources whose influence is minimized by correcting the parameters.
Please refer to FIG. 6, which is a schematic structural diagram of an embodiment of the computer storage medium of the present application. In this embodiment, the computer storage medium 60 includes program data 61 which, when executed, implement the hand-eye calibration method described in the above embodiments.
The hand-eye calibration method, system, and computer storage medium of the present application only require that the reference object be within the field of view of the camera device at each photographing position; the camera device does not need to be accurately positioned and moved to designated calibration points, which greatly simplifies the calibration process while maintaining stable accuracy.
The above is only an implementation of the present application and does not thereby limit its patent scope; any equivalent structural or process transformation made using the contents of the specification and drawings of the present application, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present application.

Claims (17)

  1. A hand-eye calibration method, characterized by comprising:
    determining a reference-object coordinate system, an origin of a robot rotation-axis coordinate system, and an origin of a camera-device coordinate system;
    acquiring, through a camera device, multiple first views of a reference object along a first direction, wherein the camera device is mounted on a robot rotation axis, motion of the camera device is controlled by the robot rotation axis, the camera device moves along the first direction, the first direction is parallel to a line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system, and at most 3 of the first views have photographing planes lying in a same plane;
    acquiring, through the camera device, multiple second views of the reference object along a second direction, wherein the camera device moves along the second direction, and the second direction is a direction intersecting the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system; and
    obtaining a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
  2. The hand-eye calibration method according to claim 1, characterized in that the number of the first views is at least 5.
  3. The hand-eye calibration method according to claim 1, characterized in that the number of the second views is at least 2.
  4. The hand-eye calibration method according to claim 1, characterized in that obtaining the hand-eye calibration result based on the first views, the second views, and the motion information of the camera device comprises:
    obtaining a homography matrix from the first views;
    obtaining, from the second views, scale factors of spatial points on the reference object on the X and Y axes of the camera-device coordinate system and an intrinsic parameter matrix of the camera device, and constructing a measurement matrix; and
    obtaining the hand-eye calibration result in combination with the motion information of the camera device, the motion information of the camera device including a translation amount and a rotation amount of the camera device.
  5. The hand-eye calibration method according to any one of claims 1-4, characterized in that an angle between the second direction and the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system is 30°-35°.
  6. A hand-eye calibration system, characterized by comprising:
    a camera device, the camera device having a camera-device coordinate system;
    a robot, a mechanical forearm being mounted on the robot, a rotation axis being mounted on the mechanical forearm, the camera device being connected to the rotation axis, the rotation axis controlling rigid motion of the camera device, and the robot having a robot rotation-axis coordinate system;
    a reference object, the reference object having a reference-object coordinate system; and
    a processing unit, configured to acquire, through the camera device, multiple first views of the reference object along a first direction, wherein the camera device moves along the first direction and photographs images of the reference object, the first direction is parallel to a line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system, and at most 3 of the first views have photographing planes lying in a same plane;
    acquire, through the camera device, multiple second views of the reference object along a second direction, wherein the camera device moves along the second direction and photographs images of the reference object, and the second direction is a direction intersecting the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system; and
    obtain a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
  7. The hand-eye calibration system according to claim 6, characterized in that the number of the first views is at least 5.
  8. The hand-eye calibration system according to claim 6, characterized in that the number of the second views is at least 2.
  9. The hand-eye calibration system according to claim 6, characterized in that,
    when obtaining the hand-eye calibration result based on the first views, the second views, and the motion information of the camera device, the processing unit is specifically configured to:
    obtain a homography matrix from the first views;
    obtain, from the second views, scale factors of spatial points on the reference object on the X and Y axes of the camera-device coordinate system and an intrinsic parameter matrix of the camera device, and construct a measurement matrix; and
    obtain the hand-eye calibration result in combination with the motion information of the camera device, the motion information of the camera device including a translation amount and a rotation amount of the camera device.
  10. The hand-eye calibration system according to any one of claims 6-9, characterized in that an angle between the second direction and the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system is 30°-35°.
  11. The hand-eye calibration system according to claim 10, characterized in that the processing unit is integrated with the camera device, or integrated with the robot, or independent of the camera device and the robot.
  12. The hand-eye calibration system according to claim 11, characterized in that the processing unit is integrated with the camera device, the camera device comprises a viewfinder and an image analyzer, the viewfinder is communicatively connected to the image analyzer and the processing unit, and the image analyzer is communicatively connected to the processing unit; the viewfinder is configured to acquire images containing the reference object, the images including the first views and the second views; and the image analyzer is configured to obtain camera-device coordinate information of the reference object.
  13. A computer storage medium, characterized in that program data are stored on the computer storage medium, and when the program data are executed by a processor, the following steps are performed:
    determining a reference-object coordinate system, an origin of a robot rotation-axis coordinate system, and an origin of a camera-device coordinate system;
    acquiring, through a camera device, multiple first views of a reference object along a first direction, wherein the camera device is mounted on the robot rotation axis, motion of the camera device is controlled by the robot rotation axis, the camera device moves along the first direction, the first direction is parallel to a line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system, and at most 3 of the first views have photographing planes lying in a same plane;
    acquiring, through the camera device, multiple second views of the reference object along a second direction, wherein the camera device moves along the second direction, and the second direction is a direction intersecting the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system; and
    obtaining a hand-eye calibration result based on the first views, the second views, and motion information of the camera device.
  14. The computer storage medium according to claim 13, characterized in that the number of the first views is at least 5.
  15. The computer storage medium according to claim 13, characterized in that the number of the second views is at least 2.
  16. The computer storage medium according to claim 13, characterized in that obtaining the hand-eye calibration result based on the first views, the second views, and the motion information of the camera device comprises:
    obtaining a homography matrix from the first views;
    obtaining, from the second views, scale factors of spatial points on the reference object on the X and Y axes of the camera-device coordinate system and an intrinsic parameter matrix of the camera device, and constructing a measurement matrix; and
    obtaining the hand-eye calibration result in combination with the motion information of the camera device, the motion information of the camera device including a translation amount and a rotation amount of the camera device.
  17. The computer storage medium according to any one of claims 13-16, characterized in that an angle between the second direction and the line connecting the origin of the robot rotation-axis coordinate system and the origin of the camera-device coordinate system is 30°-35°.
PCT/CN2018/098119 2018-08-01 2018-08-01 一种手眼标定方法、系统及计算机存储介质 WO2020024178A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201880088580.0A CN111801198B (zh) 2018-08-01 2018-08-01 一种手眼标定方法、系统及计算机存储介质
PCT/CN2018/098119 WO2020024178A1 (zh) 2018-08-01 2018-08-01 一种手眼标定方法、系统及计算机存储介质

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/098119 WO2020024178A1 (zh) 2018-08-01 2018-08-01 一种手眼标定方法、系统及计算机存储介质

Publications (1)

Publication Number Publication Date
WO2020024178A1 true WO2020024178A1 (zh) 2020-02-06

Family

ID=69232288

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/098119 WO2020024178A1 (zh) 2018-08-01 2018-08-01 一种手眼标定方法、系统及计算机存储介质

Country Status (2)

Country Link
CN (1) CN111801198B (zh)
WO (1) WO2020024178A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112102419A (zh) * 2020-09-24 2020-12-18 烟台艾睿光电科技有限公司 双光成像设备标定方法及系统、图像配准方法
CN112802122A (zh) * 2021-01-21 2021-05-14 珠海市运泰利自动化设备有限公司 机器人视觉引导组装方法
CN113252066A (zh) * 2020-02-13 2021-08-13 纳恩博(北京)科技有限公司 里程计设备参数的标定方法及装置、存储介质、电子装置
WO2021159489A1 (zh) * 2020-02-14 2021-08-19 西门子(中国)有限公司 一种坐标系校准方法、示教板和突出部
CN113362396A (zh) * 2021-06-21 2021-09-07 上海仙工智能科技有限公司 一种移动机器人3d手眼标定方法及装置
CN113676696A (zh) * 2020-05-14 2021-11-19 杭州萤石软件有限公司 一种目标区域的监控方法、系统
CN113843792A (zh) * 2021-09-23 2021-12-28 四川锋准机器人科技有限公司 一种手术机器人的手眼标定方法
CN114872048A (zh) * 2022-05-27 2022-08-09 河南职业技术学院 一种机器人舵机角度校准方法
CN117103286A (zh) * 2023-10-25 2023-11-24 杭州汇萃智能科技有限公司 一种机械手手眼标定方法、系统和可读存储介质

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113146633B (zh) * 2021-04-23 2023-12-19 无锡信捷电气股份有限公司 一种基于自动贴盒系统的高精度手眼标定方法
CN116945195B (zh) * 2023-09-19 2024-01-12 成都飞机工业(集团)有限责任公司 全向测量设备系统装置、配准方法、电子设备和存储介质

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105021139A (zh) * 2015-07-16 2015-11-04 北京理工大学 一种机器人线结构光视觉测量系统的手眼标定方法
CN105451461A (zh) * 2015-11-25 2016-03-30 四川长虹电器股份有限公司 基于scara机器人的pcb板定位方法
CN106940894A (zh) * 2017-04-12 2017-07-11 无锡职业技术学院 一种基于主动视觉的手眼系统自标定方法
CN107808400A (zh) * 2017-10-24 2018-03-16 上海交通大学 一种摄像机标定系统及其标定方法

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3733364B2 (ja) * 2003-11-18 2006-01-11 ファナック株式会社 教示位置修正方法
JP2006289531A (ja) * 2005-04-07 2006-10-26 Seiko Epson Corp ロボット位置教示のための移動制御装置、ロボットの位置教示装置、ロボット位置教示のための移動制御方法、ロボットの位置教示方法及びロボット位置教示のための移動制御プログラム
US9393694B2 (en) * 2010-05-14 2016-07-19 Cognex Corporation System and method for robust calibration between a machine vision system and a robot
WO2018090323A1 (zh) * 2016-11-18 2018-05-24 深圳配天智能技术研究院有限公司 一种坐标系标定方法、系统及装置
CN106920261B (zh) * 2017-03-02 2019-09-03 江南大学 一种机器人手眼静态标定方法


Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113252066A (zh) * 2020-02-13 2021-08-13 纳恩博(北京)科技有限公司 里程计设备参数的标定方法及装置、存储介质、电子装置
CN113252066B (zh) * 2020-02-13 2024-04-09 纳恩博(北京)科技有限公司 里程计设备参数的标定方法及装置、存储介质、电子装置
WO2021159489A1 (zh) * 2020-02-14 2021-08-19 西门子(中国)有限公司 一种坐标系校准方法、示教板和突出部
CN113676696A (zh) * 2020-05-14 2021-11-19 杭州萤石软件有限公司 一种目标区域的监控方法、系统
CN112102419A (zh) * 2020-09-24 2020-12-18 烟台艾睿光电科技有限公司 双光成像设备标定方法及系统、图像配准方法
CN112102419B (zh) * 2020-09-24 2024-01-26 烟台艾睿光电科技有限公司 双光成像设备标定方法及系统、图像配准方法
CN112802122B (zh) * 2021-01-21 2023-08-29 珠海市运泰利自动化设备有限公司 机器人视觉引导组装方法
CN112802122A (zh) * 2021-01-21 2021-05-14 珠海市运泰利自动化设备有限公司 机器人视觉引导组装方法
CN113362396A (zh) * 2021-06-21 2021-09-07 上海仙工智能科技有限公司 一种移动机器人3d手眼标定方法及装置
CN113362396B (zh) * 2021-06-21 2024-03-26 上海仙工智能科技有限公司 一种移动机器人3d手眼标定方法及装置
CN113843792B (zh) * 2021-09-23 2024-02-06 四川锋准机器人科技有限公司 一种手术机器人的手眼标定方法
CN113843792A (zh) * 2021-09-23 2021-12-28 四川锋准机器人科技有限公司 一种手术机器人的手眼标定方法
CN114872048B (zh) * 2022-05-27 2024-01-05 河南职业技术学院 一种机器人舵机角度校准方法
CN114872048A (zh) * 2022-05-27 2022-08-09 河南职业技术学院 一种机器人舵机角度校准方法
CN117103286A (zh) * 2023-10-25 2023-11-24 杭州汇萃智能科技有限公司 一种机械手手眼标定方法、系统和可读存储介质
CN117103286B (zh) * 2023-10-25 2024-03-19 杭州汇萃智能科技有限公司 一种机械手手眼标定方法、系统和可读存储介质

Also Published As

Publication number Publication date
CN111801198A (zh) 2020-10-20
CN111801198B (zh) 2023-07-04

Similar Documents

Publication Publication Date Title
WO2020024178A1 (zh) 一种手眼标定方法、系统及计算机存储介质
JP6966582B2 (ja) ロボットモーション用のビジョンシステムの自動ハンドアイ校正のためのシステム及び方法
WO2021217976A1 (zh) 一种基于单目视觉定位的机械臂控制方法及装置
CN109658460A (zh) 一种机械臂末端相机手眼标定方法和系统
US20110320039A1 (en) Robot calibration system and calibrating method thereof
CN110717943A (zh) 用于二维平面的眼在手上机械手手眼标定方法及系统
CN110276806A (zh) 用于四自由度并联机器人立体视觉手眼系统的在线手眼标定和抓取位姿计算方法
WO2020063058A1 (zh) 一种多自由度可动视觉系统的标定方法
CN113379849B (zh) 基于深度相机的机器人自主识别智能抓取方法及系统
CN106003020A (zh) 机器人、机器人控制装置以及机器人系统
CN110465946B (zh) 一种像素坐标与机器人坐标关系标定方法
JP2007024647A (ja) 距離算出装置、距離算出方法、構造解析装置及び構造解析方法。
CN113172659B (zh) 基于等效中心点识别的柔性机器人臂形测量方法及系统
CN114519738A (zh) 一种基于icp算法的手眼标定误差修正方法
US20220395981A1 (en) System and method for improving accuracy of 3d eye-to-hand coordination of a robotic system
CN115446847A (zh) 用于提高机器人系统的3d眼手协调准确度的系统和方法
CN115588054A (zh) 无角度约束的相机标定方法、装置、电子设备及存储介质
CN112288801A (zh) 应用于巡检机器人的四位一体自适应跟踪拍摄方法及装置
CN116652970A (zh) 一种四轴机械臂2d手眼标定方法及系统、存储介质
CN113593050A (zh) 一种双目视觉引导的机器人智能装配方法、系统及装置
CN114589682A (zh) 一种机器人手眼自动标定的迭代方法
Barreto et al. A general framework for the selection of world coordinate systems in perspective and catadioptric imaging applications
Yousfi et al. Study on the strategy of 3D images based reconstruction using a camera attached to a robot arm
CN116079715A (zh) 一种手眼标定方法及装置
CN112060083B (zh) 用于机械臂的双目立体视觉系统及其测量方法

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18928718

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 09.06.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18928718

Country of ref document: EP

Kind code of ref document: A1