CN112238453A - Vision-guided robot arm correction method - Google Patents

Vision-guided robot arm correction method

Info

Publication number
CN112238453A
Authority
CN
China
Prior art keywords
image
coordinate
coordinate system
axis
correction
Prior art date
Legal status
Granted
Application number
CN201910654284.9A
Other languages
Chinese (zh)
Other versions
CN112238453B (en)
Inventor
洪兴隆
黄眉瑜
高伟勋
赖传钊
Current Assignee
Hiwin Technologies Corp
Original Assignee
Hiwin Technologies Corp
Priority date
Filing date
Publication date
Application filed by Hiwin Technologies Corp filed Critical Hiwin Technologies Corp
Priority to CN201910654284.9A
Publication of CN112238453A
Application granted
Publication of CN112238453B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00Programme-controlled manipulators
    • B25J9/16Programme controls
    • B25J9/1694Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a vision-guided robot arm correction method for a robot arm, comprising the steps of A) setting the operating conditions; B) placing a correction target; C) moving the operating tool center point; D) moving the image sensor; E) analyzing the positioning mark image; F) correcting the image-to-real distance; G) calculating the image correction data; and H) calculating the compensation amount of the image sensor coordinate system. The vision-guided robot arm correction method provided by the invention is not limited to a specific correction target such as a dot-grid board; the correction operation can proceed once the positioning mark on the correction target is specified, saving correction time. In addition, coordinate positions are judged by image analysis, reducing the visual errors caused by human judgment.

Description

Vision-guided robot arm correction method
Technical Field
The invention relates to the field of robot arm correction, and in particular to a vision-guided robot arm correction method.
Background
A vision-guided robot generally refers to a robot arm fitted with an image sensor, such as a charge-coupled device (CCD), on its end effector, giving the arm the equivalent of an eye. When the image sensor locates a workpiece position, the robot arm controller moves the end effector to that position to pick or place an object.
However, before such pick-and-place operations can be performed, the vision-guided robot must first undergo a calibration operation so that the controller can store the coordinate offset between the end effector and the lens of the image sensor.
In the conventional vision-guided robot system calibration technique, the calibration target used is a dot-grid board. Since the dot grid is a regular pattern with no directionality, the user must designate three feature points on it in a fixed order. The calibration operator first moves the robot arm to a suitable height so that the camera can capture the complete board image; this position is the image calibration point. The user then enters the image coordinates of the three feature points into the image processing software, together with the real-world center-to-center distances between the dots, and the software computes the transformation from the image coordinate system to the real-world coordinate system, thereby defining the real-world X-Y coordinate system.
After this calibration procedure, the operator must also move the robot arm so that the working point of its operating tool visits the three feature points in turn, recording the robot arm coordinates at each feature point. Once this is completed, the robot arm controller automatically computes and defines the base coordinate system of the robot arm from the recorded coordinates. The base coordinate system then coincides with the real-world coordinate system in the image processing software, so when the software analyzes an image and converts an object position to real-world coordinates, those coordinates can be sent directly to the robot arm without further conversion.
However, the conventional vision-guided robot calibration technique depends entirely on manual work, making the procedure time-consuming and error-prone. Moreover, whether the operating tool working point has been moved exactly onto each feature point depends on visual confirmation by the operator, so different operators may produce different calibration results, introducing visual error.
Related art, for example US6812665, describes an off-line relative calibration method that can compensate the error between the tool center point (TCP) and the workpiece to create an accurate machining path. However, the robot arm must know the profile parameters of a standard workpiece in advance to perform the standard parameter correction, and during online operation the error between the current workpiece and the standard parameters, obtained from a force-feedback or displacement sensor, is compensated.
US7019825 describes a hand-eye calibration method in which a camera mounted at the end of the robot arm acquires images of at least two workpieces. The arm moves to obtain at least two images, and the rotation and translation between the arm and the camera are computed using projection-invariant descriptions. However, at least two workpiece images must be acquired for the projection-invariant calculation, and the photographed workpiece must provide sufficient edge information; otherwise an optimization computation is required, which is time-consuming and may not yield good results.
Also, U.S. patent publication No. US20050225278A1 provides a measuring system that determines how the robot arm must move so that the tool center point, as seen on a light-receiving surface, reaches a predetermined point on that surface; the robot is moved accordingly and the arm positions are stored to determine the tool center point position relative to the tool mounting surface. In that image correction method, the robot arm drives the correction tool center point to the displayed center of the alignment image, which serves as the basis for computing a common coordinate system. The manual checking involved makes the process complicated and time-consuming.
Disclosure of Invention
The present invention is directed to a vision-guided robot arm correction method that saves calibration time and reduces calibration errors.
Therefore, the vision-guided robot arm correction method provided by the invention is used with a robot arm having a base and, at its end, a flange surface. The robot arm is electrically connected to a controller, which has the functions of inputting, outputting, storing, processing and displaying data. The controller prestores a base coordinate system and a flange coordinate system. The base coordinate system is a coordinate space formed by mutually perpendicular X, Y and Z axes and has a base coordinate origin; the robot arm has a working range. The flange coordinate system is a coordinate space formed by mutually perpendicular X1, Y1 and Z1 axes and has a flange coordinate origin. An operating tool is mounted on the flange surface and has an operating tool center point; the controller sets an operating tool coordinate system, a coordinate space formed by mutually perpendicular X2, Y2 and Z2 axes, whose origin is located at the operating tool center point. An image sensor is mounted on the flange surface and electrically connected to the controller; inside it is an image sensing chip with an image sensing plane. The controller sets an image sensor first coordinate system, a coordinate space formed by mutually perpendicular X3, Y3 and Z3 axes, whose X3Y3 plane must be parallel to the image sensing plane of the image sensing chip, and which has an image sensor first coordinate origin. The user can operate the controller to select the flange coordinate system, the operating tool coordinate system or the image sensor first coordinate system as a current coordinate system, i.e. the coordinate system currently in use. The method is characterized by the following steps:
A) setting the operating conditions: the controller sets a correction height and first to fourth correction coordinate points under the base coordinate system;
B) placing a correction target: a correction target bearing a positioning mark is placed within the working range of the robot arm;
C) moving the operating tool center point: with the operating tool coordinate system selected as the current coordinate system, the robot arm moves the operating tool so that the operating tool center point reaches the positioning mark, and the controller stores a current position coordinate under the base coordinate system;
D) moving the image sensor: the image sensor first coordinate system is selected as the current coordinate system and the correction height is added; the controller moves the image sensor so that the image sensor first coordinate origin reaches a correction reference position coordinate located above the positioning mark, differing from the current position coordinate only by the correction height in the Z-axis value;
E) analyzing the positioning mark image: the image sensor captures a positioning image containing the positioning mark; the controller, through image analysis software, sets a positioning image center and analyzes the image to obtain the position of the positioning mark relative to that center, yielding a positioning mark image coordinate;
F) correcting the image-to-real distance: the robot arm moves the image sensor so that the image sensor first coordinate origin visits the first to fourth correction coordinate points, where the image sensor captures first to fourth images; the controller analyzes these images to obtain first to fourth correction image coordinates of the positioning mark;
G) calculating the image correction data: with the coordinate values of the first to fourth correction coordinate points under the base coordinate system and the first to fourth correction image coordinates known, image correction data is calculated, giving the conversion between distances in the image and distances in the real world;
H) calculating the image sensor coordinate system compensation amount: an image sensor first coordinate system compensation amount is calculated from the positioning mark image coordinate and the image correction data, compensating the error between positions in the image sensor's image and the position of the operating tool.
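To make the flow concrete, the following is a minimal sketch of steps C) through H) in Python. The robot, camera, find_mark_offset and solve_affine interfaces are hypothetical stand-ins (the latter two are sketched further below in this document), not part of the patented method:

```python
import numpy as np

# Minimal sketch of steps C)-H). "robot" and "camera" are hypothetical
# stand-ins for the controller-driven arm and the image sensor; the
# helpers find_mark_offset() and solve_affine() are sketched later in
# this document and are likewise illustrative, not part of the patent.
def calibrate(robot, camera, cal_points, z_cal):
    # C) the operating tool center point has been jogged onto the
    #    positioning mark; store the current position coordinate Psp
    psp = np.asarray(robot.current_position(), dtype=float)

    # D) hover the image sensor origin above the mark: Pcp differs
    #    from Psp only by the correction height Zca1 in the Z value
    pcp = psp.copy()
    pcp[2] += z_cal
    robot.move_sensor_origin_to(pcp)

    # E) positioning mark image coordinate Xcs (offset from image center)
    xcs = find_mark_offset(camera.grab())

    # F) visit P1..P4 and record the mark's image coordinate at each
    img_pts = []
    for p in cal_points:                 # four points, all at one height
        robot.move_sensor_origin_to(p)
        img_pts.append(find_mark_offset(camera.grab()))

    # G) image correction data: affine map from image space to base frame
    xr = np.asarray(cal_points, dtype=float)[:, :2].T   # 2x4 real space
    xc = np.asarray(img_pts, dtype=float).T             # 2x4 image space
    A = solve_affine(xr, xc)

    # H) sensor coordinate system compensation amount from Xcs and A
    return A[:, :2] @ xcs
```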
By the method, the vision-guided robot arm correction method provided by the invention is not limited to a specific correction target such as a bitmap, and the correction operation can be performed only by specifying the positioning mark in the correction target, so that the time for the correction operation can be saved. In addition, the coordinate position is judged by an image analysis mode, and visual errors caused by artificial judgment can be reduced.
It should be noted that, in step a), the Z-axis components of the first to fourth calibration coordinate points are all the same and located at the same height.
Further, the number of correction coordinate points must be four or more. However, the more coordinate points used for correction, the larger the amount of computation and the longer and more costly the calculation, so an appropriate number of points should be selected; in the embodiment, four-point correction is performed.
In step G), the image correction data is calculated as follows. The coordinates of the first to fourth correction coordinate points are known as $X_{ri} = [x_{ri}\ \ y_{ri}]^T$, $i = 1 \sim 4$, and the corresponding first to fourth correction image coordinates are $X_{ci} = [x_{ci}\ \ y_{ci}]^T$, $i = 1 \sim 4$. Expressed in matrix form (the image coordinates are augmented with a row of ones so that the affine matrix can carry a translation):

$$X_R = \begin{bmatrix} x_{r1} & x_{r2} & x_{r3} & x_{r4} \\ y_{r1} & y_{r2} & y_{r3} & y_{r4} \end{bmatrix}, \qquad X_c = \begin{bmatrix} x_{c1} & x_{c2} & x_{c3} & x_{c4} \\ y_{c1} & y_{c2} & y_{c3} & y_{c4} \\ 1 & 1 & 1 & 1 \end{bmatrix}$$

The matrix $X_R$ is formed by the first to fourth correction coordinate points under the base coordinate system, and the matrix $X_c$ by the first to fourth correction image coordinates in image space. The two are related by:

$$X_R = A X_c$$

The matrix $A$ is an affine transformation matrix between two plane coordinate systems. By computing the Moore-Penrose pseudo-inverse $X_c^{+}$ of the matrix $X_c$, the matrix $A$ can be calculated:

$$A = X_R X_c^{+}$$

The pseudo-inverse $X_c^{+}$ can be solved by singular value decomposition (SVD). The matrix $A$ is the image correction data, and expresses the conversion relationship between distance in the image and distance in the real world.
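For intuition, consider a hypothetical case in which the image-to-real conversion is a pure scale of 0.1 mm per pixel plus a translation of (5, 3) mm. The image correction data then takes the form

$$A = \begin{bmatrix} 0.1 & 0 & 5 \\ 0 & 0.1 & 3 \end{bmatrix}, \qquad X_R = A \begin{bmatrix} x_c \\ y_c \\ 1 \end{bmatrix} = \begin{bmatrix} 0.1\,x_c + 5 \\ 0.1\,y_c + 3 \end{bmatrix},$$

so a 10-pixel displacement in the image corresponds to a 1 mm displacement in the real world.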
In step H), the image sensor first coordinate system compensation amount can be set into the controller to generate an image sensor second coordinate system.
Drawings
FIG. 1 is a schematic view of a system showing a robotic arm in accordance with a preferred embodiment of the present invention;
FIG. 2 is a schematic illustration of a calibration target according to a preferred embodiment of the present invention;
FIG. 3 is a flow chart block diagram of a preferred embodiment of the present invention;
FIG. 4 is a schematic diagram of an image captured by an image sensor according to a preferred embodiment of the present invention, showing an image with a calibration target, a positioning mark and an image center.
Description of the symbols:
10 robot arm 11 base
12 flange face 13 controller
15 operating tool 17 image sensor
171 image sensing chip 171a image sensing plane
18 calibration target 181 positioning mark
P1 first correction coordinate point P2 second correction coordinate point
P3 third corrective coordinate point P4 fourth corrective coordinate point
Psp Current position coordinate Pcp correction reference position coordinate
TCP operating tool center point Xcs positioning mark image coordinate
Xc1 first correction image coordinate Xc2 second correction image coordinate
Xc3 third correction image coordinate Xc4 fourth correction image coordinate
Zca1 correction height
Base coordinate system:
X axis, Y axis, Z axis
Flange coordinate system:
X1 axis, Y1 axis, Z1 axis
Operating tool coordinate system:
X2 axis, Y2 axis, Z2 axis
Image sensor first coordinate system:
X3 axis, Y3 axis, Z3 axis
Detailed Description
For a detailed description of the technical features of the present invention, reference will now be made to the following preferred embodiments, taken in conjunction with the accompanying drawings, in which:
Referring to FIGS. 1-4, a vision-guided robot arm correction method according to a preferred embodiment of the present invention is applied to a robot arm 10, which is a six-axis robot arm having a base 11. The robot arm 10 has at its end a flange face 12 for mounting objects. The robot arm 10 is electrically connected to a controller 13, which has the functions of inputting, outputting, storing, processing and displaying data. When the robot arm 10 leaves the factory, the controller 13 already stores a base coordinate system and a flange coordinate system. The base coordinate system is a coordinate space formed by mutually perpendicular X, Y and Z axes and has a base coordinate origin, which in this embodiment is located on the base 11, although it may be placed elsewhere. The robot arm 10 has a working range in the base coordinate system. The flange coordinate system is a coordinate space formed by mutually perpendicular X1, Y1 and Z1 axes and has a flange coordinate origin, which in this embodiment is located at the geometric center of the flange face 12. The flange coordinate system and the base coordinate system are related by the values x1, y1, z1, a1, b1, c1, wherein:
x1: the distance relationship between the X1 axis of the flange coordinate system and the X axis of the base coordinate system;
y1: the distance relationship between the Y1 axis of the flange coordinate system and the Y axis of the base coordinate system;
z1: the distance relationship between the Z1 axis of the flange coordinate system and the Z axis of the base coordinate system;
a1: the rotation angle of the X1 axis of the flange coordinate system about the X axis of the base coordinate system;
b1: the rotation angle of the Y1 axis of the flange coordinate system about the Y axis of the base coordinate system;
c1: the rotation angle of the Z1 axis of the flange coordinate system about the Z axis of the base coordinate system.
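The six relation values define a rigid transform between the two coordinate systems. As an illustration only, the sketch below builds a 4x4 homogeneous transform from them; the rotation-composition convention is an assumption, since the patent names the six quantities but not how they combine:

```python
import numpy as np

def pose_to_matrix(x, y, z, a, b, c):
    """Build a 4x4 homogeneous transform from the six relation values.
    The composition order (rotations about the reference X, then Y,
    then Z axes) is an assumption; the patent names the six quantities
    but not the convention used to combine them."""
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cc, sc = np.cos(c), np.sin(c)
    rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])
    ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])
    rz = np.array([[cc, -sc, 0], [sc, cc, 0], [0, 0, 1]])
    t = np.eye(4)
    t[:3, :3] = rz @ ry @ rx      # rotation part from a1, b1, c1
    t[:3, 3] = [x, y, z]          # translation part from x1, y1, z1
    return t
```

The same construction applies, under the same assumption, to the tool and image sensor coordinate relations (x2...c2 and x3...c3) described next.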
An operating tool 15 is mounted on the flange surface 12, and in the present embodiment, the operating tool 15 is exemplified by a suction cup, but not limited thereto. The operating tool 15 has an operating Tool Center Point (TCP). The user sets an operating tool coordinate system in the controller 13, the operating tool coordinate system being a coordinate space formed by an X2 axis, a Y2 axis, and a Z2 axis perpendicular to each other, the operating tool coordinate system having an operating tool coordinate origin, the operating tool coordinate origin being located at the operating tool center point TCP. The relation between the operating tool coordinate system and the flange coordinate system is x2, y2, z2, a2, b2 and c2, wherein:
x2: the distance relationship between the X2 axis of the operating tool coordinate system and the X1 axis of the flange coordinate system;
y2: the distance relationship between the Y2 axis of the operating tool coordinate system and the Y1 axis of the flange coordinate system;
z2: the distance relationship between the Z2 axis of the operating tool coordinate system and the Z1 axis of the flange coordinate system;
a2: the rotation angle of the X2 axis of the operating tool coordinate system about the X1 axis of the flange coordinate system;
b2: the rotation angle of the Y2 axis of the operating tool coordinate system about the Y1 axis of the flange coordinate system;
c2: the rotation angle of the Z2 axis of the operating tool coordinate system about the Z1 axis of the flange coordinate system.
An image sensor 17, in this embodiment, a Charge Coupled Device (CCD), is mounted on the flange surface 12 and electrically connected to the controller 13, and the image sensor 17 is used for capturing images. It should be noted that the image sensor 17 has an image sensing chip 171 therein, and the image sensing chip 171 has an image sensing plane 171a (not shown). The user sets a first coordinate system of the image sensor in the controller 13, which is a coordinate space formed by an X3 axis, a Y3 axis and a Z3 axis perpendicular to each other, and an X3Y3 plane formed by the X3 axis and the Y3 axis of the first coordinate system of the image sensor needs to be parallel to the image sensing plane 171a of the image sensing chip 171. The first coordinate system of the image sensor has a first origin of coordinates of the image sensor, which is located on the image sensing plane 171a in this embodiment. The relation between the first coordinate system of the image sensor and the flange coordinate system is x3, y3, z3, a3, b3 and c3, wherein:
x3: the distance relationship between the X3 axis of the image sensor first coordinate system and the X1 axis of the flange coordinate system;
y3: the distance relationship between the Y3 axis of the image sensor first coordinate system and the Y1 axis of the flange coordinate system;
z3: the distance relationship between the Z3 axis of the image sensor first coordinate system and the Z1 axis of the flange coordinate system;
a3: the rotation angle of the X3 axis of the image sensor first coordinate system about the X1 axis of the flange coordinate system;
b3: the rotation angle of the Y3 axis of the image sensor first coordinate system about the Y1 axis of the flange coordinate system;
c3: the rotation angle of the Z3 axis of the image sensor first coordinate system about the Z1 axis of the flange coordinate system.
It should be noted that the user can operate the controller 13 to select the flange coordinate system, the operation tool coordinate system or the first coordinate system of the image sensor as a current coordinate system, which represents the coordinate system currently being used. The user sets a location point under the base coordinate system, and after selecting the current coordinate system, the controller 13 controls the origin of the current coordinate system to move to the location point, and makes the X1Y1 plane, X2Y2 plane, or X3Y3 plane of the current coordinate system parallel to the XY plane of the base coordinate system. For example, when the user selects the operation tool coordinate system as the current coordinate system, the controller 13 controls the robot arm 10 such that the operation tool coordinate origin is moved to the position point, and the X2Y2 plane formed by the X2 axis and the Y2 axis of the tool coordinate system is parallel to the XY plane formed by the X axis and the Y axis of the base coordinate system. For another example, when the user selects the first coordinate system of the image sensor as the current coordinate system, the controller 13 controls the robot arm 10 to move the first origin of coordinates of the image sensor to the position point, and the X3Y3 plane formed by the X3 axis and the Y3 axis of the first coordinate system of the image sensor is parallel to the XY plane formed by the X axis and the Y axis of the base coordinate system.
As shown in fig. 3, the calibration method of the vision-guided robot arm provided by the present invention comprises the following steps:
A) Setting the operating conditions
The user sets a correction height Zca1, a first calibration coordinate point P1, a second calibration coordinate point P2, a third calibration coordinate point P3 and a fourth calibration coordinate point P4 under the base coordinate system in the controller 13. It should be noted that the Z-axis components of the first to fourth calibration coordinate points P1-P4 are all the same, i.e. the four points lie at the same height.
B) Placing a calibration target
The user places a calibration target 18 within the working range of the robot 10. The calibration target 18 has a positioning mark 181, and the positioning mark 181 is a dot in this embodiment, but is not limited to a dot.
C) Moving the operating tool center point
With the operating tool coordinate system selected as the current coordinate system, the robot arm 10 is operated to move the operating tool 15 so that the operating tool center point TCP reaches the positioning mark 181. The controller 13 stores a current position coordinate Psp under the base coordinate system.
D) Moving the image sensor
The image sensor first coordinate system is selected as the current coordinate system and the correction height Zca1 is added. The controller 13 controls the robot arm 10 to move the image sensor 17 so that the image sensor first coordinate origin reaches a correction reference position coordinate Pcp located above the positioning mark 181. Under the base coordinate system, the correction reference position coordinate Pcp differs from the current position coordinate Psp only by the correction height Zca1 in the Z-axis value; the X-axis and Y-axis components are the same.
E) Positioning mark image analysis
The image sensor 17 captures a positioning image, i.e. an image containing the positioning mark 181. Through image analysis software, the controller 13 sets a positioning image center in the positioning image and analyzes the image; in this embodiment the positioning image center is the geometric center of the positioning image, but it is not limited thereto. The image analysis software obtains the position of the positioning mark in the positioning image relative to the positioning image center, so that the controller 13 obtains a positioning mark image coordinate Xcs.
In addition, the aforementioned image analysis software is ordinary, commercially available image analysis software for locating an object in an image and analyzing the object's coordinate position in the image, and is not described further here.
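As an illustration only, a dot-type positioning mark can be located with a few lines of off-the-shelf tooling. The OpenCV-based sketch below is a hypothetical stand-in for the commercial image analysis software assumed above, returning the positioning mark image coordinate Xcs as an offset from the image center:

```python
import cv2
import numpy as np

def find_mark_offset(gray_image):
    """Return the positioning mark image coordinate Xcs as an offset
    from the image center. A hypothetical stand-in for the commercial
    image analysis software assumed by the method; Otsu thresholding
    plus a centroid is enough for a dark dot on a light target."""
    _, binary = cv2.threshold(gray_image, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    m = cv2.moments(binary)
    if m["m00"] == 0:
        raise ValueError("positioning mark not found in image")
    cx = m["m10"] / m["m00"]              # mark centroid, pixels
    cy = m["m01"] / m["m00"]
    h, w = gray_image.shape
    return np.array([cx - w / 2.0, cy - h / 2.0])   # offset from center
```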
F) Correcting the image-to-real distance
The robot 10 is operated to move the image sensor 17 such that the image sensor first coordinate origin is moved to the first to fourth calibration coordinate points P1-P4. When the first origin of coordinates of the image sensor is moved to the first to fourth calibration coordinate points P1-P4, the image sensor 17 captures a first image, a second image, a third image and a fourth image, respectively, and the controller 13 analyzes the first image, the second image, the third image and the fourth image through the image analysis software to obtain a first calibration image coordinate Xc1, a second calibration image coordinate Xc2, a third calibration image coordinate Xc3 and a fourth calibration image coordinate Xc4 of the positioning mark 181 in the first to fourth images, respectively.
G) Calculating image correction data
Knowing the coordinate values (real space) of the first to fourth calibration coordinate points P1-P4 under the base coordinate system, as well as the first calibration image coordinate Xc1, the second calibration image coordinate Xc2, the third calibration image coordinate Xc3 and the fourth calibration image coordinate Xc4 (image space) of the positioning mark 181 in the first to fourth images, the relationship between distances in the image and distances in real space (the base coordinate system) can be calculated to obtain image correction data. From the image correction data, the conversion relationship between distance in the image and distance in the real world is known.
The present embodiment uses four-point correction as an example, but the invention is not limited to four points; four or more may be used. The more coordinate points used for correction, the larger the amount of computation and the longer and more costly the calculation, so an appropriate number of points should be chosen; this embodiment performs four-point correction.
The method for calculating the image correction data in the present embodiment is as follows, but not limited thereto.
The coordinates of the first to fourth calibration coordinate points P1-P4 are known as $X_{ri} = [x_{ri}\ \ y_{ri}]^T$, $i = 1 \sim 4$, and the corresponding first to fourth calibration image coordinates are $X_{ci} = [x_{ci}\ \ y_{ci}]^T$, $i = 1 \sim 4$. Expressed in matrix form (the image coordinates are augmented with a row of ones so that the affine matrix can carry a translation):

$$X_R = \begin{bmatrix} x_{r1} & x_{r2} & x_{r3} & x_{r4} \\ y_{r1} & y_{r2} & y_{r3} & y_{r4} \end{bmatrix}, \qquad X_c = \begin{bmatrix} x_{c1} & x_{c2} & x_{c3} & x_{c4} \\ y_{c1} & y_{c2} & y_{c3} & y_{c4} \\ 1 & 1 & 1 & 1 \end{bmatrix}$$

The matrix $X_R$ is formed by the first to fourth calibration coordinate points P1-P4 under the base coordinate system, and the matrix $X_c$ by the first to fourth calibration image coordinates in image space. The two are related by:

$$X_R = A X_c$$

The matrix $A$ is an affine transformation matrix between two plane coordinate systems. By computing the Moore-Penrose pseudo-inverse $X_c^{+}$ of the matrix $X_c$, the matrix $A$ can be calculated:

$$A = X_R X_c^{+}$$

The pseudo-inverse $X_c^{+}$ can be solved by singular value decomposition (SVD). The matrix $A$ is the image correction data, and expresses the conversion relationship between distance in the image and distance in the real world.
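A compact numerical sketch of this computation follows; numpy's pinv performs the SVD-based Moore-Penrose pseudo-inverse internally, and the worked numbers are hypothetical:

```python
import numpy as np

def solve_affine(xr, xc):
    """Solve X_R = A @ X_c for the 2x3 affine matrix A.
    xr: 2x4 base-frame coordinates of P1..P4 (real space)
    xc: 2x4 image coordinates Xc1..Xc4 of the mark (image space)"""
    xc_h = np.vstack([xc, np.ones(xc.shape[1])])   # homogeneous row
    # np.linalg.pinv computes the Moore-Penrose pseudo-inverse via SVD
    return xr @ np.linalg.pinv(xc_h)

# Worked check with hypothetical numbers: a pure scale of 0.1 mm per
# pixel plus a (5, 3) mm translation should be recovered exactly.
xc = np.array([[0.0, 100.0, 100.0, 0.0],
               [0.0, 0.0, 100.0, 100.0]])
xr = 0.1 * xc + np.array([[5.0], [3.0]])
A = solve_affine(xr, xc)
print(np.round(A, 6))   # [[0.1 0.  5. ]
                        #  [0.  0.1 3. ]]
```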
H) Calculating the compensation amount of the first coordinate system of the image sensor
A first coordinate system compensation amount of the image sensor is calculated from the positioning mark image coordinate Xcs and the image correction data.
Under ideal conditions, since the X2Y2 plane formed by the X2 and Y2 axes of the tool coordinate system and the X3Y3 plane formed by the X3 and Y3 axes of the image sensor first coordinate system are both parallel to the XY plane of the base coordinate system, and the correction reference position coordinate Pcp differs from the current position coordinate Psp only by the correction height Zca1 with no X-axis or Y-axis difference, an ideal conversion between the tool coordinate system and the image sensor first coordinate system would place the positioning mark exactly at the positioning image center; that is, the position of the positioning mark 181 in the operating tool coordinate system would coincide with the image center in the image sensor coordinate system. In that case, once the image correction data (the ratio of distance in the image to distance in the real world) is obtained, the user could intuitively operate the controller 13 to control the robot arm 10 and the operating tool 15 through the frames captured by the image sensor 17 together with the image correction data.
In practice, however, the position of the positioning mark 181 in the image deviates from the image center, and an image compensation amount Tcomp is required. Since the positioning mark image coordinate Xcs is the coordinate of the positioning mark 181 in the positioning image with the positioning image center as origin, Xcs can be converted into the image compensation amount Tcomp, which represents the error to be compensated in the image when converting between the tool coordinate system and the image sensor first coordinate system. To control the operating tool intuitively through the images captured by the image sensor 17, centered on the positioning mark 181, it suffices to add the image compensation amount Tcomp to the captured image so that the positioning mark appears at the image center; the user can then operate the operating tool intuitively through the sensor's images. The controller 13 needs this image sensor first coordinate system compensation amount when moving the operating tool, so as to compensate the error between positions in the image of the image sensor 17 and the position of the operating tool.
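Interpreted numerically, the compensation simply pushes the pixel offset Xcs through the linear part of the image correction data A; the sketch below follows that reading, with the sign convention and the example numbers as assumptions:

```python
import numpy as np

def sensor_frame_compensation(A, xcs):
    """Image compensation amount Tcomp from the mark's pixel offset Xcs.
    Only the linear (scale/rotation) part of A is applied: Xcs is
    already an offset from the image center, so the translation column
    of A drops out. The sign convention is an assumption."""
    return A[:, :2] @ xcs

# Hypothetical numbers: at 0.1 mm/pixel, an offset Xcs = (12, -8) px
# maps to a real-world compensation of (1.2, -0.8) mm.
A = np.array([[0.1, 0.0, 5.0],
              [0.0, 0.1, 3.0]])
print(sensor_frame_compensation(A, np.array([12.0, -8.0])))  # [ 1.2 -0.8]
```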
It should be noted that the image sensor first coordinate system compensation amount can also be set into the controller 13 to generate an image sensor second coordinate system. The compensation amount then need not be added to every image captured by the image sensor 17; instead, when the robot arm 10 moves the image sensor 17, the compensation amount is applied directly to the sensor's movement position, which is more convenient for the user.
By the above method, the vision-guided robot arm correction method provided by the invention is not limited to a specific correction target such as a dot-grid board; the correction operation can proceed once the positioning mark on the correction target is specified, saving correction time. In addition, coordinate positions are judged by image analysis, reducing the visual errors caused by human judgment.

Claims (5)

1. A vision-guided robot arm correction method, used for a robot arm having a base; the end of the robot arm is provided with a flange surface; the robot arm is electrically connected with a controller, and the controller has the functions of inputting, outputting, storing, processing and displaying data; the controller prestores a base coordinate system and a flange coordinate system, wherein the base coordinate system is a coordinate space formed by mutually perpendicular X, Y and Z axes and has a base coordinate origin; the robot arm has a working range; the flange coordinate system is a coordinate space formed by mutually perpendicular X1, Y1 and Z1 axes and has a flange coordinate origin; an operating tool is mounted on the flange surface; the operating tool has an operating tool center point; the controller sets an operating tool coordinate system, which is a coordinate space formed by mutually perpendicular X2, Y2 and Z2 axes and has an operating tool coordinate origin located at the operating tool center point; an image sensor is mounted on the flange surface and electrically connected with the controller; the image sensor has therein an image sensing chip with an image sensing plane; the controller sets an image sensor first coordinate system, which is a coordinate space formed by mutually perpendicular X3, Y3 and Z3 axes, wherein the X3Y3 plane formed by the X3 axis and the Y3 axis of the image sensor first coordinate system must be parallel to the image sensing plane of the image sensing chip; the image sensor first coordinate system has an image sensor first coordinate origin; a user can operate the controller to select the flange coordinate system, the operating tool coordinate system or the image sensor first coordinate system as a current coordinate system, the current coordinate system being the coordinate system currently in use; the vision-guided robot arm correction method comprises the following steps:
A) setting the operating conditions:
setting a correction height, a first correction coordinate point, a second correction coordinate point, a third correction coordinate point and a fourth correction coordinate point under the base coordinate system by the controller;
B) placing a correction target:
placing a calibration target within the working range of the robot arm; the calibration target has a positioning mark;
C) moving the operating tool center point:
selecting the coordinate system of the operation tool as the current coordinate system, operating the robot arm to move the operation tool, so that the center point of the operation tool is moved to the positioning mark; the controller stores a current position coordinate under the base coordinate system;
D) moving the image sensor:
selecting the first coordinate system of the image sensor as the current coordinate system and adding the correction height; the controller controls the robot arm to move the image sensor, so that a first coordinate origin of the image sensor is moved to a correction reference position coordinate, the correction reference position coordinate is positioned above the positioning mark, and only the difference of Z-axis coordinate values is the correction height;
E) analyzing the positioning mark image:
the image sensor captures a positioning image which is an image with the positioning mark; the controller sets a positioning image center on the positioning image through an image analysis software and analyzes the positioning image; obtaining the position of the positioning mark in the positioning image relative to the center of the positioning image through the image analysis software, and enabling the controller to obtain a positioning mark image coordinate;
F) correcting the image and the real distance:
operating the robot arm to move the image sensor so that a first coordinate origin of the image sensor is moved to the first to fourth calibration coordinate points; when the first coordinate origin of the image sensor is moved to the first to fourth calibration coordinate points, the image sensor respectively captures a first image, a second image, a third image and a fourth image, the controller analyzes the first image, the second image, the third image and the fourth image through the image analysis software, and respectively obtains a first calibration image coordinate, a second calibration image coordinate, a third calibration image coordinate and a fourth calibration image coordinate of the positioning mark in the first to fourth images;
G) calculating the image correction data:
knowing the coordinate values of the first to fourth calibration coordinate points under the base coordinate system and the first to fourth calibration image coordinates, calculating image correction data; from the image correction data, the conversion relationship between distance in the image and distance in the real world can be known;
H) calculating the compensation quantity of the coordinate system of the image sensor:
calculating an image sensor first coordinate system compensation amount using the positioning mark image coordinate and the image correction data, so as to compensate the error between the position in the image of the image sensor and the position of the operating tool.
2. The vision-guided robotic correction method of claim 1, wherein: in step a), the Z-axis components of the first to fourth calibration coordinate points are all the same and are located at the same height.
3. The vision-guided robotic correction method of claim 1, wherein: the number of correction coordinate points needs to be four or more.
4. The vision-guided robotic correction method of claim 1, wherein: in step G), the method for calculating the image correction data is as follows:
the coordinates of the first to fourth calibration coordinate points are known as $X_{ri} = [x_{ri}\ \ y_{ri}]^T$, $i = 1 \sim 4$, and the corresponding first to fourth calibration image coordinates are $X_{ci} = [x_{ci}\ \ y_{ci}]^T$, $i = 1 \sim 4$, expressed in matrix form (the image coordinates augmented with a row of ones so that the affine matrix can carry a translation) as:

$$X_R = \begin{bmatrix} x_{r1} & x_{r2} & x_{r3} & x_{r4} \\ y_{r1} & y_{r2} & y_{r3} & y_{r4} \end{bmatrix}, \qquad X_c = \begin{bmatrix} x_{c1} & x_{c2} & x_{c3} & x_{c4} \\ y_{c1} & y_{c2} & y_{c3} & y_{c4} \\ 1 & 1 & 1 & 1 \end{bmatrix}$$

the matrix $X_R$ is formed by the first to fourth calibration coordinate points under the base coordinate system, and the matrix $X_c$ by the first to fourth calibration image coordinates in image space, the two being related by:

$$X_R = A X_c$$

wherein the matrix $A$ is an affine transformation matrix between two plane coordinate systems; by computing the Moore-Penrose pseudo-inverse $X_c^{+}$ of the matrix $X_c$, the matrix $A$ can be calculated:

$$A = X_R X_c^{+}$$

the pseudo-inverse $X_c^{+}$ can be solved by singular value decomposition (SVD), and the matrix $A$ is the image correction data, expressing the conversion relationship between distance in the image and distance in the real world.
5. The vision-guided robotic correction method of claim 1, wherein: in step H), the compensation amount of the first coordinate system of the image sensor is set to the controller, and a second coordinate system of the sensor is generated.
CN201910654284.9A 2019-07-19 2019-07-19 Vision-guided robot arm correction method Active CN112238453B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910654284.9A CN112238453B (en) 2019-07-19 2019-07-19 Vision-guided robot arm correction method

Publications (2)

Publication Number Publication Date
CN112238453A true CN112238453A (en) 2021-01-19
CN112238453B CN112238453B (en) 2021-08-31

Family

ID=74167333

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910654284.9A Active CN112238453B (en) 2019-07-19 2019-07-19 Vision-guided robot arm correction method

Country Status (1)

Country Link
CN (1) CN112238453B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105279775A (en) * 2014-07-23 2016-01-27 广明光电股份有限公司 Correcting device and method of mechanical arm
CN106767393A (en) * 2015-11-20 2017-05-31 沈阳新松机器人自动化股份有限公司 The hand and eye calibrating apparatus and method of robot
US20170368687A1 (en) * 2016-06-22 2017-12-28 Quanta Storage Inc. Method for teaching a robotic arm to pick or place an object
CN108122257A (en) * 2016-11-28 2018-06-05 沈阳新松机器人自动化股份有限公司 A kind of Robotic Hand-Eye Calibration method and device
CN109454634A (en) * 2018-09-20 2019-03-12 广东工业大学 A kind of Robotic Hand-Eye Calibration method based on flat image identification
CN109514554A (en) * 2018-11-30 2019-03-26 天津大学 Utilize the tool coordinates system quick calibrating method of robot end's vision system
CN109927036A (en) * 2019-04-08 2019-06-25 青岛小优智能科技有限公司 A kind of method and system of 3D vision guidance manipulator crawl

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
朱振友 et al., "Fast calibration algorithm for the 'hand-eye' relation in robot vision", 《光学技术》 (Optical Technique) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113560942A (en) * 2021-07-30 2021-10-29 新代科技(苏州)有限公司 Workpiece pick-and-place control device of machine tool and control method thereof
CN113560942B (en) * 2021-07-30 2022-11-08 新代科技(苏州)有限公司 Workpiece pick-and-place control device of machine tool and control method thereof
WO2023024961A1 (en) * 2021-08-26 2023-03-02 深圳市海柔创新科技有限公司 Compensation parameter generation method and device for sensor apparatus
CN113681146A (en) * 2021-10-25 2021-11-23 宁波尚进自动化科技有限公司 BTO intelligent correction device and method of full-automatic lead bonding machine
CN113681146B (en) * 2021-10-25 2022-02-08 宁波尚进自动化科技有限公司 BTO intelligent correction device and method of full-automatic lead bonding machine

Also Published As

Publication number Publication date
CN112238453B (en) 2021-08-31

Similar Documents

Publication Publication Date Title
KR102280663B1 (en) Calibration method for robot using vision technology
KR102532072B1 (en) System and method for automatic hand-eye calibration of vision system for robot motion
JP7207851B2 (en) Control method, robot system, article manufacturing method, program and recording medium
TWI699264B (en) Correction method of vision guided robotic arm
CN110125926B (en) Automatic workpiece picking and placing method and system
CN112238453B (en) Vision-guided robot arm correction method
CN110276799B (en) Coordinate calibration method, calibration system and mechanical arm
JP4021413B2 (en) Measuring device
US9050728B2 (en) Apparatus and method for measuring tool center point position of robot
JP3946711B2 (en) Robot system
JP3733364B2 (en) Teaching position correction method
JP4267005B2 (en) Measuring apparatus and calibration method
JP4191080B2 (en) Measuring device
JP5618770B2 (en) Robot calibration apparatus and calibration method
CN113601158B (en) Bolt feeding pre-tightening system based on visual positioning and control method
JP2016166872A (en) Vision system for training assembly system by virtual assembly of object
JP2021049607A (en) Controller of robot device for adjusting position of member supported by robot
JP6912529B2 (en) How to correct the visual guidance robot arm
CN114800574B (en) Robot automatic welding system and method based on double three-dimensional cameras
JPH06187021A (en) Coordinate correcting method for robot with visual sense
JP2016203282A (en) Robot with mechanism for changing end effector attitude
JP7482364B2 (en) Robot-mounted mobile device and system
KR100640743B1 (en) A calibration equipment of laser vision system for 6-axis robot
JP7183372B1 (en) Marker detection device and robot teaching system
CN114619233A (en) Locking positioning method, screw locking method, locking positioning device and screw machine

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant