CN111369625A - Positioning method, positioning device and storage medium


Info

Publication number
CN111369625A
CN111369625A
Authority
CN
China
Prior art keywords
coordinate
robot
mark point
point
image
Prior art date
Legal status
Granted
Application number
CN202010135994.3A
Other languages
Chinese (zh)
Other versions
CN111369625B (en)
Inventor
郭秋明
谢盛珍
刘江
袁继广
冯英俊
Current Assignee
Guangdong Lyric Robot Automation Co Ltd
Original Assignee
Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Lyric Robot Intelligent Automation Co Ltd
Priority to CN202010135994.3A
Publication of CN111369625A
Application granted
Publication of CN111369625B
Status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30108 Industrial image inspection
    • G06T2207/30164 Workpiece; Machine component

Abstract

The application provides a positioning method, a positioning device and a storage medium. The method includes: when an execution end of a robot clamps a part bearing a mark point to a specified position within a fixed field of view of an image acquisition device, acquiring an image of the mark point to obtain a first coordinate of the mark point in a first state; while the rotation center of the robot remains unchanged, calculating, from the motion path along which the robot clamps and moves the part, a second coordinate of the mark point after it rotates by a preset angle in a first designated direction under the control of the robot, and a third coordinate of the mark point after it rotates by the preset angle in a second designated direction under the control of the robot; and calculating the position of the rotation center of the robot in a robot coordinate system from the first coordinate, the second coordinate and the third coordinate. This solves the prior-art problem that the rotation center of a robot is difficult to determine accurately.

Description

Positioning method, positioning device and storage medium
Technical Field
The present application relates to the field of robotics, and in particular, to a positioning method, apparatus, and storage medium.
Background
A multi-axis robot, also known by names such as single-axis manipulator, industrial robotic arm and electric cylinder, is a multipurpose manipulator that supports automatic control and repeatable programming, has multiple degrees of freedom, and whose axes of motion form spatial right-angle relationships. Multi-axis robots are applied in common industrial production fields such as glue dispensing, plastic dripping, spraying, palletizing, sorting, packaging, welding, metal processing, handling, loading and unloading, assembly and printing, to improve production efficiency.
However, in application scenarios where vision and a robot are used jointly for positioning, it is difficult to determine the rotation center of the robot accurately, which may affect processing quality.
Disclosure of Invention
An object of the embodiments of the present application is to provide a positioning method, an apparatus, and a storage medium, so as to solve the problem in the prior art that it is difficult to accurately determine a rotation center of a multi-axis robot.
In a first aspect, an embodiment provides a positioning method, including:
when an execution end of a robot clamps a part bearing a mark point to a specified position within a fixed field of view of an image acquisition device, acquiring an image of the mark point to obtain a first coordinate of the mark point in a first state;
while the rotation center of the robot remains unchanged, calculating, from the motion path along which the robot clamps and moves the part, a second coordinate of the mark point after it rotates by a preset angle in a first designated direction under the control of the robot, and a third coordinate of the mark point after it rotates by the preset angle in a second designated direction under the control of the robot;
and calculating the position of the rotation center of the robot in a robot coordinate system according to the first coordinate, the second coordinate and the third coordinate.
In this method, the joint relationship between the image acquisition device and the robot is exploited: with the rotation center of the robot held unchanged, the mark point on the part is located and measured in different states while the robot moves the part, and the rotation center of the multi-axis robot is determined from the position changes of the mark point. The actual rotation center can thus be determined more accurately and expressed in the robot coordinate system, which improves positioning accuracy, facilitates subsequent processing operations such as deviation correction based on the calculated rotation center, and to a certain extent improves product processing quality.
In an optional embodiment, calculating, from the path along which the robot clamps and moves the part while the rotation center of the robot remains unchanged, the second coordinate of the mark point after it rotates by the preset angle in the first designated direction under the control of the robot, and the third coordinate of the mark point after it rotates by the preset angle in the second designated direction under the control of the robot, includes:
acquiring a first reference position of the robot in the first state;
with the rotation center of the robot unchanged and the first coordinate taken as the reference, acquiring a second reference position of the robot in a second state after the execution end of the robot, while clamping the part, rotates it by the preset angle in the first designated direction so that the mark point moves to a first rotation position, and then translates the part so that the mark point moves from the first rotation position to the specified position;
calculating the second coordinate, corresponding to the mark point at the first rotation position, from the coordinate relationship between the first reference position and the second reference position;
with the rotation center of the robot unchanged and the first coordinate taken as the reference, acquiring a third reference position of the robot in a third state after the execution end of the robot, while clamping the part, rotates it by the preset angle in the second designated direction so that the mark point moves to a second rotation position, and then translates the part so that the mark point moves from the second rotation position to the specified position;
and calculating the third coordinate, corresponding to the mark point at the second rotation position, from the coordinate relationship between the first reference position and the third reference position.
In this implementation, with the rotation center of the robot held unchanged, the robot rotates the part so that the mark point moves to rotation positions on either side of the first coordinate, then translates the mark point from each rotation position back into the fixed field of view. The reference position of the robot in the robot coordinate system (the second or third reference position) is re-acquired after the translation, and the originally unknown coordinate of each rotation position (the second or third coordinate) is calculated from the change of the reference position across the rotation and translation. The coordinates obtained in this way can be far apart, and the farther apart the three points used to fit the circle, the more accurately the rotation center can be calculated when the image acquisition device has a small field of view.
In an optional embodiment, before calculating the second coordinate corresponding to the mark point at the first rotation position, the method further comprises:
when the robot clamps and moves the part so as to translate the mark point from the first rotation position to the specified position, acquiring an image of the mark point to obtain a fourth coordinate of the mark point in the second state;
and the calculating of the second coordinate corresponding to the mark point at the first rotation position from the coordinate relationship between the first reference position and the second reference position includes:
calculating the second coordinate from the coordinate relationship between the first reference position and the second reference position together with the position of the fourth coordinate.
Through this implementation, after the mark point at the rotation position has been translated to the specified position, the image of the mark point is acquired again to obtain the fourth coordinate, which relaxes the image-acquisition requirements placed on the image acquisition device.
In an optional embodiment, before calculating the third coordinate corresponding to the mark point at the second rotation position, the method further comprises:
when the robot clamps and moves the part so as to translate the mark point from the second rotation position to the specified position, acquiring an image of the mark point to obtain a fifth coordinate of the mark point in the third state;
and the calculating of the third coordinate corresponding to the mark point at the second rotation position from the coordinate relationship between the first reference position and the third reference position includes:
calculating the third coordinate from the coordinate relationship between the first reference position and the third reference position together with the position of the fifth coordinate.
Through this implementation, after the mark point at the rotation position has been translated to the specified position, the image of the mark point is acquired again to obtain the fifth coordinate, which relaxes the image-acquisition requirements placed on the image acquisition device.
In an optional embodiment, the first rotation position is outside the fixed field of view and the second rotation position is outside the fixed field of view.
The imaging field of view of the image acquisition device may be small owing to constraints such as the processing mechanism, the product size and the accuracy of the vision equipment, making it difficult to calculate the rotation center accurately from image acquisition alone. In this implementation, with both the first rotation position and the second rotation position outside the fixed field of view, the calculated second and third coordinates lie far from the first coordinate, so the rotation center can be calculated accurately and the positioning precision is high.
In an optional embodiment, before acquiring the image of the mark point to obtain the first coordinate of the mark point in the first state, the method further includes:
jointly calibrating a robot coordinate system of the robot and an image coordinate system of the image acquisition device.
In this way, the coordinates of the mark point in the robot coordinate system can be obtained by conversion at any time from the acquired images of the mark point.
In an optional embodiment, the jointly calibrating the robot coordinate system of the robot and the image coordinate system of the image capturing device includes:
and jointly calibrating the robot coordinate system of the robot and the image coordinate system of the image acquisition equipment by a nine-point calibration method.
In this way, the coordinate conversion relationship between the robot coordinate system and the image coordinate system can be determined quickly, so the image coordinates obtained from each image acquisition can be quickly converted into coordinates in the robot coordinate system.
In a second aspect, an embodiment provides a positioning device, the device comprising:
the acquisition module is used for acquiring an image of a mark point to obtain a first coordinate of the mark point in a first state when an execution end of the robot clamps a part bearing the mark point to a specified position within a fixed field of view of the image acquisition device;
the calculation module is used for calculating, from the path along which the robot clamps and moves the part while the rotation center of the robot remains unchanged, a second coordinate of the mark point after it rotates by a preset angle in a first designated direction under the control of the robot, and a third coordinate of the mark point after it rotates by the preset angle in a second designated direction under the control of the robot;
the calculation module is further configured to calculate a position of the rotation center of the robot in a robot coordinate system according to the first coordinate, the second coordinate, and the third coordinate.
The apparatus can perform the method provided in the first aspect and can accurately locate the rotation center of a multi-axis robot, thereby improving visual positioning accuracy.
In an alternative embodiment, the apparatus further comprises:
and the joint positioning module is used for carrying out joint calibration on a robot coordinate system of the robot and an image coordinate system of the image acquisition equipment.
In a third aspect, embodiments provide a storage medium having a computer program stored thereon, which, when executed by a processor, performs the method of the first aspect.
Drawings
To illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required by the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; those skilled in the art can obtain other related drawings from these drawings without inventive effort.
Fig. 1 is a flowchart of a positioning method according to an embodiment of the present disclosure.
Fig. 2 is a schematic position diagram among a first coordinate, a second coordinate, and a third coordinate in an example provided by the embodiment of the present application.
Fig. 3 is a schematic position diagram among a first coordinate, a second coordinate, and a third coordinate in another example provided by the embodiment of the present application.
Fig. 4 is a schematic diagram of the embodiment of fig. 3 in which the mark point is moved to the first rotation position by the robot and then translated into the fixed field of view.
Fig. 5 is a schematic diagram of the embodiment of fig. 3 in which the mark point is moved to the second rotation position by the robot and then translated into the fixed field of view.
Fig. 6 is a functional block diagram of a positioning apparatus according to an embodiment of the present disclosure.
Fig. 7 is a block diagram of a robot according to an embodiment of the present disclosure.
Detailed Description
The technical solutions in the embodiments of the present application will be described below with reference to the drawings in the embodiments of the present application.
In practical applications, multi-axis robots vary in form: the rotation center of some multi-axis robots may be close to the execution end of the robot, while that of others may be far from it. The transition structure from the rotation center to the execution end also varies, and its general shape may be a line segment, a combination of line segments, an irregular curve, or the like.
Given the many kinds and varied shapes of multi-axis robots, when product quality requirements are high, failure to determine the rotation center of a multi-axis robot accurately affects the industrial production process, reducing not only production efficiency but also product quality.
In view of this, the inventor proposes the following embodiments to accurately position the rotation center of the multi-axis robot.
Referring to fig. 1, fig. 1 is a flowchart of a positioning method according to an embodiment of the present disclosure. The positioning method can be used for positioning the rotation center of the multi-axis robot.
The "robot" referred to in the positioning method may be a multi-axis robot. The multi-axis robot may include a connection portion, a rotation structure and an execution end, with a transition structure between the connection portion and the rotation structure and another between the rotation structure and the execution end.
The following describes the positioning method provided by the embodiment of the present application in detail with reference to fig. 1.
As shown in FIG. 1, the positioning method includes steps S11-S13. The method may be implemented by a set of processing modules configured on different device carriers to provide the corresponding functionality. For example, among the processing modules, the functional module for acquiring images and the functional module for controlling the image acquisition device to take pictures may be configured in the image acquisition device, or on an industrial personal computer or vision software server communicatively connected to it; the functional module for controlling the movement of the robot may be deployed in a processor of the multi-axis robot, or in an industrial personal computer connected to the multi-axis robot.
As one implementation, all modules implementing the positioning method may be deployed in a group of monitoring devices that can acquire the images captured by the image acquisition device and also control the movement of the multi-axis robot. The monitoring device may be built into the robot, or serve as an industrial personal computer controlling both the image acquisition device and the multi-axis robot.
S11: when the execution end of the robot clamps the part bearing the mark point to a specified position within the fixed field of view of the image acquisition device, acquiring an image of the mark point to obtain a first coordinate of the mark point in a first state.
The execution end of the robot may be provided with a clamp, a gripper, a mounting rack or another structure capable of manipulating parts. The mark point may be a through hole, a groove or a protrusion on the surface of the part, a corner point of the part, or a dot, cross or similar marker affixed to the part.
S12: and when the rotation center of the robot is kept unchanged, calculating a second coordinate of the mark point after rotating along the first appointed direction according to a preset angle under the control of the robot according to the movement path of the part clamped by the robot, and a third coordinate of the mark point after rotating along the second appointed direction according to the preset angle under the control of the robot.
S13: and calculating the position of the rotation center of the robot in the robot coordinate system according to the first coordinate, the second coordinate and the third coordinate.
Regarding S11, the image acquisition device captures an image of the part and thereby an image of the mark point. From this image, the image coordinate of the mark point can be obtained, and when the coordinate conversion relationship between image coordinates and robot coordinates is known, the image coordinate of the mark point can be converted into its coordinate in the robot coordinate system. The coordinate conversion can be performed according to the coordinate relationship between the image coordinate system and the robot coordinate system obtained from the joint calibration carried out in advance. It can be understood that, since the relationship between the image coordinate system and the pixel coordinate system is calibrated before image acquisition, the coordinate conversion relationship between the pixel coordinate system and the robot coordinate system can also be derived from the conversion relationship between the image coordinate system and the robot coordinate system.
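As an illustration of this conversion step (not part of the patent), the following Python sketch applies a pre-calibrated 2x3 affine matrix to an image coordinate of the mark point; the matrix values, function name and pixel coordinates are assumptions:

```python
import numpy as np

# Assumed 2x3 affine matrix from the joint calibration (illustrative values):
# it maps homogeneous image coordinates [x, y, 1] to robot coordinates.
R = np.array([[0.05, 0.00, 120.0],
              [0.00, 0.05, -35.0]])

def image_to_robot(pt_image, R):
    """Convert an (x, y) image coordinate to the robot coordinate system."""
    x, y = pt_image
    return R @ np.array([x, y, 1.0])

# First coordinate Pc1 of the mark point in the robot coordinate system,
# converted from its detected image coordinate (assumed pixel values).
Pc1 = image_to_robot((640.0, 480.0), R)
```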
Wherein the first coordinate is the coordinate of the mark point in the robot coordinate system.
As an embodiment, when the execution end of a multi-axis robot (robot for short) clamps and moves the part bearing the mark point so that the part enters the fixed field of view of the image acquisition device and the mark point is located at the center of that field of view, the trigger-photographing condition of the image acquisition device is met. The image acquisition device captures a first acquired image in the first state, the position of the mark point is determined from that image, and, using the calibrated relationship between the coordinate systems, this position is expressed as the first coordinate Pc1(Xc1, Yc1) in the robot coordinate system.
Regarding S12, while the rotation center of the robot remains unchanged (i.e., the rotation center is not replaced), the robot clamps and moves the part, and the positions of the mark point after rotating by the preset angle in the first and second designated directions are calculated relative to its position in the first state, yielding the second and third coordinates on either side of the first coordinate. The first designated direction and the second designated direction are opposite directions of rotation.
The size of the preset angle is related to the size of the fixed field of view.
Referring to fig. 2, in one embodiment of S12 the preset angle is a small angle: a first rotation angle Θ1 such that, after the robot rotates the mark point, the mark point at the end of the rotation is still within the fixed field of view (though not at its center). Starting from the first state, the mark point is driven through the set first rotation angle Θ1 in the first and then the second designated direction, an image of the mark point is acquired within the fixed field of view at the end of each rotation, and from these images the second coordinate Pc2(Xc2, Yc2) and the third coordinate Pc3(Xc3, Yc3) corresponding to the two rotations are obtained. Performing S13 on the second coordinate Pc2(Xc2, Yc2) and third coordinate Pc3(Xc3, Yc3) obtained in this way allows the positioning calculation to be done with a small field of view and a small angle, and determines the rotation center Pcenter(Xco, Yco).
In another embodiment of S12, referring to fig. 3, the preset angle is a larger angle: a second rotation angle Θ2 such that, after the robot rotates the mark point, the mark point at the end of the rotation lies outside the fixed field of view. Since the mark point at the end of the rotation is not in the fixed field of view, the second coordinate Pc2'(Xc2', Yc2') and the third coordinate Pc3'(Xc3', Yc3') of the mark point after moving through the set second rotation angle Θ2 in the first and second designated directions must be obtained by indirect calculation. Details of this embodiment are described below.
Compared with the previous embodiment (using the first rotation angle Θ1), this embodiment (using the second rotation angle Θ2) can perform the positioning calculation with a small field of view and a large angle; with the larger rotation angle, the first, second and third coordinates lie farther apart, so the calculated rotation center Pcno(Xcno, Ycno) is more accurate and the positioning precision is higher.
After the second and third coordinates in the robot coordinate system are obtained in S12, S13 is performed together with the first coordinate obtained in S11. By the principle that three points determine a circle, a circle center can be found, and that center is the position of the rotation center. The calculated position of the rotation center is a position in the robot coordinate system.
In the method of S11-S13, the joint calibration relationship between the vision device (image acquisition device) and the robot is used: with the rotation center of the robot held unchanged, the mark point on the part is located and measured in different states while the robot moves the part, and the rotation center of the multi-axis robot is determined from the position changes of the mark point. The actual rotation center can thus be determined more accurately and expressed in the robot coordinate system, which benefits more precise motion control based on the calculated rotation center in subsequent industrial processes, for example processing operations such as deviation correction.
As an application scenario, in a vision positioning project, once the accurate rotation center has been determined by this method, angle-correction work that depends on the robot's rotation center gains improved system visual positioning accuracy, facilitating more accurate correction.
The method can be used to calculate the rotation center of the multi-axis robot before formal machining of products and parts, helping to improve product machining quality.
The calculation of the second coordinate and the third coordinate in step S12 is described below. The two calculations are similar and may be cross-referenced.
As an implementation, the process of calculating the second coordinate may comprise sub-steps S121-S123.
S121: a first reference position of the robot in a first state is acquired.
The first reference position Pcn1(Xcn1, Ycn1) of the robot in the first state may be recorded when S11 is performed. The first reference position may be determined from an absolute-coordinate feature position on the robot, i.e., a position in the robot coordinate system of a reference point on the multi-axis robot, for example a reference point on its connection portion. The change of this absolute-coordinate feature position serves as the change of the reference position during the movement of the robot.
While the rotation center of the robot remains unchanged, the reference position does not change during pure rotation, but changes along the translation path during translation. In either case, until the final rotation center is calculated, the relative positional relationship between the rotation center of the robot and the absolute-coordinate feature position remains unchanged no matter how the reference position corresponding to that feature position is translated.
S122: and under the condition that the rotation center of the robot is not changed, taking the first coordinate as a reference position, rotating the clamping part at the execution tail end of the robot along a first designated direction by a preset angle, moving the mark point to a first rotating position, and translating the clamping part to translate the mark point from the first rotating position to the designated position, so as to obtain a second reference position of the robot in a second state.
Before the rotation operation is finished and the translation operation is performed, if the image of the mark point can be acquired in the fixed view field, the image acquisition device can be triggered to acquire the image after the mark point is driven to rotate by a preset angle, so that a second acquired image is obtained, and a second coordinate Pc2(Xc2, Yc2) in the robot coordinate system is obtained through conversion according to the position of the mark point in the second acquired image.
Wherein, if the image capturing apparatus is set to perform image capturing only when the marker point is recognized at the designated position of the fixed field of view, even if the marker point can be recognized in the fixed field of view when rotated to the first rotational position, the second reference position Pcn2(Xcn2, Ycn2) of the robot in the second state should be acquired when the robot translates the marker point from the first rotational position to the designated position of the fixed field of view, thereby calculating the second coordinates Pc2 ' (Xc2 ', Yc2 ').
On the other hand, if the marker point is not in the fixed field of view due to a large rotation angle before the rotation operation is completed and the translation operation is performed, and the image of the marker point at the first rotation position cannot be acquired, the position of the marker point rotated to the first rotation position at this time is considered to be unknown (Pc 2'), and needs to be determined by calculation. When the mark point is controlled by the robot, when the robot translates the mark point at the first rotation position to the designated position, the position of the reference point changes in the process of translating the mark point driven by the robot (but the relative position relationship between the rotation center and the reference point, namely the absolute coordinate characteristic position, is not changed, and the transition structure between the rotation center and the execution end is not changed), so that after the translation operation is finished, the reference point position, namely the second reference position Pcn2 of the robot in the second state, needs to be acquired again (Xcn2, Ycn 2). At the end of the translation, the position of the center of rotation of the robot in the second state is noted as Pcno1(Xcno1, Ycno 1).
S123 may be performed after the first reference position Pcn1(Xcn1, Ycn1) and the second reference position Pcn2(Xcn2, Ycn2) are obtained.
S123: calculating the second coordinate, corresponding to the mark point at the first rotation position, from the coordinate relationship between the first reference position and the second reference position.
When the mark point has been brought back into the field of view of the image acquisition device under the control of the robot and its image can be acquired, referring to fig. 4, the fourth coordinate Pc4(Xc4, Yc4) of the mark point in the second state is obtained from the new image. Although the mark point is again at the specified position (e.g., the center) of the fixed field of view, the reference position has changed because of the robot's own rotation and translation (the relative positional relationship between the rotation center and the reference point, i.e., the absolute-coordinate feature position, has not), and it follows from the coordinate correspondence between the image coordinate system and the robot coordinate system that the fourth coordinate and the first coordinate are different coordinates in the robot coordinate system.
From the first reference position corresponding to the first coordinate and the second reference position corresponding to the fourth coordinate, the second coordinate Pc2'(Xc2', Yc2') can be calculated using the coordinate relationship between the first and second reference positions together with the position of the fourth coordinate.
The process of calculating the third coordinate, similar to that of the second coordinate described above, may include sub-steps S124-S126.
S124: a first reference position of the robot in a first state is acquired.
For details of the first reference position, please refer to the related description of the aforementioned S121, which is not repeated herein.
S125: and under the condition that the rotation center of the robot is not changed, taking the first coordinate as a reference position, rotating the clamping part at the execution tail end of the robot by a preset angle along a second specified direction, moving the mark point to a second rotating position, and horizontally moving the clamping part to horizontally move the mark point from the second rotating position to the specified position, so as to obtain a third reference position of the robot in a third state.
Similar to the foregoing S122, if an image of the mark point can be acquired within the fixed field of view after the rotation and before the translation, the image acquisition device may be triggered once the mark point has been rotated by the preset angle, yielding a third acquired image from which the third coordinate Pc3(Xc3, Yc3) in the robot coordinate system is obtained by conversion.
If the image acquisition device is set to capture images only when the mark point is recognized at the specified position of the fixed field of view, then even if the mark point can be recognized in the fixed field of view at the second rotation position, the third reference position Pcn3(Xcn3, Ycn3) of the robot in the third state should be acquired when the robot has translated the mark point from the second rotation position to the specified position, from which the third coordinate Pc3'(Xc3', Yc3') is calculated.
On the other hand, if the rotation angle is so large that the mark point is outside the fixed field of view after the rotation and before the translation, so that no image of the mark point at the second rotation position can be acquired, the position of the mark point at the second rotation position (Pc3') is treated as unknown and must be determined by calculation. When the robot translates the mark point from the second rotation position to the specified position, the position of the reference point changes during the translation (while the relative positional relationship between the rotation center and the reference point, i.e., the absolute-coordinate feature position, does not change, nor does the transition structure between the rotation center and the execution end); therefore, after the translation, the reference position must be acquired again, namely the third reference position Pcn3(Xcn3, Ycn3) of the robot in the third state. At the end of the translation, the position of the rotation center of the robot in the third state is denoted Pcno2(Xcno2, Ycno2).
S126 may be performed after the first reference position Pcn1(Xcn1, Ycn1) and the third reference position Pcn3(Xcn3, Ycn3) are obtained.
S126: calculating the third coordinate, corresponding to the mark point at the second rotation position, from the coordinate relationship between the first reference position and the third reference position.
When the mark point re-enters the field of view of the image acquisition device under the control of the robot and has been translated from the second rotation position to the specified position in the fixed field of view, referring to fig. 5, an image of the mark point is acquired to obtain the fifth coordinate Pc5(Xc5, Yc5) of the mark point in the third state; the fifth coordinate differs from the first coordinate in the robot coordinate system.
From the first reference position Pcn1(Xcn1, Ycn1) recorded when the first coordinate was obtained and the third reference position Pcn3(Xcn3, Ycn3) recorded when the fifth coordinate was obtained, the third coordinate Pc3'(Xc3', Yc3') can be calculated using the coordinate relationship between the first and third reference positions together with the position of the fifth coordinate.
Through the implementations S121-S123 for calculating the second coordinate and S124-S126 for calculating the third coordinate, with the rotation center of the robot unchanged, the robot's rotation of the part moves the mark point to the rotation positions on either side of the first coordinate; the robot then translates the mark point from each rotation position back into the fixed field of view, the reference position of the robot in the robot coordinate system (the second or third reference position) is re-acquired after the translation, and the originally unknown coordinate of each rotation position (the second or third coordinate) is calculated from the change of the reference position across the rotation and translation. The second and third coordinates obtained in this way can lie far from the first coordinate, and the farther apart the points used to fit the circle, the more accurately the rotation center can be calculated when the image acquisition device has a small field of view.
Acquiring the image of the mark point again after the translation to obtain the fourth/fifth coordinate also lowers the control cost of the positioning process: if the fourth/fifth coordinate is obtained by triggering a photograph after the mark point at the rotation position has been translated to the specified position, the acquisition mode of photographing immediately after rotating by the preset angle can be dropped, relaxing the image-acquisition requirements placed on the image acquisition device.
Optionally, the first rotation position in S122 and the second rotation position in S125 may both lie outside the fixed field of view.
Indeed, when the rotation angle set in S12 is large, the mark point may be outside the fixed field of view at the end of the rotation, i.e., both the first and second rotation positions may lie outside it. Even when, in an industrial production system, the imaging field of view of the image acquisition device is small owing to constraints such as the processing mechanism, the product size and the accuracy of the vision equipment, so that the rotation center is difficult to calculate accurately from image acquisition alone, placing the first and second rotation positions outside the fixed field of view makes the calculated second and third coordinates lie far from the first coordinate, so the rotation center can be calculated accurately and the positioning precision is high.
The calculation process of the positioning method of the present application is described in detail below with reference to figs. 2 to 5. In each of the examples corresponding to figs. 2-5, the robot coordinate system and the image coordinate system have been jointly calibrated.
The first coordinates Pc1(Xc1, Yc1) shown in fig. 2 or fig. 3, which are coordinates in the robot coordinate system, may be acquired through the aforementioned S11.
In the implementation of S12, if the rotation angle through which the robot drives the part is small, referring to fig. 2, with the rotation center Pcenter(Xco, Yco) unchanged, after the robot rotates the mark point on the part clockwise and counterclockwise by the first rotation angle Θ1, the image acquisition device can be controlled to capture images, yielding the second coordinate Pc2(Xc2, Yc2) and the third coordinate Pc3(Xc3, Yc3). From the first coordinate Pc1(Xc1, Yc1) together with the second and third coordinates obtained at the small rotation angle, the rotation center Pcenter(Xco, Yco) is calculated by the principle that three points determine a circle, using the following first expression.
The first expression is a set of expressions, the first expression including:
L1 = √((Xc2 − Xc1)² + (Yc2 − Yc1)²);
L2 = √((Xc3 − Xc1)² + (Yc3 − Yc1)²);
RC1 = L1 / (2·sin(Θ1/2)) = L2 / (2·sin(Θ1/2));
(Xco − Xc1)² + (Yco − Yc1)² = RC1²;
(Xco − Xc2)² + (Yco − Yc2)² = RC1²; (Xco − Xc3)² + (Yco − Yc3)² = RC1².
In the circle formed by the three points, the first coordinate Pc1(Xc1, Yc1), the second coordinate Pc2(Xc2, Yc2) and the third coordinate Pc3(Xc3, Yc3), L1 is the chord corresponding to the arc between the first coordinate and the second coordinate, L2 is the chord corresponding to the arc between the first coordinate and the third coordinate, RC1 is the radius of the circle formed by the three points, and Θ1 is the preset rotation angle. The radius RC1 is the virtual radius calculated from the three points that form the circle.
According to the circle-fitting principle of the first expression, the center Pcenter(Xco, Yco) of the circle formed by the three points, Pc1(Xc1, Yc1), Pc2(Xc2, Yc2) and Pc3(Xc3, Yc3), is determined as the rotation center in the robot coordinate system.
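As an illustration of this three-point circle fit (not part of the patent), the following Python sketch computes the circumcenter in closed form; the numeric coordinates are assumptions:

```python
import numpy as np

def circumcenter(p1, p2, p3):
    """Center of the circle through three non-collinear points (step S13)."""
    ax, ay = p1
    bx, by = p2
    cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        raise ValueError("points are collinear; no unique circle")
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return np.array([ux, uy])

# Illustrative coordinates only (Pc1, Pc2, Pc3 as obtained in S11/S12):
Pc1, Pc2, Pc3 = (100.0, 200.0), (112.3, 195.1), (88.7, 193.4)
Pcenter = circumcenter(Pc1, Pc2, Pc3)  # rotation center Pcenter(Xco, Yco)
```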
In the implementation of S12, if the rotation angle through which the robot drives the part is large, referring to fig. 3, with the rotation center Pcno(Xcno, Ycno) unchanged, after the robot rotates the mark point on the part clockwise and counterclockwise by the second rotation angle Θ2, the two rotation positions are the unknown points Pc2' and Pc3' (the first and second rotation positions), and the current reference position of the robot, i.e., the first reference position Pcn1(Xcn1, Ycn1), is recorded.
For the unknown point Pc2', referring to fig. 4, after the robot translates the mark point at Pc2' to the center of the fixed field of view, the image acquisition device is triggered to take a picture and coordinate conversion is performed, giving the fourth coordinate Pc4(Xc4, Yc4) of the mark point in the robot coordinate system once it has been translated from the first rotation position to the center of the field of view. The current reference position of the robot, i.e., the second reference position Pcn2(Xcn2, Ycn2), is acquired at the end of the translation.
The position of the unknown point Pc2' at the first rotation position is calculated by the following second expression, yielding the second coordinate Pc2'(Xc2', Yc2').
The second expression is a set of expressions, the second expression comprising:
ΔX1=Xcn2-Xcn1;
ΔY1=Ycn2-Ycn1;
Xc2’=Xc4+ΔX1;
Yc2’=Yc4+ΔY1。
where ΔX1 and ΔY1 represent the coordinate difference between the second coordinate Pc2'(Xc2', Yc2') and the fourth coordinate Pc4(Xc4, Yc4), and equally the coordinate difference between the first reference position Pcn1(Xcn1, Ycn1) and the second reference position Pcn2(Xcn2, Ycn2).
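A minimal Python sketch of this step (not part of the patent; names and numeric values are assumptions), applying the second expression literally, i.e. shifting the re-acquired marker coordinate by the reference-position difference:

```python
import numpy as np

def rotated_point(pcn_before, pcn_after, p_marker_reacquired):
    """Recover an unknown rotation-position coordinate (Pc2' or Pc3').

    pcn_before          -- first reference position Pcn1
    pcn_after           -- reference position after the translation (Pcn2/Pcn3)
    p_marker_reacquired -- marker coordinate re-acquired at the specified
                           position (fourth coordinate Pc4 / fifth Pc5)
    """
    delta = np.asarray(pcn_after) - np.asarray(pcn_before)   # (ΔX, ΔY)
    return np.asarray(p_marker_reacquired) + delta           # Pc2' / Pc3'

# Illustrative values only:
Pcn1, Pcn2, Pc4 = (10.0, 20.0), (14.5, 17.2), (101.0, 199.0)
Pc2_prime = rotated_point(Pcn1, Pcn2, Pc4)  # second coordinate Pc2'
```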
Correspondingly, referring to fig. 5, for the unknown point Pc3', after the robot translates the mark point at Pc3' to the center of the fixed field of view, the image acquisition device is triggered to take a picture and coordinate conversion is performed, giving the fifth coordinate Pc5(Xc5, Yc5) of the mark point in the robot coordinate system once it has been translated from the second rotation position to the center of the fixed field of view. The current reference position of the robot, i.e., the third reference position Pcn3(Xcn3, Ycn3), is acquired at the end of the translation.
The position of the unknown point Pc3' at the second rotation position is calculated by the following third expression, yielding the third coordinate Pc3'(Xc3', Yc3').
The third expression is a set of expressions, and the third expression includes:
ΔX2=Xcn3-Xcn1;
ΔY2=Ycn3-Ycn1;
Xc3’=Xc5+ΔX2;
Yc3’=Yc5+ΔY2。
where ΔX2 and ΔY2 represent the coordinate difference between the third coordinate Pc3'(Xc3', Yc3') and the fifth coordinate Pc5(Xc5, Yc5), and equally the coordinate difference between the first reference position Pcn1(Xcn1, Ycn1) and the third reference position Pcn3(Xcn3, Ycn3).
After the second coordinate Pc2'(Xc2', Yc2') and the third coordinate Pc3'(Xc3', Yc3') have been calculated via the second and third expressions, these two points, together with the first coordinate Pc1(Xc1, Yc1) obtained in S11, are substituted into the following fourth expression to determine the more accurate rotation center Pcno(Xcno, Ycno).
The fourth expression is a set of expressions, the fourth expression comprising:
L3 = √((Xc2' − Xc1)² + (Yc2' − Yc1)²);
L4 = √((Xc3' − Xc1)² + (Yc3' − Yc1)²);
RC2 = L3 / (2·sin(Θ2/2)) = L4 / (2·sin(Θ2/2));
(Xcno − Xc1)² + (Ycno − Yc1)² = RC2²;
(Xcno − Xc2')² + (Ycno − Yc2')² = RC2²; (Xcno − Xc3')² + (Ycno − Yc3')² = RC2².
among circles formed by three points of the first coordinate Pc1(Xc1, Yc1), the second coordinate Pc2 '(Xc 2', Yc2 '), and the third coordinate Pc 3' (Xc3 ', Yc 3'), L3 is a line segment corresponding to an arc formed by the first coordinate and the second coordinate, L4 is a line segment corresponding to an arc formed by the first coordinate and the third coordinate, and R4 is a line segment corresponding to an arc formed by the first coordinate and the third coordinateC2Is a radius in a circle formed by three points, i.e., a first coordinate Pc1(Xc1, Yc1), a second coordinate Pc2 '(Xc 2', Yc2 '), and a third coordinate Pc 3' (Xc3 ', Yc 3'), and Θ 2 is a preset rotation angle. Wherein the radius RC2Is the virtual radius calculated from the rounded three points.
From the fourth expression above, it can be determined that: a circle center Pcno (Xcno, Ycno) in a circle formed by three points, i.e., a first coordinate Pc1(Xc1, Yc1), a second coordinate Pc2 '(Xc 2', Yc2 '), and a third coordinate Pc 3' (Xc3 ', Yc 3'), is set as a rotation center in the robot coordinate system.
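Putting the steps together, a short sketch of the large-angle flow, reusing the circumcenter and rotated_point helpers defined in the earlier sketches; Pcn3 and Pc5 are assumed to have been acquired as described in S125/S126, with illustrative values:

```python
# Illustrative values only (acquired as described in S125/S126):
Pcn3, Pc5 = (6.1, 23.8), (97.5, 201.3)

# Second and third expressions: recover the unknown rotation positions.
Pc2_prime = rotated_point(Pcn1, Pcn2, Pc4)  # first rotation position
Pc3_prime = rotated_point(Pcn1, Pcn3, Pc5)  # second rotation position

# Fourth expression: three-point circle fit gives the rotation center.
Pcno = circumcenter(Pc1, Pc2_prime, Pc3_prime)
```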
Optionally, to ensure that the coordinates of the mark point in the robot coordinate system (e.g., the first, fourth and fifth coordinates) can be obtained by conversion from images of the mark point, step S10 may be performed before S11.
S10: jointly calibrating the image coordinate system corresponding to the image acquisition device and the robot coordinate system corresponding to the robot.
As an implementation of S10, S10 may include: and jointly calibrating the robot coordinate system of the robot and the image coordinate system of the image acquisition equipment by a nine-point calibration method.
When the nine-point calibration method is executed, the robot sends a trigger-calibration instruction carrying its current coordinate point (X0, Y0) to the vision software server over TCP/IP. The vision software server receives the calibration instruction, parses it to obtain the robot's current point coordinate, and, with the robot's moving distance set to S, calculates the remaining eight point coordinates from the current point: in the X-axis direction the points (X0+S, Y0) and (X0-S, Y0); likewise in the Y-axis direction the points (X0, Y0+S) and (X0, Y0-S); and in the diagonal directions of the X- and Y-axes the points (X0+S, Y0+S), (X0+S, Y0-S), (X0-S, Y0+S) and (X0-S, Y0-S). The vision software server then returns the eight calculated coordinates of the nine-square grid to the robot over TCP/IP.
The robot receives the eight point coordinates returned by the vision software server over TCP/IP. Starting from its current point, it moves point by point in the order: current point, middle point of the third column of the nine-square grid, upper-right point, middle point of the first row, upper-left point, middle point of the first column, lower-left point, middle point of the third row, lower-right point. Each time it reaches a point it dwells for 1 s and sends a trigger-photographing instruction to the vision software server over TCP/IP; on receiving the instruction, the server calls the image acquisition device to take a picture and obtains the pixel coordinate/image coordinate of the mark point on the part from the acquired image. Moving through the nine points thus yields nine robot point coordinates, and the nine photographs yield nine pixel coordinates/image coordinates. The nine robot coordinates are paired with the nine pixel coordinates/image coordinates, and the conversion relationship between pixel/image coordinates and robot coordinates is obtained by calculation and calibration. The conversion between the coordinate systems rotates and translates the image coordinate system/pixel coordinate system to coincide with the robot coordinate system, essentially without changing the robot coordinate system. Finally, the coordinate relationship between image coordinates in the image coordinate system and robot coordinates in the robot coordinate system can be expressed by the following matrix expression.
The matrix expression includes:
[Xr, Yr, 1]ᵀ = R · [Xi, Yi, 1]ᵀ
where the matrix R represents the coordinate transformation relationship between the image coordinate system and the robot coordinate system, [Xi, Yi, 1]ᵀ denotes an arbitrary image coordinate in the image coordinate system, and [Xr, Yr, 1]ᵀ denotes the coordinate in the robot coordinate system obtained by converting that image coordinate under the action of R. After each image acquisition by the image acquisition device, the image coordinates can be converted into coordinates in the robot coordinate system according to this matrix expression.
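As an illustration (not part of the patent), the following Python sketch fits such a matrix R to the nine correspondences by least squares; the grid values, point values and function name are assumptions, and the TCP/IP exchange and image processing are omitted:

```python
import numpy as np

def calibrate_nine_points(image_pts, robot_pts):
    """Fit a 2x3 affine matrix R with robot ≈ R @ [x_img, y_img, 1]."""
    img = np.asarray(image_pts, dtype=float)          # (9, 2)
    rob = np.asarray(robot_pts, dtype=float)          # (9, 2)
    A = np.hstack([img, np.ones((len(img), 1))])      # homogeneous (9, 3)
    # Least-squares solution of A @ R.T ≈ rob over the nine correspondences.
    R_T, *_ = np.linalg.lstsq(A, rob, rcond=None)
    return R_T.T                                      # (2, 3)

# Nine robot grid points around (X0, Y0) with step S, as described above
# (illustrative values only):
X0, Y0, S = 100.0, 50.0, 10.0
robot_pts = [(X0 + dx, Y0 + dy) for dx in (-S, 0.0, S) for dy in (-S, 0.0, S)]
# The image points would come from the nine triggered photographs; here they
# are synthesized with an assumed scale/offset purely to make the sketch run.
image_pts = [(200.0 + 2.0 * (x - X0), 300.0 - 2.0 * (y - Y0))
             for x, y in robot_pts]
R = calibrate_nine_points(image_pts, robot_pts)
```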
It is understood that, in addition to the nine-point calibration method, a person skilled in the art may perform joint calibration on each coordinate system by other joint calibration methods to facilitate coordinate transformation at any time.
Based on the same inventive concept, referring to fig. 6, an embodiment of the present application further provides a positioning apparatus 200 that can implement the aforementioned positioning method. The functional modules of the apparatus may be installed on at least one device carrier, including but not limited to a robot, an image acquisition device, an industrial personal computer, a server and similar equipment.
As shown in fig. 6, the positioning apparatus 200 includes: the device comprises an acquisition module 201 and a calculation module 202.
The acquisition module 201 is used for acquiring an image of a mark point to obtain a first coordinate of the mark point in a first state when the execution end of the robot clamps a part bearing the mark point to a specified position within the fixed field of view of the image acquisition device;
the calculation module 202 is used for calculating, from the path along which the robot clamps and moves the part while the rotation center of the robot remains unchanged, a second coordinate of the mark point after it rotates by a preset angle in the first designated direction under the control of the robot, and a third coordinate of the mark point after it rotates by the preset angle in the second designated direction under the control of the robot;
the calculating module 202 is further configured to calculate a position of the rotation center of the robot in the robot coordinate system according to the first coordinate, the second coordinate, and the third coordinate.
The device can execute the positioning method, and can accurately position the rotation center of the multi-axis robot, thereby improving the visual positioning precision.
Optionally, the acquisition module 201 may further be used to acquire the first reference position of the robot in the first state. The acquisition module 201 is further used to acquire the second reference position of the robot in the second state when, with the rotation center of the robot unchanged and the first coordinate as the reference, the execution end of the robot rotates the clamped part by the preset angle in the first designated direction so that the mark point moves to the first rotation position, and then translates the part so that the mark point moves from the first rotation position to the specified position. The calculation module 202 may further be used to calculate the second coordinate, corresponding to the mark point at the first rotation position, from the coordinate relationship between the first reference position and the second reference position. The acquisition module 201 is further used to acquire the third reference position of the robot in the third state when, with the rotation center of the robot unchanged and the first coordinate as the reference, the execution end of the robot rotates the clamped part by the preset angle in the second designated direction so that the mark point moves to the second rotation position, and then translates the part so that the mark point moves from the second rotation position to the specified position. The calculation module 202 may further be used to calculate the third coordinate, corresponding to the mark point at the second rotation position, from the coordinate relationship between the first reference position and the third reference position.
Optionally, the acquisition module 201 may be further configured to acquire an image of the mark point to obtain a fourth coordinate of the mark point in the second state when the robot moves the clamped part to translate the mark point from the first rotation position to the specified position. Correspondingly, the calculation module 202 is further configured to calculate the second coordinate according to the coordinate relationship between the first reference position and the second reference position, together with the fourth coordinate.
Optionally, the acquisition module 201 may be further configured to acquire an image of the mark point to obtain a fifth coordinate of the mark point in the third state when the robot moves the clamped part to translate the mark point from the second rotation position to the specified position. Correspondingly, the calculation module 202 is further configured to calculate the third coordinate according to the coordinate relationship between the first reference position and the third reference position, together with the fifth coordinate.
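The coordinate relationship itself is left implicit in the patent. One consistent reading, under the assumption that a pure rotation about the rotation center does not change the robot's XY reference position, so that the XY difference between the first reference position and a later reference position equals the return translation applied to the part, is sketched below; the helper name and this assumption are illustrative, not the patent's wording.

```python
import numpy as np

def coordinate_at_rotation_position(observed_xy, first_ref_xy, later_ref_xy):
    """Undo the return translation: the mark point was photographed back at
    the specified position after being translated there from a rotation
    position, so subtracting the robot's XY displacement recovers the
    coordinate the mark point had at that rotation position (robot frame)."""
    translation = np.asarray(later_ref_xy, float) - np.asarray(first_ref_xy, float)
    return np.asarray(observed_xy, float) - translation

# Under this reading, the second coordinate comes from the fourth, and the
# third coordinate from the fifth:
#   second = coordinate_at_rotation_position(fourth_xy, first_ref, second_ref)
#   third  = coordinate_at_rotation_position(fifth_xy,  first_ref, third_ref)
```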
Optionally, the apparatus may further include a control module configured to control the robot to rotate the mark point on the clamped part out of the fixed field of view while the rotation center remains unchanged.
Optionally, the apparatus may further include a joint calibration module, configured to perform joint calibration on a robot coordinate system of the robot and an image coordinate system of the image acquisition device.
Optionally, the joint calibration module may be further configured to perform joint calibration on a robot coordinate system of the robot and an image coordinate system of the image acquisition device by a nine-point calibration method.
For other details of the positioning apparatus 200 provided in the embodiment of the present application, please refer to the related description of the positioning method, which is not repeated herein.
Based on the same inventive concept, an embodiment of the present application further provides a robot. The robot is a multi-axis robot that can be connected to the image acquisition device and to a visual calculation server; it is configured to control the image acquisition device to take pictures, to acquire the images captured by the image acquisition device, and to obtain the coordinate-system conversion relationship computed by the visual calculation server, so that coordinates can be converted according to that relationship.
As shown in fig. 7, the robot includes a memory 310, a processor 320, and a communication unit 330, which are connected to one another, directly or indirectly, to exchange data.
The memory 310 is a storage medium and may be a high-speed RAM or a non-volatile memory, such as at least one magnetic disk memory. The memory 310 may be used to store the functional modules corresponding to the aforementioned positioning method and the corresponding computer programs, and the processor 320 may execute the software functional modules and computer programs stored in the memory 310 to perform the aforementioned positioning method.
The processor 320 has arithmetic processing capability and may be a general-purpose processor, such as a Central Processing Unit (CPU) or a Network Processor (NP); it may also be a dedicated processor or a processor built from other programmable logic devices. The processor 320 can implement the methods, steps, and logic blocks provided by the embodiments of the present application. The memory 310 stores a computer program executable by the processor 320; when the computer program is executed by the processor 320, the positioning method is performed.
The communication unit 330 may include a communication bus, a communication card, and the like for wired or wireless communication with other external device carriers.
The structure shown in fig. 7 is only illustrative; in specific applications there may be more components, or configurations different from those shown in fig. 7.
In addition to the foregoing embodiments, the present application further provides a storage medium storing a computer program which, when executed by a processor, performs the aforementioned positioning method. The storage medium may be any available medium accessible to the processor, such as a magnetic medium (e.g., floppy disks, hard disks, tapes), an optical medium (e.g., DVDs), or a semiconductor medium (e.g., solid-state disks (SSDs)).
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division into units is only a division by logical function, and other divisions are possible in actual implementation: a plurality of units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the couplings discussed above may be indirect couplings or communication connections between devices or units through communication interfaces, and may be electrical, mechanical, or in other forms.
In addition, units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple locations. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
Furthermore, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
In this document, relational terms such as first and second, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions.
The above embodiments are merely examples of the present application and are not intended to limit its scope; various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, improvement, or the like made within the spirit and principle of the present application shall fall within its protection scope.

Claims (10)

1. A method of positioning, the method comprising:
when an execution end of a robot clamps a part with a mark point to a specified position in a fixed field of view of an image acquisition device, acquiring an image of the mark point to obtain a first coordinate of the mark point in a first state;
when a rotation center of the robot remains unchanged, calculating, according to a movement path of the part clamped by the robot, a second coordinate of the mark point after it rotates in a first designated direction by a preset angle under the control of the robot, and a third coordinate of the mark point after it rotates in a second designated direction by the preset angle under the control of the robot;
and calculating the position of the rotation center of the robot in a robot coordinate system according to the first coordinate, the second coordinate and the third coordinate.
2. The method of claim 1, wherein calculating, while the rotation center of the robot remains unchanged and according to the movement path of the part clamped by the robot, the second coordinate of the mark point after rotating in the first designated direction by the preset angle under the control of the robot and the third coordinate of the mark point after rotating in the second designated direction by the preset angle under the control of the robot comprises:
acquiring a first reference position of the robot in the first state;
with the rotation center of the robot unchanged and the first coordinate taken as a reference position, when the execution end of the robot clamps the part, rotates it in the first designated direction by the preset angle so that the mark point moves to a first rotation position, and then translates the clamped part so that the mark point moves from the first rotation position to the specified position, acquiring a second reference position of the robot in a second state;
calculating the second coordinate of the mark point at the first rotation position according to the coordinate relationship between the first reference position and the second reference position;
with the rotation center of the robot unchanged and the first coordinate taken as a reference position, when the execution end of the robot clamps the part, rotates it in the second designated direction by the preset angle so that the mark point moves to a second rotation position, and then translates the clamped part so that the mark point moves from the second rotation position to the specified position, acquiring a third reference position of the robot in a third state;
and calculating the third coordinate of the mark point at the second rotation position according to the coordinate relationship between the first reference position and the third reference position.
3. The method of claim 2, wherein, prior to the calculating of the second coordinate when the mark point is moved to the first rotation position, the method further comprises:
when the robot moves the clamped part to translate the mark point from the first rotation position to the specified position, acquiring an image of the mark point to obtain a fourth coordinate of the mark point in the second state;
wherein the calculating of the second coordinate when the mark point is moved to the first rotation position according to the coordinate relationship between the first reference position and the second reference position comprises:
calculating the second coordinate according to the coordinate relationship between the first reference position and the second reference position, together with the fourth coordinate.
4. The method of claim 2, wherein, prior to the calculating of the third coordinate when the mark point is moved to the second rotation position, the method further comprises:
when the robot moves the clamped part to translate the mark point from the second rotation position to the specified position, acquiring an image of the mark point to obtain a fifth coordinate of the mark point in the third state;
wherein the calculating of the third coordinate when the mark point is moved to the second rotation position according to the coordinate relationship between the first reference position and the third reference position comprises:
calculating the third coordinate according to the coordinate relationship between the first reference position and the third reference position, together with the fifth coordinate.
5. The method of claim 2, wherein the first rotational position is outside the fixed field of view and the second rotational position is outside the fixed field of view.
6. The method of any of claims 1-5, wherein, prior to the acquiring of the image of the mark point to obtain the first coordinate of the mark point in the first state, the method further comprises:
performing joint calibration on a robot coordinate system of the robot and an image coordinate system of the image acquisition device.
7. The method of claim 6, wherein jointly calibrating the robot coordinate system of the robot and the image coordinate system of the image acquisition device comprises:
jointly calibrating the robot coordinate system of the robot and the image coordinate system of the image acquisition device by a nine-point calibration method.
8. A positioning apparatus, the apparatus comprising:
an acquisition module, configured to acquire an image of a mark point to obtain a first coordinate of the mark point in a first state when an execution end of a robot clamps a part carrying the mark point to a specified position in a fixed field of view of an image acquisition device;
a calculation module, configured to calculate, according to a movement path of the part clamped by the robot, a second coordinate of the mark point after it rotates in a first designated direction by a preset angle under the control of the robot, and a third coordinate of the mark point after it rotates in a second designated direction by the preset angle under the control of the robot, while a rotation center of the robot remains unchanged;
the calculation module is further configured to calculate a position of the rotation center of the robot in a robot coordinate system according to the first coordinate, the second coordinate, and the third coordinate.
9. The apparatus of claim 8, further comprising:
a joint calibration module, configured to perform joint calibration on a robot coordinate system of the robot and an image coordinate system of the image acquisition device.
10. A storage medium, characterized in that the storage medium has stored thereon a computer program which, when executed by a processor, performs the method of any one of claims 1-7.
CN202010135994.3A 2020-03-02 2020-03-02 Positioning method, positioning device and storage medium Active CN111369625B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010135994.3A CN111369625B (en) 2020-03-02 2020-03-02 Positioning method, positioning device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010135994.3A CN111369625B (en) 2020-03-02 2020-03-02 Positioning method, positioning device and storage medium

Publications (2)

Publication Number Publication Date
CN111369625A true CN111369625A (en) 2020-07-03
CN111369625B CN111369625B (en) 2021-04-13

Family

ID=71210220

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010135994.3A Active CN111369625B (en) 2020-03-02 2020-03-02 Positioning method, positioning device and storage medium

Country Status (1)

Country Link
CN (1) CN111369625B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130123982A1 (en) * 2011-11-11 2013-05-16 Long-En Chiu Calibration method for tool center point of a robot manipulator
US20140081457A1 (en) * 2012-09-19 2014-03-20 Daihen Corporation Calculating apparatus, transfer robot system, and calculating method
CN104385281A (en) * 2014-07-28 2015-03-04 天津大学 Zero calibrating method for two-degree-freedom high speed parallel robot
CN106610264A (en) * 2015-10-22 2017-05-03 沈阳新松机器人自动化股份有限公司 Method for calibrating coordinate system of pre-alignment machine
CN106312997A (en) * 2016-10-27 2017-01-11 桂林电子科技大学 Laser radar type outdoor autonomously mobile robot provided with automatic stabilization device
CN108182689A (en) * 2016-12-08 2018-06-19 中国科学院沈阳自动化研究所 The plate workpiece three-dimensional recognition positioning method in polishing field is carried applied to robot
US20180350100A1 (en) * 2017-05-30 2018-12-06 General Electric Company System and method of robot calibration using image data
CN109773774A (en) * 2017-11-14 2019-05-21 合肥欣奕华智能机器有限公司 A kind of scaling method of robot and positioner position orientation relation
CN109366472A (en) * 2018-12-04 2019-02-22 广东拓斯达科技股份有限公司 Article laying method, device, computer equipment and the storage medium of robot
CN109829953A (en) * 2019-02-27 2019-05-31 广东拓斯达科技股份有限公司 Image collecting device scaling method, device, computer equipment and storage medium
CN110570414A (en) * 2019-09-06 2019-12-13 广东利元亨智能装备股份有限公司 method and device for acquiring alignment reference, electronic equipment and storage medium

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SANDEEP KUMAR MALU et al.: "Kinematics, Localization and Control of Differential Drive Mobile Robot", Global Journal of Researches in Engineering: H Robotics & Nano-Tech *
ZHU, Liang: "Application of Machine Vision in Industrial Robot Grasping Technology", China Master's Theses Full-text Database, Information Science and Technology Series *
LIN, Wenfang et al.: "Robot-based Calibration Method for the Coordinate System of Parts with Rotational Features", Journal of Jianghan University (Natural Science Edition) *

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111590550A (en) * 2020-07-06 2020-08-28 佛山隆深机器人有限公司 Material position calibration method of carrying manipulator
CN112346266A (en) * 2020-10-27 2021-02-09 合肥欣奕华智能机器有限公司 Method, device and equipment for binding devices
CN112692840A (en) * 2020-12-10 2021-04-23 安徽巨一科技股份有限公司 Mechanical arm positioning guiding and calibrating method based on machine vision cooperation
CN114683267A (en) * 2020-12-31 2022-07-01 北京小米移动软件有限公司 Calibration method, calibration device, electronic equipment and storage medium
CN114683267B (en) * 2020-12-31 2023-09-19 北京小米移动软件有限公司 Calibration method, calibration device, electronic equipment and storage medium
CN112902961A (en) * 2021-01-19 2021-06-04 宁德思客琦智能装备有限公司 Calibration method, medium, calibration equipment and system based on machine vision positioning
CN112902961B (en) * 2021-01-19 2022-07-26 宁德思客琦智能装备有限公司 Calibration method, medium, calibration equipment and system based on machine vision positioning
CN112991461A (en) * 2021-03-11 2021-06-18 珠海格力智能装备有限公司 Material assembling method and device, computer readable storage medium and processor
CN113510697A (en) * 2021-04-23 2021-10-19 知守科技(杭州)有限公司 Manipulator positioning method, device, system, electronic device and storage medium
CN113510697B (en) * 2021-04-23 2023-02-14 知守科技(杭州)有限公司 Manipulator positioning method, device, system, electronic device and storage medium
CN113345014A (en) * 2021-08-04 2021-09-03 苏州鼎纳自动化技术有限公司 Method for calculating and checking rotation center in visual alignment project

Also Published As

Publication number Publication date
CN111369625B (en) 2021-04-13

Similar Documents

Publication Publication Date Title
CN111369625B (en) Positioning method, positioning device and storage medium
CN107871328B (en) Machine vision system and calibration method implemented by machine vision system
KR102532072B1 (en) System and method for automatic hand-eye calibration of vision system for robot motion
CN111452040B (en) System and method for associating machine vision coordinate space in a pilot assembly environment
CN108453701B (en) Method for controlling robot, method for teaching robot, and robot system
US10173324B2 (en) Facilitating robot positioning
CN111331592B (en) Mechanical arm tool center point correcting device and method and mechanical arm system
US9884425B2 (en) Robot, robot control device, and robotic system
JP5815761B2 (en) Visual sensor data creation system and detection simulation system
CN112621743B (en) Robot, hand-eye calibration method for fixing camera at tail end of robot and storage medium
JP2005300230A (en) Measuring instrument
JP2012055999A (en) System and method for gripping object, program and robot system
JP2012030320A (en) Work system, working robot controller, and work program
JPWO2018043525A1 (en) Robot system, robot system control apparatus, and robot system control method
JP2019069493A (en) Robot system
CN113379849A (en) Robot autonomous recognition intelligent grabbing method and system based on depth camera
CN114310901B (en) Coordinate system calibration method, device, system and medium for robot
US20200189108A1 (en) Work robot and work position correction method
JP2018202542A (en) Measurement device, system, control method, and manufacturing method of article
CN111216136A (en) Multi-degree-of-freedom mechanical arm control system, method, storage medium and computer
CN114750160B (en) Robot control method, apparatus, computer device, and storage medium
JP2016203282A (en) Robot with mechanism for changing end effector attitude
CN113858214B (en) Positioning method and control system for robot operation
CN113771042B (en) Vision-based method and system for clamping tool by mobile robot
CN115972192A (en) 3D computer vision system with variable spatial resolution

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant