Disclosure of Invention
The invention provides a visual guidance positioning device and method for a composite robot, aiming to solve the problem that the visual guidance precision of a composite robot degrades in an actual working environment.
One aspect of the present invention provides a visual guidance positioning device for a composite robot, including:
a measuring module, fixed at the end of the mechanical arm, for measuring the distance between the arm end and the worktable surface and the tilt angle of the arm end, and for acquiring an image of an identification area on the worktable that serves as a guide reference;
a visual guidance processing module, for controlling the motion of the mechanical arm through the robot motion controller according to the information acquired by the measuring module, so that the distance, position, and angle of the arm end relative to the worktable reach the same state during calibration and during job application; for controlling the automatic acquisition of calibration data and the calculation of calibration parameters; and for controlling the arm end to offset along the TCP (tool coordinate system) from the calibration origin to the workpiece placement position to complete workpiece grabbing and placing.
Further, the measurement module includes a ranging sensor, a tilt sensor, and an industrial camera.
Further, the visual guidance processing module includes:
a measurement module communication unit: communicates with the measuring module, acquiring and recording in real time the distance and tilt angle information collected by the measuring module;
a robot communication unit: sends instructions to the robot motion controller to command robot motion, while acquiring the robot's coordinate information in real time;
an image recognition and positioning unit: controls the measuring module to capture images, recognizes and locates the received images, and acquires and records the pixel coordinates of the reference point and the Rz angle value of the image;
a calibration parameter calculation unit: calculates and stores the calibration parameters according to the pixel coordinates of the reference point acquired at each acquisition point and the corresponding offsets of the mechanical arm;
a human-computer interaction unit: an input and output interface that provides instructions, data, images, and other information to the user.
Furthermore, the device also comprises a two-dimensional code label or a character label fixed in the identification area on the worktable.
The invention also provides a visual guidance positioning method for a composite robot suitable for the above device, comprising a visual guidance calibration step and a visual guidance job application step. The visual guidance calibration step specifically includes:
relative pose acquisition: after the trolley reaches a suitable parking point, adjusting the end of the mechanical arm to a suitable calibration origin, and acquiring and recording the pose of the mechanical arm, the tilt angle of the arm end, the distance and position relative to the worktable surface, and the Rz angle value;
calibration parameter calculation: collecting calibration data and calculating calibration parameters from the collected data;
measuring the TCP offset of the mechanical arm from the calibration origin to the workpiece placement position.
The visual guidance job application step specifically includes:
relative pose reproduction: after the trolley stops at the position and angle recorded during calibration, controlling the mechanical arm to move to the pose recorded during calibration, and then controlling the arm end to adjust to the relative pose recorded during calibration;
controlling the mechanical arm to move according to the TCP offset from the calibration origin to the workpiece placement position, completing the pick-and-place task.
Further, the relative pose acquisition step specifically includes:
adjusting the end pose of the mechanical arm so that its working plane is parallel to the worktable surface, and collecting and recording the tilt angle values Rx and Ry of the arm end;
adjusting the distance between the arm end and the worktable so that the imaging view of the measuring module is clear, and recording the current point as the calibration origin;
collecting and recording the distance from the arm end to the worktable surface at this point;
collecting an image of the identification area on the worktable, recognizing and locating the image, and extracting and recording the pixel coordinates of the reference point and the Rz angle value of the identification area;
collecting and recording the pose of the mechanical arm at this point.
Further, the image recognition and positioning method is as follows: shape matching is used for recognition and positioning; if a two-dimensional code label is provided in the identification area, two-dimensional code recognition and positioning may be used instead.
Further, the calibration data acquisition method includes:
setting the TCP offset of each acquisition point relative to the calibration origin;
controlling the mechanical arm to move to each acquisition point along the TCP coordinate system according to the preset offsets;
at each acquisition point, collecting an image of the identification area on the worktable, recognizing and locating the image, and extracting and recording the pixel coordinates of the reference point.
Further, the calibration parameter calculation method is as follows:
Suppose that with the end of the mechanical arm at the calibration origin, the pixel coordinates of the reference point in the identification area are (u0, v0); after the arm end moves Δx along the TCP coordinate system, the reference point pixel coordinates are (u1, v1); and after the arm end returns to the calibration origin and then moves Δy along the TCP coordinate system, the reference point pixel coordinates are (u2, v2).
Then, in job application, if the pixel coordinates of the reference point are measured as (u, v), the TCP offset (Δx′, Δy′) of the mechanical arm relative to the calibration origin is:
Δx′ = Δx·(Δu·Δv2 − Δv·Δu2) / (Δu1·Δv2 − Δu2·Δv1)
Δy′ = Δy·(Δv·Δu1 − Δu·Δv1) / (Δu1·Δv2 − Δu2·Δv1)
where Δu1 = u1 − u0, Δv1 = v1 − v0, Δu2 = u2 − u0, Δv2 = v2 − v0, Δu = u − u0, Δv = v − v0.
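The calculation above treats TCP motion and fiducial pixel shift as a 2x2 linear system. A minimal sketch solving that system by Cramer's rule, with illustrative function and parameter names (not from the source):

```python
def calibration_params(p0, p1, p2, dx, dy):
    """Build calibration parameters from the three calibration shots.

    p0: fiducial pixel coordinates (u0, v0) at the calibration origin
    p1: pixel coordinates after the arm end moves dx along TCP x
    p2: pixel coordinates after the arm end moves dy along TCP y
    """
    du1, dv1 = p1[0] - p0[0], p1[1] - p0[1]  # pixel shift caused by the dx move
    du2, dv2 = p2[0] - p0[0], p2[1] - p0[1]  # pixel shift caused by the dy move
    det = du1 * dv2 - du2 * dv1
    if det == 0:
        raise ValueError("degenerate calibration: pixel shifts are collinear")
    return {"p0": p0, "du1": du1, "dv1": dv1,
            "du2": du2, "dv2": dv2, "dx": dx, "dy": dy, "det": det}

def tcp_offset(cal, p):
    """Solve for the TCP offset (dx', dy') corresponding to an observed
    fiducial pixel position p = (u, v), per the linear model above."""
    du, dv = p[0] - cal["p0"][0], p[1] - cal["p0"][1]
    dxp = cal["dx"] * (du * cal["dv2"] - dv * cal["du2"]) / cal["det"]
    dyp = cal["dy"] * (dv * cal["du1"] - du * cal["dv1"]) / cal["det"]
    return dxp, dyp
```

For example, with the camera axes aligned to the TCP axes at 10 px/mm, a fiducial observed 20 px off in u and 10 px off in v resolves to a 2 mm by 1 mm TCP offset.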
Further, the relative pose reproduction step specifically includes:
controlling the mechanical arm to move to the pose recorded during calibration;
measuring the tilt angle values Rx and Ry of the arm end, and adjusting the end pose of the mechanical arm so that the tilt angles equal the values recorded during calibration;
measuring the distance between the arm end and the worktable surface, and adjusting the pose of the mechanical arm so that the distance equals the value recorded during calibration;
acquiring an image of the identification area on the worktable, extracting its Rz angle value, comparing it with the Rz value recorded during calibration, and adjusting the end pose Rz of the mechanical arm until the two are consistent;
acquiring the image of the identification area again to obtain the pixel coordinates of the reference point, calculating the TCP offset of the mechanical arm from the recorded calibration parameters, and controlling the mechanical arm to move according to the offset, so that the arm reaches the pose relative to the worktable surface recorded during calibration.
Therefore, the visual guidance positioning device and method for a composite robot of the present invention record in detail, during calibration, the relative position and pose information of the composite robot with respect to the worktable, including distance, position, and angle, and then reproduce this relative pose during job application. This masks the vision and motion deviations of the composite robot caused by factors such as trolley navigation positioning error and uneven ground, and improves the operating precision of the robot in the actual environment. Compared with the prior art, the beneficial effects mainly include: (1) the pick-and-place accuracy of the composite robot is guaranteed in various adverse working environments; (2) the system can communicate with the robot motion controller to directly control the motion of the robot without using a teach pendant, which is convenient and fast, with accurate data and no manual input errors; (3) the pixel coordinates and image angle of the visual guidance reference are identified and acquired automatically, reducing the operational complexity of the calibration process and improving calibration efficiency.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the scope of the invention, as claimed.
Detailed Description
Preferred embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While the preferred embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be embodied in various forms and should not be limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the disclosure to those skilled in the art.
A schematic composition diagram of an exemplary embodiment of the composite robot vision-guided positioning apparatus according to the present disclosure is shown in fig. 1.
The visual guidance positioning device of the composite robot provided by the present disclosure includes:
a measuring module for measuring the distance between the end of the mechanical arm and the worktable surface and the tilt angle of the arm end, and for acquiring an image of the identification area on the worktable that serves as a guide reference;
a visual guidance processing module for controlling the motion of the mechanical arm through the robot motion controller according to the information acquired by the measuring module, so that the distance, position, and angle of the arm end relative to the worktable reach the same state during calibration and during job application; for controlling the automatic acquisition of calibration data and the calculation of calibration parameters; and for controlling the arm end to offset along the TCP from the calibration origin to the workpiece placement position to complete workpiece grabbing and placing.
The identification area on the worktable serving as a guide reference should have features that are clearly distinct from the surrounding environment and easy to recognize. One point with such features is selected as the reference point, and the pixel coordinates of this point in the image are used to position the end of the mechanical arm relative to the worktable.
The position of the identification area can be chosen flexibly. If the workpiece to be picked and placed sits in a positioning fixture on the worktable, so that its position and orientation relative to the fixture are fixed, the fixture itself can be selected as the identification area, keeping the relative positioning among the workpiece, the fixture, and the identification area unchanged. If the position of the target workpiece on the worktable can vary, i.e., it is not constrained by a positioning fixture, the target workpiece itself can be selected. In either case, a position that the camera at the arm end can conveniently image should be chosen.
Features can be placed in the identification area by printing, label pasting, or similar means; a preferred scheme is to paste a two-dimensional code label or a character label.
The measuring module provides additional information for measuring the pose of the robot relative to the worktable. Subsequent pose adjustment based on this information can effectively correct the vision deviations of the composite robot caused by factors such as trolley navigation positioning error and uneven ground, thereby improving the visual guidance operating precision.
The measuring module can adopt any existing detection devices suitable for measuring the distance of the arm end relative to the worktable and the tilt angle of the arm end, and for acquiring an image of the identification area serving as a guide reference on the worktable.
A preferred solution uses a ranging sensor, a tilt sensor, and an industrial camera, as shown in fig. 1.
The visual guidance processing module typically includes the robot's vision control computer and the visual guidance processing software running on it. The vision control computer, also known as a vision controller, is an industrial control computer equipped with communication interfaces to the robot motion controller and the measuring module: it generally connects to the robot motion controller over a network cable and communicates via TCP/IP; connects to the camera over a high-speed network cable; and connects to the tilt and ranging sensors through serial ports. The visual guidance processing software implements the functions of the visual guidance processing module.
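As one illustration of the TCP/IP link described above, the following is a minimal sketch of sending a single command to a motion controller. The newline-terminated ASCII wire format and the function name are assumptions for illustration only; each controller vendor defines its own protocol.

```python
import socket

def send_robot_command(host, port, command, timeout=2.0):
    """Send one newline-terminated ASCII command to the motion controller
    over TCP/IP and return its single-line reply (illustrative wire format)."""
    with socket.create_connection((host, port), timeout=timeout) as sock:
        sock.sendall(command.encode("ascii") + b"\n")
        # read one reply line; real protocols may frame responses differently
        return sock.makefile("r", encoding="ascii").readline().strip()
```

A request/reply helper like this would underlie both the motion commands and the real-time coordinate queries mentioned above.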
Preferably, the visual guidance processing module includes:
a measurement module communication unit: communicates with the measuring module, acquiring and recording in real time the distance and tilt angle information collected by the measuring module;
a robot communication unit: sends instructions to the robot motion controller to command robot motion, while acquiring the robot's coordinate information in real time;
an image recognition and positioning unit: controls the measuring module to capture images, recognizes and locates the received images, and acquires and records the pixel coordinates of the reference point and the Rz angle value of the image;
a calibration parameter calculation unit: calculates and stores the calibration parameters according to the pixel coordinates of the reference point acquired at each acquisition point and the corresponding offsets of the mechanical arm;
a human-computer interaction unit: an input and output interface that provides instructions, data, images, and other information to the user.
A general flow diagram of an exemplary embodiment of the visual guidance positioning method suitable for the composite robot visual guidance device described above is given in fig. 2. As shown in fig. 2, the visual guidance method of this exemplary embodiment mainly includes a visual guidance calibration step and a visual guidance job application step, wherein the visual guidance calibration step includes:
relative pose acquisition: after the trolley reaches a suitable parking point, the end of the mechanical arm is adjusted to a suitable calibration origin, and the pose of the mechanical arm, the tilt angle of the arm end, the distance and position relative to the worktable surface, and the Rz angle value are acquired and recorded. The position of the arm end relative to the worktable and the Rz angle value can be obtained from the pixel coordinates of the reference point in the identification area serving as the guide reference and from the Rz angle value of the identification area.
Calibration parameter calculation: calibration data are collected, and calibration parameters are calculated from the collected data.
The TCP offset of the mechanical arm from the calibration origin to the workpiece placement position is then measured.
The visual guidance job application step includes:
relative pose reproduction: after the trolley moves to the parking point recorded during calibration, the mechanical arm is controlled to move to the pose recorded during calibration, and the arm end is then adjusted, according to the information collected by the measuring module, to the pose relative to the worktable recorded during calibration.
The mechanical arm is then controlled to offset according to the TCP offset from the calibration origin to the workpiece placement position acquired during calibration, completing the pick-and-place task.
In the above method, during relative pose acquisition, the pose of the mechanical arm and the pose of the arm end relative to the worktable at calibration are recorded in detail through the parameters obtained by the measuring module, including distance, position, and angle: the position is represented by the pixel coordinates of the reference point in the identification area image, and the angle by the Rx and Ry values from the tilt sensor together with the Rz angle value of the identification area image. The pixel coordinates of the reference point and the Rz angle value are obtained by recognizing and locating the identification area image captured by the measuring module.
Then, during job operation, the relative pose is reproduced so that the pose of the composite robot relative to the worktable is the same as at calibration. This shields the vision and motion deviations of the robot caused by factors such as trolley navigation positioning error and uneven ground, ensuring the operating accuracy of the robot in the actual working environment.
As a preferred scheme, the relative pose acquisition step specifically includes:
adjusting the end pose of the mechanical arm so that its working plane is parallel to the worktable surface, and collecting and recording the tilt angle values Rx and Ry of the arm end;
adjusting the distance between the arm end and the worktable so that the imaging view of the measuring module is clear, and recording the current point as the calibration origin;
collecting and recording the distance from the arm end to the worktable surface at this point;
collecting an image of the identification area on the worktable, recognizing and locating it, and extracting and recording the pixel coordinates of the reference point and the Rz angle value of the identification area;
collecting and recording the pose of the mechanical arm at this point.
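The quantities recorded by the acquisition steps above can be grouped into a single record. A minimal sketch of one possible shape for that record; the class and field names are illustrative, not from the source:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class CalibrationRecord:
    """State captured at the calibration origin (illustrative field names)."""
    arm_pose: Tuple[float, ...]       # robot pose recorded at the calibration origin
    rx: float                         # tilt about x, from the tilt sensor
    ry: float                         # tilt about y, from the tilt sensor
    distance: float                   # arm end to worktable, from the ranging sensor
    fiducial_px: Tuple[float, float]  # reference-point pixel coordinates (u0, v0)
    rz: float                         # Rz angle of the identification-area image
```

Every field here is later compared against a live measurement in the reproduction step, which is why all of them are captured together at calibration time.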
As a preferred scheme, the relative pose reproduction step specifically includes:
controlling the mechanical arm to move to the pose recorded during calibration;
measuring the tilt angle values Rx and Ry of the arm end, and adjusting the end pose of the mechanical arm so that the tilt angles equal the values recorded during calibration;
measuring the distance between the arm end and the worktable surface, and adjusting the pose of the mechanical arm so that the distance equals the value recorded during calibration;
acquiring an image of the identification area on the worktable, extracting its Rz angle value, comparing it with the Rz value recorded during calibration, and adjusting the end pose Rz of the mechanical arm until the two are consistent;
acquiring the image of the identification area again to obtain the pixel coordinates of the reference point, calculating the TCP offset of the mechanical arm from the recorded calibration parameters, and controlling the mechanical arm to move according to the offset, so that the arm reaches the pose relative to the worktable surface recorded during calibration.
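The reproduction sequence above can be sketched as a short control routine. Here `robot`, `sensors`, and `pixel_to_tcp` are hypothetical interfaces standing in for the robot communication, measurement, and image recognition units; none of these names come from the source:

```python
def reproduce_relative_pose(robot, sensors, cal, pixel_to_tcp):
    """Restore the calibrated tilt, distance, and Rz, then null the residual
    x/y error through the calibrated pixel-to-TCP mapping (sketch only)."""
    rx, ry = sensors.tilt()
    robot.rotate(cal["rx"] - rx, cal["ry"] - ry, 0.0)             # level the arm end
    robot.translate_z(cal["distance"] - sensors.distance())        # restore stand-off
    robot.rotate(0.0, 0.0, cal["rz"] - sensors.fiducial_angle())   # match Rz
    dx, dy = pixel_to_tcp(sensors.fiducial_pixels())               # x/y from pixels
    robot.move_tcp(dx, dy)                                         # correct x/y
```

The ordering mirrors the text: tilt first, then distance, then Rz, and only then the pixel-based x/y correction, since the pixel-to-TCP mapping is only valid once the camera pose matches the calibration pose.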
Image recognition and positioning is a mature technology; shape matching is usually adopted, but if a two-dimensional code label is provided in the identification area, two-dimensional code recognition and positioning can be adopted instead.
The calibration parameter calculation of the composite robot can adopt any applicable method in the prior art. A preferred scheme performs calibration data acquisition and calibration parameter calculation based on incremental compensation: the mechanical arm is controlled to move, by preset TCP offsets in the X and Y directions, to each preset point around the calibration origin; the pixel coordinates of the reference point on the worktable are acquired at each point; and the calibration parameters are then calculated from the acquired data, establishing the correspondence between the pixel coordinates of the reference point and the TCP offset of the mechanical arm relative to the calibration origin.
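The incremental-compensation acquisition loop just described can be sketched as follows; `robot.move_tcp` and `camera.locate_fiducial` are hypothetical interfaces used for illustration:

```python
def collect_calibration_data(robot, camera, offsets):
    """Visit each preset TCP offset around the calibration origin and record
    the fiducial pixel coordinates observed there (sketch only)."""
    samples = []
    for dx, dy in offsets:
        robot.move_tcp(dx, dy)                 # offset along the TCP frame
        samples.append(((dx, dy), camera.locate_fiducial()))
        robot.move_tcp(-dx, -dy)               # return to the calibration origin
    return samples
```

With offsets such as `[(5, 0), (0, 5)]`, the returned samples pair each commanded arm offset with the pixel shift it produced, which is exactly the data the calibration parameter calculation consumes.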
In addition, the TCP offset of the mechanical arm from the calibration origin to the workpiece placement position is measured in the calibration step, which can be done by traction teaching or similar means.
According to the visual guidance positioning method for a composite robot disclosed herein, detailed relative pose measurement during calibration and its reproduction in the later job application stage shield the influence of various error factors of the robot trolley, ensure the workpiece pick-and-place precision, and are convenient, fast, and accurate.
Application example:
Fig. 1 shows an exemplary visual guidance positioning device of a composite robot. In the figure, the composite robot consists of a mobile trolley (1) and a mechanical arm (2), and the worktable surface (10) is horizontal. During calibration or operation, the trolley selects a suitable parking point (11) so that the workpiece position on the worktable lies within the working range (9) of the arm end.
This robot visual guidance positioning device includes:
a tilt sensor (3) mounted horizontally at the end of the mechanical arm of the composite robot, a ranging sensor (4) mounted at the arm end, and an industrial camera (5), which respectively measure the tilt angle of the arm end and the distance from the arm end to the worktable, and acquire images of the identification area (7) serving as the visual guidance reference;
a visual guidance processing module with communication interfaces to the robot motion controller, the tilt sensor (3), the ranging sensor (4), and the industrial camera (5), which acquires the distance, tilt angle, and image data collected by the measuring module, recognizes and locates the image to obtain the pixel coordinates of the reference point and the Rz angle value of the identification area, sends instructions to the robot motion controller to control robot motion, and acquires the robot's coordinate information in real time.
The main tasks of the visual guidance processing module are: controlling the mechanical arm to move by preset offsets, collecting the pixel coordinates of the reference point, and calculating the calibration parameters; acquiring and recording the distance, position, and angle of the arm end relative to the worktable surface during calibration, and controlling the arm end to reach the same relative pose during job application; and controlling the arm end to offset along the TCP from the calibration origin to the workpiece placement position to complete workpiece grabbing and placing.
The visual guidance processing module comprises a robot vision controller and the visual guidance processing software running on it. The vision controller is an industrial personal computer, connected to the composite robot motion controller and the camera through network cables, and to the tilt sensor and ranging sensor through serial ports; the visual guidance processing software implements each specific function.
In addition, the visual guidance positioning device further includes a two-dimensional code label or a character label arranged in the identification area (7); pixel coordinate positioning uses the center point of the label as the reference point.
The visual guidance positioning method of the composite robot suitable for the above device includes a calibration step and an operation step, wherein:
S1, the calibration step, as shown in fig. 3, includes:
S11, after the AGV trolley arrives at a suitable working parking point (the trolley's motion and navigation are controlled by its own control device), the end pose of the composite robot's mechanical arm is adjusted so that the arm end is parallel to the worktable, and the visual guidance processing module records the Rx and Ry angle values of the tilt sensor at this moment.
S12, the distance between the mechanical arm and the worktable is adjusted, and the focal length and aperture of the industrial camera are adjusted until the camera's imaging field is clear; the visual guidance processing module records the value of the ranging sensor at this moment. This point is taken as the calibration origin.
S13, the visual guidance processing module communicates with the robot, records the pose of the mechanical arm at this moment, opens the camera to capture an image, and locates and records the pixel coordinates and angle value of the label.
S14, offsets of the composite robot's mechanical arm along the X and Y directions of the TCP coordinate system (tool coordinate system) are set; the arm is controlled to move to one offset point, the camera captures an image, and the label's pixel coordinates are located; the arm then returns to the calibration origin, the robot is controlled to offset along the TCP to the other point, the camera captures an image, the label in the image is pixel-located, and the arm returns to the calibration origin.
S15, the visual guidance processing module calculates the calibration parameters from the obtained pixel coordinates and arm offsets, and stores the calibration result locally.
S16, the mechanical arm is moved, by traction teaching, from the calibration origin along the TCP to the position of the target workpiece to be picked and placed, and the TCP offset (ΔTx′, ΔTy′, ΔTz′) of the mechanical arm is recorded.
S2, the operation step, as shown in fig. 4, includes:
S21, after the AGV trolley of the composite robot navigates from the starting point to the parking point used during calibration, the mechanical arm is moved to the calibration origin pose;
S22, the end pose of the mechanical arm is adjusted according to the Rx and Ry values of the tilt sensor recorded during calibration, ensuring they match the recorded values;
S23, the distance from the arm end to the worktable is adjusted according to the ranging sensor distance recorded during calibration;
S24, the industrial camera is opened to take a picture, the label angle is identified and compared with the Rz recorded during calibration, and the angle deviation is sent to the composite robot's mechanical arm;
S25, the mechanical arm adjusts the end pose Rz according to the angle deviation so that it is consistent with the pose during calibration;
S26, the camera is opened again to take a picture, the pixel coordinates of the label are identified and located, the TCP offset of the mechanical arm is calculated from the calibration parameters, and the arm is commanded to perform the TCP offset. The position and angle of the tool coordinate system of the composite robot's arm relative to the label are now consistent with those at calibration.
S27, the mechanical arm is commanded to move according to the TCP offset between the calibration origin and the workpiece position recorded during calibration. The arm thus accurately reaches the position of the workpiece to be picked and placed and completes the pick-and-place task.
Thus, in the composite robot visual guidance positioning device and method of this embodiment, the distance, position, and angle of the mechanical arm relative to the worktable during calibration are recorded in detail by acquiring the tilt and ranging sensor data and camera images in real time, and the arm is precisely adjusted to the same relative pose during subsequent actual operation. This shields the influence of factors such as trolley parking positioning error, angle error, and uneven ground on the pick-and-place operation, and the precision of the composite robot's visually guided positioning operation is significantly improved compared with the prior art.
The foregoing is merely an illustrative embodiment of the present application; any equivalent changes and modifications made by those skilled in the art without departing from the spirit and principles of the present application shall fall within the protection scope of the present application.