CN114750160B - Robot control method, apparatus, computer device, and storage medium - Google Patents
- Publication number
- CN114750160B CN114750160B CN202210527441.1A CN202210527441A CN114750160B CN 114750160 B CN114750160 B CN 114750160B CN 202210527441 A CN202210527441 A CN 202210527441A CN 114750160 B CN114750160 B CN 114750160B
- Authority
- CN
- China
- Prior art keywords
- positioning robot
- robot
- image acquisition
- acquisition device
- horizontal
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1656—Programme controls characterised by programming, planning systems for manipulators
- B25J9/1664—Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1602—Programme controls characterised by the control system, structure, architecture
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/02—Total factory control, e.g. smart factories, flexible manufacturing systems [FMS] or integrated manufacturing systems [IMS]
Landscapes
- Engineering & Computer Science (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Automation & Control Theory (AREA)
- Manipulator (AREA)
Abstract
The application relates to a robot control method, apparatus, computer device, storage medium, and computer program product. A vertical positioning robot, a horizontal positioning robot, and their image acquisition devices are calibrated to the same world coordinate system. After the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, the vertical position of that device is acquired; after the product is positioned by the image acquisition device on the horizontal positioning robot, the horizontal position of that device is acquired. A first error between the vertical position and a base model, and a second error between the horizontal position and the base model, are then calculated. When both errors are within the allowable error range, spatial coordinates are computed from the vertical position, the horizontal position, and the base model, and are sent to the vertical positioning robot and the horizontal positioning robot, improving the working reliability of the robots.
Description
Technical Field
The present application relates to the field of robotics, and in particular, to a method, apparatus, computer device, storage medium, and computer program product for controlling a robot.
Background
With the development of robot technology, collaborative robots play an irreplaceable role in fields such as 3C electronics, automotive electronics, daily necessities, and biosafety; they improve product quality and production efficiency while safeguarding human life. Combining a collaborative robot with machine vision lets it assist people with mechanical work, fully exploiting the robot's efficiency and the accuracy of vision. These outstanding advantages of high efficiency and high precision can greatly accelerate the development of the manufacturing and service industries.
The traditional robot visual positioning method relies on a camera mounted on the robot arm: the camera captures images of the product under test and helps the robot locate it, after which the robot executes its task on the product. However, because of limitations such as the camera's angle of view, this method cannot judge errors caused by factors such as product tilt, so the positioning is not accurate enough and the working reliability of the robot is reduced.
Disclosure of Invention
In view of the foregoing, it is desirable to provide a robot control method, apparatus, computer device, storage medium, and computer program product that can improve the operational reliability of a robot.
In a first aspect, the present application provides a robot control method for controlling a robot. The robot includes a vertical positioning robot and a horizontal positioning robot, each provided with an image acquisition device, and the control method performs visual positioning of a product to be calibrated. The method includes:
calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
after the product to be calibrated is positioned through the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned through the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained;
calculating a first error between the vertical position and a base model, and a second error between the horizontal position and the base model;
calculating spatial coordinates according to the vertical position, the horizontal position and the base model when the first error and the second error are both within an allowable error range;
and sending the space coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
In one embodiment, after the calculating the first error between the vertical position and the base model and the second error between the horizontal position and the base model, the method further comprises:
when the first error or the second error is not within the allowable error range, sending an adjustment instruction to each robot, and returning to the step of acquiring the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by that device, and acquiring the horizontal position of the image acquisition device on the horizontal positioning robot after the product is positioned by that device; the adjustment instruction is used to control each image acquisition device to adjust its posture.
In one embodiment, the adjustment instructions are further configured to keep the image capturing device on the vertical positioning robot and the image capturing device on the horizontal positioning robot mutually perpendicular (at 90°) in space.
In one embodiment, before calculating the first error between the vertical position and the base model and the second error between the horizontal position and the base model, the method further comprises:
and acquiring a basic model through the image acquisition device, the vertical positioning robot and the horizontal positioning robot.
In one embodiment, the base model includes a vertical base model and a horizontal base model, the acquiring the base model by the image acquisition device, the vertical positioning robot, and the horizontal positioning robot includes:
sending a first calibration instruction to the vertical positioning robot; the first calibration instruction is used for controlling an image acquisition device on the vertical positioning robot to determine a vertical basic model according to the initial vertical space direction posture;
sending a second calibration instruction to the horizontal positioning robot; the second calibration instruction is used for controlling an image acquisition device on the horizontal positioning robot to determine a horizontal basic model according to the initial horizontal space direction posture.
In one embodiment, before calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device under the same world coordinate system, the method further includes:
and distributing the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same network segment.
In a second aspect, the present application further provides a robot control device for controlling a robot. The robot includes a vertical positioning robot and a horizontal positioning robot, each provided with an image acquisition device for visually positioning a product to be calibrated. The device includes:
the coordinate unifying module is used for calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
the position acquisition module is used for acquiring the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, and acquiring the horizontal position of the image acquisition device on the horizontal positioning robot after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot;
the error calculation module is used for calculating a first error between the vertical position and the basic model and a second error between the horizontal position and the basic model;
the control coordinate calculation module is used for calculating spatial coordinates according to the vertical position, the horizontal position and the basic model when the first error and the second error are both within an allowable error range;
and the execution module is used for sending the space coordinates to the vertical positioning robot and the horizontal positioning robot so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
In a third aspect, the present application also provides a computer device. The computer device comprises a memory storing a computer program and a processor which when executing the computer program performs the steps of:
calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
after the product to be calibrated is positioned through the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned through the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained;
calculating a first error between the vertical position and a base model, and a second error between the horizontal position and the base model;
calculating spatial coordinates according to the vertical position, the horizontal position and the base model when the first error and the second error are both within an allowable error range;
and sending the space coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
In a fourth aspect, the present application also provides a computer-readable storage medium. The computer readable storage medium having stored thereon a computer program which when executed by a processor performs the steps of:
calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
after the product to be calibrated is positioned through the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned through the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained;
calculating a first error between the vertical position and a base model, and a second error between the horizontal position and the base model;
calculating spatial coordinates according to the vertical position, the horizontal position and the base model when the first error and the second error are both within an allowable error range;
and sending the space coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
In a fifth aspect, the present application also provides a computer program product. The computer program product comprises a computer program which, when executed by a processor, implements the steps of:
calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
after the product to be calibrated is positioned through the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned through the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained;
calculating a first error between the vertical position and a base model, and a second error between the horizontal position and the base model;
calculating spatial coordinates according to the vertical position, the horizontal position and the base model when the first error and the second error are both within an allowable error range;
and sending the space coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
The robot control method, apparatus, computer device, storage medium, and computer program product are used for controlling a robot that includes a vertical positioning robot and a horizontal positioning robot, each provided with an image acquisition device for visually positioning a product to be calibrated. The control method comprises: calibrating the vertical positioning robot, the horizontal positioning robot, and the image acquisition devices to the same world coordinate system; acquiring the vertical position of the image acquisition device on the vertical positioning robot after it has positioned the product, and the horizontal position of the image acquisition device on the horizontal positioning robot after it has positioned the product; calculating a first error between the vertical position and a base model and a second error between the horizontal position and the base model; when both errors are within the allowable error range, calculating spatial coordinates from the vertical position, the horizontal position, and the base model; and sending the spatial coordinates to the vertical positioning robot and the horizontal positioning robot so that both work based on the spatial coordinates.
The vertical positioning robot and the horizontal positioning robot mutually calibrate each other visually through their image acquisition devices. Using an error-gradient-descent approach, the high-precision pose of the product to be calibrated within the spatial range is computed from both the vertical and horizontal directions of space, improving visual positioning accuracy and thus the working reliability of the robot.
Drawings
FIG. 1 is a flow chart of a method of robot control in one embodiment;
FIG. 2 is a schematic diagram of the operation of a robot in one embodiment;
FIG. 3 is a flow chart of a robot control method in another embodiment;
FIG. 4 is a detailed flow chart of a robot control method in one embodiment;
FIG. 5 is a block diagram of a robot control device in one embodiment;
fig. 6 is an internal structural diagram of a computer device in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application will be further described in detail with reference to the accompanying drawings and examples. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
The robot control method is used for controlling two or more robots, including a vertical positioning robot and a horizontal positioning robot. There may be two or more of each, and every vertical positioning robot and horizontal positioning robot is provided with an image acquisition device for visually positioning the product to be calibrated. Specifically, the image acquisition device can be mounted on the robot arm of the vertical positioning robot or the horizontal positioning robot and moves with the arm. The image acquisition device can be a camera, such as a CCD camera, or another type of device, such as an infrared camera.
In one embodiment, as shown in fig. 1, a robot control method is provided for controlling a robot, where the robot includes a vertical positioning robot and a horizontal positioning robot, and image acquisition devices are disposed on the vertical positioning robot and the horizontal positioning robot, and are used for visually positioning a product to be calibrated. The robot is generally a cooperative robot, and may be, for example, a transfer robot, a welding robot, a die processing robot, an industrial screw driver, or the like. The control method can be executed by a controller arranged in one of the robots, and the robots can communicate with each other. Alternatively, the robot control method may be executed by a server, and data transmission may be performed between the server and each robot. The server may be implemented as a stand-alone server or as a server cluster composed of a plurality of servers. The control method comprises the following steps:
Step 202, calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system.
Since a camera can be placed at any position in the environment, a reference coordinate system is chosen to describe the position of the camera and of any object in the environment; this coordinate system is called the world coordinate system. Each robot has an independent coordinate system and each image acquisition device has an independent pixel coordinate system, so the vertical positioning robot, the horizontal positioning robot, and the image acquisition devices are calibrated to the same world coordinate system for convenient computation. Calibration unifies the robots' coordinate systems and the devices' pixel coordinate systems into a single world coordinate system, facilitating subsequent operations.
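The unification step above can be pictured with homogeneous transforms. The following is an illustrative Python sketch, not part of the patent: it assumes calibration yields a 4×4 transform from each local frame into the world frame, and the matrix values are invented for the example.

```python
import numpy as np

def to_world(point_local, T_local_to_world):
    """Map a 3-D point from a robot or camera frame into the shared
    world frame using a 4x4 homogeneous transform from calibration."""
    p = np.append(np.asarray(point_local, dtype=float), 1.0)  # homogeneous
    return (T_local_to_world @ p)[:3]

# Hypothetical calibration result: the vertical robot's base frame is
# offset from the world origin by (0.5, 0.0, 1.2) metres, no rotation.
T_vert = np.eye(4)
T_vert[:3, 3] = [0.5, 0.0, 1.2]

print(to_world([0.1, 0.2, 0.0], T_vert))  # -> [0.6 0.2 1.2]
```

Once every robot and camera position is expressed through such a transform, positions reported by different devices can be compared directly in the common frame.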
Step 204, after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained.
As shown in fig. 2, the vertical positioning robot may position the product to be calibrated in the height direction through the image acquisition device disposed thereon. Taking an example that the image acquisition device is arranged on a mechanical arm of the vertical positioning robot, the movement direction of the mechanical arm is the height direction of a product to be calibrated, and the image acquisition device is driven to move in the height direction of the product to be calibrated, so that the product to be calibrated is positioned. Positioning the product to be calibrated can be understood as enabling the product to be calibrated to be located in a proper visual field range of the image acquisition device, and enabling an image of the product to be calibrated to be clear. After the product to be calibrated is positioned, the vertical position of the image acquisition device on the vertical positioning robot is acquired, wherein the vertical position can comprise height coordinates, angles and the like.
Similarly, the horizontal positioning robot can position the product to be calibrated in the horizontal direction through the image acquisition device arranged on the horizontal positioning robot. Taking an example that the image acquisition device is arranged on a mechanical arm of the horizontal positioning robot, the movement direction of the mechanical arm is the horizontal direction of a product to be calibrated, and the image acquisition device is driven to move in the horizontal direction of the product to be calibrated so as to position the product to be calibrated. Further, the horizontal direction may include a first direction and a second direction, i.e., an x-direction and a y-direction, the first direction and the second direction being perpendicular. The position in the horizontal direction can be obtained through the first direction and the second direction, so that horizontal positioning is realized. After the product to be calibrated is positioned, the horizontal position of the image acquisition device on the horizontal positioning robot is acquired, wherein the horizontal position can comprise horizontal coordinates, angles and the like.
In step 206, a first error between the vertical position and the base model and a second error between the horizontal position and the base model are calculated.
The vertical position can be understood as the vertical position after the product to be calibrated moves, the image acquisition device locates the product to be calibrated after the movement, the horizontal position can be understood as the horizontal position after the product to be calibrated moves, the image acquisition device locates the product to be calibrated after the movement, and the movement of the product to be calibrated can be overturning or displacement. The basic model is used for indicating the initial position of the image acquisition device when the product to be calibrated is calibrated, and can comprise the initial position of the product to be calibrated and the initial position of each image acquisition device. The basic model can be determined through a manual teaching process, such as manually controlling the moving position of the robot, and determining an initial vertical space and a horizontal space to obtain the basic model.
The first error is a deviation value between the vertical position and the vertical position in the basic model, and the first error can represent a deviation value between the vertical position and the initial vertical position after the product to be calibrated is repositioned by the image acquisition device after the product to be calibrated moves (such as displacement and overturn) in the vertical direction. Similarly, the second error is a deviation value between the horizontal position and the horizontal position in the basic model, and the second error can represent a deviation value between the horizontal position and the initial horizontal position after the product to be calibrated is repositioned by the image acquisition device after the product to be calibrated moves (such as displacement and overturn) in the horizontal direction. The motion direction and the motion degree of the image acquisition device can be obtained through the first error and the second error, so that the motion direction, the motion degree and the like of the product to be calibrated are represented.
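The error comparison against the base model can be sketched as follows. This is an illustrative example with invented pose values; the patent does not specify the error metric, so the maximum per-component deviation is assumed here.

```python
import numpy as np

def pose_error(measured, base):
    """Largest absolute per-component deviation between a measured
    pose and the corresponding base-model pose."""
    return float(np.max(np.abs(np.asarray(measured, dtype=float)
                               - np.asarray(base, dtype=float))))

# Hypothetical poses as (coordinate, coordinate, angle) triples.
vertical_base = [0.00, 1.20, 0.0]   # initial vertical pose (base model)
vertical_meas = [0.01, 1.22, 0.5]   # pose re-acquired after the product moved

first_error = pose_error(vertical_meas, vertical_base)
print(first_error)  # 0.5
```

The second error is computed the same way from the horizontal pose, and both are then compared against the allowable error range.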
In step 208, when the first error and the second error are both within the allowable error range, the spatial coordinates are calculated according to the vertical position, the horizontal position, and the base model.
When the first error and the second error are both in the allowable error range, the motion amplitude of the image acquisition device is considered to be smaller, the motion amplitude of the product to be calibrated is considered to be smaller, and the product to be calibrated can be considered to be in the target range. The space coordinates are the space coordinates of the product to be calibrated and are used for representing the space position of the product to be calibrated. The current position of the image acquisition device can be obtained according to the vertical position and the horizontal position, and the position of the image acquisition device, the position of the product to be calibrated and the corresponding relation between the image acquisition device and the product to be calibrated can be obtained by combining the basic model, so that the space coordinates of the product to be calibrated can be calculated.
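The coordinate combination described above can be sketched minimally. This is an assumption-laden illustration: the horizontal camera is taken to supply (x, y), the vertical camera to supply z, and the base model to supply a fixed offset between the camera fix and the product; all numbers are invented.

```python
def spatial_coordinates(horizontal_xy, vertical_z, base_offset):
    """Combine the horizontal camera's (x, y) fix and the vertical
    camera's height fix, corrected by a base-model offset, into one
    (x, y, z) world coordinate for the product."""
    x, y = horizontal_xy
    ox, oy, oz = base_offset
    return (x + ox, y + oy, vertical_z + oz)

coord = spatial_coordinates((0.30, 0.45), 1.0, (0.0, 0.0, -0.25))
print(coord)  # (0.3, 0.45, 0.75)
```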
Step 210, the spatial coordinates are sent to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the spatial coordinates.
After the space coordinates are obtained, the space coordinates are sent to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot can execute actions on products to be calibrated according to the space coordinates, and corresponding work is completed.
The robot control method is used for controlling a robot that includes a vertical positioning robot and a horizontal positioning robot, each provided with an image acquisition device for visually positioning a product to be calibrated. The control method comprises: calibrating the vertical positioning robot, the horizontal positioning robot, and the image acquisition devices to the same world coordinate system; acquiring the vertical position of the image acquisition device on the vertical positioning robot after it has positioned the product, and the horizontal position of the image acquisition device on the horizontal positioning robot after it has positioned the product; calculating a first error between the vertical position and a base model and a second error between the horizontal position and the base model; when both errors are within the allowable error range, calculating spatial coordinates from the vertical position, the horizontal position, and the base model; and sending the spatial coordinates to the vertical positioning robot and the horizontal positioning robot so that both work based on the spatial coordinates. The two robots mutually calibrate each other visually through their image acquisition devices; using an error-gradient-descent approach, the high-precision pose of the product within the spatial range is computed from both the vertical and horizontal directions, improving visual positioning accuracy and thus the working reliability of the robot.
In one embodiment, as shown in FIG. 3, after step 206, the robot control method further includes step 207.
Step 207, when the first error or the second error is not within the allowable error range, sending an adjustment command to each robot, and returning to step 204.
The adjustment instruction is used to control each image acquisition device to adjust its posture. After receiving the instruction, each robot moves its image acquisition device, adjusting its posture, for example by changing the photographing angle or the distance to the product to be calibrated, and then positions the product again. After the posture adjustment, the method returns to step 204: the product is positioned again and compared with the base model, iterating until the first error and the second error are both within the allowable error range, after which the spatial coordinates are calculated and sent to the vertical positioning robot and the horizontal positioning robot so that they can complete their tasks.
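The adjust-and-retry loop of step 207 can be sketched as below. This is an illustrative toy model, not the patent's implementation: the robots are simulated by a closure in which each posture adjustment halves the remaining deviation, so the loop converges like a gradient-descent step.

```python
def locate_until_within_tolerance(measure, adjust, base, tol, max_iters=50):
    """Re-position, compare with the base model, and send an adjustment
    instruction until both errors are within the allowable range."""
    for _ in range(max_iters):
        vert, horiz = measure()
        first_error = abs(vert - base["vertical"])
        second_error = abs(horiz - base["horizontal"])
        if first_error <= tol and second_error <= tol:
            return vert, horiz        # proceed to coordinate calculation
        adjust()                      # adjustment instruction: new posture
    raise RuntimeError("positioning did not converge")

# Toy stand-in for the robots: each adjustment halves the deviation.
state = {"vert": 1.5, "horiz": 0.8}
base = {"vertical": 1.0, "horizontal": 0.5}

def measure():
    return state["vert"], state["horiz"]

def adjust():
    state["vert"] = base["vertical"] + (state["vert"] - base["vertical"]) / 2
    state["horiz"] = base["horizontal"] + (state["horiz"] - base["horizontal"]) / 2

vert, horiz = locate_until_within_tolerance(measure, adjust, base, tol=0.05)
```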
Further, the adjustment instruction is also used to keep the image acquisition device on the vertical positioning robot and the image acquisition device on the horizontal positioning robot always mutually perpendicular (at 90°) in space. Because each image acquisition device is generally a planar camera, it can provide three directional parameters: X, Y, and an angle. When the two devices are kept perpendicular in space, together they can provide coordinates for all six spatial degrees of freedom of the vertical positioning robot and the horizontal positioning robot, enabling correction and further improving visual positioning accuracy.
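One way to picture how two perpendicular planar cameras yield six degrees of freedom is sketched below. The axis convention is an assumption for illustration only (the patent does not specify one): a downward-looking camera on the horizontal robot supplies x, y and the rotation about z, while a sideways-looking camera on the vertical robot supplies z and the rotation about x; ry is not directly observed in this toy convention and is left at zero.

```python
def fuse_six_dof(down_cam, side_cam):
    """Combine the (x, y, angle) readings of two planar cameras kept
    perpendicular in space into one six-degree-of-freedom pose."""
    hx, hy, rz = down_cam        # horizontal robot's camera, looking down
    _vx, vy, rx = side_cam       # vertical robot's camera, looking sideways
    return {"x": hx, "y": hy, "z": vy, "rx": rx, "ry": 0.0, "rz": rz}

pose = fuse_six_dof((0.30, 0.45, 12.0), (0.46, 1.10, -3.0))
print(pose["z"], pose["rz"])  # 1.1 12.0
```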
In one embodiment, as shown in FIG. 3, prior to step 206, the robot control method further includes step 205.
Step 205, obtaining a basic model through an image acquisition device, a vertical positioning robot and a horizontal positioning robot.
The base model is used to indicate the initial positions of the image acquisition devices when the product to be calibrated is calibrated, and may include the initial position of the product to be calibrated and the initial position of each image acquisition device. Acquiring the base model through the image acquisition devices, the vertical positioning robot and the horizontal positioning robot makes each positioning of the product to be calibrated better fit the actual situation, improving visual positioning accuracy. The base model may be determined through a manual teaching process, for example by manually controlling the moving positions of the vertical positioning robot and the horizontal positioning robot and determining an initial vertical space and an initial horizontal space to obtain the base model. It will be appreciated that, in other embodiments, the base model may also be an initial value set before the method is executed, so that it need not be recalibrated each time, streamlining the workflow.
In one embodiment, as shown in FIG. 3, step 205 includes steps 302 and 304.
Step 302, a first calibration command is sent to the vertical positioning robot.
The first calibration instruction is used to control the image acquisition device on the vertical positioning robot to determine a vertical base model according to the initial vertical spatial pose (for example, the RZ direction). After the image acquisition device on the vertical positioning robot receives the first calibration instruction, it determines the vertical base model according to that initial pose. The vertical base model may comprise the vertical position of the product to be calibrated and the initial position of the image acquisition device.
Step 304, a second calibration instruction is sent to the horizontal positioning robot.
The second calibration instruction is used to control the image acquisition device on the horizontal positioning robot to determine a horizontal base model according to the initial horizontal spatial pose (for example, the RX and RY directions). After the image acquisition device on the horizontal positioning robot receives the second calibration instruction, it determines the horizontal base model accordingly. The horizontal base model may comprise the horizontal position of the product to be calibrated and the horizontal position of the image acquisition device. Acquiring the base model through both the vertical positioning robot and the horizontal positioning robot makes the resulting base model more comprehensive and accurate.
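A minimal sketch of storing the taught base models and measuring deviation from them. The patent does not specify an error metric; Euclidean distance between the current camera pose and the taught pose is assumed here, and all names are illustrative.

```python
import math
from dataclasses import dataclass

@dataclass(frozen=True)
class BaseModel:
    """Reference poses recorded during the manual teaching process."""
    vertical_pose: tuple    # taught pose for the RZ-direction camera
    horizontal_pose: tuple  # taught pose for the RX/RY-direction camera

def error_to_base(current, taught):
    """Deviation of a live camera reading from its taught base pose
    (Euclidean distance, an assumed metric)."""
    return math.sqrt(sum((c - t) ** 2 for c, t in zip(current, taught)))

base = BaseModel(vertical_pose=(0.0, 0.0, 0.0),
                 horizontal_pose=(0.0, 0.0, 0.0))
print(error_to_base((3.0, 4.0, 0.0), base.vertical_pose))  # 5.0
```

The first error of the method would be `error_to_base` against `vertical_pose` and the second error against `horizontal_pose`, each compared with the allowable range.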
In one embodiment, as shown in fig. 3, the robot control method further comprises step 201 before step 202.
Step 201, the vertical positioning robot, the horizontal positioning robot and the image acquisition device are distributed to the same network segment.
Devices are on the same network segment when their IP addresses share the same network address: the subnet mask divides an address into a network part and a host part, and addresses on the same segment must have both the same subnet mask and the same network part, with an IP address range allocated to each segment. Assigning the vertical positioning robot, the horizontal positioning robot and the image acquisition devices to the same network segment allows the vertical positioning robot, the horizontal positioning robot and the image acquisition devices to communicate with one another directly, which improves communication efficiency. It will be appreciated that, in other embodiments, the vertical positioning robot, the horizontal positioning robot and the image acquisition device may instead communicate indirectly through auxiliary devices, as long as those skilled in the art deem this feasible.
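The same-segment condition can be checked with Python's standard `ipaddress` module: masking both addresses with the shared subnet mask must yield the same network address. The concrete addresses below are examples only.

```python
import ipaddress

def same_segment(ip_a: str, ip_b: str, netmask: str) -> bool:
    """True when both hosts fall in the network obtained by applying
    the shared subnet mask, i.e. they are on the same network segment."""
    net_a = ipaddress.ip_network(f"{ip_a}/{netmask}", strict=False)
    net_b = ipaddress.ip_network(f"{ip_b}/{netmask}", strict=False)
    return net_a.network_address == net_b.network_address

print(same_segment("192.168.1.10", "192.168.1.20", "255.255.255.0"))  # True
print(same_segment("192.168.1.10", "192.168.2.20", "255.255.255.0"))  # False
```

With a /24 mask the first three octets form the network part, so two robots and a camera at 192.168.1.x can exchange messages directly without routing.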
For a better understanding of the above embodiments, a detailed explanation is provided below in connection with a specific embodiment. In one embodiment, taking as an example the case where all the robots are collaborative robots, there are two collaborative robots and two image acquisition devices, and the image acquisition devices are cameras, please refer to fig. 4; the robot control method includes:
At the beginning, all the collaborative robots and cameras are started, and their IP addresses are configured in the same network segment; with this configuration the two collaborative robots can communicate with each other directly, which improves communication efficiency. Camera 1 is mounted on the end of collaborative robot 1, and camera 2 is mounted on the end of collaborative robot 2.
Next, collaborative robot 1, collaborative robot 2, camera 1 and camera 2 are calibrated to the same world coordinate system. Because each collaborative robot has an independent world coordinate system and each camera has an independent pixel coordinate system, all the collaborative robots and cameras need to be unified into one world coordinate system to simplify calculation; the function of the calibration is to unify the coordinate systems of the two collaborative robots and the pixel coordinate systems of the cameras into a single world coordinate system.
Meanwhile, camera 1 is set to determine a base model of the calibration product in the initial vertical spatial pose (RZ direction), and camera 2 is set to determine a base model of the calibration product in the initial horizontal spatial pose (RX and RY directions, i.e. two models), as schematically shown. The initial vertical space and the initial horizontal space are determined through a manual teaching process, that is, by manually controlling the moving positions of the collaborative robots.
When the product deviates at some spatial angle, collaborative robot 1 carries camera 1 to position the product in the RZ direction and compares the result with the base model, while collaborative robot 2 carries camera 2 to position the product in the RX and RY directions and compares the result with the base model. If the RZ, RX and RY direction models are all within the error range, the X, Y, Z, RX, RY and RZ coordinates can be calculated from the data of camera 1 and camera 2 and sent to the actuator to complete the task. A typical collaborative robot has 6 joints, i.e. 6 degrees of freedom, to which X, Y, Z, RX, RY and RZ respectively correspond; X, Y, Z and RX, RY, RZ differ in their direction of movement in the world coordinate system.
Otherwise, if the RZ, RX and RY direction models are not within the error range, the pose of collaborative robot 1 is adjusted, and collaborative robot 2 adjusts its pose correspondingly, so that camera 1 and camera 2 always remain perpendicular to each other at 90° in space. The effect of keeping camera 1 and camera 2 perpendicular at 90° is this: each camera is a planar camera that can provide X, Y and one angle, i.e. three directional parameters, and combining the two cameras at 90° in space can provide the 6-degree-of-freedom spatial coordinates of the collaborative robot, thereby realizing deviation correction.
After the positions of camera 1 and camera 2 have been adjusted, camera 1 again positions the product in the RZ direction and compares it with the base model, and camera 2 compares it with the base model in the RX and RY directions; the iteration continues until the product positioned in the RX, RY and RZ directions is within the error range of the base model, after which the X, Y, Z, RX, RY and RZ coordinates can be sent to the actuator to complete the task.
The present solution determines whether collaborative robot 1 (i.e. the actuator) has reached the execution position (i.e. the spatial 6-degree-of-freedom position of the product) by comparing the real-time position of the camera with the base position. The pixel counts of current cameras have reached the hundred-million level, and once calibration is done the pixel equivalent of the camera is known (one pixel represents an actual physical quantity: for example, if the actual physical distance between two points is 10 mm and the pixel distance is 100 pixels, the pixel equivalent = 10/100 = 0.1 mm/pixel). For a given photographing range, the more pixels the camera has, the smaller the pixel equivalent, and micrometer-level positioning can be achieved (for example, with an actual photographing range of 60 mm by 40 mm and a camera pixel resolution of 6000 pixels by 4000 pixels, the pixel equivalent is 0.01 mm/pixel; when the error between the camera's real-time position and the base model is within one pixel, micrometer-level positioning is achieved).
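The pixel-equivalent arithmetic in this paragraph is a single division, reproduced below. Note that the quoted 60 mm photographing range and 0.01 mm/pixel equivalent imply a 6000-pixel sensor width.

```python
def pixel_equivalent(physical_distance_mm, pixel_distance):
    """mm represented by one pixel: actual physical distance divided
    by the corresponding pixel distance."""
    return physical_distance_mm / pixel_distance

print(pixel_equivalent(10, 100))   # 0.1 mm/pixel (the worked example)
# 60 mm photographing range across a 6000-pixel sensor width:
print(pixel_equivalent(60, 6000))  # 0.01 mm/pixel, i.e. 10 um per pixel
```

A positioning error of one pixel at 0.01 mm/pixel is 10 micrometers, which is the precision level the description claims.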
According to this robot control method, planar vision is installed at the end of each collaborative robot; using a two-arm collaborative robot setup and high-precision planar cameras, with interactive iteration between the two cameras, the high-precision pose of a workpiece within the spatial range is calibrated by an error-gradient-descent method, obtaining the spatial position (X, Y, Z) and spatial attitude angles (RX, RY, RZ) of the workpiece within the working range of the robot arms. The multiple collaborative robots are mutually calibrated through vision and iterate many times in the vertical and horizontal directions of space to calculate a high-precision 6D pose of the product, which guarantees both the spatial freedom of the collaborative robots and the precision of planar vision. Multiple collaborative robots and vision can jointly complete high-precision grasping of three-dimensional products in space, improving the millimeter-level positioning precision of existing collaborative robots fitted with a 3D camera to the 10-micrometer level, for use, for example, in semiconductor packaging, high-precision chip assembly and similar fields.
It should be understood that, although the steps in the flowcharts related to the above embodiments are shown sequentially as indicated by the arrows, these steps are not necessarily performed in the order indicated. Unless explicitly stated herein, the order of execution of the steps is not strictly limited, and the steps may be executed in other orders. Moreover, at least some of the steps in the flowcharts described in the above embodiments may include a plurality of sub-steps or stages, which are not necessarily performed at the same time but may be performed at different times; the order of these sub-steps or stages is likewise not necessarily sequential, and they may be performed in turn or in alternation with at least some of the other steps, sub-steps or stages.
Based on the same inventive concept, the embodiment of the application also provides a robot control device for realizing the robot control method. The implementation of the solution provided by the device is similar to the implementation described in the above method, so the specific limitations in one or more embodiments of the robot control device provided below may be referred to above as limitations of the robot control method, and will not be described herein.
In one embodiment, as shown in fig. 5, a robot control device is provided, which includes a coordinate unifying module 102, a position acquisition module 104, an error calculation module 106, a control coordinate calculation module 108, and an execution module 110, wherein:
the coordinate unification module 102 is used for calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
the position acquisition module 104 is used for acquiring the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, and acquiring the horizontal position of the image acquisition device on the horizontal positioning robot after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot;
an error calculation module 106, configured to calculate a first error between the vertical position and the base model, and a second error between the horizontal position and the base model;
a control coordinate calculation module 108 for calculating a spatial coordinate according to the vertical position, the horizontal position and the basic model when the first error and the second error are both within the allowable error range;
and an execution module 110 for transmitting the spatial coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot operate based on the spatial coordinates.
In one embodiment, the robot control device further includes an adjustment module. After the error calculation module 106 calculates the first error between the vertical position and the base model and the second error between the horizontal position and the base model, when either error is not within the allowable error range, the adjustment module sends an adjustment instruction to each robot and returns control to the position acquisition module 104, which re-acquires the vertical position of the image acquisition device on the vertical positioning robot after that device positions the product to be calibrated, and the horizontal position of the image acquisition device on the horizontal positioning robot after that device positions the product to be calibrated.
In one embodiment, the adjustment instructions are further used for controlling the image acquisition device on the vertical positioning robot and the image acquisition device on the horizontal positioning robot to remain spatially perpendicular at 90°.
In one embodiment, the robot control device further includes a base model acquisition module for acquiring the base model by the image acquisition device, the vertical positioning robot, and the horizontal positioning robot before the error calculation module 106 calculates the first error of the vertical position and the base model, and the second error of the horizontal position and the base model.
In one embodiment, the base model acquisition module is further configured to send a first calibration command to the vertically positioned robot and a second calibration command to the horizontally positioned robot. The first calibration instruction is used for controlling an image acquisition device on the vertical positioning robot to determine a vertical basic model according to the initial vertical space direction posture; the second calibration instruction is used for controlling an image acquisition device on the horizontal positioning robot to determine a horizontal basic model according to the initial horizontal space direction posture.
In one embodiment, the robot control device further comprises a network segment allocation module for allocating the vertical positioning robot, the horizontal positioning robot, and the image acquisition device to the same network segment before the coordinate unification module 102 calibrates the vertical positioning robot, the horizontal positioning robot, and the image acquisition device to the same world coordinate system.
The respective modules in the robot control device described above may be implemented in whole or in part by software, hardware, or a combination thereof. The above modules may be embedded in hardware or may be independent of a processor in the computer device, or may be stored in software in a memory in the computer device, so that the processor may call and execute operations corresponding to the above modules.
In one embodiment, a computer device is provided, which may be a server, the internal structure of which may be as shown in fig. 6. The computer device includes a processor, a memory, and a network interface connected by a system bus. The processor of the computer device is configured to provide computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, computer programs, and a database. The internal memory provides an environment for the operation of the operating system and computer programs in the non-volatile storage medium. The database of the computer device is used for storing data such as the vertical position, the horizontal position, the base model and the spatial coordinates. The network interface of the computer device is used for communicating with an external terminal through a network connection. The computer program is executed by a processor to implement a robot control method.
It will be appreciated by those skilled in the art that the structure shown in fig. 6 is merely a block diagram of some of the structures associated with the present application and is not limiting of the computer device to which the present application may be applied, and that a particular computer device may include more or fewer components than shown, or may combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, comprising a memory and a processor, the memory having stored therein a computer program, the processor implementing the steps of the method embodiments described above when the computer program is executed.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored which, when executed by a processor, carries out the steps of the method embodiments described above.
In an embodiment, a computer program product is provided, comprising a computer program which, when executed by a processor, implements the steps of the method embodiments described above.
Those skilled in the art will appreciate that implementing all or part of the above described methods may be accomplished by way of a computer program stored on a non-transitory computer readable storage medium, which when executed, may comprise the steps of the embodiments of the methods described above. Any reference to memory, database, or other medium used in the various embodiments provided herein may include at least one of non-volatile and volatile memory. The nonvolatile Memory may include Read-Only Memory (ROM), magnetic tape, floppy disk, flash Memory, optical Memory, high density embedded nonvolatile Memory, resistive random access Memory (ReRAM), magnetic random access Memory (Magnetoresistive Random Access Memory, MRAM), ferroelectric Memory (Ferroelectric Random Access Memory, FRAM), phase change Memory (Phase Change Memory, PCM), graphene Memory, and the like. Volatile memory can include random access memory (Random Access Memory, RAM) or external cache memory, and the like. By way of illustration, and not limitation, RAM can be in the form of a variety of forms, such as static random access memory (Static Random Access Memory, SRAM) or dynamic random access memory (Dynamic Random Access Memory, DRAM), and the like. The databases referred to in the various embodiments provided herein may include at least one of relational databases and non-relational databases. The non-relational database may include, but is not limited to, a blockchain-based distributed database, and the like. The processors referred to in the embodiments provided herein may be general purpose processors, central processing units, graphics processors, digital signal processors, programmable logic units, quantum computing-based data processing logic units, etc., without being limited thereto.
The technical features of the above embodiments may be arbitrarily combined, and all possible combinations of the technical features in the above embodiments are not described for brevity of description, however, as long as there is no contradiction between the combinations of the technical features, they should be considered as the scope of the description.
The above examples only represent a few embodiments of the present application, which are described in more detail and are not to be construed as limiting the scope of the present application. It should be noted that it would be apparent to those skilled in the art that various modifications and improvements could be made without departing from the spirit of the present application, which would be within the scope of the present application. Accordingly, the scope of protection of the present application shall be subject to the appended claims.
Claims (10)
1. A robot control method, characterized by being used for controlling a robot, wherein the robot comprises a vertical positioning robot and a horizontal positioning robot, the vertical positioning robot and the horizontal positioning robot each being provided with an image acquisition device for visually positioning a product to be calibrated, the control method comprising:
calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
After the product to be calibrated is positioned through the image acquisition device on the vertical positioning robot, the vertical position of the image acquisition device on the vertical positioning robot is obtained, and after the product to be calibrated is positioned through the image acquisition device on the horizontal positioning robot, the horizontal position of the image acquisition device on the horizontal positioning robot is obtained;
calculating a first error between the vertical position and a base model, and a second error between the horizontal position and the base model;
when the first error or the second error is not within the allowable error range, sending an adjustment instruction to each robot, and returning to the step of obtaining the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, and obtaining the horizontal position of the image acquisition device on the horizontal positioning robot after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot; the adjustment instruction being used for controlling each image acquisition device to adjust its pose;
calculating spatial coordinates according to the vertical position, the horizontal position and the base model when the first error and the second error are both within an allowable error range;
And sending the space coordinates to the vertical positioning robot and the horizontal positioning robot, so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
2. The method of claim 1, wherein the adjustment instructions are further used for controlling the image acquisition device on the vertical positioning robot and the image acquisition device on the horizontal positioning robot to remain spatially perpendicular at 90°.
3. The method of claim 2, wherein the image acquisition device is a planar camera providing X, Y and an angle as three directional parameters, for providing the coordinates of the vertical positioning robot and the horizontal positioning robot in 6 degrees of freedom.
4. The method of claim 1, wherein the calculating the first error of the vertical position and the base model and the second error of the horizontal position and the base model is preceded by:
and acquiring a basic model through the image acquisition device, the vertical positioning robot and the horizontal positioning robot.
5. The method of claim 4, wherein the base model comprises a vertical base model and a horizontal base model, the acquiring the base model by the image acquisition device, the vertical positioning robot, and the horizontal positioning robot comprising:
Sending a first calibration instruction to the vertical positioning robot; the first calibration instruction is used for controlling an image acquisition device on the vertical positioning robot to determine a vertical basic model according to the initial vertical space direction posture;
sending a second calibration instruction to the horizontal positioning robot; the second calibration instruction is used for controlling an image acquisition device on the horizontal positioning robot to determine a horizontal basic model according to the initial horizontal space direction posture.
6. The method of claim 1, wherein before calibrating the vertical positioning robot, the horizontal positioning robot, and the image acquisition device to the same world coordinate system, further comprising:
and distributing the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same network segment.
7. A robot control device for controlling a robot, wherein the robot comprises a vertical positioning robot and a horizontal positioning robot, the vertical positioning robot and the horizontal positioning robot each being provided with an image acquisition device for visually positioning a product to be calibrated, the device comprising:
The coordinate unifying module is used for calibrating the vertical positioning robot, the horizontal positioning robot and the image acquisition device to the same world coordinate system;
the position acquisition module is used for acquiring the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, and acquiring the horizontal position of the image acquisition device on the horizontal positioning robot after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot;
the error calculation module is used for calculating a first error between the vertical position and the basic model and a second error between the horizontal position and the basic model;
an adjustment module, used for sending an adjustment instruction to each robot when the first error or the second error is not within the allowable error range, and for returning to obtain the vertical position of the image acquisition device on the vertical positioning robot after the product to be calibrated is positioned by the image acquisition device on the vertical positioning robot, and to obtain the horizontal position of the image acquisition device on the horizontal positioning robot after the product to be calibrated is positioned by the image acquisition device on the horizontal positioning robot; the adjustment instruction being used for controlling each image acquisition device to adjust its pose;
A control coordinate calculation module for calculating a spatial coordinate according to the vertical position, the horizontal position and the basic model when the first error and the second error are both within an allowable error range;
and the execution module is used for sending the space coordinates to the vertical positioning robot and the horizontal positioning robot so that the vertical positioning robot and the horizontal positioning robot work based on the space coordinates.
8. A computer device comprising a memory and a processor, the memory storing a computer program, characterized in that the processor implements the steps of the method of any of claims 1 to 6 when the computer program is executed.
9. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
10. A computer program product comprising a computer program, characterized in that the computer program, when being executed by a processor, implements the steps of the method of any of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210527441.1A CN114750160B (en) | 2022-05-16 | 2022-05-16 | Robot control method, apparatus, computer device, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210527441.1A CN114750160B (en) | 2022-05-16 | 2022-05-16 | Robot control method, apparatus, computer device, and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114750160A CN114750160A (en) | 2022-07-15 |
CN114750160B true CN114750160B (en) | 2023-05-23 |
Family
ID=82335711
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210527441.1A Active CN114750160B (en) | 2022-05-16 | 2022-05-16 | Robot control method, apparatus, computer device, and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114750160B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN115533908B (en) * | 2022-10-11 | 2023-10-03 | 江苏高倍智能装备有限公司 | Alignment control method and system for multi-manipulator matched workpiece lifting |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2014161603A1 (en) * | 2013-04-05 | 2014-10-09 | Abb Technology Ltd | A robot system and method for calibration |
CN108687776A (en) * | 2017-04-05 | 2018-10-23 | 大族激光科技产业集团股份有限公司 | A kind of robot control system |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2718678B2 (en) * | 1987-07-01 | 1998-02-25 | 株式会社日立製作所 | Coordinate system alignment method |
JP3946711B2 (en) * | 2004-06-02 | 2007-07-18 | ファナック株式会社 | Robot system |
DE102006004153B4 (en) * | 2006-01-27 | 2014-10-23 | Vision Tools Hard- Und Software Entwicklungs Gmbh | Automatic calibration of cooperating robots |
CN106272444B (en) * | 2016-08-31 | 2018-11-13 | 山东中清智能科技股份有限公司 | A method of realizing that trick relationship and dual robot relationship are demarcated simultaneously |
CN107995885B (en) * | 2016-11-18 | 2021-02-26 | 深圳配天智能技术研究院有限公司 | Coordinate system calibration method, system and device |
JP6484213B2 (en) * | 2016-12-09 | 2019-03-13 | ファナック株式会社 | Robot system including a plurality of robots, robot control apparatus, and robot control method |
US11504853B2 (en) * | 2017-11-16 | 2022-11-22 | General Electric Company | Robotic system architecture and control processes |
CN108519055A (en) * | 2018-04-26 | 2018-09-11 | 华中科技大学 | A kind of dual robot relative pose online calibration method of view-based access control model |
CN112692828B (en) * | 2020-12-18 | 2022-08-19 | 上海新时达机器人有限公司 | Robot calibration method, system, device and storage medium |
- 2022-05-16 CN CN202210527441.1A patent/CN114750160B/en active Active
Non-Patent Citations (1)
Title |
---|
Fu Gui. Research on Industrial Robot Calibration Methods Based on Machine Vision. Nanfang Nongji (South China Agricultural Machinery), 2018, Issue 09, pp. 453-154. * |
Also Published As
Publication number | Publication date |
---|---|
CN114750160A (en) | 2022-07-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
KR102532072B1 (en) | System and method for automatic hand-eye calibration of vision system for robot motion | |
CN108453701B (en) | Method for controlling robot, method for teaching robot, and robot system | |
US20200298411A1 (en) | Method for the orientation of an industrial robot, and industrial robot | |
US9199379B2 (en) | Robot system display device | |
JP2018169403A (en) | System and method for tying together machine vision coordinate spaces in guided assembly environment | |
US10331728B2 (en) | System and method of robot calibration using image data | |
WO2021169855A1 (en) | Robot correction method and apparatus, computer device, and storage medium | |
CN111801198A (en) | Hand-eye calibration method, system and computer storage medium | |
WO2018043525A1 (en) | Robot system, robot system control device, and robot system control method | |
CN113379849A (en) | Robot autonomous recognition intelligent grabbing method and system based on depth camera | |
JP2005201824A (en) | Measuring device | |
CN111590593B (en) | Calibration method, device and system of mechanical arm and storage medium | |
EP3317052A1 (en) | Technologies for pan tilt unit calibration | |
CN114750160B (en) | Robot control method, apparatus, computer device, and storage medium | |
CN114523477A (en) | Joint pose calibration method, system and storage medium | |
CN115439633A (en) | Calibration method and device and electronic equipment | |
CN114833825B (en) | Collaborative robot control method, device, computer equipment and storage medium | |
CN215701709U (en) | Configurable hand-eye calibration device | |
JP7423387B2 (en) | Calibration system, information processing system, robot control system, calibration method, information processing method, robot control method, calibration program, information processing program, calibration device, information processing device, and robot control device | |
CN111784771A (en) | 3D triangulation method and device based on binocular camera | |
CN112815851A (en) | Hand-eye calibration method, device, system, electronic equipment and storage medium | |
CN217932765U (en) | Robot surface structured light stereo camera pose online calibration device | |
CN114619233B (en) | Lock positioning method, screw locking method, lock positioning device and screw machine | |
US20240066713A1 (en) | Robot system and robot control method | |
TWI832770B (en) | Calibration method and system for mechanical arm based on image processing |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||