CN212218483U - Visual operation system of compound robot


Info

Publication number
CN212218483U
CN212218483U (Application CN202020555792.XU)
Authority
CN
China
Prior art keywords
robot
vision
manipulator
control unit
positioning
Prior art date
Legal status
Active
Application number
CN202020555792.XU
Other languages
Chinese (zh)
Inventor
寇淼
丁诗咏
徐东冬
程胜
张建伟
谢成宇
Current Assignee
Kunshan Robotech Intelligent Technology Co ltd
Ksitri Intelligent Manufacturing Technology Co ltd
Original Assignee
Kunshan Robotech Intelligent Technology Co ltd
Ksitri Intelligent Manufacturing Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Kunshan Robotech Intelligent Technology Co ltd and Ksitri Intelligent Manufacturing Technology Co ltd
Priority to CN202020555792.XU
Application granted
Publication of CN212218483U
Legal status: Active

Landscapes

  • Manipulator (AREA)

Abstract

The utility model discloses a vision operation system for a compound robot, comprising a compound robot and a positioning tag. The robot as a whole can move controllably and comprises a manipulator with an actuating mechanism and a camera arranged at its end; the manipulator, the actuating mechanism, and the camera are all electrically connected to a control unit. The positioning tag is fixed on the article to be operated or on a carrier bearing that article. The vision operation system of the compound robot requires only an ordinary industrial camera and a positioning tag, with no need for a binocular camera, which costs more and demands complex control algorithms; it thus effectively reduces hardware cost, is simple to control, and achieves high positioning accuracy.

Description

Visual operation system of compound robot
Technical Field
The utility model relates to the technical field of robots, and in particular to a vision operation system for a compound robot.
Background
In the industrial and commercial fields, compound robots are widely used. Such a robot consists of a moving chassis and a mechanical arm, where the chassis can be controlled to carry the arm to different places for operation. Because of the positioning error of the moving chassis, the uncertainty of the position of the article to be operated, and similar factors, a positioning device is needed to determine the relative position between the mechanical arm and the article, shelf, or the like, so as to assist the robot in its work. A common visual positioning method uses a monocular camera, which can only locate planar objects and is suitable only for a mechanical arm at a fixed position; methods using binocular vision, or a monocular camera plus structured light, can acquire three-dimensional position information of an object, but they involve complicated algorithms and increase product cost.
Disclosure of Invention
Purpose of the utility model: to overcome the deficiencies of the prior art, the utility model provides a compound robot vision operation system that is low in cost and achieves high positioning accuracy for the object to be operated.
Technical scheme: to achieve the above purpose, the utility model discloses a vision operation system of a compound robot, which includes:
a compound robot that can move controllably as a whole, the robot comprising a manipulator with an actuating mechanism and a camera arranged at its end, the manipulator, the actuating mechanism, and the camera all being electrically connected to a control unit; and
a positioning tag fixed on the article to be operated or on a carrier bearing the article to be operated.
Furthermore, the compound robot is provided with a detection sensor electrically connected to the control unit.
Furthermore, the positioning tag is provided with four control points for image recognition by the control unit.
Furthermore, the positioning tag is an ArUco tag, and the four corner points of the ArUco tag form the four control points.
Furthermore, the positioning tag has a black frame on which four white dots are arranged, and the center points of the four white dots form the four control points.
Furthermore, a black dot is arranged in the area enclosed by the black frame, and the distance from the black dot to one of the white dots is smaller than its distances to the other three white dots.
Furthermore, the compound robot comprises, from bottom to top, a moving chassis, an electric cabinet, and the manipulator; the control unit is installed in the electric cabinet, and the detection sensor is installed on the moving chassis.
Furthermore, the moving chassis comprises a chassis base; two driving wheels are symmetrically arranged on the left and right sides of the middle of the chassis base, and a plurality of driven wheels are arranged on the front and rear sides of the chassis base respectively.
Furthermore, the manipulator is a multi-axis robot.
Beneficial effects: the vision operation system of the compound robot of the utility model requires only an ordinary industrial camera and a positioning tag, with no need for a binocular camera, which costs more and demands complex control algorithms; it effectively reduces hardware cost, is simple to control, and achieves high positioning accuracy.
Drawings
FIG. 1 is a diagram of the vision operation system of the compound robot;
FIG. 2 is an exemplary illustration of an ArUco tag;
FIGS. 3(a)-3(c) are exemplary illustrations of three kinds of self-made tags;
FIG. 4 is a flow chart of the vision operation method of the compound robot.
In the figure: 1-compound robot; 11-manipulator; 12-electric cabinet; 13-camera; 14-detection sensor; 15-moving chassis; 151-chassis base; 152-driving wheel; 153-driven wheel; 2-positioning tag.
Detailed Description
The present utility model will be further described below with reference to the accompanying drawings.
The vision operation system of the compound robot as shown in fig. 1 includes a compound robot 1 and a positioning tag 2.
The compound robot 1 as a whole can move controllably. Specifically, it comprises a moving chassis 15, an electric cabinet 12, and a manipulator 11, stacked in order from bottom to top: the electric cabinet 12 is mounted on the upper end of the moving chassis 15, and the manipulator 11 is mounted on the upper end of the electric cabinet 12, with the control unit housed inside the electric cabinet 12. With this layout, the electric cabinet 12 not only protects the control unit but also raises and supports the manipulator 11, giving it a higher mounting position. The manipulator 11 is a multi-axis robot (for example, a six-axis industrial robot); an actuating mechanism (not shown in the figure) and a camera 13 are arranged at the end of the manipulator 11. The actuating mechanism may take the form of a gripper or the like for grasping operations, and the camera 13 is an ordinary industrial camera. The manipulator 11, the actuating mechanism, and the camera 13 are all electrically connected to the control unit, which can control their operation and acquire the image data captured by the camera 13.
In one embodiment, the control unit controls the movement of the moving chassis 15 by receiving remote control signals sent by a user through a remote control handle, so that the moving chassis 15 drives the compound robot 1 to move as a whole. In a preferred embodiment, a detection sensor 14 (e.g., a laser radar) is installed on the moving chassis 15 and electrically connected to the control unit; the control unit can then navigate autonomously to a target position according to the detection data. Autonomous navigation based on sensors such as laser radar is a mature prior-art technique and is not described further here.
The moving chassis 15 includes a chassis base 151; two driving wheels 152 are symmetrically arranged on the left and right sides of the middle of the chassis base 151, and two driven wheels 153 are arranged on each of the front and rear sides of the chassis base 151. With this six-wheel structure, the moving chassis 15 allows the compound robot 1 to travel smoothly.
The positioning tag 2 is fixed on the article to be operated or on a carrier bearing the article; the carrier may take the form of a shelf, a tray, a positioning fixture, or the like. To allow the control unit to calculate the pose of the positioning tag 2 from its image, the positioning tag 2 is provided with four control points for image recognition: the control unit extracts the coordinates of the four control points from the image and from them determines the pose of the positioning tag 2.
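Determining a tag pose from four known control points is a standard perspective-n-point (PnP) problem. The following is a minimal sketch of how this could be done with OpenCV's solvePnP; the tag side length and camera intrinsics are placeholder values for illustration, not parameters from the patent.

```python
# Minimal sketch, assuming OpenCV: recover the tag pose from its four
# control points via PnP. Tag size and intrinsics are hypothetical.
import cv2
import numpy as np

TAG_SIDE = 0.05  # tag side length in meters (placeholder)
# Corner order required by SOLVEPNP_IPPE_SQUARE: TL, TR, BR, BL.
object_points = np.array(
    [[-TAG_SIDE / 2,  TAG_SIDE / 2, 0],
     [ TAG_SIDE / 2,  TAG_SIDE / 2, 0],
     [ TAG_SIDE / 2, -TAG_SIDE / 2, 0],
     [-TAG_SIDE / 2, -TAG_SIDE / 2, 0]], dtype=np.float32)
camera_matrix = np.array([[900., 0., 640.],
                          [0., 900., 360.],
                          [0., 0., 1.]])  # placeholder intrinsics
dist_coeffs = np.zeros(5)

def tag_pose(image_points: np.ndarray):
    """image_points: (4, 2) pixel coordinates of the four control points."""
    ok, rvec, tvec = cv2.solvePnP(object_points,
                                  image_points.astype(np.float32),
                                  camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    return rvec, tvec  # tag pose expressed in the camera frame
```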
In one embodiment, the positioning tag 2 is an ArUco tag, and as shown in fig. 2, four corner points of the ArUco tag form four control points, which are indicated by small boxes in the figure.
The ArUco tag was proposed by Rafael Muñoz and Sergio Garrido. Its periphery is a black border and its interior is a binary coding matrix; the coding supports error detection and correction, and the internal matrix can carry an ID that uniquely identifies an object. If there is no need to identify and distinguish objects, the identification code can be ignored: as described above, a single tag design suffices, as long as four points can be identified and distinguished in the image. ArUco tags can be generated through OpenCV; in ordinary application environments they can be printed on paper with a common printer, and for high-precision applications the resolution of the tag image can be increased and a more precise printing method used to produce the tag.
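As a small illustration of how such a tag can be obtained through OpenCV, the sketch below generates a printable ArUco marker image; the dictionary, marker ID, and pixel size are arbitrary example values.

```python
# Minimal sketch: generating a printable ArUco tag with OpenCV.
# Assumes opencv-contrib-python; in OpenCV >= 4.7 drawMarker was
# renamed to generateImageMarker.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
marker_id = 23       # the ID encoded in the internal binary matrix
side_pixels = 400    # raise this for high-precision printing
tag = cv2.aruco.drawMarker(dictionary, marker_id, side_pixels)
cv2.imwrite("aruco_tag_23.png", tag)
```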
In another embodiment, the positioning tag 2 is a self-designed tag, as shown in figs. 3(a)-3(c); each has a black frame. In the tags of figs. 3(b)-3(c), as with the ArUco tag, the four corner points of the black frame can serve as the four control points. When objects need to be distinguished and identified, a graphic recording ID data can be printed in the area enclosed by the black frame, as shown in fig. 3(b), and the object can be determined by recognizing that ID data.
In this embodiment, the tag shown in fig. 3(a) is preferred. It has a rectangular black frame with four white dots placed on the frame itself at its four corners; the center points of the four white dots form the four control points. When there is no need to distinguish and identify objects, but the four control points must still be identified in a fixed order, a black dot is placed in the area enclosed by the black frame, as shown in fig. 3(a), such that its distance to one of the white dots is smaller than its distances to the other three. The four control points can then be distinguished by finding the white dot closest to the black dot.
The control unit comprises at least a memory and a controller; the memory stores an executable program, and the controller runs the executable program to implement the following compound robot vision operation method.
The compound robot vision operation method shown in fig. 4, which is applied to the control unit of the compound robot 1, includes the following steps S301 to S304:
step S301, controlling the compound robot 1 to move as a whole to a target position;
in this step, the control unit may receive an external manipulation command to control the movement of the moving chassis 15; preferably, the control unit navigates autonomously to the target position using the detection data of the detection sensor 14;
step S302, acquiring an image of the positioning tag 2 through the camera 13;
step S303, processing the image of the positioning tag 2 to obtain the coordinate data of the article to be operated;
and step S304, controlling the operation of the manipulator 11 according to the coordinate data, so that the actuating mechanism performs the operation on the article to be operated.
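Purely as an illustration of the flow of steps S301 to S304, a control loop along these lines might be written as follows; every helper name here (move_to, capture_tag_image, tag_to_robot_coords, execute_on) is a hypothetical placeholder, not an API of the actual system.

```python
# Hypothetical outline of steps S301-S304; all helpers are placeholders.
def vision_operation(robot, camera, target_position):
    robot.move_to(target_position)            # S301: move robot as a whole
    image = capture_tag_image(robot, camera)  # S302: image of positioning tag
    coords = tag_to_robot_coords(image)       # S303: coordinates of article
    robot.manipulator.execute_on(coords)      # S304: actuating mechanism works
```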
Since the camera 13 is mounted at the end of the manipulator 11, its field of view depends on the pose of the manipulator 11, and the camera 13 may fail to capture an image containing the positioning tag 2 when the compound robot 1 first reaches the target position. In view of this, acquiring the image of the positioning tag 2 through the camera 13 in step S302 comprises the following steps S401 to S402:
step S401, adjusting the pose of the manipulator 11;
step S402, triggering the camera 13 to operate to obtain an image.
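A hedged sketch of steps S401 to S402 might look like the following, sweeping the manipulator through preset candidate poses until the tag appears in view; the pose list and the tag_visible check are assumptions, not part of the patent.

```python
# Hypothetical sketch of S401-S402: adjust the manipulator pose, then
# trigger the camera, retrying until the positioning tag is in view.
def capture_tag_image(robot, camera, candidate_poses):
    for pose in candidate_poses:
        robot.manipulator.move_to_pose(pose)  # S401: adjust manipulator pose
        image = camera.grab()                 # S402: trigger camera
        if tag_visible(image):                # placeholder visibility check
            return image
    raise RuntimeError("positioning tag not found from any candidate pose")
```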
Further, the processing of the image of the positioning tag 2 to obtain the coordinate data of the article to be worked in step S303 includes the following steps S501 to S502:
step S501, extracting coordinates of four control points from the image of the positioning label 2 by using an image processing algorithm;
step S502, obtaining the coordinates of the four control points in the coordinate system of the manipulator 11 as the coordinate data, according to the pose relationship matrix between the manipulator 11 and the camera 13;
in this step, according to the formula
Figure BDA0002452032570000061
Calculating coordinates of the four control points in the manipulator coordinate system; wherein the content of the first and second substances,
Figure BDA0002452032570000062
coordinates of the control point in a camera coordinate system;
Figure BDA0002452032570000063
the coordinates of the control point in the robot coordinate system,
Figure BDA0002452032570000064
is a matrix of the pose relationship between the manipulator 11 and the camera 13.
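As a minimal numerical sketch of this step (assuming NumPy and a 4x4 homogeneous hand-eye matrix obtained elsewhere, for example by calibration), the transformation could be applied as follows:

```python
# Sketch: map camera-frame control points into the manipulator frame
# using a 4x4 homogeneous pose matrix T_rc (obtained by hand-eye
# calibration; the matrix itself is not given in the patent).
import numpy as np

def camera_to_robot(points_cam: np.ndarray, T_rc: np.ndarray) -> np.ndarray:
    """points_cam: (N, 3) control-point coordinates in the camera frame."""
    n = points_cam.shape[0]
    homogeneous = np.hstack([points_cam, np.ones((n, 1))])  # (N, 4)
    return (T_rc @ homogeneous.T).T[:, :3]  # (N, 3) in the manipulator frame
```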
In a first embodiment, when the positioning tag 2 is an ArUco tag, step S501 specifically comprises: obtaining the ID data of the positioning tag 2 and the coordinates of the four control points using the detectMarkers() function in OpenCV. In practice, the cornerRefinementMethod parameter needs to be set when detecting the control points, so as to obtain detection points with sub-pixel accuracy.
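A minimal sketch of this detection step, assuming the opencv-contrib-python package (the pre-4.7 aruco module API; OpenCV 4.7+ wraps the same functionality in cv2.aruco.ArucoDetector):

```python
# Sketch of S501 for an ArUco positioning tag.
import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
params = cv2.aruco.DetectorParameters_create()
# Set cornerRefinementMethod so corners come back with sub-pixel accuracy.
params.cornerRefinementMethod = cv2.aruco.CORNER_REFINE_SUBPIX

gray = cv2.imread("tag_view.png", cv2.IMREAD_GRAYSCALE)
corners, ids, rejected = cv2.aruco.detectMarkers(gray, dictionary,
                                                 parameters=params)
# corners: one (1, 4, 2) array per tag -- the four control points
# ids: the ID data encoded in each detected tag
```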
In a second embodiment, when the positioning tag 2 is the tag shown in fig. 3(a), extracting the coordinates of the four control points from the image of the positioning tag 2 using an image processing algorithm in step S501 includes the following steps S601 to S604:
step S601, performing adaptive-threshold binarization on the image of the positioning tag 2;
step S602, extracting the connected regions of the binary image to obtain a plurality of candidate regions;
step S603, screening the candidate regions according to region features to obtain the region image corresponding to the positioning tag 2;
step S604, obtaining the coordinates of the four control points from the regions corresponding to the four white dots in the region image.
In this last step, after the control unit identifies a region corresponding to a white dot, it extracts the coordinates of the dot's center as the coordinates of that control point.
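The following is a hedged sketch of steps S601 to S604, assuming OpenCV and NumPy; the threshold window, the area bounds, and the file name are illustrative placeholders rather than the patented values, and the shape screening of S701 to S703 (described below) is omitted here.

```python
# Illustrative pipeline for the self-made tag of fig. 3(a).
import cv2
import numpy as np

img = cv2.imread("tag_view.png", cv2.IMREAD_GRAYSCALE)

# S601: adaptive-threshold binarization
binary = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                               cv2.THRESH_BINARY, 31, 7)

# S602: connected regions of the binary image as candidate regions
num, labels, stats, centroids = cv2.connectedComponentsWithStats(binary)

# S603: screen candidates by area (placeholder first/second thresholds)
AREA_MAX, AREA_MIN = 50_000, 50
candidates = [i for i in range(1, num)
              if AREA_MIN < stats[i, cv2.CC_STAT_AREA] < AREA_MAX]

# S604: once the tag region is chosen, the white-dot centroids become
# the control-point coordinates.
control_points = np.array([centroids[i] for i in candidates])
```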
Further, screening the candidate regions according to region features to obtain the region image corresponding to the positioning tag 2 in step S603 includes the following steps S701 to S703:
step S701, removing the regions whose area exceeds a first threshold or is smaller than a second threshold, according to the area of each candidate region;
in this step, the first threshold and the second threshold are fixed values and can be preset as required;
step S702, removing irregularly shaped regions and retaining the candidate regions whose shape is approximately rectangular;
step S703, selecting, from the remaining candidate regions, the candidate region that matches the set features as the region image, where the set features are: the candidate region has four approximately circular holes and one approximately rectangular hole containing a black approximately circular area.
In steps S702 to S703 above, "approximately rectangular" should be understood to include rectangles and rectangle-like quadrilaterals, which under perspective are generally trapezoids; "approximately circular" should be understood to include circles and circle-like shapes, which are generally ellipses.
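For steps S702 to S703, the "approximately rectangular" and "approximately circular" tests could be approximated as below; the polygon-approximation tolerance and the circularity threshold are assumptions for illustration.

```python
# Illustrative shape tests for S702-S703 (thresholds are hypothetical).
import math
import cv2

def is_roughly_rectangular(contour) -> bool:
    # S702: keep regions whose outline simplifies to a quadrilateral
    # (a rectangle seen in perspective becomes a trapezoid).
    peri = cv2.arcLength(contour, True)
    approx = cv2.approxPolyDP(contour, 0.04 * peri, True)
    return len(approx) == 4

def is_roughly_circular(contour) -> bool:
    # For the white dots / black dot: circularity is 1.0 for a perfect
    # circle and drops for elongated shapes such as ellipses.
    area = cv2.contourArea(contour)
    peri = cv2.arcLength(contour, True)
    if peri == 0:
        return False
    return 4 * math.pi * area / (peri * peri) > 0.7
```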
Preferably, the step S703 is followed by the following steps S801 to S802:
step S801, calculating the distance between each control point and the center of the black dot;
and S802, taking the control point closest to the center of the black dot as a starting point, and sequencing the coordinates of the four control points.
Through steps S801 to S802 above, the control unit can determine the orientation of the positioning tag 2 and hence the posture of the article to which it is attached. Thus, from an image of the self-made tag shown in fig. 3(a), the control unit obtains not only the position data of the article to be operated but also its posture data, at low cost and with a simple algorithm.
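A minimal sketch of steps S801 to S802, assuming NumPy and that the four control points are already stored in cyclic (perimeter) order:

```python
# Sketch of S801-S802: sort the four control points so that the one
# nearest the black dot's center comes first, preserving cyclic order.
import numpy as np

def order_control_points(points: np.ndarray, black_dot: np.ndarray) -> np.ndarray:
    """points: (4, 2) white-dot centers in cyclic order; black_dot: (2,)."""
    dists = np.linalg.norm(points - black_dot, axis=1)  # S801: distances
    start = int(np.argmin(dists))                       # nearest white dot
    order = [(start + k) % 4 for k in range(4)]         # S802: reorder
    return points[order]
```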
In summary, the vision operation system of the compound robot of the utility model requires only an ordinary industrial camera and a positioning tag, with no need for a binocular camera, which costs more and demands complex control algorithms; it effectively reduces hardware cost, is simple to control, and achieves high positioning accuracy.
The above description is only a preferred embodiment of the present utility model. It should be noted that those skilled in the art may make several improvements and modifications without departing from the principle of the utility model, and such improvements and modifications shall also fall within the protection scope of the utility model.

Claims (9)

1. A vision operation system of a compound robot, characterized by comprising:
a compound robot that can move controllably as a whole, the robot comprising a manipulator with an actuating mechanism and a camera arranged at the end of the manipulator, the manipulator, the actuating mechanism, and the camera all being electrically connected to a control unit; and
a positioning tag fixed on an article to be operated or on a carrier bearing the article to be operated.
2. The vision operation system of a compound robot according to claim 1, wherein the compound robot further comprises a detection sensor electrically connected to the control unit.
3. The vision operation system of a compound robot according to claim 1, wherein the positioning tag is provided with four control points for image recognition by the control unit.
4. The vision operation system of a compound robot according to claim 3, wherein the positioning tag is an ArUco tag, and the four corner points of the ArUco tag form the four control points.
5. The vision operation system of a compound robot according to claim 3, wherein the positioning tag has a black frame on which four white dots are arranged, and the center points of the four white dots form the four control points.
6. The vision operation system of a compound robot according to claim 5, wherein a black dot is arranged in the area enclosed by the black frame, and the distance from the black dot to one of the white dots is smaller than its distances to the other three white dots.
7. The vision operation system of a compound robot according to claim 2, wherein the compound robot comprises, from bottom to top, a moving chassis, an electric cabinet, and the manipulator; the control unit is installed in the electric cabinet, and the detection sensor is installed on the moving chassis.
8. The vision operation system of a compound robot according to claim 7, wherein the moving chassis comprises a chassis base, two driving wheels are symmetrically arranged on the left and right sides of the middle of the chassis base, and a plurality of driven wheels are arranged on the front and rear sides of the chassis base respectively.
9. The vision operation system of a compound robot according to claim 1, wherein the manipulator is a multi-axis robot.
Application CN202020555792.XU, filed 2020-04-15 (priority date 2020-04-15): Visual operation system of compound robot. Granted as CN212218483U, status Active.

Priority Applications (1)

CN202020555792.XU · Priority date 2020-04-15 · Filing date 2020-04-15 · Visual operation system of compound robot


Publications (1)

CN212218483U · Publication date 2020-12-25

Family

ID=73907629


Country Status (1)

CN · CN212218483U


Legal Events

Date Code Title Description
GR01 Patent grant