CN220945383U - Robot grabbing system guided by 3D vision
- Publication number: CN220945383U (application CN202322168744.1U)
- Authority: CN (China)
- Prior art keywords: robot, control cabinet, robot control, target object, structured light
- Prior art date: 2023-08-14
- Legal status: Active (the status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Abstract
The utility model relates to the technical field of industrial robots and discloses a 3D vision guided robot grabbing system. The system comprises a server connected to a structured light three-dimensional imaging device and to a robot control cabinet; the robot control cabinet is connected to a robot and to an air pipe solenoid valve, the solenoid valve is connected to a vacuum pump, and a suction cup for grabbing objects is mounted at the execution end of the robot. The structured light three-dimensional imaging device generates 3D pose information for a target object; the server acquires this pose information, computes the corresponding robot pose, and transmits it to the robot control cabinet. The control cabinet drives the execution end of the robot to position the suction cup over the target object and controls the air pipe solenoid valve so that the vacuum pump evacuates or vents the suction cup, allowing the suction cup to grab, move, and place the object. By processing and converting the point cloud data of the target object produced by the structured light three-dimensional imaging device, the system can locate and grab objects in a complex three-dimensional space more accurately.
Description
Technical Field
The utility model relates to the technical field of industrial robots, in particular to a 3D vision guided robot grabbing system.
Background
Industrial robots play an important role in modern production and represent the progress and development of industry. They are efficient, accurate, and automated, and can complete a wide variety of tasks on a production line. However, target identification and positioning for robots still presents challenges in industrial processes. Early approaches to this problem used photoelectric positioning and two-dimensional vision positioning, but these still cannot handle robot positioning and grabbing in more complex three-dimensional scenes.
Disclosure of Invention
To overcome the above technical problems, the utility model aims to provide a 3D vision guided robot grabbing system that enables a robot to grab objects with complex three-dimensional geometry more accurately, solving the problem of failed grabs caused by the inaccuracy of traditional positioning techniques.
The utility model provides the following technical solution:
A 3D vision guided robot grabbing system comprises a server (300); the server (300) is connected with a structured light three-dimensional imaging device (100) and a robot control cabinet (400) respectively; the robot control cabinet (400) is connected with a robot (200) and an air pipe solenoid valve (450) respectively; the air pipe solenoid valve (450) is connected with a vacuum pump (430); and the execution end of the robot (200) is provided with a suction cup (420) for grabbing objects. The structured light three-dimensional imaging device (100) generates 3D pose information for a target object; the server (300) acquires the 3D pose information of the target object on a workbench, computes the corresponding pose of the robot (200), and transmits it to the robot control cabinet (400); the robot control cabinet (400) drives the execution end of the robot (200) to move the suction cup (420) into position over the target object and controls the air pipe solenoid valve (450) so that the vacuum pump (430) evacuates or vents the suction cup (420); the suction cup (420) thereby grabs, moves, and places the target object.
According to some embodiments, the server (300) obtains the three-dimensional information of the scene on the workbench by capturing the point cloud data generated by the structured light three-dimensional imaging device (100), screens out the three-dimensional information of the target object by comparing it against a template, and sends it to the robot control cabinet (400) to complete the target positioning process. The template is a data set of geometric outline features of the target object, comprising at least outline shape and orientation.
According to some embodiments, the structured light three-dimensional imaging device (100) includes a structured light projector and a camera; the structured light projector projects a plurality of encoded images onto the workbench on which the target object is placed, and the images captured by the camera are decoded to obtain three-dimensional scene data.
According to some embodiments, a door-shaped steel frame with adjustable height is fixedly arranged on the workbench, and the structured light projector and the camera are fixedly mounted on the steel frame.
According to some embodiments, the robot control cabinet (400) is connected with a teach pendant (410), and the execution end of the robot (200) is moved under the control of the teach pendant (410) via the robot control cabinet (400).
Compared with the prior art, the utility model has the following beneficial effects:
The utility model provides a 3D vision guided robot grabbing system in which a structured light projection device projects specially encoded structured light onto the target object and a camera photographs the projected pattern. This active three-dimensional imaging approach both speeds up three-dimensional data acquisition and improves data precision. The server processes the data acquired by the structured light three-dimensional imaging device to obtain the pose of the target object and sends it to the robot control cabinet over a network cable; the robot control cabinet drives the robot to the grabbing position and opens the solenoid valve on the vacuum pump air pipe to grab the target object. The system achieves a positioning precision and speed that traditional passive imaging positioning cannot reach, while its mechanical structure is clear and its connections are simple. It is easy to maintain and to teach with, is highly portable and versatile, and can be widely applied in production and manufacturing.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the utility model, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the utility model and should not be considered as limiting its scope; a person skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 is a schematic structural diagram of a 3D vision-guided robotic grasping system according to an embodiment of the utility model.
The reference numerals in the drawings are:
100-structured light three-dimensional imaging device; 200-robot; 300-server; 400-robot control cabinet; 410-teach pendant; 420-suction cup; 430-vacuum pump; 440-power switch; 450-air pipe solenoid valve.
Detailed Description
The present utility model will be described in detail with reference to the following examples and drawings, but it should be understood that the examples and drawings are only for illustrative purposes and are not intended to limit the scope of the present utility model in any way. All reasonable variations and combinations that are included within the scope of the inventive concept fall within the scope of the present utility model.
In the description of the utility model, it should be noted that directions or positional relationships indicated by terms such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", "front", and "rear" are based on the orientations shown in the drawings. They are used merely for convenience and simplicity of description, and do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the utility model. The terms "first", "second", "third", and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. Further, unless otherwise expressly specified and defined, the terms "disposed", "mounted", "connected", and "coupled" are to be construed broadly: a connection may be fixed, detachable, or integral; mechanical or electrical; direct, indirect through an intermediate medium, or internal communication between two elements. The specific meaning of these terms in the utility model will be understood by those of ordinary skill in the art on a case-by-case basis.
The utility model is further described below with reference to the accompanying drawings.
Example 1
Referring to fig. 1, fig. 1 is a schematic structural diagram of a 3D vision-guided robotic grasping system according to the present embodiment.
The robot grabbing system includes a structured light three-dimensional imaging device 100 connected to a server 300, a robot control cabinet 400 connected to the server 300, a robot 200 connected to the robot control cabinet 400, a teach pendant 410 connected to the robot control cabinet 400, an air pipe solenoid valve 450 connected to the robot control cabinet 400, a vacuum pump 430 connected to the air pipe solenoid valve 450, a power switch 440 connected to the vacuum pump 430, and a suction cup 420 for grabbing objects mounted at the execution end of the robot 200.
The robot 200 and the objects to be grabbed are placed on a workbench. The structured light three-dimensional imaging device 100 accurately locates objects of different heights on an industrial production line, helping the suction cup 420 grab objects in a complex three-dimensional space. The robot control cabinet 400 drives the axes of the robot 200 in coordination so that the execution end reaches a position above the target object, and completes grabbing and placing by controlling the air pipe solenoid valve 450. To grab, the vacuum pump 430 applies negative pressure to the suction cup 420 so that the target object is sucked onto the cup; to place, the air pipe solenoid valve 450 is closed and the object is released.
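A minimal sketch of this grab-and-place valve sequence is shown below. The patent specifies the components but not their control electronics, so the digital-output interface and all names here are illustrative assumptions:

```python
import time

class TubeValveIO:
    """Hypothetical digital-output channel to the air pipe solenoid valve 450.

    The patent does not specify the control electronics; this class stands in
    for whatever I/O the robot control cabinet actually exposes.
    """
    def set_valve(self, open_: bool) -> None:
        print(f"solenoid valve {'OPEN' if open_ else 'CLOSED'}")

def grab(io: TubeValveIO, settle_s: float = 0.2) -> None:
    # Opening the valve connects the suction cup to the vacuum pump, so
    # negative pressure sucks the target object onto the cup.
    io.set_valve(True)
    time.sleep(settle_s)  # let the vacuum build before moving the robot

def release(io: TubeValveIO, settle_s: float = 0.2) -> None:
    # Closing the valve vents the cup; the object is set down in place.
    io.set_valve(False)
    time.sleep(settle_s)
```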
Illustratively, the structured light three-dimensional imaging device 100 includes a structured light projector and a camera whose relative positions are held fixed: the projector projects the encoded images, and the camera captures the patterns the projector casts. In actual operation, the projector projects a plurality of encoded images onto the surface of the workbench, and the images the camera captures are decoded to obtain three-dimensional scene data. When mounting the structured light three-dimensional imaging device 100, the three-dimensional extent of the workbench must first be measured to ensure that the device's effective imaging volume covers the working area while keeping the imaging area as close to the ideal range as possible. Once the mounting positions of the projector and the camera are determined, the device must be fixed in place so that the accuracy of the calibration result is not compromised by a change of position.
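The patent does not name the encoding scheme. As one common choice consistent with "a plurality of encoded images", a Gray-code sequence can be decoded per camera pixel by thresholding each captured pattern against its inverse shot and converting the recovered bit string into a projector column index; with the projector and camera rigidly fixed, each pixel-to-column correspondence can then be triangulated into a 3D point. A numpy sketch under that assumption:

```python
import numpy as np

def decode_gray_code(images: np.ndarray, inverses: np.ndarray) -> np.ndarray:
    """Recover a projector column index for every camera pixel.

    images, inverses: (N, H, W) stacks of captured pattern / inverse-pattern
    shots, N being the number of Gray-code bit planes. Gray coding is an
    assumption; the patent only says the images are 'encoded'.
    """
    bits = images > inverses                    # (N, H, W) boolean bit planes
    # Gray -> binary, MSB first: b[0] = g[0], b[i] = b[i-1] XOR g[i]
    binary = np.zeros_like(bits)
    binary[0] = bits[0]
    for i in range(1, bits.shape[0]):
        binary[i] = np.logical_xor(binary[i - 1], bits[i])
    weights = 2 ** np.arange(bits.shape[0] - 1, -1, -1)
    return np.tensordot(weights, binary.astype(np.int64), axes=1)  # (H, W)
```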
After the structured light three-dimensional imaging device 100 is installed, the robot grabbing system must be calibrated to establish the mapping between the robot coordinate system and the camera coordinate system. Before calibration, a calibration plate is prepared and placed on the workbench.
The calibration plate is used to establish the mapping between the robot coordinate system and the camera coordinate system. At least four calibration points are needed on the plate, and the camera coordinates of the calibration points stored in the server correspond one-to-one with the recorded robot tool-end coordinates. The calibration points may be any regular shape (circle, triangle, square). The suction cup at the robot's end must keep a roughly uniform distance from the calibration plate and must not touch it directly, lest the plate be moved and the calibration quality degraded.
The server 300 is connected to the structured light three-dimensional imaging device 100 through a network cable and photographs the calibration plate on the workbench, as shown in fig. 1. The suction cup 420 at the end of the robot 200 is then moved to each calibration point on the plate using the teach pendant 410, and the position of the suction cup 420 at each calibration point is recorded manually. The server 300 processes this information to obtain the transformation between the two coordinate systems. After calibration of the robot grabbing system is complete, the information features of the object to be grabbed must also be created: point cloud data from the structured light three-dimensional imaging device 100 is extracted through the server 300, preprocessed to obtain the feature information of the target object, and stored in the server 300.
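The patent does not state how the server computes this transformation. Given the recorded pairs of camera coordinates and robot tool-end coordinates (at least four, per the calibration-plate requirement above), one standard choice is the SVD-based Kabsch method for fitting a rigid transform. A sketch under that assumption:

```python
import numpy as np

def rigid_transform(cam_pts: np.ndarray, robot_pts: np.ndarray):
    """Fit R, t such that robot_pts ≈ R @ cam_pts + t (Kabsch method).

    cam_pts, robot_pts: (N, 3) matched calibration points, N >= 4 here.
    The choice of method is an assumption; the patent only says the server
    processes the recorded information to obtain the transformation.
    """
    cc, rc = cam_pts.mean(axis=0), robot_pts.mean(axis=0)
    H = (cam_pts - cc).T @ (robot_pts - rc)   # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                  # correct a possible reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = rc - R @ cc
    return R, t
```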
The robot grabbing system operates as follows:
The server 300 obtains the three-dimensional information of the scene on the workbench by capturing the point cloud data generated by the structured light three-dimensional imaging device 100, screens out the three-dimensional information of the target object by comparing it against the template, and sends it to the robot control cabinet 400, completing the target positioning process. The template is a data set of geometric outline features of the target object, comprising at least the outline shapes and orientations.
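The patent leaves the comparison metric open. One plausible reading is that outline-shape and orientation features extracted from each candidate region of the scene cloud are compared to the stored template by a simple feature distance, keeping the best-scoring region. A hedged sketch along those lines, with the feature extraction and threshold being assumptions:

```python
import numpy as np

def best_template_match(regions, template_feat, max_dist=0.5):
    """Pick the scene region whose outline features best match the template.

    regions: list of (region_id, feature_vector) pairs extracted from the
    scene point cloud; template_feat: the stored geometric outline feature
    vector (shape plus orientation). Both the features and the Euclidean
    metric are illustrative; the patent only says 'comparing the templates'.
    """
    best_id, best_d = None, np.inf
    for region_id, feat in regions:
        d = np.linalg.norm(np.asarray(feat) - np.asarray(template_feat))
        if d < best_d:
            best_id, best_d = region_id, d
    return best_id if best_d <= max_dist else None  # None: target not found
```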
First, the structured light projector projects the encoded structured light onto the workbench, and the images captured by the camera are decoded to obtain three-dimensional point cloud information. The server 300 takes the three-dimensional point cloud in the camera coordinate system, builds a 3D scene model from it, and uses the transformation matrix obtained through calibration to bring the scene model into the robot coordinate system. The three-dimensional point cloud of the target object is then obtained by screening for the region with the highest similarity to the template data. The server 300 derives the pose of the target object, solves the corresponding pose of the robot 200, and, after receiving the message that the robot 200 is ready to grab, sends the pose data to the robot control cabinet 400 through a network cable. The robot control cabinet 400 drives the motors of the robot 200 in coordination to move the suction cup 420 above the target object, opens the air pipe solenoid valve 450 so that the suction cup 420 sucks up the target object and carries it to the placement area, then closes the air pipe solenoid valve 450 to complete the placement, and finally drives the robot 200 to the waiting area; a single grabbing task is then complete. After each grabbing cycle, the robot control cabinet 400 sends the ready-to-grab message to the server 300 through the network cable so that the cycle can begin again.
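Tying the steps of this paragraph together (capture, transformation into the robot frame, template screening, pose solving, motion, and valve actuation), the repeating cycle could be organized as below. Every handle (imaging, server, cabinet, valve) is a hypothetical stand-in for a subsystem the patent describes only at block level:

```python
def grab_cycle(imaging, server, cabinet, valve) -> bool:
    """One pass of the capture -> locate -> grab -> place cycle (a sketch)."""
    cloud_cam = imaging.capture_point_cloud()         # (N, 3), camera frame
    R, t = server.calibration()                       # from hand-eye calibration
    cloud_robot = cloud_cam @ R.T + t                 # robot coordinate system
    target_pose = server.match_template(cloud_robot)  # template screening
    if target_pose is None:
        return False                                  # no target on the table
    robot_pose = server.solve_robot_pose(target_pose)
    cabinet.move_to(robot_pose)                       # suction cup above object
    valve.open()                                      # vacuum on: grab
    cabinet.move_to(server.placement_pose())
    valve.close()                                     # vent: place the object
    cabinet.move_to(server.waiting_pose())
    cabinet.notify_ready(server)                      # re-enter the cycle
    return True
```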
The teach pendant 410 communicates with the robot control cabinet 400 through a USB interface and can be used to correct erroneous motions according to the actual scene. During calibration, the position of the robot can also be adjusted manually through the teach pendant 410.
Example 2
In this embodiment, the 3D vision guided robot grabbing system further includes a door-shaped steel frame fixed, for support, on the platform where the robot 200 is located; the structured light three-dimensional imaging device 100 is mounted on this steel frame.
How firmly the structured light three-dimensional imaging device 100 is mounted directly affects the grabbing precision of the whole system. By installing the steel frame on the platform where the robot 200 is located, mounting the imaging device on the frame, and adjusting the frame's height, the structured light three-dimensional imaging device 100 can be fixed at an ideal position.
The above examples are only preferred embodiments of the utility model, and the scope of the utility model is not limited to them. All technical solutions falling under the concept of the utility model fall within its protection scope. It should be noted that modifications and adaptations made by a person skilled in the art without departing from the principles of the utility model are also within the protection scope of the utility model.
Claims (4)
1. A 3D vision guided robot grabbing system, characterized by: it comprises a server (300); the server (300) is connected with a structured light three-dimensional imaging device (100) and a robot control cabinet (400) respectively; the robot control cabinet (400) is connected with a robot (200) and an air pipe solenoid valve (450) respectively; the air pipe solenoid valve (450) is connected with a vacuum pump (430); and the execution end of the robot (200) is provided with a suction cup (420) for grabbing objects; the structured light three-dimensional imaging device (100) generates 3D pose information for a target object; the server (300) acquires the 3D pose information of the target object on a workbench, computes the corresponding pose of the robot (200), and transmits it to the robot control cabinet (400); the robot control cabinet (400) drives the execution end of the robot (200) to move the suction cup (420) into position over the target object and controls the air pipe solenoid valve (450) so that the vacuum pump (430) evacuates or vents the suction cup (420); and the suction cup (420) grabs, moves, and places the target object.
2. The 3D vision guided robot grabbing system according to claim 1, wherein: the structured light three-dimensional imaging device (100) comprises a structured light projector and a camera; the structured light projector is used for projecting a plurality of encoded images onto the workbench on which the target object is placed; and the images captured by the camera are decoded to obtain three-dimensional scene data.
3. The 3D vision guided robot grabbing system according to claim 2, wherein: a door-shaped steel frame with adjustable height is fixedly arranged on the workbench, and the structured light projector and the camera are fixedly mounted on the steel frame.
4. The 3D vision guided robot grabbing system according to claim 1, wherein: the robot control cabinet (400) is connected with a teach pendant (410), and the execution end of the robot (200) is moved under the control of the teach pendant (410) via the robot control cabinet (400).
Priority Applications (1)
| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202322168744.1U | 2023-08-14 | 2023-08-14 | Robot grabbing system guided by 3D vision |
Publications (1)
| Publication Number | Publication Date |
| --- | --- |
| CN220945383U (en) | 2024-05-14 |
Family
ID=90976258
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
| --- | --- | --- | --- |
| CN202322168744.1U | Robot grabbing system guided by 3D vision | 2023-08-14 | 2023-08-14 |
Country Status (1)
| Country | Link |
| --- | --- |
| CN | CN220945383U (en) |
Legal Events
| Code | Title |
| --- | --- |
| GR01 | Patent grant |