CN113793383A - 3D visual identification taking and placing system and method - Google Patents

3D visual identification taking and placing system and method Download PDF

Info

Publication number
CN113793383A
CN113793383A CN202110977234.1A CN202110977234A CN113793383A CN 113793383 A CN113793383 A CN 113793383A CN 202110977234 A CN202110977234 A CN 202110977234A CN 113793383 A CN113793383 A CN 113793383A
Authority
CN
China
Prior art keywords
pose
placing
module
point cloud
template
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110977234.1A
Other languages
Chinese (zh)
Inventor
聂志华
曹燕杰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jiangxi Intelligent Industry Technology Innovation Research Institute
Original Assignee
Jiangxi Intelligent Industry Technology Innovation Research Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jiangxi Intelligent Industry Technology Innovation Research Institute filed Critical Jiangxi Intelligent Industry Technology Innovation Research Institute
Priority to CN202110977234.1A priority Critical patent/CN113793383A/en
Publication of CN113793383A publication Critical patent/CN113793383A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30164Workpiece; Machine component

Abstract

The invention provides a 3D visual identification pick-and-place system and a method, and belongs to the technical field of visual guidance robots. The system comprises a 3D vision unit, a pose operation unit and a pick-and-place unit which are in communication connection, wherein the 3D vision unit projects a coding pattern, shoots an irregular workpiece, collects the image and decodes the coding pattern by using calibrated parameters to obtain a scene point cloud; the template library module creates template point clouds of irregular workpieces under different poses, the scene point clouds are matched with the template point clouds to obtain matching parameters, and the pose calculation module calculates the grabbing pose and the placing pose according to the input matching parameters so as to realize grabbing and placing of the irregular workpieces by the picking and placing unit. The system and the method effectively solve the problems of large matching error and even matching failure of the irregular workpiece due to the fact that the irregular workpiece is easy to match due to lack of visual features, and therefore picking and placing operation of the irregular workpiece with high precision, high speed and high stability is improved.

Description

3D visual identification taking and placing system and method
Technical Field
The invention belongs to the technical field of vision-guided robots, and particularly relates to a 3D vision recognition pick-and-place system and a method.
Background
With the development of technology, robots will gradually replace human beings to complete some simple, repetitive and low-intelligence-requirement works. The application of the vision-guided robot grasping technology in industry is more and more extensive, and the application scenes are more and more abundant, such as in the industrial processes of machine assembly, part sorting, feeding and discharging classification and the like. The traditional vision-guided robot grabbing application mainly focuses on grabbing a target object on a fixed plane based on 2D vision detection, the 2D vision detection method is limited in position and attitude information of a part, and the part can be limited to a fixed measurement depth for detection. However, when the grabbed irregular objects face the problems of blocking, stacking of objects and the like, the calculation of the position and posture of the object under the complex conditions with high precision, high speed and high stability is a great challenge for the mechanical arm grabbing.
With the improvement of the three-dimensional technology, the three-dimensional object recognition algorithm is rapidly developed, so that the recognition and positioning of the target object in the point cloud data or the depth image become more stable and effective. At present, in a 3D visual inspection method, two cameras are used to obtain 3D point clouds through a binocular sensor, the two cameras need to obtain images with overlapping regions at the same time, then feature points of the two images are detected, a stereo matching algorithm is used to find matching pixel points, then parallax information is obtained according to geometric constraints between the cameras, and 3D points under a camera coordinate system corresponding to 2D pixel points are calculated through polar line constraint eigen matrices and basic matrices. However, for the industrial product with the irregular shape, the lack of visual features thereof easily causes matching difficulty, which results in large matching error and even failure of matching, thereby affecting the picking and placing operation with high precision, high speed and high stability for the industrial product with the irregular shape.
Disclosure of Invention
In order to solve the technical problems, the invention provides a 3D visual identification pick-and-place system and a method, a coding pattern is projected by a transmitting module, an image obtained by shooting the irregular workpiece and collecting the image is output to a decoding module by the receiving module, and the coding pattern is decoded by the decoding module by using calibrated parameters to obtain scene point cloud; matching the scene point cloud with the template point cloud to obtain matching parameters, inputting the matching parameters into the pose calculation module to accurately calculate the grabbing pose and the placing pose so as to realize accurate grabbing and placing of the irregular workpiece.
The embodiment of the invention provides a 3D visual identification pick-and-place system, which has the following specific technical scheme:
A3D visual identification pick-and-place system is applied to control the grabbing and placing of irregular workpieces; it includes:
a 3D vision unit for generating a scene point cloud of the identified irregular workpiece;
the pose calculation unit is used for acquiring matching parameters and calculating pose point cloud of the irregular workpiece;
the picking and placing unit is used for picking the irregular workpiece; the pose operation unit is respectively in communication connection with the 3D vision unit and the taking and placing unit;
the 3D vision unit comprises a transmitting module, a receiving module and a decoding module, wherein the transmitting module projects a coding pattern, the receiving module shoots the irregular workpiece and outputs an acquired image to the decoding module, and the decoding module decodes the coding pattern by using calibrated parameters to obtain scene point cloud;
the pose calculation unit comprises a template library module, a template matching module and a pose calculation module, the template library module is used for creating template point clouds of the irregular workpieces under different poses, the scene point clouds are matched with the template point clouds to obtain matching parameters, the pose calculation module is used for calculating a grabbing pose and a placing pose according to the matching parameters, and the picking and placing unit is used for grabbing and placing the irregular workpieces according to the grabbing pose and the placing pose.
Compared with the prior art, the system has the beneficial effects that: through a stereoscopic vision system consisting of the transmitting module and the receiving module, a coding pattern is projected by the transmitting module, the receiving module shoots and transmits the collected image to the decoding module for processing, and the coding pattern is decoded by using calibrated parameters to obtain the scene point cloud; then, the pose calculation unit carries out the steps of template point cloud acquisition, denoising, segmentation, surface template matching and the like to calculate the grabbing pose and the placing pose; and acquiring the grabbing pose and the placing pose, transmitting the grabbing pose and the placing pose to the taking and placing unit, finishing accurate grabbing action and placing the grabbed irregular workpiece at an appointed position.
Preferably, the light emitted by the emitting module deforms on the surface of the irregular workpiece to form a changed optical signal, the 3D vision is used for forming a geometric constraint by combining the emitting module and the receiving module according to the change of the optical signal, and the decoding module decodes the formed geometric constraint to obtain the coordinate of the three-dimensional space of the irregular workpiece.
Preferably, the pose calculation unit further includes a point cloud denoising module, configured to remove a background and outliers of the scene point cloud, so as to facilitate matching of the scene point cloud and the template point cloud.
Preferably, the template library module is created in an off-line environment, the template point cloud generates a three-dimensional workpiece model by using SolidWorks three-dimensional mapping software, and the template point cloud is obtained by sampling the whole point cloud of each surface of the three-dimensional model by using a library function pcl _ mesh _ sampling.
Preferably, the template matching module performs matching processing between the scene point cloud and the template point cloud to obtain a matched rotation and translation matrix for obtaining matching parameters.
Preferably, the pose calculation module calculates the grabbing pose and the placing pose required for the picking and placing unit to grab and place the irregular workpiece by adopting a conversion matrix among the 3D vision unit, the template library module and the picking and placing unit.
Preferably, the pick-and-place unit comprises a mechanical arm module and an actuator module; when the picking and placing unit receives the grabbing pose, the manipulator module is opened to complete grabbing of the irregular workpiece after reaching the appointed position according to the determined pose; and when the taking and placing unit receives the placing pose, path planning is completed through the starting pose and the end pose, so that the grabbed irregular workpiece is placed to a specified position.
Preferably, the actuator module employs a collision test to prevent it from colliding with a platform carrying the irregular workpiece.
Another embodiment of the invention provides a 3D visual identification pick-and-place method, which has the following specific technical scheme:
A3D visual identification picking and placing method is applied to control of grabbing and placing irregular workpieces and comprises the following steps:
the 3D visual unit receives a scene image of the observed irregular workpiece, and the scene point cloud of the irregular workpiece is obtained through reconstruction based on the scene image;
the template library module imports a three-dimensional workpiece model of the irregular workpiece to generate the template point cloud;
the template matching module is used for matching the scene point cloud with the template point cloud to obtain matching parameters;
the obtained matching parameters are input into the pose calculation module, and the grabbing pose and the placing pose are obtained through calculation;
and the grabbing pose and the placing pose are obtained and transmitted to the taking and placing unit so as to realize the grabbing and placing of the irregular workpiece.
Preferably, after the step of receiving a scene image of the observed irregular workpiece by the 3D visual unit and reconstructing the scene point cloud of the irregular workpiece based on the scene image, the method further includes:
and filtering the scene point cloud.
Compared with the prior art, the method has the beneficial effects that: projecting a coding pattern for the irregular workpiece through a stereoscopic vision system consisting of the 3D vision units, shooting a collected image, and decoding the coding pattern by using calibrated parameters to obtain the scene point cloud; the template point cloud is obtained by creating a template library of the irregular workpiece in an off-line manner, the scene point cloud is matched with the template point cloud to obtain the matching parameters, and the matching parameters are input into the pose calculation module to accurately calculate the grabbing pose and the placing pose; the grabbing pose and the placing pose are obtained and transmitted to the picking and placing unit, so that the irregular workpiece can be grabbed and placed accurately; the method effectively solves the problems that the irregular workpiece is easy to be matched difficultly due to lack of visual features, so that the matching error is large, and even the matching fails, thereby improving the picking and placing operation of the irregular workpiece with high precision, high speed and high stability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a system diagram of a 3D vision recognition pick-and-place system according to an embodiment of the present invention;
FIG. 2 is a block diagram of a 3D visual unit according to an embodiment of the present invention;
fig. 3 is an imaging schematic diagram of a projector and an industrial camera according to an embodiment of the present invention;
fig. 4 is a block diagram of a pose calculation unit according to an embodiment of the present invention;
fig. 5 is a block diagram of a pick-and-place unit according to a first embodiment of the present invention;
fig. 6 is a flowchart of a 3D vision recognition pick-and-place method according to a second embodiment of the present invention;
fig. 7 is a flowchart of a 3D vision recognition pick-and-place method according to a third embodiment of the present invention;
description of reference numerals:
10-3D visual unit, 11-transmitting module, 12-receiving module, 13-decoding module;
20-a pose operation unit, 21-a template library module, 22-a template matching module, 23-a pose calculation module and 24-a point cloud denoising module;
30-pick-and-place unit, 31-mechanical arm module and 32-actuator module.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the drawings are exemplary and intended to be illustrative of the embodiments of the present invention, and should not be construed as limiting the invention.
The first embodiment is as follows:
in a first embodiment of the present invention, as shown in fig. 1, a 3D vision recognition pick-and-place system is provided, which is applied to control the grabbing and placing of irregular workpieces, where the irregular workpieces related to the present invention may be 3C electronic products, hardware workpieces, and the like. Specifically, the system comprises a 3D vision unit 10, a pose calculation unit 20 and a pick-and-place unit 30, wherein the 3D vision unit 10 is used for generating scene point clouds of the identified irregular workpieces, the pose calculation unit 20 is used for acquiring matching parameters and calculating the pose point clouds of the irregular workpieces, and the pick-and-place unit 30 is used for grabbing and placing the irregular workpieces. In this embodiment, the stereoscopic vision system composed of the 3D vision unit 10 can effectively solve the problem that the existing 3D vision system uses a binocular sensor to obtain a 3D point cloud by using two cameras, and two images with an overlapped area must be obtained at the same time, and the harsh conditions of feature points of the two images are detected, which causes a high cost for obtaining the 3D point cloud of the irregular workpiece.
Further, in this embodiment, through the pose operation unit 20, the defect that the matching error is large and even the matching fails in the existing 3D vision system due to the lack of the visual features of the irregular workpiece, which is easy to cause matching difficulty, can be effectively solved. In addition, the problem that the traditional visual recognition is low in recognition precision and cannot meet the requirements of industrial precise grabbing and placing due to the fact that no template is matched can be effectively solved.
Further, in this embodiment, the pose calculation unit 20 is respectively connected to the 3D vision unit 10 and the pick-and-place unit 30 in a communication manner. In specific practice, the communication connection mode adopts a remote communication mode so as to be beneficial to grabbing and placing the irregular workpiece under different working conditions; the communication connection mode includes, but is not limited to, a 5G communication mode, an Ethernet communication mode, a radio station mode, a GPRS communication mode or a Modem dialing mode. Preferably, the communication connection is realized by adopting a 5G communication mode, and particularly, the communication connection is realized by a 5G communication module arranged in each unit.
As shown in fig. 2, the 3D vision unit 10 includes a transmitting module 11, a receiving module 12, and a decoding module 13. In this embodiment, the transmitting module 11 is a projector device, the receiving module 12 is an industrial camera device, and the decoding module 13 is an upper computer device. The specific process is as follows: the projector projects a coding pattern, the industrial camera shoots the irregular workpiece and collects an obtained image and outputs the image to the upper computer, the upper computer processes the image, and the coding pattern projected by the projector is decoded by using calibrated parameters to obtain the scene point cloud of the irregular workpiece. Further, as shown in fig. 3, the 3D vision unit 10 may complete three-dimensional point cloud restoration mainly using a structured light system, specifically using the structured light principle: the light emitted by the projector deforms on the surface of the irregular workpiece to form a changed optical signal, a geometrical constraint is formed by combining the projector and the industrial camera, and the upper computer decodes the formed geometrical constraint to obtain the coordinate of the three-dimensional space of the irregular workpiece; in the specific operation, the working distance, the visual field and the resolution of the structured light sensor can be adjusted by selecting the type of equipment such as an industrial camera, a projector and the like.
As shown in fig. 4, the pose calculation unit 20 mainly matches the scene point cloud with the template point cloud to obtain matching parameters, and the matching parameters are input to the pose calculation module 23 to calculate the accurate capture pose and placement pose. Specifically, the pose calculation unit 20 includes a template library module 21, a point cloud denoising module 24, a template matching module 22, and a pose calculation module 23. The method comprises the steps of acquiring the scene point cloud by using a structured light system, acquiring the template point cloud by creating a template base module so as to solve the problem that the multi-pose of the irregular workpiece is acquired in a shielding and stacking scene in an actual scene, denoising the scene point cloud during or before identification so as to acquire a standard template at a working distance, acquiring a grabbing pose and a placing pose under the standard template, matching the acquired scene point cloud with the template point cloud in the standard template, and acquiring the grabbing pose and the placing pose required for accurately picking and placing the irregular workpiece.
Further, the template matching module 22 performs matching processing between the scene point cloud and the template point cloud to obtain a matched rotation and translation matrix for obtaining matching parameters.
Further, the template library module 21 is created in an off-line environment, the template point cloud generates a three-dimensional workpiece model by using solid works three-dimensional mapping software, and the template point cloud is obtained by sampling the whole point cloud of each surface of the three-dimensional model by using a library function pcl _ mesh _ sampling. The method for sampling the point cloud (pcd, ply format) from the CAD model (stl, obj format) by utilizing the PCL point cloud library has two modes: the integral point cloud sampling method comprises the steps that each surface (possibly comprising an internal structure which cannot be seen from the outside) of an original CAD model is used, a pcl _ mesh _ sampling is used, an exe file can be directly operated, and codes in the exe file can be pasted out and placed in a program of the CAD model to be combined with an exe operation method; the method for sampling from a plurality of visual angles only comprises one surface of the original CAD model under a certain visual angle, which is more convenient to apply when in registration, and because the used depth camera is generally shot from a visual angle, the PCL self-contained function is better: render viewTesseltatepedSphere, which is a function that takes partial views of a CAD model from different perspectives. The view angle set here is an icosahedron composed of regular triangles and wrapped outside the CAD model, and the virtual camera photographs the CAD model from each vertex (or each face) of the icosahedron and then obtains a point cloud at the corresponding view angle.
As shown in fig. 5, the pick-and-place unit 30 includes a robot arm module 31 and an actuator module 32. When the grabbing pose is received, after the mechanical arm module 31 reaches a designated position according to the determined pose, the actuator module 32 is opened to finish grabbing the irregular workpiece; and when the taking and placing module receives the placing pose, path planning is completed through the starting pose and the end pose, so that the grabbed irregular workpiece is placed to an appointed position. In this embodiment, the pose calculation module 23 calculates the grabbing pose and the placing pose required for the taking and placing unit 30 to accurately grab and place the irregular workpiece by using the transformation matrix among the 3D vision unit 10, the template library module 21 and the taking and placing unit 30.
Further, the robot arm module 31 includes three driving and controlling integrated modules, and the three driving and controlling integrated modules are provided with a 5G communication module inside, so that the taking and placing unit 30 receives the placing pose and the instruction of the placing pose. Furthermore, the driving and controlling integrated module comprises a sensor, a motor, a driver, a brake and a speed reducer, and the 5G communication module is connected with the driver; the connection mode of the 5G communication module and the driver adopts a network interface mode; the specific process is that the 5G communication module receives the placing pose and the action command of the placing pose and then converts the placing pose into a driving command, and the driver receives the driving command to drive the motor to operate and sequentially passes through the sensor module, the brake and the speed reducer, so that the mechanical arm module 31 outputs an action to the outside. Of course, the connection mode of the 5G communication module and the driver may also be implemented by integrating the 5G communication module into the driver, so as to connect the 5G communication module and the driver.
Further, when the actuator template 32 grabs the irregular workpiece, the actuator module adopts a collision test to prevent the irregular workpiece from colliding with a platform carrying the irregular workpiece.
In this embodiment, a stereoscopic vision system composed of the transmitting module 11 and the receiving module 12 projects a coding pattern through the transmitting module 11, the receiving module 12 shoots, transmits the collected image to the decoding module 13 for processing, and decodes the coding pattern by using calibrated parameters to obtain the scene point cloud; then, the pose calculation unit 20 performs the steps of template point cloud acquisition, denoising, segmentation, surface template matching and the like to calculate the grabbing pose and the placing pose; the grasping pose and the placing pose are acquired and transmitted to the pick-and-place unit 30, so that accurate grasping action is completed and the grasped irregular workpiece is placed at a designated position.
Example two:
in the second embodiment of the present invention, as shown in fig. 6, a 3D vision recognition pick-and-place method is applied to control of grabbing and placing irregular workpieces, and includes the above-mentioned 3D vision recognition pick-and-place system; the 3D visual identification pick-and-place method comprises the following steps:
s101: the 3D visual unit receives a scene image of the observed irregular workpiece, and the scene point cloud of the irregular workpiece is obtained through reconstruction based on the scene image;
the scene point cloud is detailed point cloud data of the irregular workpiece and comprises the position and the posture of the irregular workpiece; specifically, with the stereoscopic vision system of the embodiment, namely, the projector projects a coding pattern, the industrial camera shoots the irregular workpiece and collects the obtained image, and outputs the image to the upper computer, the upper computer processes the image, and the calibrated parameters are used for decoding the coding pattern projected by the projector to obtain the accurate scene point cloud of the irregular workpiece.
S102: the template library module imports a three-dimensional workpiece model of the irregular workpiece to generate the template point cloud;
the three-dimensional workpiece model is generated by adopting SolidWorks three-dimensional mapping software and is obtained by storing stl and obj format files; and sampling the whole point cloud of each surface of the three-dimensional workpiece model by the template point cloud by using a library function pcl _ mesh _ sampling.exe in the point cloud library to obtain model point cloud, and storing the pose relationship of the model point cloud relative to a coordinate system of the three-dimensional scanner. Further, a model database is established, and the template point cloud is trained, wherein the specific process is as follows: sampling by using uniform characteristic points on the template point cloud, namely sampling by establishing a space voxel grid, and representing the characteristic points in each voxel by using the gravity center in each voxel; establishing a local coordinate system in the point spherical support domain based on the characteristic points, and establishing a local coordinate system; after the local coordinate system is determined, counting structural shape information in a spherical support domain, and establishing 3D descriptor Shot characteristics; and completing the description of the Shot characteristics according to the normal vector histogram, and storing the Shot descriptor characteristics of the model point cloud of each workpiece for subsequent identification process.
S103: the template matching module is used for matching the scene point cloud with the template point cloud to obtain matching parameters;
the template matching module performs matching processing to obtain a matched rotation and translation matrix, and the matching parameters are obtained through the rotation and translation matrix.
S104: the obtained matching parameters are input into the pose calculation module, and the grabbing pose and the placing pose are obtained through calculation;
matching the Shot descriptor in each scene point cloud with all Shot descriptors in the module library module according to the Shot descriptors in the scene point cloud, and determining a point-to-point corresponding relation through the model-scene descriptors close to the K-dimensional tree associated descriptor space to obtain the grabbing pose and the placing pose required by accurately picking and placing the irregular workpiece.
S105: the grabbing pose and the placing pose are obtained and transmitted to the picking and placing unit so as to grab and place the irregular workpiece;
when the grabbing pose is received, after the mechanical arm module determines that the pose reaches a designated position, the actuator module is opened to finish grabbing the irregular workpiece; and when the placing pose is received, finishing path planning through the starting pose and the end pose so as to place the grabbed irregular workpiece to a specified position.
In the embodiment, a stereoscopic vision system composed of the 3D vision units is used for projecting a coding pattern to an irregular workpiece, shooting a collected image, and decoding the coding pattern by using calibrated parameters to obtain the scene point cloud; the template point cloud is obtained by creating a template library of the irregular workpiece in an off-line manner, the scene point cloud is matched with the template point cloud to obtain the matching parameters, and the matching parameters are input into the pose calculation module to accurately calculate the grabbing pose and the placing pose; the grabbing pose and the placing pose are obtained and transmitted to the picking and placing unit, so that the irregular workpiece can be grabbed and placed accurately; the method effectively solves the problems that the irregular workpiece is easy to be matched difficultly due to lack of visual features, so that the matching error is large, and even the matching fails, thereby improving the picking and placing operation of the irregular workpiece with high precision, high speed and high stability.
Example three:
In a third embodiment of the present invention, as shown in fig. 7, a 3D visual recognition pick-and-place method is applied to control of grabbing and placing irregular workpieces. The method of the third embodiment differs from that of the second embodiment in that, after the step of receiving, by the 3D vision unit, a scene image of the observed irregular workpiece and reconstructing the scene point cloud of the irregular workpiece based on the scene image, the method further includes:
filtering the scene point cloud.
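The filtering operation is not detailed in this embodiment. A standard choice for this step is statistical outlier removal, as in PCL's StatisticalOutlierRemoval filter, which the following NumPy/SciPy sketch mirrors; the `k` and `std_ratio` parameters are assumed defaults, not values from the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_statistical_outliers(points, k=8, std_ratio=1.0):
    """Drop points whose mean distance to their k nearest neighbours exceeds
    (global mean + std_ratio * global std) of those distances.
    Mirrors the behaviour of PCL's StatisticalOutlierRemoval filter."""
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbour is itself (distance 0)
    dists, _ = tree.query(points, k=k + 1)
    mean_d = dists[:, 1:].mean(axis=1)           # mean k-NN distance per point
    thresh = mean_d.mean() + std_ratio * mean_d.std()
    return points[mean_d <= thresh]

# Dense cluster plus one far-away stray point
rng = np.random.default_rng(1)
cloud = np.vstack([rng.normal(0.0, 0.01, (200, 3)),   # workpiece surface
                   [[5.0, 5.0, 5.0]]])                # stray reflection
filtered = remove_statistical_outliers(cloud, k=8)
print(len(cloud), '->', len(filtered))   # 201 -> 200 (outlier removed)
```

This kind of filter targets exactly the sparse speckle noise that structured-light reconstruction produces at depth discontinuities and reflective spots, which would otherwise distort the template matching.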
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (10)

1. A 3D visual identification pick-and-place system, applied to control of the grabbing and placing of irregular workpieces; the 3D vision recognition system comprises:
a 3D vision unit for generating a scene point cloud of the identified irregular workpiece;
the pose calculation unit is used for acquiring matching parameters and calculating the pose of the irregular workpiece from the point cloud;
the picking and placing unit is used for picking and placing the irregular workpiece; the pose calculation unit is in communication connection with the 3D vision unit and the picking and placing unit, respectively;
the 3D vision unit comprises a transmitting module, a receiving module and a decoding module, wherein the transmitting module projects a coding pattern, the receiving module shoots the irregular workpiece and outputs an acquired image to the decoding module, and the decoding module decodes the coding pattern by using calibrated parameters to obtain scene point cloud;
the pose calculation unit comprises a template library module, a template matching module and a pose calculation module, the template library module is used for creating template point clouds of the irregular workpieces under different poses, the scene point clouds are matched with the template point clouds to obtain matching parameters, the pose calculation module is used for calculating a grabbing pose and a placing pose according to the matching parameters, and the picking and placing unit is used for grabbing and placing the irregular workpieces according to the grabbing pose and the placing pose.
2. The 3D vision recognition taking and placing system of claim 1, wherein light emitted by the emitting module is deformed on the surface of the irregular workpiece to form a changed light signal; the 3D vision unit forms a geometric constraint by combining the emitting module and the receiving module according to the change of the light signal, and the decoding module decodes the formed geometric constraint to obtain the three-dimensional space coordinates of the irregular workpiece.
3. The 3D visual identification pick-and-place system of claim 1, wherein the pose computation unit further comprises a point cloud denoising module for removing background and outliers of the scene point cloud to facilitate matching of the scene point cloud and the template point cloud.
4. The 3D visual identification pick-and-place system of claim 1, wherein the template library module is created in an off-line environment; a three-dimensional workpiece model is generated using the SolidWorks three-dimensional drawing software, and the template point cloud is obtained by sampling the point cloud of each surface of the three-dimensional model using the library function pcl_mesh_sampling.
5. The 3D visual identification pick-and-place system of claim 1, wherein the template matching module performs matching processing between the scene point cloud and the template point cloud to obtain a matched rotation and translation matrix for obtaining matching parameters.
6. The 3D vision recognition taking and placing system of claim 1, wherein the pose calculation module calculates the grabbing pose and the placing pose required for the taking and placing unit to grab and place the irregular workpiece by using a transformation matrix among the 3D vision unit, the template library module and the taking and placing unit.
7. The 3D vision recognition pick-and-place system of claim 1, wherein the pick-and-place unit comprises a robotic arm module and an actuator module; when the pick-and-place unit receives the grabbing pose, the robotic arm module moves to the designated position according to the determined pose, and the actuator module is opened to complete grabbing of the irregular workpiece; when the pick-and-place unit receives the placing pose, path planning is completed using the start pose and the end pose, so that the grabbed irregular workpiece is placed at the specified position.
8. The 3D vision recognition pick-and-place system of claim 7, wherein the actuator module employs a collision test to prevent it from colliding with a platform carrying the irregular workpiece.
9. A 3D visual identification pick-and-place method based on the 3D visual identification pick-and-place system of claim 1, applied to control of grabbing and placing irregular workpieces, characterized by comprising the following steps:
the 3D visual unit receives a scene image of the observed irregular workpiece, and the scene point cloud of the irregular workpiece is obtained through reconstruction based on the scene image;
the template library module imports a three-dimensional workpiece model of the irregular workpiece to generate the template point cloud;
the template matching module is used for matching the scene point cloud with the template point cloud to obtain matching parameters;
the obtained matching parameters are input into the pose calculation module, and the grabbing pose and the placing pose are obtained through calculation;
and the grabbing pose and the placing pose are obtained and transmitted to the taking and placing unit so as to realize the grabbing and placing of the irregular workpiece.
10. The 3D visual recognition pick-and-place method of claim 9, wherein the 3D visual unit receives a scene image of the observed irregular workpiece, and after the step of reconstructing the scene point cloud of the irregular workpiece based on the scene image, the 3D visual recognition pick-and-place method further comprises:
and filtering the scene point cloud.
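The transformation-matrix chain named in claim 6 is commonly written as T_base→grasp = T_base→cam · T_cam→obj · T_obj→grasp, where T_cam→obj comes from the template-matching rotation and translation, T_base→cam from hand-eye calibration, and T_obj→grasp is a taught grasp offset on the template. A minimal sketch with illustrative numbers (not from the patent):

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous transform from rotation R (3x3) and translation t (3,)."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def grasp_pose_in_base(T_base_cam, R_match, t_match, T_obj_grasp):
    """Chain hand-eye calibration x template-matching result x taught grasp offset."""
    T_cam_obj = homogeneous(R_match, t_match)   # workpiece pose in the camera frame
    return T_base_cam @ T_cam_obj @ T_obj_grasp

# Toy numbers: camera 1 m above the robot base (identity rotation for simplicity),
# workpiece found 0.2 m along the camera x-axis, grasp point 5 cm above the object.
T_base_cam = homogeneous(np.eye(3), [0.0, 0.0, 1.0])
R_match = np.eye(3)                      # matching returned no rotation
t_match = np.array([0.2, 0.0, 0.0])
T_obj_grasp = homogeneous(np.eye(3), [0.0, 0.0, 0.05])
T = grasp_pose_in_base(T_base_cam, R_match, t_match, T_obj_grasp)
print(T[:3, 3])   # grasp point in the base frame: [0.2  0.  1.05]
```

The same chain evaluated with the placing target instead of T_cam_obj yields the placing pose, which is why a single pose calculation module can serve both operations.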
CN202110977234.1A 2021-08-24 2021-08-24 3D visual identification taking and placing system and method Pending CN113793383A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110977234.1A CN113793383A (en) 2021-08-24 2021-08-24 3D visual identification taking and placing system and method

Publications (1)

Publication Number Publication Date
CN113793383A true CN113793383A (en) 2021-12-14

Family

ID=79181970

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110977234.1A Pending CN113793383A (en) 2021-08-24 2021-08-24 3D visual identification taking and placing system and method

Country Status (1)

Country Link
CN (1) CN113793383A (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190221003A1 (en) * 2015-12-30 2019-07-18 Tsinghua University Method and device for interactive calibration based on 3d reconstruction in 3d surveillance system
CN110340891A (en) * 2019-07-11 2019-10-18 河海大学常州校区 Mechanical arm positioning grasping system and method based on cloud template matching technique

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
KONG Lingsheng et al., "Research on a high-precision 3D vision-guided grasping system based on time-domain coded structured light", Journal of Integration Technology, pages 38-49 *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113927606A (en) * 2021-12-20 2022-01-14 湖南视比特机器人有限公司 Robot 3D vision grabbing method, deviation rectifying method and system
CN115497087A (en) * 2022-11-18 2022-12-20 广州煌牌自动设备有限公司 Tableware posture recognition system and method
CN115497087B (en) * 2022-11-18 2024-04-19 广州煌牌自动设备有限公司 Tableware gesture recognition system and method

Similar Documents

Publication Publication Date Title
CN110580725A (en) Box sorting method and system based on RGB-D camera
US20190152054A1 (en) Gripping system with machine learning
CN111151463B (en) Mechanical arm sorting and grabbing system and method based on 3D vision
CN110728715A (en) Camera angle self-adaptive adjusting method of intelligent inspection robot
CN113524194A (en) Target grabbing method of robot vision grabbing system based on multi-mode feature deep learning
CN110065068B (en) Robot assembly operation demonstration programming method and device based on reverse engineering
CN112509063A (en) Mechanical arm grabbing system and method based on edge feature matching
CN113793383A (en) 3D visual identification taking and placing system and method
TW201927497A (en) Robot arm automatic processing system, method, and non-transitory computer-readable recording medium
JP2012101320A (en) Image generation apparatus, image generation method and program
CN115345822A (en) Automatic three-dimensional detection method for surface structure light of aviation complex part
CN112161619A (en) Pose detection method, three-dimensional scanning path planning method and detection system
Melchiorre et al. Collison avoidance using point cloud data fusion from multiple depth sensors: a practical approach
CN112862878A (en) Mechanical arm trimming method based on 3D vision
Jerbić et al. Robot assisted 3D point cloud object registration
CN110992416A (en) High-reflection-surface metal part pose measurement method based on binocular vision and CAD model
Borangiu et al. Robot arms with 3D vision capabilities
Park et al. 3D log recognition and pose estimation for robotic forestry machine
Seçil et al. 3-d visualization system for geometric parts using a laser profile sensor and an industrial robot
Makovetskii et al. An algorithm for rough alignment of point clouds in three-dimensional space
Fan et al. An automatic robot unstacking system based on binocular stereo vision
US20240003675A1 (en) Measurement system, measurement device, measurement method, and measurement program
Li et al. Workpiece intelligent identification and positioning system based on binocular machine vision
CN115578460A (en) Robot grabbing method and system based on multi-modal feature extraction and dense prediction
JP2011174891A (en) Device and method for measuring position and attitude, and program

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination