CN110340891B - Mechanical arm positioning and grabbing system and method based on point cloud template matching technology - Google Patents


Info

Publication number
CN110340891B
Authority
CN
China
Prior art keywords
point cloud
cloud data
mechanical arm
points
target object
Prior art date
Legal status
Active
Application number
CN201910623223.6A
Other languages
Chinese (zh)
Other versions
CN110340891A (en)
Inventor
童晶
袁承城
Current Assignee
Changzhou Campus of Hohai University
Original Assignee
Changzhou Campus of Hohai University
Priority date
Filing date
Publication date
Application filed by Changzhou Campus of Hohai University filed Critical Changzhou Campus of Hohai University
Priority to CN201910623223.6A
Publication of CN110340891A
Application granted
Publication of CN110340891B
Legal status: Active

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/13 Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/136 Segmentation; Edge detection involving thresholding
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/344 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75 Determining position or orientation of objects or cameras using feature-based methods involving models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10028 Range image; Depth image; 3D point clouds

Abstract

The invention discloses a mechanical arm positioning and grabbing system based on a point cloud template matching technology, which comprises a data acquisition module, a data processing and transmitting module and a mechanical arm control module which are connected in sequence. The invention also discloses a mechanical arm positioning and grabbing method based on the point cloud template matching technology. The invention searches for the target object in the scene with a three-dimensional point cloud registration method, overcoming cases that are difficult to detect from the two-dimensional image information acquired by an ordinary color camera, such as multiple mutually occluded target objects or target objects lacking texture detail. The method has high accuracy, is suitable for goods sorting and stacking scenes, and has good application prospects.

Description

Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
Technical Field
The invention relates to the field of point cloud processing algorithms, in particular to a mechanical arm positioning and grabbing method based on a point cloud template matching technology.
Background
The mechanical arm can replace humans in strenuous, simple and repetitive work, such as sorting and stacking goods on an assembly line; it can also complete dangerous and harmful work, such as high-altitude operation, mine sweeping and paint spraying. The application of mechanical arms reduces labor costs, ensures production safety, greatly improves production efficiency, and has broad application prospects.
In traditional applications, a mechanical arm usually follows a fixed trajectory according to manually set instructions and executes a fixed grabbing posture. The arm cannot interact with the environment autonomously; once the environment changes, the instructions must be reset manually, which reduces the efficiency of production.
Disclosure of Invention
Aiming at the defects of the prior art, the invention aims to provide a low-cost, high-precision mechanical arm positioning and grabbing method, so as to solve the problem that the mechanical arm in the prior art cannot interact with the environment autonomously.
In order to achieve the above object, the present invention adopts the following technical solutions:
a mechanical arm positioning and grabbing system based on a point cloud template matching technology comprises a data acquisition module, a data processing and transmitting module and a mechanical arm control module which are sequentially connected;
the data acquisition module is used for acquiring scene point cloud data of a target object to be grabbed;
the data processing and transmitting module is used for acquiring pose information of the target object through the scene point cloud data and transmitting the pose information to the mechanical arm control module;
the mechanical arm control module is used for grabbing the target object according to the pose information.
Further, the data acquisition module comprises a depth camera and a computer;
the depth camera is used for acquiring scene point cloud data of a target object to be grabbed and storing the scene point cloud data into a computer.
A mechanical arm positioning and grabbing method based on a point cloud template matching technology comprises the following steps:
collecting scene point cloud data of a target object to be grabbed;
acquiring target point cloud data of a target object according to the scene point cloud data;
calculating pose information of a target object under the depth camera according to the target point cloud data;
calculating the pose information of the target object under a mechanical arm base coordinate system according to the pose information under the depth camera and the mechanical arm teaching;
and planning the track and the corresponding posture of the grabbing target object according to the pose information under the mechanical arm base coordinate system and carrying out grabbing.
Further, the method for acquiring the target point cloud data comprises the following steps:
filtering the scene point cloud data to obtain a point cloud without noise points and background;
creating a plurality of three-dimensional voxel grids in the three-dimensional space where the filtered point cloud lies, and taking the center-of-gravity points of all points in each voxel grid as the sampling point cloud;
and segmenting the sampling point cloud to obtain the target point cloud data.
Further, the pose information calculation method under the depth camera includes:
extracting key points of the target point cloud data;
and carrying out initial registration and fine registration on the key points of the target point cloud data and the template point cloud data to obtain a homogeneous transformation matrix.
Further, the method of initial registration comprises:
selecting sample points from the key points of the target point cloud data;
acquiring initial corresponding points of the sample points in key points of the template point cloud data according to the feature histogram;
and acquiring an initial homogeneous transformation matrix according to the sample points and the initial corresponding points.
Further, the method of fine registration includes:
a. acquiring accurate corresponding points of the sample points in key points of the template point cloud data according to the initial homogeneous transformation matrix;
b. calculating the root mean square of the sample points and the accurate corresponding points, and obtaining the transformation that minimizes it;
c. repeating step (a) and step (b), and stopping the iterative calculation when the set number of iterations is reached or the error is smaller than a threshold value, to obtain the homogeneous transformation matrix.
Further, the pose information calculation method in the mechanical arm base coordinate system includes:
obtaining the combined homogeneous transformation matrix of the rotation matrix and translation vector taught by the mechanical arm with the homogeneous transformation matrix from registration, namely the pose information of the target object in the mechanical arm base coordinate system.
Further, the scene point cloud data acquisition method comprises the following steps:
the method comprises the steps of collecting a depth image of a scene through a depth camera, and converting depth image data into scene point cloud data through a universal data collection interface OpenNI.
Compared with the prior art, the invention has the beneficial effects that:
according to the method, a depth camera is combined with a mechanical arm, a PrimeSense depth camera is used for collecting point cloud data of a scene where a target to be grabbed is located, a three-dimensional point cloud template of the target object is created, the position and posture information of the target object under a depth camera coordinate system is calculated by combining a point cloud pairwise registration algorithm, a homogeneous transformation matrix between a mechanical arm clamping jaw and the template object is obtained through mechanical arm teaching, and the coordinate of the target object under a mechanical arm coordinate system is obtained in sum, so that the motion track and the posture of the target grabbed by the clamping jaw at the tail end of the mechanical arm are planned, and the function that the mechanical arm can autonomously interact with the environment is realized; compared with the traditional method of collecting point clouds by expensive laser equipment, the method has the advantages that the cost is greatly reduced, and the application of a low-cost visual scheme in the field of mechanical arms can be promoted.
Drawings
FIG. 1 is a schematic view of a robotic arm;
FIG. 2 is a schematic diagram of system module relationships;
FIG. 3 is a flow chart of a target object location algorithm.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1 and 2, a robot arm positioning and grabbing system based on a point cloud template matching technology includes a data acquisition module, a data processing and transmitting module, and a robot arm control module, which are connected in sequence.
The data acquisition module is used for acquiring scene point cloud data of a target object to be grabbed;
the data processing and transmitting module is used for acquiring pose information of the target object through the scene point cloud data and transmitting the pose information to the mechanical arm control module;
the mechanical arm control module is used for grabbing the target object according to the pose information.
The data acquisition module comprises a depth camera and a computer;
the depth camera is used for acquiring three-dimensional scene point cloud data of a scene where a target object to be grabbed is located and storing the acquired scene point cloud data in a local memory of a computer.
The data processing and transmitting module is used for identifying and positioning a target object to be grabbed from the scene point cloud data, calculating the position and posture information of the target object under a depth camera coordinate system, and respectively corresponding to a rotation matrix and a translation vector; secondly, through combined teaching of the mechanical arm and the depth camera, the pose information under the depth camera is converted into a mechanical arm base coordinate system to obtain the pose information of the target object under the mechanical arm base coordinate system; and finally, transmitting the pose information under the mechanical arm base coordinate system to a mechanical arm control cabinet through Socket network communication.
The mechanical arm control module is used for receiving pose information of the target object, planning a motion track and a corresponding gesture of a clamping jaw at the tail end of the mechanical arm according to the pose information, and executing grabbing of the target object.
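The patent specifies Socket network communication for this hand-off but does not fix a message protocol. The following minimal Python sketch of the sender side is illustrative only; the host address, port, and comma-separated text format are assumptions, not part of the patent:

```python
import socket

import numpy as np

def send_pose(pose, host="192.168.0.10", port=30000):
    """Send a 4x4 homogeneous pose matrix to the robot control cabinet.

    Host, port, and the comma-separated ASCII format are assumptions;
    the patent only states that Socket communication is used.
    """
    msg = ",".join(f"{v:.6f}" for v in np.asarray(pose).ravel())
    with socket.create_connection((host, port), timeout=5.0) as conn:
        conn.sendall(msg.encode("ascii") + b"\n")
```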
As shown in fig. 3, an object positioning method based on a point cloud template matching technology includes the following steps:
firstly, a computer end controls a depth camera to acquire scene point cloud data of a scene where a target object to be captured is located;
secondly, acquiring target point cloud data of a target object according to the scene point cloud data;
step three, calculating the position and pose information of the target object under the depth camera according to the target point cloud data;
and fourthly, calculating the pose information of the target object under the base coordinate system of the mechanical arm according to the pose information under the depth camera and the teaching result of the mechanical arm.
And fifthly, transmitting the pose information under the base coordinate system of the mechanical arm to the mechanical arm through a network cable by utilizing Socket network communication.
And step six, planning the track and the corresponding posture of the grabbing target object by the mechanical arm according to the received pose information under the mechanical arm base coordinate system, and carrying out grabbing.
The method for acquiring the target point cloud data comprises the following steps:
2.1, filtering scene point cloud data;
Noise points in the scene point cloud data are removed through statistical filtering, and the background is removed through pass-through (straight-through) filtering, yielding a point cloud without noise points and background. In the statistical filtering step, a statistical analysis of each point's neighborhood computes the average distance from the point to its adjacent points; over the whole cloud these distances follow a normal distribution, from which the mean distance is obtained. Points whose distance is larger than the mean are defined as outliers and removed from the scene point cloud data. In the pass-through filtering step, a direction to be filtered is designated and a threshold interval is set; the point cloud is then traversed along that direction, each point is checked against the interval, and points outside the threshold interval are deleted.
2.2, acquiring a sampling point cloud;
A plurality of three-dimensional voxel grids are created in the three-dimensional space where the filtered point cloud lies. Each voxel grid can be regarded as a tiny three-dimensional cube, and points fall into the corresponding voxels according to their spatial distribution. The center of gravity of all points in a voxel is calculated and approximately replaces them, giving the sampling point cloud. This reduces the density of the point cloud scene while still maintaining its shape characteristics.
2.3, segmenting the sampling point cloud to obtain target point cloud data;
Point clouds of different regions with different characteristics are distinguished through a random sample consensus algorithm: m sample points are selected from all the data, and a measure of affinity between points with respect to certain properties is defined in the m-dimensional space, on which the clustering operation is based. Suppose the m point cloud samples are first grouped into n classes; the two classes with the minimum inter-class distance are then merged into one class. Because the sample membership changes, the inter-class distances must be recalculated after each merge. After recalculation, the distances between all classes are compared again and the two closest classes are merged again. This iteration repeats until the distance between any two classes is larger than the set threshold, or the number of classes is smaller than the predefined number; the iteration then stops, the point cloud segmentation is complete, and the target point cloud data is obtained.
The pose information calculation method under the depth camera comprises the following steps:
3.1, extracting key points of the target point cloud data and calculating key point feature descriptors;
The key points of the target point cloud data are extracted as follows: traverse each point of the depth image, focusing on points whose depth changes markedly within the neighboring area, and perform edge detection; calculate the coefficient of surface change from the numerical fluctuation of the surface in the neighboring area, and find the principal direction of the change; compute an interest value from the principal direction, indicating the difference between the principal direction of change and the other directions; smooth-filter all the computed interest values; and apply non-maximum suppression to the filtered data to find the final NARF (Normal Aligned Radial Feature) key points. The shape features around the key points are described by calculating the FPFH (Fast Point Feature Histogram) descriptors of the key points, and these shape features serve as the registration basis between the template point cloud and the target point cloud.
3.2, point cloud rough registration;
The rough registration between the key points of the template point cloud and the target point cloud is completed through the sampling consistency initial registration algorithm, as follows: select suitable sample points from the key point data set P of the target point cloud data, setting a minimum pairing distance dmin so that the pairing distance between sample points is greater than dmin; for each sample point selected from data set P, find its initial corresponding point in the key point data set Q of the template point cloud data according to the feature histogram and store it, thereby determining the correspondence; an initial homogeneous transformation matrix can then be calculated from the sample points and their initial corresponding points.
3.3, point cloud fine registration.
The fine registration between the template point cloud and the target point cloud is completed with the iterative closest point algorithm, as follows: a. acquire the accurate corresponding points of the sample points among the key points of the template point cloud data according to the initial homogeneous transformation matrix; b. calculate the root mean square of the sample points and their accurate corresponding points, and search for the transformation that minimizes it; c. repeat steps (a) and (b), stopping the iterative calculation when the set number of iterations is reached or the error is smaller than a threshold, to obtain the homogeneous transformation matrix. Here the error criterion is that the absolute difference between the root mean square values of two successive iterations is smaller than the set target error threshold.
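A corresponding Open3D sketch of the iterative closest point refinement; the correspondence distance and stopping criteria below are assumptions standing in for the patent's iteration cap and error threshold:

```python
import open3d as o3d

reg = o3d.pipelines.registration

def fine_register(source, target, init_transform, voxel=0.005):
    """Refine the coarse alignment: ICP alternates between finding closest
    corresponding points under the current transform and solving for the
    rigid transform that minimizes their root mean square distance."""
    result = reg.registration_icp(
        source, target,
        max_correspondence_distance=voxel * 2,  # assumed heuristic
        init=init_transform,
        estimation_method=reg.TransformationEstimationPointToPoint(),
        criteria=reg.ICPConvergenceCriteria(relative_rmse=1e-6,
                                            max_iteration=50))
    return result.transformation  # final homogeneous transformation matrix
```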
The pose information calculation method under the mechanical arm base coordinate system comprises the following steps:
The combined homogeneous transformation matrix of the rotation matrix and translation vector taught by the mechanical arm with the homogeneous transformation matrix from registration is obtained; this is the pose information of the target object under the mechanical arm base coordinate system.
The method also comprises the steps of target object template point cloud making, scene point cloud obtaining and mechanical arm and depth camera combined teaching;
target object template point cloud making and scene point cloud obtaining:
acquiring a depth image of a scene through a depth camera, converting depth data into point cloud data through a general data acquisition interface OpenNI (open natural interaction interface) and storing the point cloud data to a local computer; and putting the template object Model into a scene, re-collecting point cloud data, and manually clipping the point cloud to obtain a point cloud template Model cloud of the target object.
Joint teaching of the mechanical arm and the depth camera: the depth camera is mounted on the mechanical arm, the initial position of the mechanical arm is set, the mechanical arm is manually adjusted to the position for grabbing the template target object Model, and the rotation matrix R1 and translation vector T1 are read through the mechanical arm control cabinet.
The present invention is not limited to the above embodiments. Although embodiments of the invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions, combinations and simplifications can be made to these embodiments without departing from the principles and spirit of the invention; such equivalents fall within the protection scope of the invention, which is defined by the appended claims and their equivalents.

Claims (6)

1. A mechanical arm positioning and grabbing method based on a point cloud template matching technology is characterized by comprising the following steps:
collecting scene point cloud data of a target object to be grabbed;
acquiring target point cloud data of a target object according to the scene point cloud data;
calculating pose information of a target object under the depth camera according to the target point cloud data;
calculating the pose information of the target object under a mechanical arm base coordinate system according to the pose information under the depth camera and the mechanical arm teaching;
planning the track and the corresponding posture of the grabbing target object according to the pose information under the mechanical arm base coordinate system and carrying out grabbing;
the pose information calculation method under the depth camera comprises the following steps:
extracting key points of the target point cloud data;
carrying out initial registration and fine registration on key points of the target point cloud data and the template point cloud data to obtain a homogeneous transformation matrix;
the method of initial registration comprises:
selecting sample points from the key points of the target point cloud data;
acquiring initial corresponding points of the sample points in key points of the template point cloud data according to the feature histogram;
acquiring an initial homogeneous transformation matrix according to the sample points and the initial corresponding points;
the method of fine registration comprises:
a. acquiring accurate corresponding points of the sample points in key points of the template point cloud data according to the initial homogeneous transformation matrix;
b. calculating the root mean square of the sample points and the accurate corresponding points, and obtaining the transformation that minimizes it;
c. repeating step (a) and step (b), and stopping the iterative calculation when the set number of iterations is reached or the error is smaller than a threshold value, to obtain the homogeneous transformation matrix.
2. The mechanical arm positioning and grabbing method based on the point cloud template matching technology according to claim 1, wherein the method for acquiring the target point cloud data comprises the following steps:
filtering the scene point cloud data to obtain a point cloud without noise points and background;
creating a plurality of three-dimensional voxel grids in the three-dimensional space where the filtered point cloud lies, and taking the center-of-gravity points of all points in each voxel grid as the sampling point cloud;
and segmenting the sampling point cloud to obtain the target point cloud data.
3. The mechanical arm positioning and grabbing method based on the point cloud template matching technology according to claim 1, wherein the pose information calculation method under the mechanical arm base coordinate system comprises the following steps:
obtaining the combined homogeneous transformation matrix of the rotation matrix and translation vector taught by the mechanical arm with the homogeneous transformation matrix from registration, namely the pose information of the target object under the mechanical arm base coordinate system.
4. The mechanical arm positioning and grabbing method based on the point cloud template matching technology according to claim 1, wherein the scene point cloud data acquisition method comprises the following steps:
the method comprises the steps of collecting a depth image of a scene through a depth camera, and converting depth image data into scene point cloud data through a universal data collection interface OpenNI.
5. A mechanical arm positioning and grabbing system based on a point cloud template matching technology is characterized by comprising a data acquisition module, a data processing and transmitting module and a mechanical arm control module which are sequentially connected;
the data acquisition module is used for acquiring scene point cloud data of a target object to be grabbed;
the data processing and transmitting module is used for acquiring pose information of the target object through the scene point cloud data and transmitting the pose information to the mechanical arm control module; the pose information calculation method comprises the following steps:
extracting key points of the target point cloud data; carrying out initial registration and fine registration on key points of the target point cloud data and the template point cloud data to obtain a homogeneous transformation matrix;
the method of initial registration comprises:
selecting sample points from the key points of the target point cloud data; acquiring initial corresponding points of the sample points in key points of the template point cloud data according to the feature histogram; and acquiring an initial homogeneous transformation matrix according to the sample points and the initial corresponding points;
the method of fine registration comprises: a. acquiring accurate corresponding points of the sample points in key points of the template point cloud data according to the initial homogeneous transformation matrix; b. calculating the root mean square of the sample points and the accurate corresponding points, and obtaining the transformation that minimizes it; c. repeating step (a) and step (b), and stopping the iterative computation when the set number of iterations is reached or the error is smaller than a threshold value, to obtain the homogeneous transformation matrix;
the mechanical arm control module is used for grabbing the target object according to the pose information.
6. The system according to claim 5, wherein the data acquisition module comprises a depth camera and a computer;
the depth camera is used for acquiring scene point cloud data of a target object to be grabbed and storing the scene point cloud data into a computer.
CN201910623223.6A 2019-07-11 2019-07-11 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology Active CN110340891B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910623223.6A CN110340891B (en) 2019-07-11 2019-07-11 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910623223.6A CN110340891B (en) 2019-07-11 2019-07-11 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology

Publications (2)

Publication Number Publication Date
CN110340891A CN110340891A (en) 2019-10-18
CN110340891B (en) 2022-05-24

Family

ID=68175836

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910623223.6A Active CN110340891B (en) 2019-07-11 2019-07-11 Mechanical arm positioning and grabbing system and method based on point cloud template matching technology

Country Status (1)

Country Link
CN (1) CN110340891B (en)

Families Citing this family (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110793437A (en) * 2019-10-23 2020-02-14 珠海格力智能装备有限公司 Positioning method and device of manual operator, storage medium and electronic equipment
CN111085997A (en) * 2019-12-17 2020-05-01 清华大学深圳国际研究生院 Capturing training method and system based on point cloud acquisition and processing
CN113021333A (en) * 2019-12-25 2021-06-25 沈阳新松机器人自动化股份有限公司 Object grabbing method and system and terminal equipment
CN110980276B (en) * 2019-12-30 2021-08-17 南京埃克里得视觉技术有限公司 Method for implementing automatic casting blanking by three-dimensional vision in cooperation with robot
CN111311679B (en) * 2020-01-31 2022-04-01 武汉大学 Free floating target pose estimation method based on depth camera
CN111168685B (en) * 2020-02-17 2021-06-18 上海高仙自动化科技发展有限公司 Robot control method, robot, and readable storage medium
CN111652936B (en) * 2020-08-06 2020-11-06 天津迦自机器人科技有限公司 Three-dimensional sensing and stacking planning method and system for open container loading
CN112366615B (en) * 2020-09-22 2022-04-22 亿嘉和科技股份有限公司 Non-bucket-moving line lapping method for scene with vertical distance of 1m
CN112509063A (en) * 2020-12-21 2021-03-16 中国矿业大学 Mechanical arm grabbing system and method based on edge feature matching
CN112509145B (en) * 2020-12-22 2023-12-08 珠海格力智能装备有限公司 Material sorting method and device based on three-dimensional vision
CN112883984B (en) * 2021-02-26 2022-12-30 山东大学 Mechanical arm grabbing system and method based on feature matching
CN113483664B (en) * 2021-07-20 2022-10-21 科派股份有限公司 Screen plate automatic feeding system and method based on line structured light vision
CN113793383A (en) * 2021-08-24 2021-12-14 江西省智能产业技术创新研究院 3D visual identification taking and placing system and method
CN116197886A (en) * 2021-11-28 2023-06-02 梅卡曼德(北京)机器人科技有限公司 Image data processing method, device, electronic equipment and storage medium
CN114241286B (en) * 2021-12-08 2024-04-12 浙江华睿科技股份有限公司 Object grabbing method and device, storage medium and electronic device
CN113927606B (en) * 2021-12-20 2022-10-14 湖南视比特机器人有限公司 Robot 3D vision grabbing method and system
CN114939891B (en) * 2022-06-28 2024-03-19 上海仙工智能科技有限公司 3D grabbing method and system for composite robot based on object plane characteristics
CN116503450A (en) * 2023-03-20 2023-07-28 盐城工学院 Tire marking method based on 3D visual positioning

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103824270A (en) * 2013-09-25 2014-05-28 浙江树人大学 Rapid disperse three-dimensional point cloud filtering method
JP2015201192A (en) * 2014-04-03 2015-11-12 エアバス デーエス ゲゼルシャフト ミット ベシュレンクテル ハフツング Detection of object position and direction
CN105225269A (en) * 2015-09-22 2016-01-06 浙江大学 Based on the object modelling system of motion
CN106041937A (en) * 2016-08-16 2016-10-26 河南埃尔森智能科技有限公司 Control method of manipulator grabbing control system based on binocular stereoscopic vision
CN106934833A (en) * 2017-02-06 2017-07-07 江苏华航威泰机器人科技有限公司 A kind of stacking material pick device and method at random
CN107598775A (en) * 2017-07-27 2018-01-19 芜湖微云机器人有限公司 It is a kind of by laser detect the method that simultaneously multi-axis robot is polished
CN107914272A (en) * 2017-11-20 2018-04-17 北京科技大学 A kind of method of seven freedom robot assemblies crawl target object

Also Published As

Publication number Publication date
CN110340891A (en) 2019-10-18

Similar Documents

Publication Publication Date Title
CN110340891B (en) Mechanical arm positioning and grabbing system and method based on point cloud template matching technology
CN107886528B (en) Distribution line operation scene three-dimensional reconstruction method based on point cloud
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
CN106529469B (en) Unmanned aerial vehicle-mounted LiDAR point cloud filtering method based on self-adaptive gradient
CN106022381B (en) Automatic extraction method of street lamp pole based on vehicle-mounted laser scanning point cloud
JP6216508B2 (en) Method for recognition and pose determination of 3D objects in 3D scenes
CN107481274B (en) Robust reconstruction method of three-dimensional crop point cloud
CN111553949B (en) Positioning and grabbing method for irregular workpiece based on single-frame RGB-D image deep learning
CN107862735B (en) RGBD three-dimensional scene reconstruction method based on structural information
CN104063711B (en) A kind of corridor end point fast algorithm of detecting based on K means methods
CN112818925B (en) Urban building and crown identification method
CN109816664B (en) Three-dimensional point cloud segmentation method and device
CN114004938B (en) Urban scene reconstruction method and device based on mass data
CN110070567A (en) A kind of ground laser point cloud method for registering
CN111145129A (en) Point cloud denoising method based on hyper-voxels
CN114821571A (en) Point cloud processing method for power cable identification and reconstruction
CN114898041A (en) Improved ICP method based on luminosity error
Yuan et al. 3D point cloud recognition of substation equipment based on plane detection
CN117409336A (en) Suburb power line network tree obstacle positioning and removing method based on point cloud
CN116452604A (en) Complex substation scene segmentation method, device and storage medium
CN115410036A (en) Automatic classification method for key element laser point clouds of high-voltage overhead transmission line
CN112365534B (en) Large coal pile volume measurement method based on monocular camera three-dimensional reconstruction
CN114782357A (en) Self-adaptive segmentation system and method for transformer substation scene
Zhang et al. Object detection and grabbing based on machine vision for service robot
CN113791400A (en) Laser radar-based stair parameter autonomous detection method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant