CN112720477B - Object optimal grabbing and identifying method based on local point cloud model

Object optimal grabbing and identifying method based on local point cloud model

Info

Publication number
CN112720477B
CN112720477B
Authority
CN
China
Prior art keywords
grabbing
point
point cloud
cuboid
cloud model
Prior art date
Legal status
Active
Application number
CN202011531432.7A
Other languages
Chinese (zh)
Other versions
CN112720477A (en)
Inventor
曾辉雄
李俊
程靖航
高银
谢银辉
杨进兴
Current Assignee
Quanzhou Institute of Equipment Manufacturing
Original Assignee
Quanzhou Institute of Equipment Manufacturing
Priority date
Filing date
Publication date
Application filed by Quanzhou Institute of Equipment Manufacturing
Priority to CN202011531432.7A
Publication of CN112720477A
Application granted
Publication of CN112720477B


Classifications

    • B25J9/1697 Programme-controlled manipulators; programme controls characterised by use of sensors, perception control, sensor fusion; vision controlled systems
    • B25J13/087 Controls for manipulators by means of sensing devices, e.g. viewing or touching devices, for sensing other physical parameters, e.g. electrical or chemical properties
    • B25J15/08 Gripping heads and other end effectors having finger members
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects

Abstract

The invention relates to an object optimal grabbing recognition method based on a local point cloud model. A local point cloud model of the measured object is first acquired with a line laser displacement sensor; the outline of the point cloud model is then extracted and grabbing cuboids are generated around the gravity center point; finally, the grabbing cuboids are quantitatively scored according to grabbing evaluation rules and the highest-scoring cuboid is taken as the optimal grabbing mode. The method automatically determines the optimal way to grab an object so that the grab can be executed with an operating tool such as a mechanical arm; it meets the requirements of accuracy, real-time performance and universality, and can be applied to fields such as industrial logistics sorting and handling.

Description

Object optimal grabbing and identifying method based on local point cloud model
Technical Field
The invention relates to the technical field of robot grabbing, in particular to an optimal grabbing recognition method for an object based on a local point cloud model.
Background
Robot grabbing is a very common application scenario, and it is inseparable from machine vision perception. In recent years, laser-based stereo vision has been widely developed and applied; compared with two-dimensional images it provides depth information and is less affected by ambient light, so it has broad application prospects in the field of robot grabbing, such as logistics sorting, baggage handling, and factory loading and unloading.
Most current research relies on binocular vision or RGB-D cameras (such as Kinect) to collect a local point cloud model of an object and extract grabbing points; such methods involve complex algorithms, poor universality and a large amount of computation. Most robots currently used in industrial settings are equipped with only a single camera, and the mainstream idea of monocular grabbing and positioning is to combine a motion model of the camera or the target with multi-frame image information to obtain three-dimensional information of the target. This approach requires camera calibration and motion modelling, involves image fusion, is computationally expensive during positioning, and is easily disturbed by the environment. If a single image is used for matching and recognizing the object, a model library of the object must be built in advance; the recognition accuracy is low, the universality is poor, and the placement of the object is strongly constrained.
Therefore, existing grabbing methods have shortcomings in universality, accuracy and other aspects, and the research results have limitations that make them difficult to apply on industrial sites.
Disclosure of Invention
In view of the above, the present invention aims to provide an object optimal grabbing and identifying method based on a local point cloud model, which meets the requirements of accuracy and universality and can be applied to industrial sites.
In order to achieve the above purpose, the invention adopts the following technical scheme:
an object optimal grabbing and identifying method based on a local point cloud model is characterized by comprising the following steps of: comprising
Obtaining a local point cloud model: acquiring a local point cloud model of an object to be grabbed;
grabbing cuboid generation: intercepting part of point cloud data according to the height information of the local point cloud model to form a new point cloud model; extracting an edge contour of the new point cloud model, and calculating a center point; generating a grabbing cuboid based on the central point;
grabbing cuboid score calculation step: calculating an outer contour point set of the new point cloud model intercepted by the grabbing cuboid; based on the outer contour point set, carrying out quantization scoring on the grabbing cuboid according to a preset grabbing rule;
determining an optimal grabbing mode: and determining the grabbing cuboid with the highest score as the optimal grabbing mode.
In the grabbing cuboid generation step, the highest point of the local point cloud model is detected, and a height h is then intercepted downwards from the highest point to obtain partial point cloud data and form a new point cloud model, the intercepted height h being a preset value.
In the grabbing cuboid generation step, the outer contour of the new point cloud model is extracted by searching for extreme points, and the center point of the new point cloud model is obtained by averaging the coordinates of the extreme points.
In the step of calculating the grabbing cuboid score, the preset grabbing rules are as follows:
a. the closer a point-set pair is to the center of the point cloud model, the better;
b. the smaller the standard deviation between the straight line fitted to each point set and the original points, the better;
c. the more parallel the straight line fitted to an intercepted point set is to the short side of the grabbing cuboid, the better;
d. the more parallel the two straight lines fitted to the two point sets of a pair are to each other, the better;
all four rules are considered and calculated only in the XY plane;
the quantization scoring mode is as follows:
for rule a, the shortest distance between the point cloud outer contour and the center point is calculated as a reference value x; for a point in the point set whose distance to the center point is y, the distance ratio with respect to x is calculated, and the ratios are averaged over the point set;
for the m groups of point-set pairs formed by the different grabbing cuboids, scores a1, a2, …, am are assigned according to the ranking of the average distance ratios, where m is the number of generated grabbing cuboids;
for rule b, a best-fit straight line is fitted to each point set by least squares, i.e. the straight line minimizing the sum of squared distances from the original points; the standard deviation between the fitted line and the original point set is then evaluated: the smaller the standard deviation, the better the straightness of the point set and the flatter the object edge, which favours grabbing; the m groups of point sets are sorted by standard deviation and assigned scores b1, b2, …, bm;
for rule c, the slopes of the straight lines fitted to the two point sets intercepted by a grabbing cuboid are calculated and compared with the slope of the short side of the grabbing cuboid; the closer the slopes, the higher the score; the m groups of point sets are sorted by slope proximity and assigned scores c1, c2, …, cm;
for rule d, the slopes of the straight lines fitted to the two point sets intercepted by a grabbing cuboid are compared with each other; the closer the two slopes, the more parallel the two lines, which facilitates clamping by the gripper; the m groups of point sets are sorted by slope proximity and assigned scores d1, d2, …, dm;
finally, the total score of each group of point cloud pairs is the sum of the four rule scores.
In the step of calculating the grabbing cuboid score, if the standard deviation between the fitted straight line of rule b and the point set is larger than a set threshold, the point set is not considered to be distributed along a straight line, and the results of rule c and rule d are directly assigned a score of 0; the remaining point sets are still scored according to the established rules.
In the step of calculating the grabbing cuboid score, the weight coefficients of the preset grabbing rules are set as follows: rule a: 0.5; rule b: 0.2; rule c: 0.2; rule d: 0.1.
In the step of determining the optimal grabbing mode, if the average height of the contour point set intercepted by a grabbing cuboid in the z-axis direction is lower than 5 mm, it is not considered as a grabbing option; likewise, in view of the grabbing success rate, if the average height difference between the two groups of point clouds intercepted by a grabbing cuboid is larger than 20 mm, it is not considered as a grabbing option.
With this scheme, a local point cloud model of the measured object is acquired by a line laser displacement sensor, the outline of the point cloud model is extracted, grabbing cuboids are generated around the gravity center point, the cuboids are quantitatively scored according to grabbing evaluation rules, and the optimal grabbing mode is finally determined. The method automatically determines the optimal way to grab an object so that the grab can be executed with an operating tool such as a mechanical arm; it meets the requirements of accuracy, real-time performance and universality, and can be applied to fields such as industrial logistics sorting and handling.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a point cloud model of an object to be grasped;
FIG. 3 is an outline point diagram of an object to be grasped;
FIG. 4 is a schematic view of the resulting grabbed cuboid;
FIG. 5 is a schematic diagram of point location determination;
fig. 6 is an outer contour point set diagram of a point cloud model.
Detailed Description
The invention discloses an object optimal grabbing and identifying method based on a local point cloud model as shown in fig. 1, which comprises the following steps:
and step 1, acquiring a local point cloud model of the object to be grabbed. As shown in fig. 2, in this embodiment, a local point cloud model of an object to be grasped is obtained based on a line laser sensor.
Step 2, part of the point cloud data is intercepted according to the height information of the local point cloud model to form a new point cloud model; the edge contour of the new point cloud model is extracted and its center point is calculated; m grabbing cuboids are then generated around the center point.
Specifically, as shown in fig. 3 and 4, in this embodiment the highest point of the local point cloud model is detected first; a height h is then intercepted downwards from the highest point, and the partial point cloud data thus obtained forms a new point cloud model. The intercepted height h is a preset value, set according to the specification of the robot's gripper. Next, the outer contour of the new point cloud model is extracted and its center point is calculated, and several grabbing cuboids are generated around the center point. In this embodiment, the outer contour of the new point cloud model is extracted by searching for extreme points, and the center point is obtained by averaging the coordinates of the extreme points.
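As an illustration of this step, the following Python/numpy sketch slices the cloud below its highest point, approximates the outer contour by extreme points and averages them into a center point. The function names (slice_below_top, extract_extreme_contour), the angular-binning way of searching for extreme points and the example value of h are assumptions made for illustration, not the patent's reference implementation.

```python
import numpy as np

def slice_below_top(points, h):
    """Keep only points within height h below the highest point (illustrative sketch)."""
    z_max = points[:, 2].max()
    return points[points[:, 2] >= z_max - h]

def extract_extreme_contour(points, n_bins=180):
    """Approximate the outer contour in the XY plane by keeping, for each angular
    bin around the centroid, the point farthest from it. This is one way to
    'search for extreme points'; the patent does not fix the exact procedure."""
    centroid_xy = points[:, :2].mean(axis=0)
    d = points[:, :2] - centroid_xy
    ang = np.arctan2(d[:, 1], d[:, 0])
    r = np.linalg.norm(d, axis=1)
    bins = np.floor((ang + np.pi) / (2 * np.pi) * n_bins).astype(int).clip(0, n_bins - 1)
    keep = []
    for b in range(n_bins):
        idx = np.where(bins == b)[0]
        if idx.size:
            keep.append(idx[np.argmax(r[idx])])
    return points[keep]

# Usage sketch: `cloud` is assumed to be the local point cloud model, shape (N, 3).
# new_cloud = slice_below_top(cloud, h=40.0)    # h preset from the gripper specification
# contour = extract_extreme_contour(new_cloud)
# center = contour.mean(axis=0)                 # average of the extreme-point coordinates
```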
In the invention, the number of grabbing cuboids depends on the included angle between adjacent grabbing cuboids. In general the included angle is set to 45° or 30°, giving 4 or 6 grabbing cuboids respectively. The length, width and height of a grabbing cuboid are determined by the specification of the robot's gripper, which must match the grabbing cuboid. In this embodiment the included angle is set to 45°, 4 grabbing cuboids are generated, and the length, width and height of each cuboid are 80 mm, 22 mm and 40 mm respectively.
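A possible way to represent and generate the grabbing cuboids is sketched below. Representing a cuboid by its center, yaw angle and dimensions is an assumption made for illustration; the 45° spacing and the 80 mm, 22 mm, 40 mm dimensions are taken from this embodiment.

```python
import numpy as np

def generate_grab_cuboids(center, angle_step_deg=45.0,
                          length=80.0, width=22.0, height=40.0):
    """Generate grabbing cuboids sharing the same center, rotated in the XY plane
    by multiples of the included angle (4 cuboids for 45 deg, 6 for 30 deg).
    A cuboid is represented here as a dict; this representation is an assumption."""
    m = int(round(180.0 / angle_step_deg))
    cuboids = []
    for k in range(m):
        yaw = np.deg2rad(k * angle_step_deg)
        cuboids.append({
            "center": np.asarray(center, dtype=float),
            "yaw": yaw,                       # rotation about the z axis
            "dims": (length, width, height),  # long side, short side, height
        })
    return cuboids
```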
Step 3, the outer contour point set of the new point cloud model intercepted by each grabbing cuboid is calculated, and the grabbing cuboids are quantitatively scored according to the preset grabbing rules based on these point sets.
In this embodiment, each grabbing cuboid contains some point cloud outer-contour points, i.e. points located inside that grabbing cuboid, and these interior points are determined as follows: a point lies inside the cuboid if and only if, for each of the cuboid's 3 pairs of parallel faces, the point lies on different sides of the two faces. Specifically, as shown in fig. 5, point A is on different sides of the parallel planes PL1 and PL2, while point B is on the same side of PL1 and PL2; point A is therefore an interior point of the cuboid and point B an exterior point. The side of a plane on which a space point lies can be determined from the angle between the vector from a cuboid vertex to the point and the normal vector of the corresponding pair of parallel planes (the arrows in fig. 5 denote the normal vectors): when the angles formed with the normal vector are both obtuse or both acute, the point is on the same side of the two parallel planes; when one angle is obtuse and the other acute, the point is on different sides of the two planes.
Therefore, only the sign of the cosine of the included angle needs to be checked to decide whether the angle is acute or obtuse; according to the inner product formula for vectors:
a·b=|a||b|cos∠(a,b)
the sign of the cosine is simply the sign of the vector dot product, so only the dot product needs to be evaluated. By performing this same-side/different-side test for the 3 pairs of parallel faces of the cuboid, the set of object outer-contour points contained in each grabbing cuboid is obtained. In theory, the m grabbing cuboids yield m point-set pairs, i.e. m × 2 clusters of points; in practice, however, a grabbing cuboid may intercept no outer-contour points at all. In this embodiment the transverse cuboid intercepts no outer-contour points and produces no point set, so only 3 point-set pairs, i.e. 6 clusters of points, are generated (see the circled point sets in fig. 6).
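The same-side/different-side test can be written directly from the dot-product criterion above. The sketch below assumes the cuboid representation from the earlier snippet; building the vertices and face normals this way is illustrative, while the inside test itself follows the sign rule just described.

```python
import numpy as np

def cuboid_vertices_and_normals(cub):
    """Vertices of a cuboid rotated by yaw about its center, plus the unit normals
    of its three pairs of parallel faces (assumed dict representation)."""
    l, w, h = cub["dims"]
    yaw = cub["yaw"]
    rot = np.array([[np.cos(yaw), -np.sin(yaw), 0.0],
                    [np.sin(yaw),  np.cos(yaw), 0.0],
                    [0.0, 0.0, 1.0]])
    half = np.array([l, w, h]) / 2.0
    corners = np.array([[sx, sy, sz] for sx in (-1, 1)
                        for sy in (-1, 1) for sz in (-1, 1)]) * half
    verts = corners @ rot.T + cub["center"]
    return verts, rot.T                     # rows of rot.T are the rotated x, y, z axes

def point_inside_cuboid(p, cub):
    """A point is inside iff, for each of the 3 pairs of parallel faces, the vectors
    from one vertex on each face to the point make one acute and one obtuse angle
    with the pair's normal, i.e. the two dot products have opposite signs."""
    verts, normals = cuboid_vertices_and_normals(cub)
    for n in normals:
        d = verts @ n                                  # signed extent of each vertex along n
        v_lo, v_hi = verts[np.argmin(d)], verts[np.argmax(d)]  # one vertex on each face
        s1 = np.dot(p - v_lo, n)
        s2 = np.dot(p - v_hi, n)
        if s1 * s2 > 0:                                # same side of both faces: outside
            return False
    return True
```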
After the model outer-contour point clouds contained in the grabbing cuboids are obtained, the point-cloud pair corresponding to each grabbing cuboid is evaluated quantitatively to determine which cuboid represents the optimal grabbing mode; for this purpose, grabbing rules must be preset with the grabbing success rate in mind. Since each point-cloud pair consists of two independent point sets, and the grabbing hardware must also be taken into account, this embodiment formulates the following grabbing rules:
a. the closer the point set pairs are to the center of the point cloud model, the better.
b. The smaller the standard deviation between the straight line fitted to each point set and the original points, the better.
c. The more parallel the straight line fitted to an intercepted point set is to the short side of the grabbing cuboid, the better.
d. The more parallel the two straight lines fitted to the two point sets of a pair are to each other, the better.
All four rules are considered and calculated only in the XY plane.
According to the preset grabbing rules, quantitative scoring is needed to select the optimal grabbing mode. The quantitative scoring is performed as follows:
For rule a, the shortest distance between the point cloud outer contour and the center point is calculated as a reference value x; for a point in a point set whose distance to the center point is y, its distance ratio with respect to x is computed, and the ratios are averaged over the point set.
For the m groups of point-set pairs formed by the different grabbing cuboids (the number of point-set pairs equals the number of generated grabbing cuboids), scores a1, a2, …, am are assigned according to the ranking of the average distance ratios (each pair uses the mean value over its two clusters of points). The scores a1, a2, …, am change monotonically with the ranking: if the m point-set pairs are sorted by distance ratio from large to small, the scores a1, a2, …, am increase along the order; if they are sorted from small to large, the scores decrease along the order. Rules b, c and d are scored in the same ranking-based way and are not described again.
In this embodiment there are 3 point-set pairs; they are sorted by distance ratio from large to small and assigned scores of 0.4, 0.3 and 0.2 respectively.
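A sketch of the rule-a scoring under stated assumptions: the distance ratio is taken as y / x (the text does not reproduce the formula), each pair is summarized by the mean ratio of its two clusters, and, consistent with rule a, the pair closest to the center receives the highest of the embodiment's scores 0.4, 0.3, 0.2; the sorting direction in the embodiment is stated ambiguously, so this choice is an assumption.

```python
import numpy as np

def rule_a_scores(point_set_pairs, contour, center, scores=(0.4, 0.3, 0.2)):
    """Rule a: the closer a point-set pair is to the model center, the better.
    Assumption: distance ratio = y / x, with x the shortest contour-to-center
    distance and y a point's distance to the center; a smaller mean ratio
    (closer to the center) earns a higher score."""
    x_ref = np.linalg.norm(contour[:, :2] - center[:2], axis=1).min()
    mean_ratios = []
    for cluster1, cluster2 in point_set_pairs:
        pts = np.vstack([cluster1, cluster2])
        y = np.linalg.norm(pts[:, :2] - center[:2], axis=1)
        mean_ratios.append((y / x_ref).mean())
    order = np.argsort(mean_ratios)            # smallest ratio (closest) first
    out = np.zeros(len(point_set_pairs))
    for rank, idx in enumerate(order):
        out[idx] = scores[rank] if rank < len(scores) else 0.0
    return out
```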
For rule b, a best-fit straight line is fitted to each cluster of points by least squares, i.e. the straight line minimizing the sum of squared distances from the original points; the standard deviation between the fitted line and the original point set is then evaluated. The smaller the standard deviation, the better the straightness of the point set and the flatter the object edge, which favours grabbing. The m groups of point-set pairs are sorted by standard deviation and assigned scores b1, b2, …, bm.
In this embodiment, the 3 point-set pairs are likewise sorted by standard deviation from small to large and assigned scores of 0.4, 0.3 and 0.2 respectively.
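A corresponding sketch for rule b; using np.polyfit (which minimizes vertical residuals) as a stand-in for the least-squares fit, and taking the spread of point-to-line distances as the standard deviation, are assumptions about details the text leaves open.

```python
import numpy as np

def fit_line_std(points_xy):
    """Least-squares line fit in the XY plane; returns (slope, intercept, std),
    where std is the standard deviation of the points' distances to the line."""
    x, y = points_xy[:, 0], points_xy[:, 1]
    slope, intercept = np.polyfit(x, y, deg=1)
    dist = np.abs(slope * x - y + intercept) / np.hypot(slope, 1.0)
    return slope, intercept, dist.std()

def rule_b_scores(point_set_pairs, scores=(0.4, 0.3, 0.2)):
    """Rule b: pairs whose clusters lie closer to straight lines (smaller std) score higher."""
    stds = []
    for cluster1, cluster2 in point_set_pairs:
        s1 = fit_line_std(cluster1[:, :2])[2]
        s2 = fit_line_std(cluster2[:, :2])[2]
        stds.append((s1 + s2) / 2.0)
    order = np.argsort(stds)                    # smallest std (straightest edges) first
    out = np.zeros(len(point_set_pairs))
    for rank, idx in enumerate(order):
        out[idx] = scores[rank] if rank < len(scores) else 0.0
    return out
```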
For rule c, the slopes of the straight lines fitted to the two clusters of points intercepted by a grabbing cuboid are calculated and compared with the slope of the cuboid's short side; the closer the fitted slope is to the short-side slope, the higher the score. The m groups of point sets are sorted by slope proximity and assigned scores c1, c2, …, cm.
In this embodiment, the 3 point-set pairs are likewise sorted by slope proximity from high to low and assigned the scores above, i.e. 0.4, 0.3 and 0.2 respectively.
It should be noted that if the standard deviation between the fitted line and a point set calculated under rule b exceeds the preset threshold, the point set is not distributed along a straight line and the calculations of rule c and rule d carry little meaning, so those scores are directly assigned 0; the other point sets are still scored according to the established rules. The preset threshold is set by the user according to the actual application, and is generally chosen fairly large.
For rule d, the slopes of the straight lines fitted to the two clusters of points intercepted by a grabbing cuboid are compared with each other; the closer the two slopes, the more parallel the two lines, which facilitates clamping by the gripper. The m point sets are sorted by slope proximity and assigned scores d1, d2, …, dm.
In this embodiment, the 3 point-set pairs are likewise sorted by slope proximity from high to low and assigned the scores above, i.e. 0.4, 0.3 and 0.2 respectively.
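Rules c and d can be scored together from the fitted slopes; the sketch below also applies the rule-b degeneracy threshold described above. Comparing line directions by their angle, and averaging the two clusters' deviations for rule c, are illustrative choices, as are the helper names.

```python
import numpy as np

def angle_of_slope(slope):
    """Direction angle (in [0, pi)) of a line with the given slope in the XY plane."""
    return np.arctan(slope) % np.pi

def angle_between_lines(a1, a2):
    """Smallest angle between two line directions; 0 means parallel."""
    d = abs(a1 - a2) % np.pi
    return min(d, np.pi - d)

def rule_c_d_scores(fitted, short_side_angles, stds, std_threshold,
                    scores=(0.4, 0.3, 0.2)):
    """Rules c and d. `fitted` holds, per pair, the slopes of the two fitted lines;
    `short_side_angles` the direction of each cuboid's short side; `stds` the rule-b
    standard deviations. Pairs whose std exceeds the threshold (not line-like) keep 0."""
    m = len(fitted)
    c_dev, d_dev = np.full(m, np.inf), np.full(m, np.inf)
    for i, (k1, k2) in enumerate(fitted):
        if stds[i] > std_threshold:
            continue                              # degenerate pair: leave score at 0
        a1, a2 = angle_of_slope(k1), angle_of_slope(k2)
        # rule c: average deviation of the two fitted lines from the short side
        c_dev[i] = (angle_between_lines(a1, short_side_angles[i]) +
                    angle_between_lines(a2, short_side_angles[i])) / 2.0
        # rule d: deviation of the two fitted lines from each other
        d_dev[i] = angle_between_lines(a1, a2)

    def rank_scores(dev):
        out = np.zeros(m)
        for rank, idx in enumerate(np.argsort(dev)):
            if np.isfinite(dev[idx]) and rank < len(scores):
                out[idx] = scores[rank]
        return out

    return rank_scores(c_dev), rank_scores(d_dev)
```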
Finally, the total score of each point-cloud pair is the sum of the four rule scores. Considering that the rules are relied upon to different degrees, this embodiment assigns a weight coefficient to each rule; after manually supervised scoring tests, the weight coefficients are set as follows: rule a: 0.5; rule b: 0.2; rule c: 0.2; rule d: 0.1.
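The weighted combination of the four rule scores might then look as follows, using the embodiment's weight coefficients; the function name is illustrative.

```python
def total_scores(a, b, c, d, weights=(0.5, 0.2, 0.2, 0.1)):
    """Weighted sum of the four rule scores per point-set pair
    (weights from the embodiment: rule a 0.5, b 0.2, c 0.2, d 0.1)."""
    wa, wb, wc, wd = weights
    return [wa * ai + wb * bi + wc * ci + wd * di
            for ai, bi, ci, di in zip(a, b, c, d)]
```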
Step 4, the grabbing cuboid with the highest score is determined as the optimal grabbing mode. In addition, for operability, if the average height of the contour point set intercepted by a grabbing cuboid in the z-axis direction is lower than 5 mm, it is not considered as a grabbing option; likewise, in view of the grabbing success rate, if the average height difference between the two groups of point clouds intercepted by a grabbing cuboid is larger than 20 mm, it is not considered as a grabbing option.
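Finally, a sketch of the selection step with the two operability filters; treating the z values as heights in millimetres above the support plane is an assumption.

```python
import numpy as np

def select_best_grab(cuboids, point_set_pairs, totals,
                     min_mean_z=5.0, max_height_diff=20.0):
    """Pick the highest-scoring grabbing cuboid, discarding pairs whose mean contour
    height is below 5 mm or whose two clusters differ in mean height by more than
    20 mm (thresholds from the embodiment; millimetre units assumed)."""
    best_idx, best_score = None, -np.inf
    for i, (c1, c2) in enumerate(point_set_pairs):
        z1, z2 = c1[:, 2].mean(), c2[:, 2].mean()
        if (z1 + z2) / 2.0 < min_mean_z:
            continue                      # too low to reach
        if abs(z1 - z2) > max_height_diff:
            continue                      # gripping faces too uneven
        if totals[i] > best_score:
            best_idx, best_score = i, totals[i]
    return None if best_idx is None else cuboids[best_idx]
```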
In summary, the invention first acquires a local point cloud model of the measured object with a line laser displacement sensor, then extracts the outline of the point cloud model and generates grabbing cuboids around the gravity center point, and finally scores the grabbing cuboids quantitatively according to grabbing evaluation rules to determine the optimal grabbing mode. The method automatically determines the optimal way to grab an object so that the grab can be executed with an operating tool such as a mechanical arm; it meets the requirements of accuracy, real-time performance and universality, and can be applied to fields such as industrial logistics sorting and handling.
The foregoing embodiments are not intended to limit the technical scope of the present invention; any minor modifications, equivalent variations and refinements made to the above embodiments according to the technical principles of the present invention still fall within the scope of the technical solution of the present invention.

Claims (6)

1. An object optimal grabbing and identifying method based on a local point cloud model, characterized by comprising the following steps:
Obtaining a local point cloud model: acquiring a local point cloud model of an object to be grabbed;
grabbing cuboid generation: intercepting part of point cloud data according to the height information of the local point cloud model to form a new point cloud model; extracting an edge contour of the new point cloud model, and calculating a center point; generating a grabbing cuboid based on the central point;
grabbing cuboid score calculation step: calculating an outer contour point set of the new point cloud model intercepted by the grabbing cuboid; based on the outer contour point set, carrying out quantization scoring on the grabbing cuboid according to a preset grabbing rule; the preset grabbing rules are as follows:
a. the closer the point set pairs are to the center of the point cloud model, the better;
b. the smaller the standard deviation between the straight line fitted to each point set and the original points, the better;
c. the more parallel the straight line fitted to an intercepted point set is to the short side of the grabbing cuboid, the better;
d. the more parallel the two straight lines fitted to the two point sets of a pair are to each other, the better;
all four rules are considered and calculated only in the XY plane;
the quantization scoring mode is as follows:
for rule a, the shortest distance between the point cloud outer contour and the center point is calculated as a reference value x; for a point in the point set whose distance to the center point is y, the distance ratio with respect to x is calculated, and the ratios are averaged over the point set;
for m groups of point set pairs formed by different grabbing cuboids, assigning values of a1, a2, … and am according to the sorting of the distance ratio; wherein m is the number of the generated gripping cuboids;
for rule b, a best-fit straight line is fitted to each point set by least squares, i.e. the straight line minimizing the sum of squared distances from the original points; the standard deviation between the fitted line and the original point set is then evaluated: the smaller the standard deviation, the better the straightness of the point set and the flatter the object edge, which favours grabbing; the m groups of point sets are sorted by standard deviation and assigned scores b1, b2, …, bm;
for rule c, the slopes of the straight lines fitted to the two point sets intercepted by a grabbing cuboid are calculated and compared with the slope of the short side of the grabbing cuboid; the closer the slopes, the higher the score; the m groups of point sets are sorted by slope proximity and assigned scores c1, c2, …, cm;
for rule d, the slopes of the straight lines fitted to the two point sets intercepted by a grabbing cuboid are compared with each other; the closer the two slopes, the more parallel the two lines, which facilitates clamping by the gripper; the m groups of point sets are sorted by slope proximity and assigned scores d1, d2, …, dm;
finally, the total score of each group of point cloud pairs is the sum of the four rule scores;
determining an optimal grabbing mode: and determining the grabbing cuboid with the highest score as the optimal grabbing mode.
2. The method for optimally grabbing and identifying the object based on the local point cloud model according to claim 1, characterized in that: in the grabbing cuboid generation step, the highest point of the local point cloud model is detected, and a height h is then intercepted downwards from the highest point to obtain partial point cloud data and form a new point cloud model, the intercepted height h being a preset value.
3. The method for optimally grabbing and identifying the object based on the local point cloud model according to claim 1, characterized in that: in the grabbing cuboid generation step, the outer contour of the new point cloud model is extracted by searching for extreme points, and the center point of the new point cloud model is obtained by averaging the coordinates of the extreme points.
4. The method for optimally grabbing and identifying the object based on the local point cloud model according to claim 1, characterized in that: in the step of calculating the grabbing cuboid score, if the standard deviation between the fitted straight line of rule b and the point set is larger than a set threshold, the point set is not considered to be distributed along a straight line, and the results of rule c and rule d are directly assigned a score of 0; the remaining point sets are still scored according to the established rules.
5. The method for optimally grabbing and identifying the object based on the local point cloud model according to claim 1, characterized in that: in the step of calculating the grabbing cuboid score, the weight coefficients of the preset grabbing rules are set as follows: rule a: 0.5; rule b: 0.2; rule c: 0.2; rule d: 0.1.
6. The method for optimally grabbing and identifying the object based on the local point cloud model according to claim 1, characterized in that: in the step of determining the optimal grabbing mode, if the average height of the contour point set intercepted by a grabbing cuboid in the z-axis direction is lower than 5 mm, it is not considered as a grabbing option; likewise, in view of the grabbing success rate, if the average height difference between the two groups of point clouds intercepted by a grabbing cuboid is larger than 20 mm, it is not considered as a grabbing option.
CN202011531432.7A 2020-12-22 2020-12-22 Object optimal grabbing and identifying method based on local point cloud model Active CN112720477B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011531432.7A CN112720477B (en) 2020-12-22 2020-12-22 Object optimal grabbing and identifying method based on local point cloud model

Publications (2)

Publication Number Publication Date
CN112720477A (en) 2021-04-30
CN112720477B (en) 2024-01-30

Family

ID=75605818


Country Status (1)

Country Link
CN (1) CN112720477B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114986521B (en) * 2022-08-01 2022-11-15 深圳市信润富联数字科技有限公司 Object grabbing method and device, electronic equipment and readable storage medium
CN116045833B (en) * 2023-01-03 2023-12-22 中铁十九局集团有限公司 Bridge construction deformation monitoring system based on big data

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017015898A1 (en) * 2015-07-29 2017-02-02 Abb 瑞士股份有限公司 Control system for robotic unstacking equipment and method for controlling robotic unstacking
CA3084650A1 (en) * 2018-11-13 2020-05-13 Mycionics Inc. Vision system for automated harvester and method for operating a vision system for an automated harvester
CN110017773A (en) * 2019-05-09 2019-07-16 福建(泉州)哈工大工程技术研究院 A kind of package volume measuring method based on machine vision
CN110310331A (en) * 2019-06-18 2019-10-08 哈尔滨工程大学 A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN110653820A (en) * 2019-09-29 2020-01-07 东北大学 Robot grabbing pose estimation method combined with geometric constraint
CN111080693A (en) * 2019-11-22 2020-04-28 天津大学 Robot autonomous classification grabbing method based on YOLOv3
CN111144242A (en) * 2019-12-13 2020-05-12 中国科学院深圳先进技术研究院 Three-dimensional target detection method and device and terminal
CN111227444A (en) * 2020-01-17 2020-06-05 泉州装备制造研究所 3D sole glue spraying path planning method based on k nearest neighbor
CN112060087A (en) * 2020-08-28 2020-12-11 佛山隆深机器人有限公司 Point cloud collision detection method for robot to grab scene

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
A robot grasping control method based on combining 3D point cloud depth information with centroid distance; 邹遇, 熊禾根, 陶永, 任帆, 陈超勇, 江山; High Technology Letters (高技术通讯), No. 05; full text *
A grasping method for unknown objects based on single point cloud information; 叶仙, 胡洁, 邵全全, 戚进, 方懿; Digital Design (数码设计), No. 06; full text *

Also Published As

Publication number Publication date
CN112720477A (en) 2021-04-30

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant