CN110223345B - Point cloud-based distribution line operation object pose estimation method

Point cloud-based distribution line operation object pose estimation method

Info

Publication number
CN110223345B
CN110223345B (application CN201910397857.4A)
Authority
CN
China
Prior art keywords
point cloud
operation object
pose
Prior art date
Legal status
Active
Application number
CN201910397857.4A
Other languages
Chinese (zh)
Other versions
CN110223345A (en)
Inventor
肖潇
郭毓
蔡梁
吴钧浩
张冕
郭飞
Current Assignee
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Nanjing University of Science and Technology
Priority to CN201910397857.4A
Publication of CN110223345A
Application granted
Publication of CN110223345B

Classifications

    • B - PERFORMING OPERATIONS; TRANSPORTING
    • B25 - HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J - MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 - Programme-controlled manipulators
    • B25J 9/16 - Programme controls
    • B25J 9/1694 - Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 - Vision controlled systems
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/70 - Determining position or orientation of objects or cameras
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/26 - Segmentation of patterns in the image field; cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; detection of occlusion
    • G06V 10/267 - Segmentation by performing operations on regions, e.g. growing, shrinking or watersheds
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10028 - Range image; depth image; 3D point clouds
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y04 - INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
    • Y04S - SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
    • Y04S 10/00 - Systems supporting electrical power generation, transmission or distribution
    • Y04S 10/50 - Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications


Abstract

The invention discloses a point cloud-based method for estimating the pose of a distribution line operation object, comprising the following steps: collecting point cloud data of a work scene containing the object to be measured; cropping the point cloud; computing the average distance between points, setting a confidence interval from it, and rejecting points outside the confidence interval; performing semantic segmentation on the point cloud to obtain the operation object point cloud, which serves as the point cloud set P to be registered; building a three-dimensional model of the operation object whose pose is to be estimated and converting it into the point cloud PCD format, thereby establishing a point cloud model of the operation object that serves as the reference point cloud set Q; coarsely registering the point cloud set P to be registered with the reference point cloud set Q so that their reference coordinate frames coincide, obtaining the initial pose of the operation object; and correcting the initial pose to obtain the final pose. The invention can quickly and accurately measure the pose of an operation object on a distribution line against a cluttered background, and is strongly robust to illumination changes.

Description

Point cloud-based distribution line operation object pose estimation method
Technical Field
The invention belongs to the field of pose measurement for objects handled by live-line working robots, and particularly relates to a point cloud-based method for estimating the pose of distribution line operation objects.
Background
With the rapid development of robotics, robots play an increasingly important role in modern production. Introducing live-line working robots into the power industry to replace manual power maintenance and repair work can effectively prevent casualties during live-line operations and greatly improve the efficiency of power maintenance.
At present, most live-line working robots developed at home and abroad for distribution line maintenance require operators to work at height, or to teleoperate manipulators by means of video monitoring. Because target positioning accuracy is low, tasks that require precise positioning, such as replacing a lightning arrester or connecting and lapping lead wires, are extremely difficult to perform, which makes measuring the pose of the operation object necessary. However, a live working site is a complex environment with many devices and fittings whose uniform colors make them hard to distinguish from the background, and these factors make target pose measurement difficult. Commonly used pose measurement methods, such as LINEMOD applied to RGB-D images, obtain good pose estimates by exploiting stable gradient information and surface-normal features, but they are not robust to illumination. When the illumination in the scene changes, the color distribution of the target changes with it, destabilizing the model and causing the pose measurement to fail.
Disclosure of Invention
The invention aims to provide a distribution line operation object pose estimation method that is robust to illumination changes and runs in real time.
The technical solution that realizes this aim is as follows: a point cloud-based distribution line operation object pose estimation method comprising the following steps:
step 1, collecting, with a depth camera, point cloud data of a work scene containing the object to be measured;
step 2, cropping the point cloud obtained in step 1 to reduce the number of points;
step 3, computing the average distance between points, setting a confidence interval from it, and then rejecting points outside the confidence interval;
step 4, performing semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment and obtain the operation object point cloud;
step 5, building, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated and converting it into the point cloud PCD format, thereby establishing a point cloud model of the operation object that serves as the reference point cloud set Q;
step 6, taking the operation object point cloud as the point cloud set P to be registered and coarsely registering it with the reference point cloud set Q so that the reference coordinate frames of P and Q coincide, obtaining the initial pose of the operation object;
step 7, correcting the initial pose to obtain the final pose of the operation object.
Compared with the prior art, the invention has the following notable advantages: 1) three-dimensional vision is used to measure the pose of the operation target in an unstructured environment, so that accurate pose measurements can be obtained for distribution line operation objects against cluttered backgrounds, with strong robustness to illumination changes; 2) PointNet is improved: it is pre-trained on the Stanford 3D Indoor Dataset and then fine-tuned on a custom database, which speeds up parameter training and improves the efficiency of the whole method; 3) the operation object pose is estimated by combining PCA with an improved ICP registration algorithm, which effectively avoids the classical ICP algorithm's tendency to fall into local optima while preserving real-time registration performance.
The present invention is described in further detail below with reference to the attached drawings.
Drawings
FIG. 1 is a flow chart of the point cloud-based distribution line operation object pose estimation method of the present invention.
FIG. 2 is a diagram of a PointNet semantic segmentation network structure according to the present invention.
Fig. 3 is a point cloud diagram in an embodiment of the present invention, where (a) is an original point cloud diagram of a power distribution line maintenance operation scene acquired by a depth camera, (b) is a point cloud diagram after region of interest selection, (c) is a point cloud diagram after adaptive voxel grid filtering, and (d) is a point cloud diagram after outlier removal.
FIG. 4 is a diagram illustrating semantic segmentation results of a job scenario according to an embodiment of the present invention.
Fig. 5 is a schematic view of a reference point cloud model of the lightning arrester in the embodiment of the invention.
Fig. 6 is a point cloud diagram of the lightning arrester point cloud to be registered and the reference lightning arrester before the initial transformation in the embodiment of the invention.
Fig. 7 is a point cloud diagram of the lightning arrester point cloud to be registered and the reference lightning arrester after the initial transformation in the embodiment of the invention.
Fig. 8 is a point cloud diagram of the lightning arrester point cloud to be registered and the reference lightning arrester after the precise registration is adopted in the embodiment of the invention.
Detailed Description
With reference to fig. 1, the point cloud-based distribution line operation object pose estimation method comprises the following steps:
step 1, collecting, with a depth camera, point cloud data of a work scene containing the object to be measured;
step 2, cropping the point cloud obtained in step 1 to reduce the number of points;
step 3, computing the average distance between points, setting a confidence interval from it, and then rejecting points outside the confidence interval, thereby discarding interference information and handling the outliers present in the acquired point cloud data;
step 4, performing semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment and obtain the operation object point cloud;
step 5, building, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated and converting it into the point cloud PCD format, thereby establishing a point cloud model of the operation object that serves as the reference point cloud set Q;
step 6, taking the operation object point cloud as the point cloud set P to be registered and coarsely registering it with the reference point cloud set Q so that the reference coordinate frames of P and Q coincide, obtaining the initial pose of the operation object;
step 7, correcting the initial pose to obtain the final pose of the operation object.
Further, in step 2, the point cloud of step 1 is cropped to reduce the number of points so as to meet the real-time requirement of target point cloud pose estimation, specifically:
step 2-1, cropping the point cloud of the whole field of view down to the region of interest, i.e. the surrounding region containing the operation object, by means of a conditional filtering algorithm;
step 2-2, on the basis of step 2-1, further reducing the number of points with an adaptive voxel grid method, without impairing feature extraction. When the point cloud resolution is small, the grid size is reduced, avoiding a simplified cloud that is too sparse for subsequent feature extraction; when the point cloud resolution is large, the grid size is increased accordingly, effectively reducing the number of points.
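As a concrete illustration of steps 2-1 and 2-2, the following is a minimal sketch using Open3D; the axis-aligned crop stands in for the conditional filter of step 2-1, and the ROI bounds and the scale factor k are illustrative assumptions rather than values given in the patent.

```python
import numpy as np
import open3d as o3d

def crop_and_downsample(cloud, roi_min, roi_max, k=3.0):
    # Step 2-1: keep only the region of interest around the operation object
    # (an axis-aligned box here; the patent uses a conditional filter).
    box = o3d.geometry.AxisAlignedBoundingBox(np.asarray(roi_min, dtype=float),
                                              np.asarray(roi_max, dtype=float))
    roi = cloud.crop(box)
    # Step 2-2: adapt the voxel size to the cloud's resolution (mean nearest-
    # neighbor distance), so sparse clouds keep detail and dense ones shrink.
    resolution = np.mean(roi.compute_nearest_neighbor_distance())
    return roi.voxel_down_sample(voxel_size=k * resolution)
```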
Further preferably, the average distance between points in step 3 is computed by means of a kd-tree.
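The following is a minimal sketch of the step-3 outlier rejection, with SciPy's cKDTree standing in for the kd-tree; the neighborhood size k and the confidence width std_ratio are assumed values, not parameters specified in the patent.

```python
import numpy as np
from scipy.spatial import cKDTree

def remove_outliers(points, k=30, std_ratio=1.0):
    """points: (n, 3) array; returns the inlier subset."""
    tree = cKDTree(points)
    # Distances to the k nearest neighbors (query k+1: the first hit is the
    # point itself at distance 0, so it is dropped below).
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)
    # Confidence interval: global mean +/- std_ratio standard deviations.
    mu, sigma = mean_dist.mean(), mean_dist.std()
    keep = np.abs(mean_dist - mu) <= std_ratio * sigma
    return points[keep]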
Further, in step 4, semantic segmentation is performed on the point cloud, specifically with an improved PointNet deep neural network; point cloud semantic segmentation means classifying every point in the cloud, achieving segmentation by category. The overall architecture of PointNet is shown in fig. 2: the inputs are blocks of N 3D points each, an MLP network learns a D1-dimensional local feature for each 3D point, and a D2-dimensional global feature of the block is computed through the max pooling layer. Finally the D1-dimensional local features and the D2-dimensional global feature are fused and processed by a further MLP, which outputs the semantic label score of each point in the block (MLP denotes a multi-layer perceptron, M denotes the max pooling layer, and C denotes the fusion operation); a minimal sketch of this architecture is given after the steps below. The specific steps are as follows:
step 4-1, rapidly labeling the point cloud with a point cloud labeling system;
step 4-2, pre-training PointNet on the Stanford 3D Indoor Dataset, setting the block size to b₁×b₁, the step length to l₁, and the number of points to N = n₁, and initializing each layer's parameters from a Gaussian distribution with standard deviation σ;
step 4-3, starting from the pre-trained model parameters of step 4-2, freezing the first three convolutional layers and continuing training on samples of the operation object, setting the block size to b₂×b₂, the step length to l₂, and N = n₂, and outputting the point cloud data; where b₂ < b₁, l₂ < l₁, n₂ < n₁.
Illustratively, in step 4-2, b₁×b₁ = 1×1 m², l₁ = 0.5 m, N = n₁ = 4096, D = 9, σ = 0.001; in step 4-3, b₂×b₂ = 0.1×0.1 m², l₂ = 0.05 m, N = n₂ = 128.
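The following is a minimal PyTorch sketch of the segmentation network described above (shared per-point MLP, max-pooled global feature, fusion, per-point scores); the layer widths and the default class count follow the common PointNet/S3DIS settings and are assumptions, since the patent does not list them.

```python
import torch
import torch.nn as nn

class PointNetSeg(nn.Module):
    """Per-point semantic scores from blocks of N points with D input channels."""
    def __init__(self, in_dim=9, num_classes=13):
        super().__init__()
        # Shared per-point MLP producing the D1-dimensional local feature.
        self.local_mlp = nn.Sequential(
            nn.Conv1d(in_dim, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 64, 1), nn.ReLU(),
            nn.Conv1d(64, 1024, 1), nn.ReLU())
        # MLP applied after fusing the local and global features.
        self.seg_mlp = nn.Sequential(
            nn.Conv1d(2048, 512, 1), nn.ReLU(),
            nn.Conv1d(512, 256, 1), nn.ReLU(),
            nn.Conv1d(256, num_classes, 1))

    def forward(self, x):                  # x: (batch, in_dim, N)
        local = self.local_mlp(x)          # (batch, 1024, N) local features
        glob = local.max(dim=2, keepdim=True).values   # max-pooled global feature
        fused = torch.cat([local, glob.expand_as(local)], dim=1)  # fusion (C)
        return self.seg_mlp(fused)         # (batch, num_classes, N) point scores

# Fine-tuning as in step 4-3: freeze the early shared layers, train the rest.
# model = PointNetSeg()
# for p in model.local_mlp[:6].parameters():
#     p.requires_grad = False
```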
Further preferably, in step 6, the point cloud set P to be registered and the reference point cloud set Q are coarsely registered so that their reference coordinate frames coincide, specifically by Principal Component Analysis (PCA). For the point cloud set P to be registered and the reference point cloud set Q, the center of gravity and the covariance of each cloud are computed; taking the center of gravity as the coordinate origin of the point set and the eigenvectors of the covariance matrix as the three coordinate axes, a reference coordinate frame is established for each of the two clouds. Adjusting the reference frames of the reference cloud and the cloud whose pose is to be estimated so that they coincide achieves the coarse registration.
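A minimal sketch of this PCA coarse registration is given below, using the centered covariance eigenvectors as the frame axes; note that in practice the eigenvector signs (and hence a possible reflection) would need disambiguation, which this sketch omits.

```python
import numpy as np

def pca_coarse_align(P, Q):
    """Return (R, t) roughly mapping cloud P (n,3) onto cloud Q (m,3)."""
    def frame(X):
        c = X.mean(axis=0)                           # center of gravity -> origin
        _, vecs = np.linalg.eigh(np.cov((X - c).T))  # eigenvectors -> frame axes
        return c, vecs
    cP, aP = frame(P)
    cQ, aQ = frame(Q)
    R = aQ @ aP.T          # rotate P's principal axes onto Q's axes
    t = cQ - R @ cP        # then align the centers of gravity
    return R, t
```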
Further, the initial pose is corrected in step 7, specifically with an improved ICP algorithm. The theoretical basis of the ICP algorithm is as follows: consider the three-dimensional data of two point clouds, i.e. two sets of 3D points P = {p₁, p₂, ..., pₙ} and Q = {q₁, q₂, ..., qₙ}, where P is the point cloud set to be registered and Q is the reference point cloud set. During the iteration, P undergoes successive rigid transformations and gradually approaches Q. The ICP algorithm seeks the rigid transformation matrix (comprising the rotation matrix R and the translation vector t) that minimizes the registration error function between the transformed P and Q. The ICP algorithm must do two things: first, acquire point-set correspondences according to the nearest-neighbor principle; second, compute the rigid transformation matrix from those correspondences. Correcting the initial pose with the improved ICP algorithm (a simplified sketch follows the steps below) specifically comprises:
step 7-1, taking the initial pose as the initial value of the rigid transformation matrix [R|t], where R is the rotation matrix, t is the translation vector, and [R|t] is the pose of the operation object;
step 7-2, establishing correspondences between the point cloud set P to be registered and the reference point cloud set Q:
step 7-2-1, acquiring the curvature geometric features of P and Q, and classifying the points in P and Q according to the magnitude of their curvature values;
step 7-2-2, scanning the points of P one by one, each point being marked as a query point; according to the class the query point belongs to, searching the reference points of the same class for several points of high curvature similarity as candidate points, the candidates being selected by the condition:
|k₁(pᵢ) − k₁(qⱼ)| ≤ ε₁,  ‖k₂(pᵢ) − k₂(qⱼ)‖ ≤ ε₂
where ε₁ and ε₂ are preset initial thresholds, pᵢ denotes the i-th point in the point cloud set P to be registered, qⱼ denotes the j-th point in the reference point cloud set Q, k₁(pᵢ) and k₁(qⱼ) denote the principal curvatures of pᵢ and qⱼ, and k₂(pᵢ) and k₂(qⱼ) denote the normal vectors of pᵢ and qⱼ, respectively;
step 7-2-3, searching the K-neighborhood of each candidate point, and pairing the neighborhood point closest to the query point with the query point to form a point pair;
step 7-3, removing mismatched point pairs, specifically: if the distance between pᵢ and qⱼ exceeds the distance threshold dₜ, the pair is judged a mismatch and removed;
step 7-4, solving for the rigid transformation matrix [R|t], i.e. the pose of the operation object, by minimizing the registration error function:
E(R, t) = (1/n) Σᵢ₌₁ⁿ ‖qᵢ − (R·pᵢ + t)‖²
step 7-5, updating the point cloud set P to be registered with the R and t obtained in step 7-4, specifically: comparing E(R, t) with a preset threshold p₁; if E(R, t) < p₁, directly outputting the final pose [R|t] of the operation object; otherwise, returning to step 7-2 until E(R, t) < p₁ or the number of iterations exceeds the preset maximum C, after which the final pose [R|t] of the operation object is output.
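The following is a simplified sketch of this refinement loop: it keeps the distance-threshold rejection of step 7-3 and a closed-form solution for step 7-4 (via SVD, one standard way to minimize E(R, t)), but replaces the curvature-based candidate search of step 7-2 with plain nearest-neighbor matching, so it is an approximation of the improved algorithm, not a reproduction of it.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_refine(P, Q, R, t, d_t=0.05, eps=1e-6, max_iter=50):
    """Refine an initial (R, t) so that R @ p + t maps P (n,3) onto Q (m,3)."""
    tree = cKDTree(Q)
    prev_err = np.inf
    for _ in range(max_iter):
        d, j = tree.query(P @ R.T + t)      # nearest-point correspondences
        mask = d < d_t                      # step 7-3: reject distant pairs
        if mask.sum() < 3:                  # not enough pairs to solve
            break
        src, dst = P[mask], Q[j[mask]]
        # Step 7-4: closed-form [R|t] minimizing E(R, t) (Kabsch / SVD).
        cs, cd = src.mean(axis=0), dst.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (dst - cd))
        S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
        R = Vt.T @ S @ U.T                  # guard against reflections
        t = cd - R @ cs
        err = np.mean(d[mask] ** 2)         # E(R, t) over the kept pairs
        if abs(prev_err - err) < eps:       # step 7-5 style stopping test
            break
        prev_err = err
    return R, t
```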
Examples
In this embodiment, the operation object whose pose is to be estimated is a lightning arrester; estimating its pose with the method comprises the following process:
1. A point cloud of a work scene containing the object to be measured is collected with a depth camera, as shown in fig. 3 (a).
2. The point cloud of item 1 is cropped to reduce the number of points; the point clouds after region-of-interest cropping and after adaptive voxel grid filtering are shown in fig. 3 (b) and fig. 3 (c) respectively.
3. The average distance between points is computed and set as a confidence interval, and points outside the confidence interval are rejected; the processed point cloud is shown in fig. 3 (d).
4. Semantic segmentation is performed on the point cloud processed in item 3, separating the operation object from the complex environment to obtain the operation object point cloud; the segmentation result is shown in fig. 4.
5. A three-dimensional model of the operation object whose pose is to be estimated is built with modeling software and converted into the point cloud PCD (Point Cloud Data) format, establishing a point cloud model of the operation object that serves as the reference point cloud set Q; the arrester point cloud model is shown in fig. 5.
6. The operation object point cloud is taken as the point cloud set P to be registered and coarsely registered with the reference point cloud set Q so that their reference coordinate frames coincide, yielding the initial pose of the operation object, with the result shown below:
(initial pose matrix T₁: a 4×4 homogeneous transformation whose numeric values are given as an image in the original publication)
where T₁ is the computed initial pose; the positional relations between the arrester point cloud whose pose is to be estimated and the reference point cloud, before and after transformation by T₁, are shown in fig. 6 and fig. 7 respectively.
7. The initial pose is corrected, yielding the transformation matrix shown below:
(correction matrix T₂: a 4×4 homogeneous transformation whose numeric values are given as an image in the original publication)
From T₁ and T₂, the final point cloud pose T of the arrester is obtained:
(final pose matrix T: a 4×4 homogeneous transformation whose numeric values are given as an image in the original publication)
after the coordinate system transformation is performed on the point cloud of the lightning arrester by using the matrix T, the position relationship between the point cloud of the lightning arrester to be estimated and the reference point cloud is shown in fig. 8.
In the method of the invention for estimating the pose of an operation object in an unstructured distribution line environment, a depth camera collects point cloud data of the work scene, the data are preprocessed, the operation object is segmented, and its pose is estimated; the pose measurement of the operation object can be obtained quickly and accurately in a distribution line with a cluttered background, with strong robustness to illumination changes.

Claims (5)

1. A point cloud-based distribution line operation object pose estimation method, characterized by comprising the following steps:
step 1, collecting, with a depth camera, point cloud data of a work scene containing the operation object to be measured;
step 2, cropping the point cloud obtained in step 1 to reduce the number of points;
step 3, computing the average distance between points, setting a confidence interval from it, and then rejecting points outside the confidence interval;
step 4, performing semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment and obtain the operation object point cloud; the semantic segmentation specifically adopts an improved PointNet deep neural network and comprises the following steps:
step 4-1, rapidly labeling the point cloud with a point cloud labeling system;
step 4-2, pre-training PointNet on the Stanford 3D Indoor Dataset, setting the block size to b₁×b₁, the step length to l₁, and the number of points to N = n₁, and initializing each layer's parameters from a Gaussian distribution with standard deviation σ;
step 4-3, starting from the pre-trained model parameters of step 4-2, freezing the first three convolutional layers and continuing training on samples of the operation object, setting the block size to b₂×b₂, the step length to l₂, and N = n₂, and outputting the point cloud data; where b₂ < b₁, l₂ < l₁, n₂ < n₁;
in step 4-2, b₁×b₁ = 1×1 m², l₁ = 0.5 m, N = n₁ = 4096, D = 9, σ = 0.001; in step 4-3, b₂×b₂ = 0.1×0.1 m², l₂ = 0.05 m, N = n₂ = 128;
step 5, building, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated and converting it into the point cloud PCD format, thereby establishing a point cloud model of the operation object that serves as the reference point cloud set Q;
step 6, taking the operation object point cloud as the point cloud set P to be registered and coarsely registering it with the reference point cloud set Q so that the reference coordinate frames of P and Q coincide, obtaining the initial pose of the operation object;
step 7, correcting the initial pose to obtain the final pose of the operation object.
2. The point cloud-based distribution line operation object pose estimation method according to claim 1, characterized in that cropping the point cloud of step 1 to reduce the number of points in step 2 specifically comprises:
step 2-1, cropping the point cloud of the whole field of view down to the region of interest, i.e. the surrounding region containing the operation object, by means of a conditional filtering algorithm;
step 2-2, on the basis of step 2-1, further reducing the number of points with an adaptive voxel grid method, without impairing feature extraction.
3. The point cloud-based distribution line operation object pose estimation method according to claim 1 or 2, characterized in that the average distance between points in step 3 is computed by means of a kd-tree.
4. The point cloud-based distribution line operation object pose estimation method according to claim 1, characterized in that in step 6 the point cloud set P to be registered and the reference point cloud set Q are coarsely registered so that their reference coordinate frames coincide, the coarse registration being realized by Principal Component Analysis (PCA).
5. The point cloud-based distribution line operation object pose estimation method according to claim 1, characterized in that the initial pose is corrected in step 7, specifically with an improved ICP algorithm:
step 7-1, taking the initial pose as the initial value of the rigid transformation matrix [R|t], where R is the rotation matrix, t is the translation vector, and [R|t] is the pose of the operation object;
step 7-2, establishing correspondences between the point cloud set P to be registered and the reference point cloud set Q:
step 7-2-1, acquiring the curvature geometric features of P and Q, and classifying the points in P and Q according to the magnitude of their curvature values;
step 7-2-2, scanning the points of P one by one, each point being marked as a query point; according to the class the query point belongs to, searching the reference points of the same class for several points of high curvature similarity as candidate points, the candidates being selected by the condition:
|k₁(pᵢ) − k₁(qⱼ)| ≤ ε₁,  ‖k₂(pᵢ) − k₂(qⱼ)‖ ≤ ε₂
where ε₁ and ε₂ are preset initial thresholds, pᵢ denotes the i-th point in the point cloud set P to be registered, qⱼ denotes the j-th point in the reference point cloud set Q, k₁(pᵢ) and k₁(qⱼ) denote the principal curvatures of pᵢ and qⱼ, and k₂(pᵢ) and k₂(qⱼ) denote the normal vectors of pᵢ and qⱼ, respectively;
step 7-2-3, searching the K-neighborhood of each candidate point, and pairing the neighborhood point closest to the query point with the query point to form a point pair;
step 7-3, removing mismatched point pairs, specifically: if the distance between pᵢ and qⱼ exceeds the distance threshold dₜ, the pair is judged a mismatch and removed;
step 7-4, solving for the rigid transformation matrix [R|t], i.e. the pose of the operation object, by minimizing the registration error function:
E(R, t) = (1/n) Σᵢ₌₁ⁿ ‖qᵢ − (R·pᵢ + t)‖²
step 7-5, updating the point cloud set P to be registered with the R and t obtained in step 7-4, specifically: comparing E(R, t) with a preset threshold p₁; if E(R, t) < p₁, directly outputting the final pose [R|t] of the operation object; otherwise, returning to step 7-2 until E(R, t) < p₁ or the number of iterations exceeds the preset maximum C, after which the final pose [R|t] of the operation object is output.
CN201910397857.4A 2019-05-14 2019-05-14 Point cloud-based distribution line operation object pose estimation method Active CN110223345B (en)

Priority Applications (1)

Application Number: CN201910397857.4A; Priority Date: 2019-05-14; Filing Date: 2019-05-14; Title: Point cloud-based distribution line operation object pose estimation method


Publications (2)

Publication Number / Publication Date
CN110223345A (en): 2019-09-10
CN110223345B: 2022-08-09

Family

ID=67821039

Family Applications (1)

CN201910397857.4A (Active): Point cloud-based distribution line operation object pose estimation method

Country Status (1)

CN: CN110223345B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110842918B * 2019-10-24 2020-12-08 Huazhong University of Science and Technology Robot mobile processing autonomous locating method based on point cloud servo
CN111259934B * 2020-01-09 2023-04-07 Tsinghua Shenzhen International Graduate School Stacked object 6D pose estimation method and device based on deep learning
CN111251301B * 2020-02-27 2022-09-16 Electric Power Research Institute of Yunnan Power Grid Co., Ltd. Motion planning method for operation arm of power transmission line maintenance robot
CN111640143B * 2020-04-12 2023-05-30 Fudan University Neural navigation rapid surface registration method and system based on PointNet
CN111524168B * 2020-04-24 2023-04-18 Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences Point cloud data registration method, system and device and computer storage medium
CN112069899A * 2020-08-05 2020-12-11 DeepBlue Technology (Shanghai) Co., Ltd. Road shoulder detection method and device and storage medium
CN112614186A * 2020-12-28 2021-04-06 Shanghai Automotive Industry Corporation (Group) Target pose calculation method and calculation module
CN113671527A * 2021-07-23 2021-11-19 NARI Technology Co., Ltd. Accurate operation method and device for improving distribution network live working robot
CN113537180B * 2021-09-16 2022-01-21 Digital Grid Research Institute, China Southern Power Grid Tree obstacle identification method and device, computer equipment and storage medium
CN115958589A * 2021-10-11 2023-04-14 Robert Bosch GmbH Method and device for calibrating hand and eye of robot
CN114986515A * 2022-07-04 2022-09-02 Shenyang Institute of Automation, Chinese Academy of Sciences Pose decoupling dynamic assembly method for insulator replacement robot

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105976353B * 2016-04-14 2020-01-24 Nanjing University of Science and Technology Spatial non-cooperative target pose estimation method based on model and point cloud global matching
EP3457357B1 (en) * 2017-09-13 2021-07-07 Tata Consultancy Services Limited Methods and systems for surface fitting based change detection in 3d point-cloud



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant