CN110223345A - Point cloud-based distribution line operation object pose estimation method - Google Patents
Point cloud-based distribution line operation object pose estimation method
- Publication number
- CN110223345A (application CN201910397857.4A; granted as CN110223345B)
- Authority
- CN
- China
- Prior art keywords
- point
- cloud
- manipulating object
- pose
- registration
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J9/00—Programme-controlled manipulators
- B25J9/16—Programme controls
- B25J9/1694—Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
- B25J9/1697—Vision controlled systems
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
- G06V10/267—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y04—INFORMATION OR COMMUNICATION TECHNOLOGIES HAVING AN IMPACT ON OTHER TECHNOLOGY AREAS
- Y04S—SYSTEMS INTEGRATING TECHNOLOGIES RELATED TO POWER NETWORK OPERATION, COMMUNICATION OR INFORMATION TECHNOLOGIES FOR IMPROVING THE ELECTRICAL POWER GENERATION, TRANSMISSION, DISTRIBUTION, MANAGEMENT OR USAGE, i.e. SMART GRIDS
- Y04S10/00—Systems supporting electrical power generation, transmission or distribution
- Y04S10/50—Systems or methods supporting the power network operation or management, involving a certain degree of interaction with the load-side end user applications
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The invention discloses a point cloud-based pose estimation method for distribution line operation objects, comprising the following steps: acquiring point cloud data of the working scene containing the object to be measured; cropping the point cloud; taking the average inter-point distance as a confidence interval and rejecting points outside it; performing semantic segmentation on the point cloud to obtain the operation object point cloud, which serves as the point set to be registered P; building a three-dimensional model of the operation object whose pose is to be estimated and converting it into the PCD point cloud format, thereby constructing a point cloud model of the operation object that serves as the reference point set Q; coarsely registering P and Q so that their reference coordinate frames coincide, yielding the initial pose of the operation object; and refining the initial pose to obtain the final pose. Even in cluttered distribution line scenes, the invention obtains fast and accurate pose measurements of the operation object and is robust to illumination changes.
Description
Technical field
The invention belongs to the field of pose measurement of operation objects for live-working robots, and in particular relates to a point cloud-based pose estimation method for distribution line operation objects.
Background technique
With the rapid development of robot technology, robots play an increasingly important role in modern production and daily life. Introducing live-working robots into the power industry to replace manual power maintenance work can effectively prevent casualties during live-line operations and greatly improve the efficiency of power maintenance.
Most live-working robots currently developed at home and abroad for distribution line maintenance require an operator, either at height or via video monitoring, to teleoperate the manipulator to execute the task. Low positioning accuracy of the target makes operations that require precise positioning, such as replacing an arrester or connecting a lead, extremely difficult, so measuring the pose of the operation object is essential. However, live-working scenes are complex: there are many pieces of equipment, their colors are uniform and hard to distinguish from the background, and these factors make pose measurement difficult. Commonly used pose measurement methods, such as the LINEMOD method for RGB-D images, achieve good pose estimation by exploiting stable gradient and surface-normal features, but they are not robust to illumination. When the illumination in the scene changes, the color distribution of the target also changes, leading to model instability and failure of the pose measurement.
Summary of the invention
The purpose of the present invention is to provide a pose estimation method for distribution line operation objects that is robust to illumination changes and operates in real time.
The technical solution realizing the aim of the invention is as follows: a point cloud-based pose estimation method for distribution line operation objects, comprising the following steps:
Step 1: acquire, with a depth camera, point cloud data of the working scene containing the object to be measured;
Step 2: crop the point cloud of step 1 to reduce the number of points;
Step 3: compute the average inter-point distance of the point cloud, take it as a confidence interval, and reject points outside the interval;
Step 4: perform semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment, obtaining the operation object point cloud;
Step 5: build, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated, and convert it into the PCD point cloud format, thereby constructing the point cloud model of the operation object, which serves as the reference point set Q;
Step 6: take the operation object point cloud as the point set to be registered P, and coarsely register P and Q so that their reference coordinate frames coincide, yielding the initial pose of the operation object;
Step 7: refine the initial pose to obtain the final pose of the operation object.
Compared with the prior art, the remarkable advantages of the invention are: 1) 3D vision is used to measure the pose of the operation object in an unstructured environment; accurate pose measurements are obtained even for distribution line operation objects against cluttered backgrounds, with strong robustness to illumination changes; 2) PointNet is improved: the network is first pre-trained on the Stanford 3D Indoor Dataset and then fine-tuned on a custom database, which speeds up parameter training and thus improves the efficiency of the whole method; 3) PCA is combined with an improved ICP registration algorithm for pose estimation, which effectively mitigates the tendency of classical ICP to fall into local optima while preserving real-time registration.
The present invention is described in further detail below with reference to the accompanying drawings.
Detailed description of the invention
Fig. 1 is the flow chart of the point cloud-based distribution line operation object pose estimation of the present invention.
Fig. 2 is the PointNet semantic segmentation network structure used in the present invention.
Fig. 3 shows point clouds in the embodiment of the present invention, where (a) is the original point cloud of the distribution line maintenance scene collected by the depth camera, (b) is the point cloud after region-of-interest selection, (c) is the point cloud after adaptive voxel grid filtering, and (d) is the point cloud after outlier removal.
Fig. 4 is a schematic diagram of the semantic segmentation result of the working scene in the embodiment.
Fig. 5 is a schematic diagram of the reference point cloud model of the arrester in the embodiment.
Fig. 6 shows the arrester point cloud to be registered and the reference arrester point cloud before the initial transformation in the embodiment.
Fig. 7 shows the arrester point cloud to be registered and the reference arrester point cloud after the initial transformation.
Fig. 8 shows the arrester point cloud to be registered and the reference arrester point cloud after fine registration.
Specific embodiment
With reference to Fig. 1, the point cloud-based distribution line operation object pose estimation method of the present invention comprises the following steps:
Step 1: acquire, with a depth camera, point cloud data of the working scene containing the object to be measured;
Step 2: crop the point cloud of step 1 to reduce the number of points;
Step 3: compute the average inter-point distance of the point cloud, take it as a confidence interval, and reject points outside the interval, thereby ignoring interference and solving the problem of outliers in the collected point cloud data;
Step 4: perform semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment, obtaining the operation object point cloud;
Step 5: build, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated, and convert it into the PCD point cloud format, thereby constructing the point cloud model of the operation object, which serves as the reference point set Q;
Step 6: take the operation object point cloud as the point set to be registered P, and coarsely register P and Q so that their reference coordinate frames coincide, yielding the initial pose of the operation object;
Step 7: refine the initial pose to obtain the final pose of the operation object.
Further, in step 2 the point cloud of step 1 is cropped to reduce the number of points and thus meet the real-time requirement of target pose estimation, specifically:
Step 2-1: crop the full field-of-view point cloud with a conditional filter so that only the region of interest, i.e. the point cloud around the operation object, is retained;
Step 2-2: on the basis of step 2-1, further reduce the number of points with an adaptive voxel grid method without affecting feature extraction. When the point cloud resolution is low, the grid size is reduced to prevent the simplified cloud from becoming too sparse for subsequent feature extraction; when the resolution is high, the grid size is increased accordingly, effectively reducing the number of points.
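The adaptive voxel grid of step 2-2 can be sketched as follows. This is a minimal NumPy stand-in, not the patent's implementation: the function names `voxel_downsample` and `adaptive_voxel_size`, the resolution heuristic, and the constants `base` and `k` are all illustrative assumptions.

```python
import numpy as np

def adaptive_voxel_size(points, base=0.05, k=1.5):
    """Pick a voxel size from the cloud's own resolution (illustrative heuristic)."""
    # Crude resolution proxy: mean nearest-neighbour distance on a subsample.
    sub = points[:: max(1, len(points) // 500)]
    d = np.linalg.norm(sub[:, None, :] - sub[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    resolution = d.min(axis=1).mean()
    # Dense cloud: 'base' dominates, so the grid is large relative to the spacing
    # (aggressive reduction); sparse cloud: the grid tracks the spacing, so the
    # simplified cloud does not become too sparse for feature extraction.
    return max(base, k * resolution)

def voxel_downsample(points, voxel_size):
    """Keep one representative point (the centroid) per occupied voxel."""
    # Integer voxel index of every point.
    idx = np.floor(points / voxel_size).astype(np.int64)
    # Group points sharing a voxel.
    _, inverse, counts = np.unique(idx, axis=0, return_inverse=True, return_counts=True)
    inverse = inverse.reshape(-1)  # guard against NumPy-version shape quirks
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)  # accumulate per-voxel sums
    return sums / counts[:, None]     # per-voxel centroids

rng = np.random.default_rng(0)
cloud = rng.uniform(0.0, 1.0, size=(5000, 3))
down = voxel_downsample(cloud, adaptive_voxel_size(cloud))
```

In practice a point cloud library's voxel filter would replace `voxel_downsample`; the sketch only shows the grid-size adaptation the text describes.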
Further preferably, the average inter-point distance of step 3 is computed using a kd-tree.
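The outlier rejection of step 3 can be sketched as below. This is a brute-force NumPy stand-in for the kd-tree neighbour search; the function name `remove_outliers` and the parameters `k` and `std_ratio` are illustrative assumptions, not the patent's values.

```python
import numpy as np

def remove_outliers(points, k=8, std_ratio=1.0):
    """Reject points whose mean distance to their k nearest neighbours falls
    outside the confidence interval derived from the cloud-wide average."""
    # Brute-force pairwise distances (a kd-tree would replace this in practice).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    knn = np.sort(d, axis=1)[:, :k]          # k nearest-neighbour distances per point
    mean_d = knn.mean(axis=1)                # average neighbour distance per point
    mu, sigma = mean_d.mean(), mean_d.std()  # cloud-wide statistics
    keep = mean_d <= mu + std_ratio * sigma  # confidence interval
    return points[keep]

rng = np.random.default_rng(1)
dense = rng.normal(0.0, 0.05, size=(300, 3))  # dense cluster
stray = np.array([[5.0, 5.0, 5.0]])           # obvious outlier
cleaned = remove_outliers(np.vstack([dense, stray]))
```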
Further, the semantic segmentation of step 4 is performed on the point cloud with an improved PointNet deep neural network; point cloud semantic segmentation means classifying every point in the cloud, thereby achieving category-level segmentation.
The overall architecture of PointNet is shown in Fig. 2. Its input consists of blocks, each containing N 3D points. An MLP learns a D1-dimensional local feature for each 3D point, and a max pooling layer computes a D2-dimensional global feature of the block. Finally, the D1-dimensional local features and the D2-dimensional global feature are fused and passed through another MLP, which outputs the semantic label score of each point in the block (MLP denotes a multilayer perceptron, M the max pooling layer, and C the fusion operation). This step specifically comprises:
Step 4-1: rapidly annotate the point cloud using a point cloud labeling system;
Step 4-2: pre-train PointNet on the Stanford 3D Indoor Dataset, with block size b1×b1, step size l1, number of input points N = n1, and point feature dimensionality D = d; each layer's parameters are initialized from a Gaussian distribution with standard deviation σ;
Step 4-3: starting from the parameters of the pre-trained model of step 4-2, freeze the first three convolutional layers, input samples of the operation object, and continue training, with block size b2×b2, step size l2, N = n2, and D = d, outputting the segmented point cloud data; here b2 < b1, l2 < l1, and n2 < n1.
Illustratively, in step 4-2 b1×b1 = 1×1 m², l1 = 0.5 m, N = n1 = 4096, D = 9, and σ = 0.001; in step 4-3 b2×b2 = 0.1×0.1 m², l2 = 0.05 m, and N = n2 = 128.
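The data flow of the PointNet block described above (shared per-point MLP, max-pooled global feature, fusion, per-point scores) can be sketched in NumPy. The weights here are random and untrained, and the layer sizes are illustrative, so the sketch only demonstrates the architecture's shape handling and its permutation behavior, not a trained segmenter.

```python
import numpy as np

rng = np.random.default_rng(2)

def shared_mlp(x, w, b):
    """Apply the same small linear layer + ReLU to every point."""
    return np.maximum(x @ w + b, 0.0)

N, D, D1, D2, n_classes = 128, 9, 16, 32, 4
w1, b1 = rng.normal(size=(D, D1)) * 0.1, np.zeros(D1)          # per-point local features
w2, b2 = rng.normal(size=(D1, D2)) * 0.1, np.zeros(D2)         # lifted before pooling
w3, b3 = rng.normal(size=(D1 + D2, n_classes)) * 0.1, np.zeros(n_classes)

def segment_block(points):
    local = shared_mlp(points, w1, b1)       # N x D1 local point features
    lifted = shared_mlp(local, w2, b2)       # N x D2
    global_feat = lifted.max(axis=0)         # D2-dim global feature (max pooling)
    # Fuse local and global features, then score each point's semantic label.
    fused = np.concatenate([local, np.tile(global_feat, (len(points), 1))], axis=1)
    scores = fused @ w3 + b3
    return scores.argmax(axis=1)             # predicted label per point

block = rng.normal(size=(N, D))
labels = segment_block(block)
```

Because the global feature comes from a symmetric max pooling, reordering the input points reorders the output labels identically, which is the property PointNet relies on for unordered point sets.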
Further preferably, in step 6 the coarse registration that aligns the reference coordinate frames of the point set to be registered P and the reference point set Q is realized with PCA (principal component analysis). For P and Q, the centroid and covariance of each cloud are computed; taking the centroid as the coordinate origin and the eigenvectors of the covariance matrix as the three coordinate axes, a reference frame is established for each cloud. Aligning the reference frame of the cloud to be estimated with that of the reference cloud achieves the coarse registration.
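The PCA coarse registration just described can be sketched as follows (a minimal NumPy version under illustrative assumptions; function names are mine, not the patent's). Note that eigenvector sign ambiguity means PCA alignment can be off by a 180° flip about an axis, which is exactly what the subsequent ICP refinement of step 7 corrects.

```python
import numpy as np

def pca_frame(points):
    """Centroid plus covariance eigenvectors = the cloud's own reference frame."""
    c = points.mean(axis=0)
    cov = np.cov((points - c).T)
    _, vecs = np.linalg.eigh(cov)   # columns are the axes, ascending eigenvalues
    if np.linalg.det(vecs) < 0:     # keep a right-handed frame
        vecs[:, 0] *= -1
    return c, vecs

def coarse_register(P, Q):
    """Rigid transform taking P's PCA frame onto Q's PCA frame."""
    cp, vp = pca_frame(P)
    cq, vq = pca_frame(Q)
    R = vq @ vp.T        # rotate P's axes onto Q's axes
    t = cq - R @ cp      # then move P's centroid onto Q's centroid
    return R, t

# Synthetic check: Q is a known rigid motion of an anisotropic cloud P.
rng = np.random.default_rng(3)
P = rng.normal(size=(200, 3)) * np.array([3.0, 1.5, 0.5])
theta = 0.4
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R_est, t_est = coarse_register(P, Q)
aligned = P @ R_est.T + t_est
```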
Further, in step 7 the initial pose is refined with a modified ICP algorithm. The theoretical basis of ICP is as follows: given the 3D data of two point clouds, i.e. two 3D point sets P = {p1, p2, ..., pn} and Q = {q1, q2, ..., qn}, where P is the point set to be registered and Q is the reference point set, P is repeatedly rigidly transformed during the iteration so that it gradually approaches Q. The purpose of ICP is to find a rigid transformation matrix [R | t] (comprising a rotation matrix R and a translation vector t) that minimizes the registration error function between the transformed P and Q. ICP must complete two tasks: 1) establish point correspondences according to the nearest-neighbor principle; 2) compute the rigid transformation matrix from the correspondences. This step refines the initial pose with the modified ICP algorithm as follows:
Step 7-1: take the initial pose as the initial value of the rigid transformation matrix [R | t], where R is the rotation matrix and t the translation vector, and [R | t] is the pose of the operation object;
Step 7-2: establish the correspondence between the point set to be registered P and the reference point set Q:
Step 7-2-1: obtain the curvature features of P and Q, and classify the points of P and Q according to the magnitude of their curvature values;
Step 7-2-2: scan every point of P in turn; each point is a query point. Points of the same class in the reference cloud with high curvature similarity are taken as candidate points; the candidate selection condition is
|k1(pi) − k1(qj)| ≤ ε1 and ∠(k2(pi), k2(qj)) ≤ ε2,
where ε1 and ε2 are preset thresholds, pi is the i-th point of P, qj is the j-th point of Q, k1(pi) and k1(qj) are the principal curvatures of pi and qj, and k2(pi) and k2(qj) are their normal vectors;
Step 7-2-3: search the K-neighborhood of each candidate point and pair the neighborhood point closest to the query point with the query point;
Step 7-3: remove mismatched pairs, specifically according to the relationship between the distance of pi and qj and the threshold dt: if the distance between pi and qj exceeds the distance threshold dt, the pair is considered a mismatch and removed;
Step 7-4: solve for the rigid transformation matrix [R | t], i.e. the operation object pose, by minimizing the registration error
E(R, t) = (1/n) Σi ||qi − (R·pi + t)||²;
Step 7-5: update P with the R and t obtained in step 7-4. Specifically, compare E(R, t) with the preset threshold p1: if E(R, t) < p1, output the final pose [R | t] of the operation object directly; otherwise return to step 7-2, until E(R, t) < p1 or the number of iterations exceeds the preset maximum C, then output the final pose [R | t].
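The ICP refinement loop can be sketched as below. This is a plain nearest-neighbour ICP in NumPy: the curvature-based candidate selection and mismatch filtering of steps 7-2/7-3 are omitted, the neighbour search is brute force rather than kd-tree, and the parameter values are illustrative, so the sketch shows only the iterate-correspond-solve structure, not the patent's modified algorithm.

```python
import numpy as np

def best_rigid_transform(A, B):
    """Least-squares [R | t] mapping paired points A -> B (Kabsch / SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    S = np.diag([1.0, 1.0, np.linalg.det(Vt.T @ U.T)])  # avoid reflections
    R = Vt.T @ S @ U.T
    return R, cb - R @ ca

def icp(P, Q, max_iter=30, tol=1e-10):
    src = P.copy()
    R_total, t_total = np.eye(3), np.zeros(3)
    for _ in range(max_iter):
        # 1) correspondences by the nearest-neighbour principle (brute force)
        d = np.linalg.norm(src[:, None, :] - Q[None, :, :], axis=-1)
        match = Q[d.argmin(axis=1)]
        # 2) rigid transform computed from the correspondences
        R, t = best_rigid_transform(src, match)
        src = src @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
        # registration error E(R, t); stop once it is below the threshold
        if np.mean(np.sum((match - src) ** 2, axis=1)) < tol:
            break
    return R_total, t_total

# Synthetic check: P is Q under a small known rigid motion.
rng = np.random.default_rng(4)
Q = rng.normal(size=(150, 3))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
P = Q @ R_true.T + np.array([0.05, -0.02, 0.08])
R_est, t_est = icp(P, Q)
aligned = P @ R_est.T + t_est
```

Like classical ICP, this sketch only converges from a good initial guess, which is why the PCA coarse registration of step 6 precedes it.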
Embodiment
In this embodiment the operation object whose pose is to be estimated is an arrester, and pose estimation is carried out with the method of the invention as follows:
1. The point cloud image of the working scene containing the object to be measured, acquired with a depth camera, is shown in Fig. 3(a).
2. The point cloud of item 1 is cropped to reduce the number of points; the point clouds after region-of-interest cropping and adaptive voxel grid filtering are shown in Fig. 3(b) and 3(c), respectively.
3. The average inter-point distance is computed and taken as a confidence interval, and points outside the interval are rejected; the processed point cloud is shown in Fig. 3(d).
4. Semantic segmentation is applied to the point cloud of item 3 to separate the operation object from the complex environment, yielding the operation object point cloud; the segmentation result is shown in Fig. 4.
5. A three-dimensional model of the operation object whose pose is to be estimated is built with modeling software and converted into the PCD point cloud format, constructing the point cloud model of the operation object, which serves as the reference point set Q; the arrester point cloud model is shown in Fig. 5.
6. The operation object point cloud is taken as the point set to be registered P; coarse registration of P and Q aligns their reference coordinate frames and yields the initial pose of the operation object, shown below:
In the formula, T1 is the initial pose obtained; the positional relationships of the arrester point cloud to be estimated and the reference point cloud before and after the transformation by T1 are shown in Fig. 6 and Fig. 7, respectively.
7. The initial pose is refined, yielding the transformation matrix shown below:
From T1 and T2, the final pose of the arrester point cloud is obtained as:
After the matrix T is applied to transform the coordinate frame of the arrester point cloud, the positional relationship between the arrester point cloud to be estimated and the reference point cloud is shown in Fig. 8.
The pose estimation method proposed by the invention for operation objects in an unstructured distribution line environment collects point cloud data of the working scene with a depth camera, preprocesses it, segments the operation object, and estimates its pose, ensuring that fast and accurate pose measurements of the operation object are obtained even in cluttered distribution line scenes, with strong robustness to illumination changes.
Claims (7)
1. A point cloud-based pose estimation method for distribution line operation objects, characterized by comprising the following steps:
Step 1: acquiring, with a depth camera, point cloud data of the working scene containing the operation object to be measured;
Step 2: cropping the point cloud of step 1 to reduce the number of points;
Step 3: computing the average inter-point distance of the point cloud, taking it as a confidence interval, and rejecting points outside the interval;
Step 4: performing semantic segmentation on the point cloud processed in step 3 to separate the operation object from the complex environment, obtaining the operation object point cloud;
Step 5: building, with modeling software, a three-dimensional model of the operation object whose pose is to be estimated, and converting it into the PCD point cloud format, thereby constructing the point cloud model of the operation object, which serves as the reference point set Q;
Step 6: taking the operation object point cloud as the point set to be registered P, and coarsely registering P and Q so that their reference coordinate frames coincide, obtaining the initial pose of the operation object;
Step 7: refining the initial pose to obtain the final pose of the operation object.
2. The point cloud-based distribution line operation object pose estimation method according to claim 1, characterized in that in step 2 the point cloud of step 1 is cropped to reduce the number of points, specifically:
Step 2-1: cropping the full field-of-view point cloud with a conditional filter so that only the region of interest, i.e. the point cloud around the operation object, is retained;
Step 2-2: on the basis of step 2-1, further reducing the number of points with an adaptive voxel grid method without affecting feature extraction.
3. The point cloud-based distribution line operation object pose estimation method according to claim 1 or 2, characterized in that the average inter-point distance of step 3 is computed using a kd-tree.
4. The point cloud-based distribution line operation object pose estimation method according to claim 3, characterized in that the semantic segmentation of step 4 is performed on the point cloud with an improved PointNet deep neural network, specifically:
Step 4-1: rapidly annotating the point cloud using a point cloud labeling system;
Step 4-2: pre-training PointNet on the Stanford 3D Indoor Dataset, with block size b1×b1, step size l1, number of input points N = n1, and point feature dimensionality D = d, each layer's parameters being initialized from a Gaussian distribution with standard deviation σ;
Step 4-3: starting from the parameters of the pre-trained model of step 4-2, freezing the first three convolutional layers, inputting samples of the operation object, and continuing training, with block size b2×b2, step size l2, N = n2, and D = d, outputting the segmented point cloud data; wherein b2 < b1, l2 < l1, and n2 < n1.
5. The point cloud-based distribution line operation object pose estimation method according to claim 4, characterized in that in step 4-2 b1×b1 = 1×1 m², l1 = 0.5 m, N = n1 = 4096, D = 9, and σ = 0.001; and in step 4-3 b2×b2 = 0.1×0.1 m², l2 = 0.05 m, and N = n2 = 128.
6. The point cloud-based distribution line operation object pose estimation method according to claim 5, characterized in that the coarse registration of step 6, which aligns the reference coordinate frames of the point set to be registered P and the reference point set Q, is realized with PCA (principal component analysis).
7. The point cloud-based distribution line operation object pose estimation method according to claim 6, characterized in that the refinement of the initial pose in step 7 is performed with a modified ICP algorithm:
Step 7-1: taking the initial pose as the initial value of the rigid transformation matrix [R | t], where R is the rotation matrix and t the translation vector, and [R | t] is the pose of the operation object;
Step 7-2: establishing the correspondence between the point set to be registered P and the reference point set Q:
Step 7-2-1: obtaining the curvature features of P and Q, and classifying the points of P and Q according to the magnitude of their curvature values;
Step 7-2-2: scanning every point of P in turn, each point being a query point; points of the same class in the reference cloud with high curvature similarity are taken as candidate points, the candidate selection condition being
|k1(pi) − k1(qj)| ≤ ε1 and ∠(k2(pi), k2(qj)) ≤ ε2,
where ε1 and ε2 are preset thresholds, pi is the i-th point of P, qj is the j-th point of Q, k1(pi) and k1(qj) are the principal curvatures of pi and qj, and k2(pi) and k2(qj) are their normal vectors;
Step 7-2-3: searching the K-neighborhood of each candidate point and pairing the neighborhood point closest to the query point with the query point;
Step 7-3: removing mismatched pairs, specifically according to the relationship between the distance of pi and qj and the threshold dt: if the distance between pi and qj exceeds the distance threshold dt, the pair is considered a mismatch and removed;
Step 7-4: solving for the rigid transformation matrix [R | t], i.e. the operation object pose, by minimizing the registration error
E(R, t) = (1/n) Σi ||qi − (R·pi + t)||²;
Step 7-5: updating P with the R and t obtained in step 7-4; specifically, comparing E(R, t) with the preset threshold p1: if E(R, t) < p1, outputting the final pose [R | t] of the operation object directly; otherwise returning to step 7-2, until E(R, t) < p1 or the number of iterations exceeds the preset maximum C, then outputting the final pose [R | t].
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910397857.4A CN110223345B (en) | 2019-05-14 | 2019-05-14 | Point cloud-based distribution line operation object pose estimation method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910397857.4A CN110223345B (en) | 2019-05-14 | 2019-05-14 | Point cloud-based distribution line operation object pose estimation method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110223345A true CN110223345A (en) | 2019-09-10 |
CN110223345B CN110223345B (en) | 2022-08-09 |
Family
ID=67821039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910397857.4A Active CN110223345B (en) | 2019-05-14 | 2019-05-14 | Point cloud-based distribution line operation object pose estimation method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110223345B (en) |
Cited By (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110842918A (en) * | 2019-10-24 | 2020-02-28 | 华中科技大学 | Robot mobile processing autonomous locating method based on point cloud servo |
CN111259934A (en) * | 2020-01-09 | 2020-06-09 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111251301A (en) * | 2020-02-27 | 2020-06-09 | 云南电网有限责任公司电力科学研究院 | Motion planning method for operation arm of power transmission line maintenance robot |
CN111524168A (en) * | 2020-04-24 | 2020-08-11 | 中国科学院深圳先进技术研究院 | Point cloud data registration method, system and device and computer storage medium |
CN111640143A (en) * | 2020-04-12 | 2020-09-08 | 复旦大学 | Nerve navigation rapid surface registration method and system based on PointNet |
CN112069899A (en) * | 2020-08-05 | 2020-12-11 | 深兰科技(上海)有限公司 | Road shoulder detection method and device and storage medium |
CN112614186A (en) * | 2020-12-28 | 2021-04-06 | 上海汽车工业(集团)总公司 | Target pose calculation method and calculation module |
CN113537180A (en) * | 2021-09-16 | 2021-10-22 | 南方电网数字电网研究院有限公司 | Tree obstacle identification method and device, computer equipment and storage medium |
CN113671527A (en) * | 2021-07-23 | 2021-11-19 | 国电南瑞科技股份有限公司 | Accurate operation method and device for improving distribution network live working robot |
CN114986515A (en) * | 2022-07-04 | 2022-09-02 | 中国科学院沈阳自动化研究所 | Pose decoupling dynamic assembly method for insulator replacement robot |
WO2023061695A1 (en) * | 2021-10-11 | 2023-04-20 | Robert Bosch Gmbh | Method and apparatus for hand-eye calibration of robot |
CN116134488A (en) * | 2020-12-23 | 2023-05-16 | 深圳元戎启行科技有限公司 | Point cloud labeling method, point cloud labeling device, computer equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105976353A (en) * | 2016-04-14 | 2016-09-28 | 南京理工大学 | Spatial non-cooperative target pose estimation method based on model and point cloud global matching |
US20190080503A1 (en) * | 2017-09-13 | 2019-03-14 | Tata Consultancy Services Limited | Methods and systems for surface fitting based change detection in 3d point-cloud |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110842918A (en) * | 2019-10-24 | 2020-02-28 | 华中科技大学 | Robot mobile processing autonomous locating method based on point cloud servo |
CN111259934A (en) * | 2020-01-09 | 2020-06-09 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111259934B (en) * | 2020-01-09 | 2023-04-07 | 清华大学深圳国际研究生院 | Stacked object 6D pose estimation method and device based on deep learning |
CN111251301B (en) * | 2020-02-27 | 2022-09-16 | 云南电网有限责任公司电力科学研究院 | Motion planning method for operation arm of power transmission line maintenance robot |
CN111251301A (en) * | 2020-02-27 | 2020-06-09 | 云南电网有限责任公司电力科学研究院 | Motion planning method for operation arm of power transmission line maintenance robot |
CN111640143B (en) * | 2020-04-12 | 2023-05-30 | 复旦大学 | Neural navigation rapid surface registration method and system based on PointNet |
CN111640143A (en) * | 2020-04-12 | 2020-09-08 | 复旦大学 | Neural navigation rapid surface registration method and system based on PointNet |
CN111524168B (en) * | 2020-04-24 | 2023-04-18 | 中国科学院深圳先进技术研究院 | Point cloud data registration method, system and device and computer storage medium |
CN111524168A (en) * | 2020-04-24 | 2020-08-11 | 中国科学院深圳先进技术研究院 | Point cloud data registration method, system and device and computer storage medium |
CN112069899A (en) * | 2020-08-05 | 2020-12-11 | 深兰科技(上海)有限公司 | Road shoulder detection method and device and storage medium |
CN116134488A (en) * | 2020-12-23 | 2023-05-16 | 深圳元戎启行科技有限公司 | Point cloud labeling method, point cloud labeling device, computer equipment and storage medium |
CN112614186A (en) * | 2020-12-28 | 2021-04-06 | 上海汽车工业(集团)总公司 | Target pose calculation method and calculation module |
CN113671527A (en) * | 2021-07-23 | 2021-11-19 | 国电南瑞科技股份有限公司 | Accurate operation method and device for improving distribution network live working robot |
CN113537180A (en) * | 2021-09-16 | 2021-10-22 | 南方电网数字电网研究院有限公司 | Tree obstacle identification method and device, computer equipment and storage medium |
WO2023061695A1 (en) * | 2021-10-11 | 2023-04-20 | Robert Bosch Gmbh | Method and apparatus for hand-eye calibration of robot |
CN114986515A (en) * | 2022-07-04 | 2022-09-02 | 中国科学院沈阳自动化研究所 | Pose decoupling dynamic assembly method for insulator replacement robot |
Also Published As
Publication number | Publication date |
---|---|
CN110223345B (en) | 2022-08-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110223345A (en) | Point cloud-based distribution line operation object pose estimation method | |
CN109410321B (en) | Three-dimensional reconstruction method based on convolutional neural network | |
CN111080627B (en) | 2D+3D large airplane appearance defect detection and analysis method based on deep learning | |
CN111299815B (en) | Visual detection and laser cutting trajectory planning method for low-gray rubber pad | |
CN107392964B (en) | Indoor SLAM method combining indoor feature points and structure lines | |
CN106296693B (en) | Real-time three-dimensional spatial localization method based on 3D point cloud FPFH features | |
Sun et al. | Aerial 3D building detection and modeling from airborne LiDAR point clouds | |
CN108648233A (en) | Target recognition and grasp localization method based on deep learning | |
CN111832655A (en) | Multi-scale three-dimensional target detection method based on characteristic pyramid network | |
CN106780524A (en) | Three-dimensional point cloud road boundary extraction method | |
CN115032648B (en) | Three-dimensional target identification and positioning method based on laser radar dense point cloud | |
CN111046767B (en) | 3D target detection method based on monocular image | |
CN110322453A (en) | 3D point cloud semantic segmentation method based on position attention and auxiliary network | |
CN106599915B (en) | Vehicle-mounted laser point cloud classification method | |
CN112396641B (en) | Point cloud global registration method based on congruent two-baseline matching | |
CN113628263A (en) | Point cloud registration method based on local curvature and its neighborhood features | |
Wang et al. | An overview of 3d object detection | |
CN109344813A (en) | RGBD-based target recognition and scene modeling method and device | |
CN113706710A (en) | Virtual point multi-source point cloud fusion method and system based on FPFH feature differences | |
CN110781920A (en) | Method for identifying semantic information of indoor scene point cloud components | |
CN113989547B (en) | Three-dimensional point cloud data classification system and method based on graph convolution depth neural network | |
CN117475170B (en) | FPP-based high-precision point cloud registration method guided by local-global structure | |
Hu et al. | Geometric feature enhanced line segment extraction from large-scale point clouds with hierarchical topological optimization | |
CN104050640A (en) | Multi-view dense point cloud data fusion method | |
CN109920050A (en) | Single-view three-dimensional flame reconstruction method based on deep learning and thin plate splines | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||