CN112652003A - Three-dimensional point cloud registration method based on RANSAC measure optimization - Google Patents
Three-dimensional point cloud registration method based on RANSAC measure optimization
- Publication number: CN112652003A (application CN202110016027.XA)
- Authority: CN (China)
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T7/30 — Determination of transform parameters for the alignment of images, i.e. image registration
- G06T7/33 — Determination of transform parameters for the alignment of images using feature-based methods
- G06T2207/10028 — Range image; Depth image; 3D point clouds
Abstract
The invention relates to a three-dimensional point cloud registration method based on RANSAC measure optimization. The method reads point cloud data from a database; adds noise to the point cloud data; computes the matching set of the point clouds to obtain a candidate matching set; performs iterative random sampling on the matching set and calculates the pose matrix between the scene point cloud and the target point cloud; and evaluates the algorithm in terms of both running time and RMSE. The pose calculated by the method gives the scene point cloud and the target point cloud a high degree of coincidence and overlap, and the method is strongly robust to interference.
Description
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a three-dimensional point cloud registration method based on RANdom SAmple Consensus (RANSAC) measure optimization.
Background
A point cloud consists of many discrete, unordered three-dimensional points with no topological structure. It accurately reflects the true size and shape of a three-dimensional object and is robust to illumination and scale changes. With the development of science and technology, three-dimensional point cloud registration has been widely applied in computer vision, computer graphics, remote sensing, robotics, and other fields. Specifically, registration of two point cloud sequences captured from different viewpoints is used in three-dimensional object reconstruction, scene reconstruction, and data fusion; registration of two point cloud sequences captured at different times is used for pose tracking of moving three-dimensional objects; and registration between model and scene point clouds is widely used for three-dimensional object detection and recognition, such as grasping and placing objects in robotics and precise target strikes in air-to-ground applications. Existing three-dimensional point cloud registration methods based on RANSAC and its variants are comparatively effective and have found corresponding industrial applications. Their main idea is to iteratively estimate the optimal transformation pose with a random sample consensus scheme. However, these algorithms have several problems: their adaptability is poor, and most perform well only on specific data sets; and their robustness is weak, so their performance degrades markedly when the data are corrupted by noise, when the point cloud resolution is low, or when the point clouds contain holes or repetitive patterns.
Therefore, a method that attains high registration accuracy under a variety of complicated conditions would greatly improve the reliability and stability of registration results.
Disclosure of Invention
Technical problem to be solved
In order to avoid the defects of the prior art, the invention provides a three-dimensional point cloud registration method based on RANSAC measure optimization. The method combines loss functions commonly used in machine learning with the traditional RANSAC algorithm to optimize the existing evaluation measure. It achieves high registration accuracy on different types of data sets; moreover, under interference conditions where conventional RANSAC and its variants register poorly or even fail entirely, the method still maintains a high registration success rate.
Technical scheme
A three-dimensional point cloud registration method based on RANSAC measure optimization is characterized by comprising the following steps:
S1: down-sampling the input scene point cloud and target point cloud, calculating the matching set between scene and target from the key points and descriptors of the down-sampled point clouds, then sorting the matching set and screening out K candidate matches;
S2: setting an initial pose score Score and an initial iteration count Iterators;
S3: randomly selecting three matches from the candidate matches and generating a pose hypothesis from them;
S4: evaluating the generated hypothesis with the optimized measure and recording the evaluation score of the current iteration as Score_Iterator;
S5: updating Score according to the evaluation score, and repeating steps S3 and S4 until the number of iterations reaches Iterators.
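The S1–S5 loop above can be sketched as follows. `estimate_pose` and `evaluate` are hypothetical placeholders for the pose solver of S3 and the optimized evaluation measure of S4, so this is an illustrative sketch rather than the patented implementation:

```python
import random

def ransac_register(candidates, estimate_pose, evaluate, iterators=1000):
    """Iterate S3-S5: sample 3 matches, generate a pose hypothesis,
    score it with the evaluation measure, and keep the best-scoring pose."""
    best_score, best_pose = float("-inf"), None   # S2: initial Score
    for _ in range(iterators):                    # S5: fixed iteration count
        sample = random.sample(candidates, 3)     # S3: three random matches
        pose = estimate_pose(sample)              # S3: hypothesis (R, t)
        score = evaluate(pose, candidates)        # S4: Score_Iterator
        if score > best_score:                    # S5: update Score
            best_score, best_pose = score, pose
    return best_pose, best_score
```

With trivial stand-in callables, the loop simply returns the highest-scoring hypothesis seen over the iterations.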
The technical scheme of the invention further provides that the specific process of obtaining the K candidate matches in S1 includes:
S11: establishing a three-dimensional voxel grid, calculating the center of gravity of each voxel, and taking the voxel center of gravity as the down-sampled value;
S12: calculating the correspondence of each match indexed in the scene point cloud and the target point cloud:
c(i) = argmin_{j ∈ {1, …, n}} ‖d_i^s − d_j^t‖   (1)
where d_i^s and d_j^t respectively denote the descriptors of the ith key point in the scene point cloud and the jth key point in the target point cloud, and n denotes the number of key points in the target point cloud;
S13: denoting the scene point cloud key points by P_s and the target point cloud key points by P_t, and calculating the matching set between the two point clouds according to the correspondence in formula (1) as C = {c_i}, where c_i = (p_i^s, p_{c(i)}^t) with p_i^s ∈ P_s and p_{c(i)}^t ∈ P_t;
S14: sorting the matching set by descriptor distance in ascending order and selecting the K matches with the smallest distances as the candidate matching set.
The technical scheme of the invention further provides that the initialization in S2 specifically sets the initial score Score to 0 and the iteration count Iterators to 1000.
The technical scheme of the invention further provides that the specific steps of generating a hypothesis in S3 include:
S31: generating 3 different random numbers in the range 0 to K−1 as indexes into the candidate matches;
S32: calculating the pose between the scene point cloud and the target point cloud from the three randomly selected matches:
p^t = R p^s + t   (2)
where R and t respectively denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud.
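The text does not spell out how (R, t) is solved from three correspondences; a standard choice for this step is the closed-form SVD (Kabsch) rigid alignment, sketched here as an assumption rather than the patent's prescribed solver:

```python
import numpy as np

def rigid_pose(src, dst):
    """Least-squares rigid transform (R, t) mapping src onto dst via the
    standard SVD (Kabsch) construction; src and dst are (3, 3) arrays
    holding the three sampled correspondences as rows."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # reflection guard
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T       # proper rotation, det(R) = +1
    t = cd - R @ cs
    return R, t
```

Given exact correspondences the recovered pose reproduces the target points; with noisy matches it is the least-squares fit, which is why RANSAC re-scores each hypothesis.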
The technical scheme of the invention further provides that the specific steps of evaluating the hypothesis in S4 include:
S41: traversing each match in the candidate matching set and calculating the position of the scene key point of each match after transformation by the pose (R, t):
p̂_i = R p_i^s + t   (3)
S42: calculating the distance between each transformed scene key point and its corresponding target key point:
d_i = ‖p̂_i − p_{c(i)}^t‖   (4)
S43: calculating the score of each match under the pose (R, t):
score_i = f(d_i)   (5)
where t is a distance threshold on which f depends;
S44: calculating the total score of all matches under the pose (R, t):
Score = Σ_{i=1}^{n} score_i
where n denotes the number of matches.
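Steps S41–S44 reduce to a few vectorized lines. Here `f` is the per-match evaluation function supplied by the caller, and the function name is an assumption for illustration:

```python
import numpy as np

def hypothesis_score(R, t, src_kp, dst_kp, f):
    """S41-S44: transform the scene key point of every candidate match,
    measure the residual distance to its matched target key point, and
    sum the per-match scores f(d_i)."""
    transformed = src_kp @ R.T + t                     # S41: apply pose
    d = np.linalg.norm(transformed - dst_kp, axis=1)   # S42: residual distances
    return float(np.sum(f(d)))                         # S43-S44: Score = sum f(d_i)
```

Because only the matched key points are scored (not the full clouds), each hypothesis evaluation is cheap, which is the time-complexity advantage claimed later in the text.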
The technical scheme of the invention further provides that t in S43 is set to 7.5 times the point cloud resolution.
The technical scheme of the invention further provides that the f function in S43 is one of 6 evaluation functions, as follows:
Score-MAE borrows the 1-norm (MAE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-MSE borrows the 2-norm (MSE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-LOG-COSH borrows the log-cosh loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-QUANTILE borrows the quantile loss function from machine-learning regression, with m set to 0.9 in the invention, meaning that matches within the distance threshold contribute more while points beyond the threshold contribute less.
Score(-QUANTILE) also borrows the quantile loss function with m = 0.9 but, unlike equation (9), penalizes matches whose distance exceeds the threshold with a negative contribution.
Score-EXP is chosen by combining the properties of the exponential function.
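The exact formulas of the six measures are not reproduced in this text. Assuming each is a bounded function of the normalized residual r = d/t, one plausible reading consistent with the stated 0-to-1 ranges and the m = 0.9 quantile weighting is sketched below; these specific forms are this sketch's assumption, not the patent's definitions:

```python
import numpy as np

# r = d / t, the residual distance normalized by the distance threshold.
def score_mae(r):            # 1-norm style: linear decay, clipped to [0, 1]
    return np.clip(1.0 - r, 0.0, 1.0)

def score_mse(r):            # 2-norm style: quadratic decay, clipped to [0, 1]
    return np.clip(1.0 - r ** 2, 0.0, 1.0)

def score_log_cosh(r):       # log-cosh style decay, clipped to [0, 1]
    return np.clip(1.0 - np.log(np.cosh(r)), 0.0, 1.0)

def score_quantile(r, m=0.9):      # inliers weighted by m, outliers by 1 - m
    return np.where(r <= 1.0, m, 1.0 - m)

def score_neg_quantile(r, m=0.9):  # like the above, but outliers count negatively
    return np.where(r <= 1.0, m, -(1.0 - m))

def score_exp(r):            # exponential decay in the normalized residual
    return np.exp(-r)
```

Whatever their exact form, all six reward small residuals and differ only in how sharply they discount (or, for the negative-quantile variant, penalize) residuals beyond the threshold.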
advantageous effects
The three-dimensional point cloud registration method based on RANSAC measure optimization provided by the invention has the following beneficial effects:
1. The method of the technical scheme has low time complexity. Because the evaluation function performs well, the invention only needs to evaluate the point clouds formed by the source and target key points, rather than the global point clouds used by the traditional RANSAC algorithm; this greatly reduces the number of points involved, lowers the complexity of subsequent operations, and thus improves the running speed of the algorithm.
2. The method of the technical scheme has high accuracy. With the method, after the finally estimated pose parameters are applied to the scene point cloud data, the degree of coincidence and overlap with the target point cloud is high, so the registration precision is high.
3. The method of the technical scheme is strongly robust. Various kinds of interference that may occur when sampling real-world data, such as Gaussian noise, down-sampling, and holes, are added to the target point cloud data to simulate real-world data acquisition, and pose estimation between the scene point cloud and the target point cloud still succeeds.
Drawings
FIG. 1 is a schematic flow chart of an embodiment of the present invention;
FIG. 2 shows a target point cloud before and after the addition of interference;
fig. 3 is a RANSAC overall flow diagram;
FIG. 4 is a graph of the registration results of point clouds at different viewing angles;
fig. 5 is a diagram of the registration result of the scene point cloud and the target point cloud.
Detailed Description
The invention will now be further described with reference to the following examples and drawings:
the flow of the three-dimensional point cloud registration algorithm based on RANSAC measure optimization provided by the embodiment of the technical scheme of the invention is shown in figure 1, and the method comprises the steps of reading point cloud data from a database; adding noise to the point cloud data; calculating a matching set of the point cloud to obtain a candidate matching set; performing iterative random sampling on the matching set, and calculating a pose matrix between the scene point cloud and the target point cloud; the evaluation algorithm was performed for both time-consuming and RMSE. The three-dimensional point cloud registration algorithm based on RANSAC measure optimization provided by the invention is specifically explained below by combining an example.
(1) Down-sample the input scene and target point clouds, and add different types and levels of interference to the target point cloud data.
Since the initially obtained point cloud data is dense and contains many points, this embodiment down-samples the initial point clouds to reduce the time overhead of the experiments. Specifically, a three-dimensional voxel grid is established, the center of gravity of each voxel is calculated, and the voxel center of gravity is taken as the down-sampled value.
To verify the robustness of the algorithm, noise interference is added to the target point cloud data. FIG. 2 shows the original image of one target point cloud and images with different kinds of interference.
(2) Compute the matching set of the scene and target point clouds to obtain a candidate matching set. To obtain the candidate matching set, the key points, normal vectors, and descriptors of the point clouds are computed first; the invention uses ISS3D key points and SHOT descriptors. The matching set is computed as follows. First, compute the correspondence of each match indexed in the scene and target point clouds:
c(i) = argmin_{j ∈ {1, …, n}} ‖d_i^s − d_j^t‖   (1)
where d_i^s and d_j^t respectively denote the descriptors of the ith key point in the scene point cloud and the jth key point in the target point cloud, and n denotes the number of key points in the target point cloud. Second, denote the scene point cloud key points by P_s and the target point cloud key points by P_t; according to the correspondence in formula (1), the matching set between the two point clouds is C = {c_i}, where c_i = (p_i^s, p_{c(i)}^t) with p_i^s ∈ P_s and p_{c(i)}^t ∈ P_t. Finally, sort the matching set by descriptor distance in ascending order and select the K matches with the smallest distances as the candidate matching set. Here K is an experimental parameter, set to 300 in the invention.
(3) Perform RANSAC random iterative sampling on the matching set. The overall RANSAC flow is shown in FIG. 3; the main steps are random sampling of the matching set, hypothesis generation, and hypothesis evaluation.
(3.1) Random sampling. In each iteration, 3 matches are randomly selected from the matching set; specifically, 3 different random numbers in the range 0 to K−1 are generated and used as indexes into the candidate matches.
(3.2) Hypothesis generation. The pose between the scene point cloud and the target point cloud is calculated from the three randomly selected matches:
p^t = R p^s + t   (2)
where R and t respectively denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud.
(3.3) Hypothesis evaluation. Hypothesis evaluation scores the current pose to measure its quality. The specific steps are as follows. First, traverse each match in the candidate matching set and compute the position of the scene key point of each match after transformation by the pose (R, t):
p̂_i = R p_i^s + t   (3)
Second, compute the distance between each transformed scene key point and its corresponding target key point:
d_i = ‖p̂_i − p_{c(i)}^t‖   (4)
Then compute the score of each match under the pose (R, t):
score_i = f(d_i)   (5)
where t is the distance threshold, set in the invention to 7.5 times the point cloud resolution, and f is one of the 6 evaluation functions proposed by the invention, as follows.
Score(MAE) borrows the 1-norm (MAE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score(MSE) borrows the 2-norm (MSE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score(LOG-COSH) borrows the log-cosh loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score(QUANTILE) borrows the quantile loss function from machine-learning regression, with m set to 0.9 in the invention, meaning that matches within the distance threshold contribute more while points beyond the threshold contribute less.
Score(-QUANTILE) also borrows the quantile loss function with m = 0.9 but, unlike function (9), penalizes matches whose distance exceeds the threshold with a negative contribution.
Score(EXP) is chosen by combining the properties of the exponential function.
Finally, compute the total score of all matches under the pose (R, t):
Score = Σ_{i=1}^{n} score_i
where n denotes the number of matches.
(4) Evaluate the quality of the algorithm. The invention uses the Euclidean-distance root mean square error (RMSE) to evaluate the quality of the algorithm. Specifically,
RMSE = sqrt( (1/N) Σ_{i=1}^{N} ‖p̂_i − q_i‖² )
where N denotes the number of points in the scene point cloud, and p̂_i and q_i respectively denote a point of the scene point cloud after the calculated pose change and the point of the target point cloud closest to it.
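The RMSE evaluation can be sketched with a brute-force nearest-neighbor search; this is illustrative only, and a real implementation would use a k-d tree for the closest-point query:

```python
import numpy as np

def registration_rmse(scene, target, R, t):
    """Euclidean-distance RMSE: transform the scene cloud by the estimated
    pose, pair each transformed point with its nearest target point, and
    take the root mean square of those pairwise distances."""
    moved = scene @ R.T + t
    # distance from each transformed scene point to its nearest target point
    d = np.linalg.norm(moved[:, None, :] - target[None, :, :], axis=2).min(axis=1)
    return float(np.sqrt(np.mean(d ** 2)))
```

A perfect registration of identical clouds yields an RMSE of zero; residual misalignment shows up directly in the metric.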
The algorithm of the invention has been applied to actual point cloud registration; the results are shown in FIGS. 4 and 5, and the registration accuracy is very high both for point clouds at different viewing angles and between scene and target point clouds.
The method of this embodiment can be applied to registration tasks in robotics.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.
Claims (7)
1. A three-dimensional point cloud registration method based on RANSAC measure optimization is characterized by comprising the following steps:
S1: down-sampling the input scene point cloud and target point cloud, calculating the matching set between scene and target from the key points and descriptors of the down-sampled point clouds, then sorting the matching set and screening out K candidate matches;
S2: setting an initial pose score Score and an initial iteration count Iterators;
S3: randomly selecting three matches from the candidate matches and generating a pose hypothesis from them;
S4: evaluating the generated hypothesis with the optimized measure and recording the evaluation score of the current iteration as Score_Iterator;
S5: updating Score according to the evaluation score, and repeating steps S3 and S4 until the number of iterations reaches Iterators.
2. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein the specific process of obtaining the K candidate matches in S1 comprises:
S11: establishing a three-dimensional voxel grid, calculating the center of gravity of each voxel, and taking the voxel center of gravity as the down-sampled value;
S12: calculating the correspondence of each match indexed in the scene point cloud and the target point cloud:
c(i) = argmin_{j ∈ {1, …, n}} ‖d_i^s − d_j^t‖   (1)
where d_i^s and d_j^t respectively denote the descriptors of the ith key point in the scene point cloud and the jth key point in the target point cloud, and n denotes the number of key points in the target point cloud;
S13: denoting the scene point cloud key points by P_s and the target point cloud key points by P_t, and calculating the matching set between the two point clouds according to the correspondence in formula (1) as C = {c_i}, where c_i = (p_i^s, p_{c(i)}^t) with p_i^s ∈ P_s and p_{c(i)}^t ∈ P_t.
3. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein the initialization in S2 specifically sets the initial score Score to 0 and the iteration count Iterators to 1000.
4. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein the specific steps of generating a hypothesis in S3 comprise:
S31: generating 3 different random numbers in the range 0 to K−1 as indexes into the candidate matches;
S32: calculating the pose between the scene point cloud and the target point cloud from the three randomly selected matches:
p^t = R p^s + t   (2)
where R and t respectively denote the rotation matrix and the translation vector between the scene point cloud and the target point cloud.
5. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein the specific steps of evaluating the hypothesis in S4 comprise:
S41: traversing each match in the candidate matching set and calculating the position of the scene key point of each match after transformation by the pose (R, t):
p̂_i = R p_i^s + t   (3)
S42: calculating the distance between each transformed scene key point and its corresponding target key point:
d_i = ‖p̂_i − p_{c(i)}^t‖   (4)
S43: calculating the score of each match under the pose (R, t):
score_i = f(d_i)   (5)
where t is a distance threshold on which f depends;
S44: calculating the total score of all matches under the pose (R, t):
Score = Σ_{i=1}^{n} score_i
where n denotes the number of matches.
6. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein t in S43 is set to 7.5 times the point cloud resolution.
7. The RANSAC measure optimization-based three-dimensional point cloud registration method as claimed in claim 1, wherein the f function in S43 is one of 6 evaluation functions, as follows:
Score-MAE borrows the 1-norm (MAE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-MSE borrows the 2-norm (MSE) loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-LOG-COSH borrows the log-cosh loss function from machine-learning regression; the function takes values in the range 0 to 1.
Score-QUANTILE borrows the quantile loss function from machine-learning regression, with m set to 0.9 in the invention, meaning that matches within the distance threshold contribute more while points beyond the threshold contribute less.
Score(-QUANTILE) also borrows the quantile loss function with m = 0.9 but, unlike equation (9), penalizes matches whose distance exceeds the threshold with a negative contribution.
Score-EXP is chosen by combining the properties of the exponential function.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202011176071 | 2020-10-29 | | |
| CN2020111760719 | 2020-10-29 | | |

Publications (2)

| Publication Number | Publication Date |
|---|---|
| CN112652003A (application) | 2021-04-13 |
| CN112652003B (grant) | 2024-04-12 |
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| CN202110016027.XA (Active; granted as CN112652003B) | Three-dimensional point cloud registration method based on RANSAC measure optimization | 2020-10-29 | 2021-01-07 |