CN113450269A - Point cloud key point extraction method based on 3D vision - Google Patents

Point cloud key point extraction method based on 3D vision

Info

Publication number
CN113450269A
Authority
CN
China
Prior art keywords
point, point cloud, feature, key, points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110567048.0A
Other languages
Chinese (zh)
Inventor
段晋军
伍春宇
戴振东
刘正权
宾一鸣
李炳锐
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Aeronautics and Astronautics
Original Assignee
Nanjing University of Aeronautics and Astronautics
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Aeronautics and Astronautics filed Critical Nanjing University of Aeronautics and Astronautics
Priority to CN202110567048.0A
Publication of CN113450269A
Legal status: Pending

Classifications

    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/243Classification techniques relating to the number of classes
    • G06F18/2433Single-class perspective, e.g. one-against-all classification; Novelty detection; Outlier detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds

Abstract

The invention discloses a point cloud key point extraction method based on 3D vision, belonging to the technical field of 3D vision. A point cloud image and point cloud data are obtained by synthesizing a depth map and a color map; statistical filtering is applied, the neighborhood of each point in the point cloud is statistically analyzed, and outliers are removed; the feature degree of each point in the point cloud is calculated, feature points are determined according to the feature degrees, and a fast feature point histogram of the target point cloud key points is established; finally, a coordinate transformation matrix between the source point cloud key points and the target point cloud key points is computed. The method is scientific, reasonable, and safe and convenient to use. Statistically filtering the point cloud and removing outliers eliminates noise points irrelevant to the component point cloud, which helps reduce the time and space complexity of the algorithm. Source and target point cloud key points are selected by feature degree and fast feature point histograms of the key points are established, so little data is needed for feature calculation, computational efficiency and precision are improved, and noise resistance is strong.

Description

Point cloud key point extraction method based on 3D vision
Technical Field
The invention relates to the technical field of 3D vision, in particular to a point cloud key point extraction method based on 3D vision.
Background
For some complex heterogeneous parts, traditional inspection relies on manual comparison against templates, molds, and the like; the evaluation precision depends on the experience of the technician, and no accurate digital description can be obtained. With the rapid development of 3D vision technology, high-precision, high-density point cloud data can now be acquired very quickly. By analyzing and processing the point cloud data of the part surface and extracting the part's features, quantitative description of part quality and reverse three-dimensional reconstruction of the features can be achieved.
Research on point cloud feature point extraction and feature line fitting focuses mainly on two approaches: feature extraction based on mesh models and feature extraction based on scattered point clouds. Feature line extraction from mesh models can be divided into automatic and semi-automatic extraction: typically, points of abrupt curvature change on the mesh are found as feature points, and these discrete features are then fitted into feature lines. Unlike mesh models, scattered point clouds carry no topological information. At present there is relatively little research on extracting features directly from scattered point clouds; existing methods extract key points with high repetition, cannot guarantee key point accuracy, and have low extraction efficiency. Therefore, a 3D-vision-based point cloud key point extraction method is needed to solve the above problems.
Disclosure of Invention
The purpose of the invention is as follows: the invention aims to provide a point cloud key point extraction method based on 3D vision to solve the problem of low key point extraction efficiency in the prior art.
The technical scheme is as follows: the invention provides a point cloud key point extraction method based on 3D vision, comprising the following steps:
S10, shooting with a depth camera to obtain depth map and color map images of the complex heterogeneous piece, and performing coordinate conversion with a formula; synthesizing the depth map and the color map to obtain a point cloud image and point cloud data;
S20, performing statistical filtering: the neighborhood of each point in the point cloud is statistically analyzed, and the average distance from each point to all of its neighboring points is calculated; points whose average distance falls outside the standard range are defined as outliers and removed from the data;
S30, after the point cloud has been statistically filtered, calculating the degree of normal vector variation of each point in the point cloud and selecting points with large variation as feature points; this normal vector variation is the feature degree, and the larger the feature degree, the greater the surface relief of the component;
S40, calculating the SPFH value of each feature point, selecting one feature point as a source point cloud key point, pairing the source point cloud key point with its neighbors repeatedly, and generating the fast feature point histogram of the source point cloud key point by continuously varying the weights of the neighboring SPFH values; defining the feature points to be extracted from the complex heterogeneous piece as target point cloud key points and generating their fast feature point histogram; using the statistical characteristics of the histograms, establishing feature vectors describing the local geometric characteristics of the source and target point cloud key points;
S50, calculating the Euclidean distance between the source point cloud key point feature vectors and the target point cloud key point feature vectors; obtaining a source point cloud key point Kd-tree and a target point cloud key point Kd-tree from the feature point histograms of the source and target point cloud key points; and computing the coordinate transformation matrix between the source and target point cloud key points with a key point cloud registration method.
Preferably, in S10, the coordinate transformation formula is as follows:
$x_w = z_c \cdot (u - u_0) \cdot d_x / f$
where $x_w$ is the coordinate in the world coordinate system, $u$ is the pixel coordinate in the image, $u_0$ is the image center coordinate, $z_c$ is the z-axis value in the camera coordinate system (i.e., the distance of the target object from the camera), $d_x$, $d_y$, $d_z$ are the physical width and height on the imaging element corresponding to each pixel (the formula is applied analogously with $d_y$ and $d_z$ for the other axes), and $f$ is the camera focal length.
Preferably, in S30, the feature degree calculation formula is as follows:
$$f_i = \frac{1}{k}\sum_{j=1}^{k}\theta_{ij}$$
where $\theta_{ij}$ is the angle between the normal vector of point $i$ and that of its neighboring point $j$, $f_i$ is the feature value of a point in the point cloud, and $k$ is the number of neighboring points.
Preferably, the S40 includes the following steps:
S4001, taking a feature point as a source point cloud key point, calculating the angle differences after normal decomposition, and defining these as the SPFH;
s4002, calculating a final value of the fast feature point histogram by using the weighted adjacent SPFH:
$$\mathrm{FPFH}(P_q) = \mathrm{SPFH}(P_q) + \frac{1}{k}\sum_{i=1}^{k}\frac{1}{w_i}\,\mathrm{SPFH}(P_i)$$
where $w_i$ is the weight, i.e., the distance between the key point $P_q$ and its neighboring point $P_i$; $\mathrm{FPFH}(P_q)$ is the fast feature histogram of the key point; SPFH is the simplified point feature histogram; and $k$ is the number of neighboring points.
Preferably, in S50, the euclidean distance calculation formula is as follows:
$$S_i = (S_{i1}, S_{i2}, \ldots, S_{i120})$$
$$T_j = (T_{j1}, T_{j2}, \ldots, T_{j120})$$
$$d(S_i, T_j) = \sqrt{\sum_{p=1}^{120}\left(S_{ip} - T_{jp}\right)^2}$$
where $S_i$ is a source point cloud key point feature vector, $T_j$ is a target point cloud key point feature vector, $d(S_i, T_j)$ is the Euclidean distance between the two feature vectors, and $p$ indexes the $p$-th component of the key point feature vector.
Preferably, in S50, calculating the coordinate transformation matrix between the source point cloud key points and the target point cloud key points comprises the following steps:
S5001, searching the Kd-tree of the target point cloud key points for the nearest neighbor of each source point cloud key point and its distance; if the distance is less than a fixed threshold, the point pair is added to a first pre-correspondence set;
S5002, searching the Kd-tree of the source point cloud key points for the nearest neighbor of each target point cloud key point and its distance; if the distance is less than a fixed threshold, the point pair is added to a second pre-correspondence set;
S5003, taking the intersection of the first and second pre-correspondence sets, i.e., the correspondences common to both sets, as the initial correspondences;
S5004, according to the initial correspondences, using random sample consensus (RANSAC) to eliminate erroneous correspondences: three groups of correspondences are randomly selected from the obtained set, a coordinate transformation matrix from the source point cloud key points to the target point cloud key points is computed from these three groups, and for every correspondence the distance deviation between the coordinate-transformed source key point and its corresponding point in the target point cloud is calculated; if the deviation is less than a set threshold, the correspondence belongs to the samples inside the model (inliers), otherwise to the samples outside the model;
S5005, storing all inlier samples and repeating the above process, counting how many times each group of correspondences is an inlier, until the number of iterations reaches a set value (which depends on the desired precision; the larger the value, the higher the precision); the iteration then ends with a final correspondence set, the three groups of correspondences with the largest inlier counts are found, and the coordinate transformation matrix between the source and target point cloud key points is computed by singular value decomposition.
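For orientation, here is a minimal end-to-end sketch of steps S10-S50 in Python, using the open-source Open3D library as a stand-in for the PCL-based flow of the embodiment below; every parameter value (neighbor counts, search radii, thresholds, iteration counts) is an illustrative assumption, not a value prescribed by this method.

```python
# Hypothetical end-to-end sketch of S10-S50 (Open3D used for brevity;
# the embodiment itself references PCL). All parameters are assumptions.
import open3d as o3d

def preprocess(pcd):
    # S20: statistical filtering -- analyze each point's neighborhood and drop
    # points whose mean neighbor distance falls outside the Gaussian range.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=50, std_ratio=1.0)
    # Normals are needed for the feature degree (S30) and for FPFH (S40).
    pcd.estimate_normals(o3d.geometry.KDTreeSearchParamHybrid(radius=0.01, max_nn=50))
    return pcd

def fpfh(pcd):
    # S40: fast point feature histograms (keypoint selection by feature degree
    # would restrict this to the selected feature points only).
    return o3d.pipelines.registration.compute_fpfh_feature(
        pcd, o3d.geometry.KDTreeSearchParamHybrid(radius=0.025, max_nn=50))

def register(source, target):
    source, target = preprocess(source), preprocess(target)
    s_feat, t_feat = fpfh(source), fpfh(target)
    # S50: mutual nearest-neighbor pre-correspondences plus RANSAC over
    # 3-correspondence samples; the transform is estimated by SVD internally.
    result = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        source, target, s_feat, t_feat,
        True,   # mutual_filter: intersection of the two pre-correspondence sets
        0.02,   # maximum correspondence distance (fixed threshold)
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3,      # three groups of correspondences per RANSAC sample
        [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    return result.transformation  # 4x4 coordinate transformation matrix
```

`TransformationEstimationPointToPoint` solves the rigid transform by singular value decomposition, mirroring S5005.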
Beneficial effects: the method removes noise points irrelevant to the component point cloud by statistically filtering the point cloud and removing outliers, which helps reduce the time and space complexity of the algorithm; source and target point cloud key points are selected by feature degree and fast feature point histograms of the key points are established, so little data is needed for feature calculation, computational efficiency and precision are improved, and noise resistance is strong; in addition, searching for key points with the largest mean neighborhood curvature strengthens the algorithm's noise resistance and reduces repeated key points at the same position.
Drawings
FIG. 1 is a general flowchart of a method for extracting a point cloud key point based on 3D vision according to the present invention;
FIG. 2 is a schematic diagram of angle difference after normal decomposition of a point cloud key point extraction method based on 3D vision according to the present invention;
FIG. 3 is a point cloud diagram of a point cloud key point extraction method based on 3D vision according to the invention;
FIG. 4 is a filtered point cloud image of a 3D vision-based point cloud key point extraction method of the present invention;
FIG. 5 is a feature point histogram of the point cloud key point extraction method based on 3D vision of the present invention.
Detailed Description
The technical solution of the present invention is described in detail below with reference to the accompanying drawings, but the scope of the present invention is not limited to the embodiments.
Example 1: as shown in FIGS. 1-5, a point cloud key point extraction method based on 3D vision includes the following steps:
Step 10, shooting with a depth camera to obtain depth map and color map images of the complex heterogeneous piece, and performing coordinate conversion with a formula; the depth camera is an Intel RealSense; synthesizing the depth map and the color map to obtain a point cloud image and point cloud data;
Step 20, performing statistical filtering: the neighborhood of each point in the point cloud is statistically analyzed, and the average distance from each point to all of its neighboring points is calculated; assuming the result follows a Gaussian distribution whose shape is determined by the mean and standard deviation, points whose average distance falls outside the standard range are defined as outliers and removed from the data;
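As an illustration of this step, a minimal NumPy/SciPy sketch of the Gaussian outlier test follows; the neighbor count k and the standard-deviation multiplier n_std are assumed values, not ones fixed by the embodiment.

```python
# Minimal sketch of step 20 (statistical filtering): remove points whose mean
# neighbor distance lies outside mean + n_std * std of the global distribution.
import numpy as np
from scipy.spatial import cKDTree

def statistical_filter(points, k=50, n_std=1.0):
    tree = cKDTree(points)
    # k + 1 because each point's nearest neighbor is the point itself.
    dists, _ = tree.query(points, k=k + 1)
    mean_dist = dists[:, 1:].mean(axis=1)          # mean distance to k neighbors
    mu, sigma = mean_dist.mean(), mean_dist.std()  # Gaussian fit over all points
    keep = mean_dist < mu + n_std * sigma          # inside the standard range
    return points[keep]
```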
Step 30, after the point cloud has been statistically filtered, calculating the degree of normal vector variation of each point in the point cloud and selecting points with large variation as feature points; this normal vector variation is the feature degree, and the larger the feature degree, the greater the surface relief of the component;
Step 40, calculating the SPFH value of each feature point, selecting one feature point as a source point cloud key point, pairing the source point cloud key point with its neighbors repeatedly, and generating the fast feature point histogram of the source point cloud key point by continuously varying the weights of the neighboring SPFH values; defining the feature points to be extracted from the complex heterogeneous piece as target point cloud key points and generating their fast feature point histogram; using the statistical characteristics of the histograms, establishing feature vectors describing the local geometric characteristics of the source and target point cloud key points;
Step 50, calculating the Euclidean distance between the source point cloud key point feature vectors and the target point cloud key point feature vectors; obtaining a source point cloud key point Kd-tree and a target point cloud key point Kd-tree (built with the open-source algorithms of the PCL point cloud library; details are omitted here) from the feature point histograms of the source and target point cloud key points; and computing the coordinate transformation matrix between the source and target point cloud key points with a key point cloud registration method. In step 10, the coordinate conversion formula is as follows:
$x_w = z_c \cdot (u - u_0) \cdot d_x / f$
where $x_w$ is the coordinate in the world coordinate system, $u$ is the pixel coordinate in the image, $u_0$ is the image center coordinate, $z_c$ is the z-axis value in the camera coordinate system (i.e., the distance of the target object from the camera), $d_x$, $d_y$, $d_z$ are the physical width and height on the imaging element corresponding to each pixel (the formula is applied analogously with $d_y$ and $d_z$ for the other axes), and $f$ is the camera focal length.
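To make the conversion concrete, here is a small sketch of this back-projection, assuming the same form is applied per image axis; the intrinsic values (u0, v0, dx, dy, f) are placeholders, not calibration data from the embodiment.

```python
# Sketch of x_w = z_c * (u - u0) * d_x / f, applied per image axis.
# All intrinsics below are illustrative placeholders (units: mm).
def pixel_to_world(u, v, z_c, u0=320.0, v0=240.0, dx=0.003, dy=0.003, f=4.0):
    # z_c: depth of the pixel, i.e. distance of the target point from the camera
    x_w = z_c * (u - u0) * dx / f
    y_w = z_c * (v - v0) * dy / f   # same form with d_y on the vertical axis
    return x_w, y_w, z_c
```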
In step 30, the feature degree calculation formula is as follows:
$$f_i = \frac{1}{k}\sum_{j=1}^{k}\theta_{ij}$$
where $\theta_{ij}$ is the angle between the normal vector of point $i$ and that of its neighboring point $j$, $f_i$ is the feature value of a point in the point cloud, and $k$ is the number of neighboring points.
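A short NumPy sketch of this feature-degree computation, assuming unit normals have already been estimated for every point, might read:

```python
# Sketch of step 30: feature degree f_i = (1/k) * sum_j theta_ij, where
# theta_ij is the angle between the normals of point i and its neighbor j.
import numpy as np
from scipy.spatial import cKDTree

def feature_degree(points, normals, k=50):
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k + 1)
    nbr_normals = normals[idx[:, 1:]]     # (n, k, 3), the point itself excluded
    cosang = np.einsum('nij,nj->ni', nbr_normals, normals).clip(-1.0, 1.0)
    theta = np.arccos(cosang)             # normal vector angles theta_ij
    return theta.mean(axis=1)             # feature degree of every point

# Points whose feature degree exceeds a chosen threshold become feature points.
```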
Step 40 comprises the steps of:
Step 4001, taking a feature point as a source point cloud key point, calculating the angle differences after normal decomposition, and defining these as the SPFH;
the angle differences after normal decomposition are respectively alpha, phi and theta (see the attached figure 2), and the calculation formulas are respectively as follows: α ═ v × nt
Figure BDA0003081293940000052
Step 4002, calculate the final value of the fast feature point histogram using the weighted adjacent SPFH:
$$\mathrm{FPFH}(P_q) = \mathrm{SPFH}(P_q) + \frac{1}{k}\sum_{i=1}^{k}\frac{1}{w_i}\,\mathrm{SPFH}(P_i)$$
where $w_i$ is the weight, i.e., the distance between the key point $P_q$ and its neighboring point $P_i$; $\mathrm{FPFH}(P_q)$ is the fast feature histogram of the key point; SPFH is the simplified point feature histogram; and $k$ is the number of neighboring points. In this example, $k$ is chosen to be 50.
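The weighted combination in this formula can be sketched in a few lines of NumPy; spfh_q, spfh_nbrs, and dists are assumed inputs (the key point's SPFH histogram, its neighbors' histograms, and the neighbor distances), and computing the SPFH itself is not shown.

```python
# Sketch of FPFH(P_q) = SPFH(P_q) + (1/k) * sum_i (1/w_i) * SPFH(P_i),
# where the weight w_i is the distance between the key point and neighbor i.
import numpy as np

def fpfh_from_spfh(spfh_q, spfh_nbrs, dists):
    # spfh_q: (bins,) histogram of the key point
    # spfh_nbrs: (k, bins) histograms of its k neighbors
    # dists: (k,) distances w_i to those neighbors (the weights)
    k = len(dists)
    weighted = (spfh_nbrs / dists[:, None]).sum(axis=0) / k
    return spfh_q + weighted
```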
In step 50, the euclidean distance calculation formula is as follows:
$$S_i = (S_{i1}, S_{i2}, \ldots, S_{i120})$$
$$T_j = (T_{j1}, T_{j2}, \ldots, T_{j120})$$
$$d(S_i, T_j) = \sqrt{\sum_{p=1}^{120}\left(S_{ip} - T_{jp}\right)^2}$$
where $S_i$ is a source point cloud key point feature vector, $T_j$ is a target point cloud key point feature vector, $d(S_i, T_j)$ is the Euclidean distance between the two feature vectors, and $p$ indexes the $p$-th component of the key point feature vector.
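This feature-space matching, together with the mutual pre-correspondence construction described in the steps that follow, can be sketched with Kd-trees built over the 120-dimensional feature vectors; the 0.02 threshold follows the embodiment, the rest is an assumption.

```python
# Sketch of the Kd-tree matching in step 50: nearest neighbors in feature
# space by Euclidean distance d(S_i, T_j), then the mutual intersection
# (steps 5001-5003 below).
import numpy as np
from scipy.spatial import cKDTree

def mutual_correspondences(src_feats, tgt_feats, max_dist=0.02):
    src_tree, tgt_tree = cKDTree(src_feats), cKDTree(tgt_feats)
    d_st, j_st = tgt_tree.query(src_feats)  # nearest target for each source key point
    d_ts, i_ts = src_tree.query(tgt_feats)  # nearest source for each target key point
    set1 = {(i, int(j)) for i, (d, j) in enumerate(zip(d_st, j_st)) if d < max_dist}
    set2 = {(int(i), j) for j, (d, i) in enumerate(zip(d_ts, i_ts)) if d < max_dist}
    return sorted(set1 & set2)              # intersection = initial correspondences
```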
In step 50, calculating the coordinate transformation matrix between the source point cloud key points and the target point cloud key points comprises the following steps:
Step 5001, searching the Kd-tree of the target point cloud key points for the nearest neighbor of each source point cloud key point and its distance (computed with the open-source algorithms of PCL, the Point Cloud Library); if the distance is less than a fixed threshold (here 0.02), the point pair is added to the first pre-correspondence set;
Step 5002, searching the Kd-tree of the source point cloud key points for the nearest neighbor of each target point cloud key point and its distance (likewise computed with PCL); if the distance is less than the fixed threshold (0.02), the point pair is added to the second pre-correspondence set;
Step 5003, taking the intersection of the first and second pre-correspondence sets, i.e., the correspondences common to both sets, as the initial correspondences;
Step 5004, according to the initial correspondences, using random sample consensus (RANSAC) to eliminate erroneous correspondences: three groups of correspondences are randomly selected from the obtained set, a coordinate transformation matrix from the source point cloud key points to the target point cloud key points is computed from these three groups, and for every correspondence the distance deviation between the coordinate-transformed source key point and its corresponding point in the target point cloud is calculated; if the deviation is less than a set threshold, the correspondence belongs to the samples inside the model (inliers), otherwise to the samples outside the model;
Step 5005, storing all inlier samples and repeating the above process, counting how many times each group of correspondences is an inlier, until the number of iterations reaches a set value (which depends on the desired precision; the larger the value, the higher the precision); the iteration then ends with a final correspondence set, the three groups of correspondences with the largest inlier counts are found, and the coordinate transformation matrix between the source and target point cloud key points is computed by singular value decomposition. The transformation matrix is estimated by singular value decomposition (SVD), which is more accurate than QR decomposition: the matrix obtained by SVD minimizes the squared error, so the coordinate transformation matrix between the source and target point cloud key points, and hence the result, is more accurate.
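A compact sketch of this RANSAC-plus-SVD estimation follows; corr is an integer array of (source index, target index) pairs such as the mutual set built above, and the inlier threshold and iteration count are illustrative assumptions.

```python
# Sketch of steps 5004-5005: RANSAC over samples of three correspondences,
# with the rigid transform estimated by SVD (Arun/Umeyama least squares).
import numpy as np

def svd_transform(src, tgt):
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    U, _, Vt = np.linalg.svd((src - cs).T @ (tgt - ct))
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:        # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, ct - R @ cs           # rotation and translation, src -> tgt

def ransac_transform(src_kp, tgt_kp, corr, thresh=0.01, iters=1000):
    best = np.empty((0, 2), dtype=int)
    for _ in range(iters):
        sample = corr[np.random.choice(len(corr), 3, replace=False)]
        R, t = svd_transform(src_kp[sample[:, 0]], tgt_kp[sample[:, 1]])
        # Deviation of every correspondence after the coordinate transformation.
        dev = np.linalg.norm(src_kp[corr[:, 0]] @ R.T + t - tgt_kp[corr[:, 1]], axis=1)
        inliers = corr[dev < thresh]        # samples inside the model
        if len(inliers) > len(best):
            best = inliers
    # Final transform from the largest inlier set, again by SVD.
    return svd_transform(src_kp[best[:, 0]], tgt_kp[best[:, 1]])

# Usage (hypothetical): corr = np.array(mutual_correspondences(s_fpfh, t_fpfh))
```

The determinant check keeps the SVD solution a proper rotation, which is the usual treatment and consistent with the minimum-squared-error property noted above.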
As above, while the invention has been shown and described with reference to certain preferred embodiments, it is not to be construed as limited thereto. Various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (6)

1. A point cloud key point extraction method based on 3D vision is characterized by comprising the following steps:
S10, shooting with a depth camera to obtain depth map and color map images of the complex heterogeneous piece, and performing coordinate conversion with a formula; synthesizing the depth map and the color map to obtain a point cloud image and point cloud data;
S20, performing statistical filtering: the neighborhood of each point in the point cloud is statistically analyzed, and the average distance from each point to all of its neighboring points is calculated; points whose average distance falls outside the standard range are defined as outliers and removed from the data;
S30, after the point cloud has been statistically filtered, calculating the degree of normal vector variation of each point in the point cloud and selecting points with large variation as feature points; this normal vector variation is the feature degree, and the larger the feature degree, the greater the surface relief of the component;
S40, calculating the SPFH value of each feature point, selecting one feature point as a source point cloud key point, pairing the source point cloud key point with its neighbors repeatedly, and generating the fast feature point histogram of the source point cloud key point by continuously varying the weights of the neighboring SPFH values; defining the feature points to be extracted from the complex heterogeneous piece as target point cloud key points and generating their fast feature point histogram; using the statistical characteristics of the histograms, establishing feature vectors describing the local geometric characteristics of the source and target point cloud key points;
S50, calculating the Euclidean distance between the source point cloud key point feature vectors and the target point cloud key point feature vectors; obtaining a source point cloud key point Kd-tree and a target point cloud key point Kd-tree from the feature point histograms of the source and target point cloud key points; and computing the coordinate transformation matrix between the source and target point cloud key points with a key point cloud registration method.
2. The method for extracting point cloud key points based on 3D vision according to claim 1, wherein in S10, the coordinate transformation formula is as follows:
$x_w = z_c \cdot (u - u_0) \cdot d_x / f$
where $x_w$ is the coordinate in the world coordinate system, $u$ is the pixel coordinate in the image, $u_0$ is the image center coordinate, $z_c$ is the z-axis value in the camera coordinate system (i.e., the distance of the target object from the camera), $d_x$, $d_y$, $d_z$ are the physical width and height on the imaging element corresponding to each pixel (the formula is applied analogously with $d_y$ and $d_z$ for the other axes), and $f$ is the camera focal length.
3. The method for extracting point cloud key points based on 3D vision according to claim 1, wherein in S30, the feature degree calculation formula is as follows:
$$f_i = \frac{1}{k}\sum_{j=1}^{k}\theta_{ij}$$
where $\theta_{ij}$ is the angle between the normal vector of point $i$ and that of its neighboring point $j$, $f_i$ is the feature value of a point in the point cloud, and $k$ is the number of neighboring points.
4. The method for extracting point cloud key points based on 3D vision according to claim 1, wherein the step S40 includes the following steps:
S4001, taking a feature point as a source point cloud key point, calculating the angle differences after normal decomposition, and defining these as the SPFH;
s4002, calculating a final value of the fast feature point histogram by using the weighted adjacent SPFH:
$$\mathrm{FPFH}(P_q) = \mathrm{SPFH}(P_q) + \frac{1}{k}\sum_{i=1}^{k}\frac{1}{w_i}\,\mathrm{SPFH}(P_i)$$
where $w_i$ is the weight, i.e., the distance between the key point $P_q$ and its neighboring point $P_i$; $\mathrm{FPFH}(P_q)$ is the fast feature histogram of the key point; SPFH is the simplified point feature histogram; and $k$ is the number of neighboring points.
5. The method for extracting point cloud key points based on 3D vision according to claim 1, wherein in S50, the euclidean distance calculation formula is as follows:
$$S_i = (S_{i1}, S_{i2}, \ldots, S_{i120})$$
$$T_j = (T_{j1}, T_{j2}, \ldots, T_{j120})$$
$$d(S_i, T_j) = \sqrt{\sum_{p=1}^{120}\left(S_{ip} - T_{jp}\right)^2}$$
where $S_i$ is a source point cloud key point feature vector, $T_j$ is a target point cloud key point feature vector, $d(S_i, T_j)$ is the Euclidean distance between the two feature vectors, and $p$ indexes the $p$-th component of the key point feature vector.
6. The method of claim 5, wherein in S50 the calculation of the coordinate transformation matrix between the source point cloud key points and the target point cloud key points comprises the following steps:
S5001, searching the Kd-tree of the target point cloud key points for the nearest neighbor of each source point cloud key point and its distance; if the distance is less than a fixed threshold, the point pair is added to a first pre-correspondence set;
S5002, searching the Kd-tree of the source point cloud key points for the nearest neighbor of each target point cloud key point and its distance; if the distance is less than a fixed threshold, the point pair is added to a second pre-correspondence set;
S5003, taking the intersection of the first and second pre-correspondence sets, i.e., the correspondences common to both sets, as the initial correspondences;
S5004, according to the initial correspondences, using random sample consensus (RANSAC) to eliminate erroneous correspondences: three groups of correspondences are randomly selected from the obtained set, a coordinate transformation matrix from the source point cloud key points to the target point cloud key points is computed from these three groups, and for every correspondence the distance deviation between the coordinate-transformed source key point and its corresponding point in the target point cloud is calculated; if the deviation is less than a set threshold, the correspondence belongs to the samples inside the model (inliers), otherwise to the samples outside the model;
S5005, storing all inlier samples and repeating the above process, counting how many times each group of correspondences is an inlier, until the number of iterations reaches a set value (which depends on the desired precision; the larger the value, the higher the precision); the iteration then ends with a final correspondence set, the three groups of correspondences with the largest inlier counts are found, and the coordinate transformation matrix between the source and target point cloud key points is computed by singular value decomposition.
CN202110567048.0A 2021-05-24 2021-05-24 Point cloud key point extraction method based on 3D vision Pending CN113450269A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110567048.0A CN113450269A (en) 2021-05-24 2021-05-24 Point cloud key point extraction method based on 3D vision

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110567048.0A CN113450269A (en) 2021-05-24 2021-05-24 Point cloud key point extraction method based on 3D vision

Publications (1)

Publication Number Publication Date
CN113450269A true CN113450269A (en) 2021-09-28

Family

ID=77810141

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110567048.0A Pending CN113450269A (en) 2021-05-24 2021-05-24 Point cloud key point extraction method based on 3D vision

Country Status (1)

Country Link
CN (1) CN113450269A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115338874A (en) * 2022-10-19 2022-11-15 爱夫迪(沈阳)自动化科技有限公司 Laser radar-based robot real-time control method
CN115797423A (en) * 2022-12-07 2023-03-14 中国矿业大学(北京) Registration method and system for optimizing near iteration of mine point cloud based on descriptor characteristics
CN116563561A (en) * 2023-07-06 2023-08-08 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143210A (en) * 2014-07-31 2014-11-12 哈尔滨工程大学 Multi-scale normal feature point cloud registering method
CN105046694A (en) * 2015-07-02 2015-11-11 哈尔滨工程大学 Quick point cloud registration method based on curved surface fitting coefficient features
CN109887015A (en) * 2019-03-08 2019-06-14 哈尔滨工程大学 A kind of point cloud autoegistration method based on local surface feature histogram
CN110490912A (en) * 2019-07-17 2019-11-22 哈尔滨工程大学 3D-RGB point cloud registration method based on local gray level sequence model descriptor

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104143210A (en) * 2014-07-31 2014-11-12 哈尔滨工程大学 Multi-scale normal feature point cloud registering method
CN105046694A (en) * 2015-07-02 2015-11-11 哈尔滨工程大学 Quick point cloud registration method based on curved surface fitting coefficient features
CN109887015A (en) * 2019-03-08 2019-06-14 哈尔滨工程大学 A kind of point cloud autoegistration method based on local surface feature histogram
CN110490912A (en) * 2019-07-17 2019-11-22 哈尔滨工程大学 3D-RGB point cloud registration method based on local gray level sequence model descriptor

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
宋成航, "利用特征点采样一致性改进ICP算法点云配准方法" ["Improving ICP point cloud registration using feature-point sample consensus"], Beijing Surveying and Mapping (北京测绘), 25 March 2021 (2021-03-25), pages 317-322 *
王永信, 《逆向工程及检测技术与应用》 [Reverse Engineering and Inspection Technology and Applications], Xi'an: Xi'an Jiaotong University Press, 31 May 2014, pages 18-20 *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115338874A (en) * 2022-10-19 2022-11-15 爱夫迪(沈阳)自动化科技有限公司 Laser radar-based robot real-time control method
CN115338874B (en) * 2022-10-19 2023-01-03 爱夫迪(沈阳)自动化科技有限公司 Real-time robot control method based on laser radar
CN115797423A (en) * 2022-12-07 2023-03-14 中国矿业大学(北京) Registration method and system for optimizing near iteration of mine point cloud based on descriptor characteristics
CN116563561A (en) * 2023-07-06 2023-08-08 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium
CN116563561B (en) * 2023-07-06 2023-11-14 北京优脑银河科技有限公司 Point cloud feature extraction method, point cloud registration method and readable storage medium

Similar Documents

Publication Publication Date Title
CN110322453B (en) 3D point cloud semantic segmentation method based on position attention and auxiliary network
CN103729643B (en) The identification of three dimensional object in multi-mode scene and posture are determined
CN113450269A (en) Point cloud key point extraction method based on 3D vision
JP5677798B2 (en) 3D object recognition and position and orientation determination method in 3D scene
CN111079685B (en) 3D target detection method
CN107633226B (en) Human body motion tracking feature processing method
CN108303037B (en) Method and device for detecting workpiece surface shape difference based on point cloud analysis
CN110544233B (en) Depth image quality evaluation method based on face recognition application
CN104616278B (en) Three-dimensional point cloud interest point detection method and system
CN111028292B (en) Sub-pixel level image matching navigation positioning method
CN110070567B (en) Ground laser point cloud registration method
CN107133966B (en) Three-dimensional sonar image background segmentation method based on sampling consistency algorithm
CN107481274B (en) Robust reconstruction method of three-dimensional crop point cloud
CN104090972A (en) Image feature extraction and similarity measurement method used for three-dimensional city model retrieval
CN111027140B (en) Airplane standard part model rapid reconstruction method based on multi-view point cloud data
CN109993800A (en) A kind of detection method of workpiece size, device and storage medium
CN113470090A (en) Multi-solid-state laser radar external reference calibration method based on SIFT-SHOT characteristics
CN111091101B (en) High-precision pedestrian detection method, system and device based on one-step method
CN111524168B (en) Point cloud data registration method, system and device and computer storage medium
CN108921864A (en) A kind of Light stripes center extraction method and device
CN106257497B (en) Matching method and device for image homonymy points
CN107909018B (en) Stable multi-mode remote sensing image matching method and system
CN113888461A (en) Method, system and equipment for detecting defects of hardware parts based on deep learning
CN114067075A (en) Point cloud completion method and device based on generation of countermeasure network
CN114663373A (en) Point cloud registration method and device for detecting surface quality of part

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination