CN115830340B - Point cloud target identification method and device and storage medium - Google Patents

Point cloud target identification method and device and storage medium

Info

Publication number
CN115830340B
CN115830340B (application number CN202211433421.4A)
Authority
CN
China
Prior art keywords
point
key
feature
key points
point cloud
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202211433421.4A
Other languages
Chinese (zh)
Other versions
CN115830340A (en)
Inventor
李雪梅
王思鸥
王刚
武元昊
陈冲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Baicheng Normal University
Original Assignee
Baicheng Normal University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Baicheng Normal University filed Critical Baicheng Normal University
Priority to CN202211433421.4A
Publication of CN115830340A
Application granted
Publication of CN115830340B
Legal status: Active

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a point cloud target identification method and device and a storage medium, wherein the method comprises the following steps: acquiring target point cloud data; extracting key points of the point cloud data; constructing a 4DCBS feature description according to the key points, and calculating a 4DCBS descriptor of each feature point; and performing feature matching on the target key points according to the 4DCBS feature descriptors. The technical scheme of the invention solves the problem of low recognition accuracy for point cloud targets under noise interference, occlusion and changes in data resolution.

Description

Point cloud target identification method and device and storage medium
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a point cloud target identification method and device based on a 4DCBS feature descriptor and a storage medium.
Background
A three-dimensional point cloud is a visual representation, acquired by an intelligent sensor, of the geometric information of an object's surface, and three-dimensional point cloud processing is an important link in the field of three-dimensional vision. However, because the scenes acquired by intelligent sensors are complex, interference such as noise, changes in point cloud data resolution and target occlusion readily occurs. This degrades the local feature description and matching performance of the target, makes it difficult to separate the target from a complex background, and adversely affects subsequent processing of the point cloud target. Research on point cloud target identification methods based on feature matching has therefore become one of the research focuses in the field of three-dimensional vision.
Target feature description is key to tasks such as target identification, positioning and tracking; it encodes the spatial distribution and geometric information of the surface around the target feature points. Designing an excellent feature descriptor is the core of feature-matching-based target processing. Three-dimensional feature descriptors are an effective means of establishing correspondences between two three-dimensional point clouds in arbitrary orientations. Existing 3D feature descriptors fall mainly into two categories: global feature descriptors and local feature descriptors. Local feature descriptors, such as FPFH, SDASS and MDCS, are suitable for scenes with strong target occlusion and background interference. However, with the continuous development of point cloud sensors, the scale of acquired point cloud data keeps growing, which makes the local neighborhood computation for each key point very time-consuming and reduces the timeliness of the description.
Disclosure of Invention
The invention aims to solve the technical problem of low recognition accuracy for point cloud targets under noise interference, occlusion and changes in data resolution.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
a point cloud target identification method comprises the following steps:
Step S1, acquiring target point cloud data;
Step S2, extracting key points of the point cloud data;
Step S3, constructing a 4DCBS feature description according to the key points, and calculating a 4DCBS descriptor of each feature point;
Step S4, performing feature matching on the target key points according to the 4DCBS feature descriptors.
Preferably, in step S2, an ISS feature point extraction algorithm is used to extract key points P = {P_i | i ∈ N} from the point cloud data P_source, where P_i denotes a key point in the point cloud P_source and N is the number of key points in the point cloud.
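As an illustration only, the following sketch shows how the key point extraction of step S2 could be carried out with the ISS detector bundled in the Open3D library; the patent does not prescribe a particular implementation, and the radius parameters are placeholder assumptions that would normally be tied to the point cloud resolution.

```python
import numpy as np
import open3d as o3d

def extract_iss_keypoints(points: np.ndarray) -> np.ndarray:
    """Extract ISS key points from an (N, 3) array of XYZ coordinates."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    # The radii below are illustrative; in practice they are chosen as
    # small multiples of the average point spacing (point cloud resolution).
    keypoints = o3d.geometry.keypoint.compute_iss_keypoints(
        pcd, salient_radius=0.005, non_max_radius=0.005)
    return np.asarray(keypoints.points)
```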
Preferably, the step S3 specifically includes:
reconstructing a recentered three-dimensional surface patch from the neighborhood of each key point by removing the centroid, and removing the centroid from the key point to obtain the recentered key point P_i^{i-mean};
estimating an LRF matrix from the reconstructed surface patch and transforming the key point P_i^{i-mean} into a new 3D space;
calculating the Euclidean distances between the key point P_i^{i-mean} and the other key points;
finding the three nearest-neighbor key points of the key point P_i^{i-mean} according to the Euclidean distances;
letting the projection plane perpendicular to the viewpoint V be PP, the projection contour of the surface patch on PP being S_RPC(θ_si); casting N_l rays starting from the key point P_i, the intersection points of the rays with the contour RPC being the contour points P_RPC(θ_si);
calculating the Euclidean distance between the key point P_i and the contour point P_S_RPC(θ_si) as a contour sub-feature CS(θ_si);
combining the contour sub-features CS(θ_si) to form a contour feature CS and normalizing it;
and approximating the normalized contour features by binarization to represent the local features, obtaining the 4DCBS feature descriptor of each feature point.
Preferably, in step S3, the key point P_i^{i-mean} is transformed into the new 3D space by
P'_i = [LRF] · P_i^{i-mean},
where P'_i is the coordinates of the i-th key point in the new 3D space, [LRF] denotes the local reference frame matrix, i.e. [LRF] = [x^T y^T z^T]^T, and P_i^{i-mean} denotes the key point matrix.
Preferably, in step S3, finding the three nearest-neighbor key points of the key point P_i^{i-mean} specifically comprises:
letting point P_ik be a nearest-neighbor key point of the key point P_i^{i-mean}, and calculating the z-axis deviation angle of the LRF as
θ(P_i^{i-mean}, P_ik) = arccos(LRF_z(P_i^{i-mean}) · LRF_z(P_ik)), θ ∈ [0, π],
where θ(P_i^{i-mean}, P_ik) denotes the z-axis deviation angle between the key points P_i^{i-mean} and P_ik, LRF_z(P_i^{i-mean}) denotes the z-axis direction of the LRF of the key point P_i^{i-mean}, and LRF_z(P_ik) denotes the z-axis direction of the LRF of the key point P_ik.
The invention also provides a point cloud target identification device, which comprises:
the acquisition module is used for acquiring target point cloud data;
the extraction module is used for extracting key points of the point cloud data;
the construction module is used for constructing 4DCBS feature descriptions according to the key points and calculating 4DCBS descriptors of each feature point;
and the matching module is used for carrying out feature matching on the target key points according to the 4DCBS feature descriptors.
Preferably, the extraction module adopts an ISS feature point extraction algorithm to extract key points of the point cloud data.
The present invention also provides a storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a point cloud target identification method.
Based on the 4DCBS (4D-based projection contour binarization feature) feature description, the invention can effectively reduce the three-dimensional retrieval space, improves the speed of local feature description, and has good descriptiveness and robustness. Describing and matching targets with the 4DCBS feature descriptor improves the recognition of targets under noise interference, occlusion and changes in data resolution.
Drawings
FIG. 1 is a flow chart of the point cloud target identification method of the present invention;
FIG. 2 is a schematic diagram of recognition results according to an embodiment of the present invention.
Detailed Description
The following description of the embodiments of the present invention will be made clearly and completely with reference to the accompanying drawings, in which it is apparent that the embodiments described are only some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In order that the above-recited objects, features and advantages of the present invention will become more readily apparent, a more particular description of the invention will be rendered by reference to the appended drawings and appended detailed description.
Example 1:
As shown in FIG. 1, the present invention provides a point cloud target identification method, comprising:
Step S1, acquiring target point cloud data;
Step S2, extracting key points of the point cloud data;
Step S3, constructing a 4DCBS (4D-based projection contour binarization feature) feature description according to the key points, and calculating a 4DCBS descriptor of each feature point;
Step S4, performing feature matching on the target key points according to the 4DCBS feature descriptors.
As an implementation of the embodiment of the invention, in step S2 an ISS feature point extraction algorithm is used to extract key points P = {P_i | i ∈ N} from the point cloud data P_source, where P_i denotes a key point in the point cloud P_source and N is the number of key points in the point cloud.
As an implementation manner of the embodiment of the present invention, step S3 specifically includes:
(3-1) The surface patch of point P_i is composed of the m neighborhood points of P_i. The centroid point P_i-mean is removed from the surface points to reconstruct a recentered three-dimensional surface patch, and the centroid point P_i-mean is removed from the detected key point to obtain P_i^{i-mean}.
(3-2) An LRF matrix is estimated from the reconstructed surface patch, and the key point P_i^{i-mean} is transformed into a new 3D space by
P'_i = [LRF] · P_i^{i-mean},
where P'_i is the coordinates of the i-th key point in the new 3D space, [LRF] denotes the local reference frame matrix, i.e. [LRF] = [x^T y^T z^T]^T, and P_i^{i-mean} denotes the key point matrix.
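A minimal numpy sketch of steps (3-1) and (3-2), assuming the LRF is estimated from the covariance of the recentered surface patch by eigen-decomposition; the patent does not spell out the exact LRF estimation or sign-disambiguation procedure, so this is only one plausible construction.

```python
import numpy as np

def estimate_lrf(patch_centered: np.ndarray) -> np.ndarray:
    """Estimate a 3x3 LRF matrix [x^T y^T z^T]^T from a recentered
    (m, 3) surface patch via covariance eigen-decomposition."""
    cov = patch_centered.T @ patch_centered / len(patch_centered)
    eigvals, eigvecs = np.linalg.eigh(cov)      # eigenvalues in ascending order
    z_axis = eigvecs[:, 0]                      # direction of least variation
    x_axis = eigvecs[:, 2]                      # direction of most variation
    y_axis = np.cross(z_axis, x_axis)           # right-handed completion
    return np.vstack([x_axis, y_axis, z_axis])  # rows are the x, y, z axes

def transform_keypoint(lrf: np.ndarray, keypoint_centered: np.ndarray) -> np.ndarray:
    """Transform the recentered key point P_i^{i-mean} into the LRF space."""
    return lrf @ keypoint_centered
```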
(3-3) The Euclidean distances between the key point P_i^{i-mean} and the other key points are calculated.
(3-4) The three nearest-neighbor key points of the key point P_i^{i-mean} are found from the distances calculated in (3-3). Let point P_ik be a nearest-neighbor key point of P_i^{i-mean}; the z-axis deviation angle of the LRF is calculated as
θ(P_i^{i-mean}, P_ik) = arccos(LRF_z(P_i^{i-mean}) · LRF_z(P_ik)), θ ∈ [0, π],
where θ(P_i^{i-mean}, P_ik) denotes the z-axis deviation angle between the key points P_i^{i-mean} and P_ik, LRF_z(P_i^{i-mean}) denotes the z-axis direction of the LRF of P_i^{i-mean}, and LRF_z(P_ik) denotes the z-axis direction of the LRF of P_ik.
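A sketch of step (3-4): for every key point, the three nearest neighbors are located (here with a SciPy k-d tree, an implementation convenience not required by the patent) and the z-axis deviation angle of the equation above is evaluated for each of them.

```python
import numpy as np
from scipy.spatial import cKDTree

def z_axis_deviation_angles(keypoints: np.ndarray, lrf_z: np.ndarray) -> np.ndarray:
    """Return, for each key point, the z-axis deviation angles (radians)
    to its three nearest-neighbor key points.

    keypoints: (N, 3) key point coordinates
    lrf_z:     (N, 3) unit z-axis of each key point's LRF
    """
    tree = cKDTree(keypoints)
    _, idx = tree.query(keypoints, k=4)          # k=4: the first hit is the point itself
    neighbor_z = lrf_z[idx[:, 1:]]               # (N, 3, 3) z-axes of the 3 neighbors
    cosines = np.einsum('nd,nkd->nk', lrf_z, neighbor_z)
    return np.arccos(np.clip(cosines, -1.0, 1.0))  # theta in [0, pi]
```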
(3-5) Let the projection plane perpendicular to the viewpoint V be PP; the projection contour of the surface patch on the PP plane is S_RPC(θ_si). N_l rays are cast starting from the key point P_i, and the intersection points of the rays with the contour RPC are the contour points P_RPC(θ_si).
(3-6) The Euclidean distance between the key point P_i and the contour point P_S_RPC(θ_si) is calculated as the contour sub-feature CS(θ_si):
CS(θ_si) = ||P_i − P_S_RPC(θ_si)||.
(3-7) The contour sub-features under all rotation angles are combined to form the contour feature CS, and the feature is normalized:
CS = {CS(θ_s1), CS(θ_s2), ..., CS(θ_sNθ)}   (6)
where the rotation angles are θ = {θ_s1, θ_s2, ..., θ_si}, i ∈ N_θ, and the CS feature consists of N_θ × N_l floating point numbers.
(3-8) The contour feature obtained in (3-7) consists of a series of floating point numbers; it is approximated in binary with 0s and 1s to represent the local feature, and the more bits used, the more accurately the local feature is represented. The 4DCBS feature descriptor of each feature point is thus obtained.
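The following sketch condenses steps (3-6) to (3-8) for a single key point, assuming the contour points P_RPC(θ_si) returned by the ray/contour intersections have already been computed (that geometric step is not shown). The mean-value threshold and the one-bit-per-sub-feature depth are simplifying assumptions; the patent only states that a 0/1 approximation is used and that more bits give higher accuracy.

```python
import numpy as np

def build_4dcbs_descriptor(keypoint: np.ndarray,
                           contour_points: np.ndarray) -> np.ndarray:
    """Assemble a binarized contour descriptor for one key point.

    keypoint:       (3,) key point coordinates
    contour_points: (N_theta, N_l, 3) contour points P_RPC(theta_si), one row
                    per rotation angle and one column per ray.
    Returns an (N_theta * N_l,) array of 0/1 values.
    """
    # Contour sub-features CS(theta_si): Euclidean distance from the key
    # point to every contour point, as in step (3-6).
    cs = np.linalg.norm(contour_points - keypoint, axis=-1)   # (N_theta, N_l)
    # Normalize the combined contour feature, as in step (3-7).
    cs = (cs - cs.min()) / (np.ptp(cs) + 1e-12)
    # Binarize against the mean (one bit per sub-feature), as in step (3-8).
    return (cs >= cs.mean()).astype(np.uint8).ravel()
```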
As an implementation manner of the embodiment of the present invention, step S4 specifically includes:
(4-1) The scale factor λ = S_pr / T_pr between the scene point cloud resolution and the resolution of the target point cloud to be identified is calculated, the scaling factor λ' closest to λ is searched for in a model library, and the 4DCBS descriptor corresponding to λ' is found. Correct point pairs are searched for using the nearest neighbor distance ratio: let any point in the point cloud PC_a of the target to be identified be A_i = {A_i ∈ PC_a}, and let PC_b be the scene feature point cloud; the point B_i1 = {B_i1 ∈ PC_b} with the closest Euclidean distance to point A_i and the second-closest point B_i2 = {B_i2 ∈ PC_b} are searched for in PC_b. If the ratio of the distance to B_i1 to the distance to B_i2 is below the threshold, point A_i and point B_i1 form a corresponding point pair. The descriptor type of each correct point pair found is taken as a candidate target, and a candidate target list is established.
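A sketch of the nearest neighbor distance ratio test in step (4-1), with the scale-factor lookup in the model library omitted; the 0.8 ratio threshold is an illustrative assumption, since the patent leaves the exact threshold unspecified here.

```python
import numpy as np
from scipy.spatial import cKDTree

def nndr_correspondences(desc_a: np.ndarray, desc_b: np.ndarray,
                         ratio: float = 0.8) -> list:
    """Match descriptors of the target point cloud PC_a against the scene
    point cloud PC_b using the nearest neighbor distance ratio test."""
    tree = cKDTree(desc_b.astype(float))
    dists, idx = tree.query(desc_a.astype(float), k=2)  # nearest and 2nd nearest
    pairs = []
    for i, ((d1, d2), (j1, _)) in enumerate(zip(dists, idx)):
        if d2 > 0 and d1 / d2 < ratio:   # A_i and B_i1 accepted as a point pair
            pairs.append((i, j1))
    return pairs
```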
(4-2) First, the three-dimensional space is matched to obtain a key point list Keylist_1 with a large number of false positives removed, completing the first-layer description and matching;
(4-3) the deviation angles between each key point in the key point list Keylist_1 and its three nearest key points are calculated, minimizing the amount of computation; the three deviation angles are stored as a three-dimensional search space, and the second-layer description and matching is carried out;
(4-4) if multiple key points still remain after matching, the neighborhood features of the key points are calculated, the contour information is extracted and binarized, and feature matching is completed by an exclusive-or (XOR) operation to obtain the final correct matching pairs.
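For the final exclusive-or stage in (4-4), binary descriptors can be compared bitwise; taking the scene key point with the minimum Hamming distance as the match is an assumption here, since the patent only states that the matching is completed by an XOR operation.

```python
import numpy as np

def xor_match(bits_a: np.ndarray, bits_b: np.ndarray) -> np.ndarray:
    """Match binarized 4DCBS descriptors with an exclusive-or operation.

    bits_a: (Na, L) uint8 0/1 descriptors of the remaining target key points
    bits_b: (Nb, L) uint8 0/1 descriptors of the remaining scene key points
    Returns, for each row of bits_a, the index of the bits_b row with the
    fewest differing bits (minimum Hamming distance).
    """
    xor = bits_a[:, None, :] ^ bits_b[None, :, :]   # (Na, Nb, L) differing bits
    hamming = xor.sum(axis=-1)                      # (Na, Nb) Hamming distances
    return hamming.argmin(axis=1)
```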
Step S5: outputting the identification result.
Compared with the prior art, the method has the following beneficial effects:
the invention can effectively reduce the three-dimensional retrieval space based on the 4DCBS (4D-based projection contour binarization feature) feature description, improves the speed of local feature description, and has good descriptive property and robustness. According to the point cloud target identification method based on the 4DCBS, the descriptive timeliness and the robustness of local features are improved, and the retrieval space of feature matching is greatly reduced.
Example 2:
The invention provides a point cloud target identification method, which comprises the following steps:
step 10: a model point cloud and a scene point cloud are input.
Step 20: 261 model key points and 3964 scene key points are extracted using an ISS feature point extraction algorithm.
Step 30: each key point in the model and scene is characterized using a 4DCBS descriptor.
Step 40: after the feature description is completed in step 30, feature matching is performed.
(41) The Euclidean distances between the model and scene key points are calculated, with a distance threshold E ≤ 0.7. If the distance is smaller than E, the pair is considered a correct corresponding point pair; the descriptor type of the point pair is taken as a candidate target, and a candidate target list is established.
(42) First, the three-dimensional space is matched to obtain a key point list Keylist_1 with a large number of false positives removed, and the first-layer description and matching is performed, leaving 276 key points.
(43) The deviation angles between each key point in Keylist_1 and its three nearest key points are calculated, minimizing the amount of computation; the three deviation angles are stored as a three-dimensional search space, and the second-layer description and matching yields 69 key points.
(44) Since multiple key points still remain after matching, the neighborhood features of the key points are calculated, the contour information is extracted and binarized, and feature matching is completed by an XOR operation, yielding 62 final correct matching point pairs.
Step 50: The recognition result is output; see FIG. 2.
Example 3:
the invention also provides a point cloud target identification device, which comprises:
the acquisition module is used for acquiring target point cloud data;
the extraction module is used for extracting key points of the point cloud data;
the construction module is used for constructing 4DCBS feature descriptions according to the key points and calculating 4DCBS descriptors of each feature point;
and the matching module is used for carrying out feature matching on the target key points according to the 4DCBS feature descriptors.
As one implementation of the embodiment of the invention, the extraction module adopts an ISS feature point extraction algorithm to extract key points of the point cloud data.
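Purely as an illustration of how the four modules of the device could be composed in software (the class and method names below are hypothetical, not taken from the patent):

```python
class PointCloudTargetRecognizer:
    """Minimal sketch of the device of Example 3; each module is a callable."""

    def __init__(self, acquire, extract_keypoints, build_descriptors, match):
        self.acquire = acquire                        # acquisition module
        self.extract_keypoints = extract_keypoints    # extraction module (e.g. ISS)
        self.build_descriptors = build_descriptors    # construction module (4DCBS)
        self.match = match                            # matching module

    def recognize(self, source):
        cloud = self.acquire(source)                              # step S1
        keypoints = self.extract_keypoints(cloud)                 # step S2
        descriptors = self.build_descriptors(cloud, keypoints)    # step S3
        return self.match(keypoints, descriptors)                 # step S4
```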
Example 4:
the present invention also provides a storage medium storing machine-executable instructions that, when invoked and executed by a processor, cause the processor to implement a point cloud target identification method.
The above description is merely illustrative of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention should be covered by the scope of the present invention, and the scope of the present invention should be defined by the claims.

Claims (4)

1. The point cloud target identification method is characterized by comprising the following steps of:
Step S1, acquiring target point cloud data;
Step S2, extracting key points of the point cloud data;
Step S3, constructing a 4DCBS feature description according to the key points, and calculating a 4DCBS descriptor of each feature point;
Step S4, performing feature matching on the target key points according to the 4DCBS feature descriptors; wherein,
in step S2, an ISS feature point extraction algorithm is adopted to extract key points P = {P_i | i ∈ N} from the point cloud data P_source, where P_i denotes a key point in the point cloud P_source and N is the number of key points in the point cloud;
the step S3 specifically comprises the following steps:
the surface patch of point P_i is composed of the m neighborhood points of P_i; the centroid point P_i-mean is removed from the surface points to reconstruct a recentered three-dimensional surface patch, and the centroid is removed from the key point to obtain P_i^{i-mean};
an LRF matrix is estimated from the reconstructed surface patch, and the key point P_i^{i-mean} is transformed into a new 3D space by
P'_i = [LRF] · [P_i^{i-mean}],
where P'_i is the coordinates of the i-th key point in the new 3D space, [LRF] denotes the local reference frame matrix, i.e. [LRF] = [x^T y^T z^T]^T, and [P_i^{i-mean}] denotes the key point matrix;
calculating the Euclidean distances between the key point P_i^{i-mean} and the other key points;
finding the three nearest-neighbor key points of the key point P_i^{i-mean} according to the Euclidean distances;
finding the three nearest-neighbor key points of the key point P_i^{i-mean} specifically comprises:
letting point P_ik be a nearest-neighbor key point of the key point P_i^{i-mean}, and calculating the z-axis deviation angle of the LRF as:
θ(P_i^{i-mean}, P_ik) = arccos(LRF_z(P_i^{i-mean}) · LRF_z(P_ik)), θ ∈ [0, π],
where θ(P_i^{i-mean}, P_ik) denotes the z-axis deviation angle between the key points P_i^{i-mean} and P_ik, LRF_z(P_i^{i-mean}) denotes the z-axis direction of the LRF of the key point P_i^{i-mean}, and LRF_z(P_ik) denotes the z-axis direction of the LRF of the key point P_ik;
letting the projection plane perpendicular to the viewpoint V be PP, the projection contour of the surface patch on the PP plane is S_RPC(θ_si); N_l rays are cast starting from the key point P_i, and the intersection points of the rays with the contour RPC are the contour points P_RPC(θ_si);
calculating the Euclidean distance between the key point P_i and the contour point P_S_RPC(θ_si) as the contour sub-feature CS(θ_si):
CS(θ_si) = ||P_i − P_S_RPC(θ_si)||;
combining the contour sub-features CS(θ_si) to form the contour feature CS and normalizing it:
CS = {CS(θ_s1), CS(θ_s2), ..., CS(θ_sNθ)},
where the rotation angles are θ = {θ_s1, θ_s2, ..., θ_si}, i ∈ N_θ, and the CS feature consists of N_θ × N_l floating point numbers;
the normalized contour features are approximately represented by binarization, where more bits give a more accurate representation of the local features, and the 4DCBS feature descriptor of each feature point is obtained;
the step S4 specifically comprises the following steps:
calculating the scale factor λ = S_pr / T_pr between the scene point cloud resolution and the resolution of the target point cloud to be identified; searching a model library for the scaling factor λ' closest to λ and finding the 4DCBS descriptor corresponding to λ'; searching for correct point pairs using the nearest neighbor distance ratio: letting any point in the point cloud PC_a of the target to be identified be A_i = {A_i ∈ PC_a} and PC_b be the scene feature point cloud, searching PC_b for the point B_i1 = {B_i1 ∈ PC_b} with the closest Euclidean distance to point A_i and the second-closest point B_i2 = {B_i2 ∈ PC_b}; if the ratio of the distance to B_i1 to the distance to B_i2 is below the threshold, point A_i and point B_i1 form a corresponding point pair; taking the descriptor type of each found correct point pair as a candidate target and establishing a candidate target list;
first, matching the three-dimensional space to obtain a key point list Keylist_1 with a large number of false positives removed, and performing the first-layer description and matching;
second, calculating the deviation angles between each key point in the key point list Keylist_1 and its three nearest key points, minimizing the amount of computation, storing the three deviation angles as a three-dimensional search space, and performing the second-layer description and matching;
finally, if multiple key points are still obtained by matching, calculating the neighborhood features of the key points, extracting the contour information, performing binarization coding, and completing the feature matching by an exclusive-or operation to obtain the final correct matching pairs.
2. A point cloud object recognition apparatus that implements the point cloud object recognition method of claim 1, comprising:
the acquisition module is used for acquiring target point cloud data;
the extraction module is used for extracting key points of the point cloud data;
the construction module is used for constructing 4DCBS feature descriptions according to the key points and calculating 4DCBS descriptors of each feature point;
and the matching module is used for carrying out feature matching on the target key points according to the 4DCBS feature descriptors.
3. The point cloud target recognition device of claim 2, wherein the extraction module extracts key points of the point cloud data using an ISS feature point extraction algorithm.
4. A storage medium storing machine-executable instructions which, when invoked and executed by a processor, cause the processor to implement the point cloud target identification method of claim 1.
CN202211433421.4A 2022-11-16 2022-11-16 Point cloud target identification method and device and storage medium Active CN115830340B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211433421.4A CN115830340B (en) 2022-11-16 2022-11-16 Point cloud target identification method and device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202211433421.4A CN115830340B (en) 2022-11-16 2022-11-16 Point cloud target identification method and device and storage medium

Publications (2)

Publication Number Publication Date
CN115830340A (en) 2023-03-21
CN115830340B (en) 2023-11-21

Family

ID=85528422

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211433421.4A Active CN115830340B (en) 2022-11-16 2022-11-16 Point cloud target identification method and device and storage medium

Country Status (1)

Country Link
CN (1) CN115830340B (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200020090A1 (en) * 2019-07-31 2020-01-16 Intel Corporation 3D Moving Object Point Cloud Refinement Using Temporal Inconsistencies

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN207475764U (en) * 2017-10-31 2018-06-08 白城师范学院 A kind of image element interpolation device of efficient video coding
CN108256529A (en) * 2017-11-29 2018-07-06 深圳慎始科技有限公司 Global point cloud based on Dian Yun projected outlines signature and distribution matrix describes method
CN108537805A (en) * 2018-04-16 2018-09-14 中北大学 A kind of target identification method of feature based geometry income
CN113184147A (en) * 2021-04-30 2021-07-30 白城师范学院 Multi-target collaborative search underwater robot with function of preventing sludge from being trapped
CN114972459A (en) * 2022-05-31 2022-08-30 哈尔滨理工大学 Point cloud registration method based on low-dimensional point cloud local feature descriptor

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
Binary Feature Description of 3D Point Cloud Based on Retina-like Sampling on Projection Planes; Zhiqiang Yan et al.; Machines; pp. 1-16 *
False Positive Detection and Prediction Quality Estimation for LiDAR Point Cloud Segmentation; P. Colling et al.; 2021 IEEE 33rd International Conference on Tools with Artificial Intelligence; pp. 1-8
Rotational Contour Signatures for Both Real-Valued and Binary Feature Representations of 3D Local Shape; Jiaqi Yang et al.; Computer Vision and Image Understanding; pp. 1-27 *
Binary Feature Description of 3D Point Cloud Based on Retina-like Sampling on Projection Planes; Zhiqiang Yan et al.; Machines; 2022; pp. 1-16 *
Research on Simultaneous Localization and Mapping Algorithms for Indoor Robots Based on 3D Laser Point Clouds; Ren Jianming; China Master's Theses Full-text Database, Information Science and Technology; I140-839
Interpolation Algorithm Based on Adaptive Selection of Filter Coefficients; Wang Gang, Chen Hexin, Chen Mianshu; Journal of Jilin University (Information Science Edition), No. 2; pp. 4-12
Point Cloud Registration Method Based on Supervoxel Bidirectional Nearest Neighbor Distance Ratio; Li Xuemei et al.; Journal of Jilin University (Engineering and Technology Edition); pp. 1918-1925 *

Also Published As

Publication number Publication date
CN115830340A (en) 2023-03-21

Similar Documents

Publication Publication Date Title
CN109215129B (en) Local feature description method based on three-dimensional point cloud
CN110807473B (en) Target detection method, device and computer storage medium
CN103336957B (en) A kind of network homology video detecting method based on space-time characteristic
CN112184752A (en) Video target tracking method based on pyramid convolution
CN111340862B (en) Point cloud registration method and device based on multi-feature fusion and storage medium
CN111797744B (en) Multimode remote sensing image matching method based on co-occurrence filtering algorithm
CN104834931A (en) Improved SIFT algorithm based on wavelet transformation
CN113628263A (en) Point cloud registration method based on local curvature and neighbor characteristics thereof
Zhang et al. KDD: A kernel density based descriptor for 3D point clouds
CN110222661B (en) Feature extraction method for moving target identification and tracking
CN107180436A (en) A kind of improved KAZE image matching algorithms
Zhang et al. Saliency-driven oil tank detection based on multidimensional feature vector clustering for SAR images
CN111199558A (en) Image matching method based on deep learning
Liu et al. A novel rock-mass point cloud registration method based on feature line extraction and feature point matching
Chen et al. A local tangent plane distance-based approach to 3D point cloud segmentation via clustering
CN117132630A (en) Point cloud registration method based on second-order spatial compatibility measurement
CN114494380A (en) Binary shape context feature descriptor construction method and point cloud registration method
CN112926592B (en) Trademark retrieval method and device based on improved Fast algorithm
CN115830340B (en) Point cloud target identification method and device and storage medium
CN111127667A (en) Point cloud initial registration method based on region curvature binary descriptor
CN114627346B (en) Point cloud data downsampling method capable of retaining important features
CN116309026A (en) Point cloud registration method and system based on statistical local feature description and matching
CN112183596B (en) Linear segment matching method and system combining local grid constraint and geometric constraint
CN115861640A (en) Rapid image matching method based on ORB and SURF characteristics
CN113487713B (en) Point cloud feature extraction method and device and electronic equipment

Legal Events

Code Title
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant