US11625454B2 - Method and device for 3D shape matching based on local reference frame - Google Patents
Method and device for 3D shape matching based on local reference frame
- Publication number
- Publication number: US11625454B2 (Application US17/042,417)
- Authority
- US
- United States
- Prior art keywords
- point
- parameter
- reference frame
- axis
- local reference
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/653—Three-dimensional objects by matching three-dimensional models, e.g. conformal mapping of Riemann surfaces
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G06T3/0012—
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
- G06T7/62—Analysis of geometric attributes of area, perimeter, diameter or volume
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/757—Matching configurations of points or features
-
- G—PHYSICS
- G06—COMPUTING OR CALCULATING; COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/46—Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
- G06V10/462—Salient features, e.g. scale invariant feature transforms [SIFT]
Definitions
- the present application relates to a technology of 3D shape matching, and particularly to a method for 3D shape matching based on a local reference frame, and a device for 3D shape matching based on a local reference frame.
- 3D object recognition has become a research focus in the field of computer vision, and has been widely applied in intelligent monitoring, e-commerce, robots, biomedicine, etc.
- 3D shape matching is the most important step in 3D object recognition; existing approaches mainly comprise 3D shape matching methods based on global features and 3D shape matching methods based on local features.
- while the 3D shape matching methods based on global features are fast, the methods based on local features are more robust to occlusion and clutter and can make subsequent pose estimation more accurate.
- describing local features of a 3D point cloud by using a 3D local feature descriptor is a key part of these methods, and is also a key factor that determines the accuracy of 3D shape matching or 3D object recognition.
- the key lies in how to establish a repeatable and robust local reference frame for the local features of the 3D point cloud.
- in order to maintain distinctiveness and robustness to occlusion and clutter, many 3D local feature descriptors have been proposed and extensively studied. These 3D local feature descriptors may be classified into two categories, namely, descriptors based on an LRA (Local Reference Axis) and descriptors based on an LRF (Local Reference Frame).
- the local reference frame is constituted by three orthogonal axes, and the local reference axis only contains a single orientation axis.
- a local reference axis in which only the single orientation axis is defined can only provide information about radial and elevation directions, with the result that the 3D local feature descriptor lacks sufficient detailed information.
- the 3D local feature descriptor with the local reference frame can fully encode spatial distribution and/or geometric information of the 3D local surface by using three axes, which is not only provided with rotation invariance but also greatly enhances distinction of the 3D local feature descriptor.
- the local reference frames may be divided into local reference frames based on CA (Covariance Analysis) and local reference frames based on GAs (Geometric Attributes).
- CA: Covariance Analysis
- GAs: Geometric Attributes
- a method for 3D shape matching based on a local reference frame includes:
- an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R
- an origin of the local reference frame coincides with the feature point p and the local reference frame has an orthogonal and normalized x axis, y axis, and z axis;
- the establishing the local reference frame for the first spherical neighborhood of the feature point includes:
- T i = W i (p′ i − p) + p
- the parameter W i in the feature transformation is determined by at least one of a first parameter w1 i , a second parameter w2 i , and a third parameter w3 i
- the first parameter w1 i is associated with a distance from the 3D point p i to the feature point p
- the second parameter w2 i is associated with a distance from the 3D point p i to the projected point p′ i
- the third parameter w3 i is associated with an average distance L̄ from the 3D point p i to 1-ring neighborhood points that are neighborhood points adjacent to the 3D point p i
- n j is a normal vector of the 3D point q j .
- the step, executed by a processor, of determining the calculation radius R z includes:
- determining a radius scale factor ⍴ according to the average grid resolution scene.mr of the real scene and the average grid resolution model.mr of the target object, where the radius scale factor ⍴ is determined as follows:
- the parameter W i in the feature transformation is determined by a product of any two of the first parameter w1 i , the second parameter w2 i , and the third parameter w3 i .
- the parameter W i in the feature transformation is determined by a product of the first parameter w1 i , the second parameter w2 i , and the third parameter w3 i .
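The projection and feature-transformation steps described above can be sketched as follows. This is a minimal reconstruction, assuming the z axis has already been computed and the per-point weights W_i are given; the sign-disambiguation rule used here (pointing the eigenvector toward the majority of projected offsets) is a common convention and an assumption, since the patent's exact definition is not reproduced in this excerpt.

```python
import numpy as np

def transform_and_x_axis(points, p, z, weights):
    """Sketch of the feature transformation T_i = W_i * (p'_i - p) + p
    and the covariance-based x axis of the local reference frame.
    `z` is assumed to be a unit vector computed beforehand; the sign
    rule below is a common convention, not necessarily the patent's.
    """
    d = points - p                        # offsets of neighborhood points
    proj = points - np.outer(d @ z, z)    # p'_i: projection onto plane L through p with normal z
    T = weights[:, None] * (proj - p) + p # feature-transformed point set
    C = np.cov((T - p).T)                 # covariance of transformed offsets
    eigvals, eigvecs = np.linalg.eigh(C)
    v = eigvecs[:, -1]                    # eigenvector of the largest eigenvalue
    # sign disambiguation: point v toward the majority of projected offsets
    if np.sum((proj - p) @ v) < 0:
        v = -v
    x = v - (v @ z) * z                   # enforce orthogonality to z
    x /= np.linalg.norm(x)
    y = np.cross(z, x)                    # y completes the right-handed frame
    return x, y, T
```

Because the transformed set T has a larger variance along one direction than the raw projection P′, the dominant eigenvector is more stable, which is what makes the resulting x axis repeatable.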
- the 3D point cloud of the real scene may be acquired in real time, and the 3D point cloud of the target object may be pre-stored. That is to say, in the above described method, the 3D local surface information of the 3D point cloud acquired by real-time measurement of the real scene may be matched with the 3D local surface information acquired by calculating the pre-stored 3D point cloud of the target object, so as to realize recognition of a shape matching the model of the target object from the 3D point cloud of the real scene.
- a method for 3D shape matching based on a local reference frame is proposed, which is similar to the steps of the above described method, and their difference lies in that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be pre-stored after being acquired. That is to say, in this method, the 3D local surface information acquired by calculating the 3D point cloud of the pre-stored target object may be matched with the 3D local surface information acquired by calculating the 3D point cloud of the scene, so as to realize recognition of a shape matching the model of the target object from the 3D point cloud of the scene.
- a device for 3D shape matching based on a local reference frame which includes an acquisition apparatus, a memory and a processor.
- the acquisition apparatus is configured to acquire a 3D point cloud of a real scene
- a computer program is stored in the memory
- the processor when executing the computer program, implements the operations of the method described in the first aspect of the present application except for acquiring the 3D point cloud of the real scene.
- a device for 3D shape matching based on a local reference frame which includes a memory and a processor.
- a computer program is stored in the memory, and the processor, when executing the computer program, implements the methods described in the first aspect or the second aspect of the present application.
- the established local reference frame is repeatable, robust, and noise-resistant because feature transformation is performed on the neighborhood points in the neighborhood of each feature point of the 3D point cloud, and it is hardly affected by the grid resolution because the calculation radius used to calculate the z axis of the local reference frame is adaptively adjusted according to the grid resolution. Therefore, even if there is occlusion, clutter and noise interference, or even if the grids of the 3D point cloud of the scene or the target object are simplified, a correspondingly excellent 3D shape matching or recognition result can still be acquired by using the method and device for 3D shape matching based on the local reference frame proposed in the present application.
- FIG. 1 is a flow diagram of the method for 3D shape matching based on a local reference frame according to an embodiment of the present application.
- FIG. 2 is a flow diagram of a process of establishing a local reference frame according to an embodiment of the present application.
- FIG. 3 is a schematic diagram of projecting a 3D point set P in a spherical neighborhood to a plane L orthogonal to the z-axis according to an embodiment of the present application.
- FIG. 4 is a schematic diagram of 1-ring neighborhood points of 3D points according to an embodiment of the present application.
- FIG. 5 is a flow diagram of determining the z axis of the local reference frame according to an embodiment of the present application.
- FIG. 6 is a flow diagram of determining a calculation radius R z of the z axis according to an embodiment of the present application.
- FIG. 7 is a schematic structural diagram of the device for 3D shape matching based on a local reference frame according to an embodiment of the present application.
- a 3D point cloud records the surface of a scene or an object in the form of points acquired by scanning the scene or the object, and each of the points has a three-dimensional coordinate.
- 3D shape matching matches a surface of a scene or an object represented by 3D point data with one or more other surfaces represented by 3D point data, so as to further achieve 3D object recognition.
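As an illustration of this matching idea (not part of the patent's claimed method), local feature descriptors computed for a scene can be matched against those of a model by nearest-neighbor search in feature space; the ratio test below is a standard heuristic, not taken from this document.

```python
import numpy as np

def match_descriptors(scene_desc, model_desc, ratio=0.8):
    """Nearest-neighbor matching of local feature descriptors.

    scene_desc: (Ns, D) array, model_desc: (Nm, D) array.
    The ratio-test threshold is an illustrative choice.
    """
    matches = []
    for i, d in enumerate(scene_desc):
        dists = np.linalg.norm(model_desc - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        # accept only if clearly better than the runner-up (Lowe-style ratio test)
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches
```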
- the present application proposes a method for 3D shape matching based on a local reference frame, and the method may include:
- an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R
- an origin of the local reference frame coincides with the feature point p and the local reference frame has an orthogonal and normalized x axis, y axis, and z axis;
- the real scene may be any scene in real life, especially in industrial applications.
- the present application does not make specific restrictions on the application scene, as long as it is a scene that requires a 3D shape matching or 3D recognition method.
- the 3D point cloud may be acquired in real time, and the 3D point cloud of the target object may be pre-stored, i.e., the target object may be a model used to match the same object in the real scene.
- the 3D local surface information of the 3D point cloud acquired by real-time measurement of the real scene can be matched with the 3D local surface information acquired by calculating the 3D point cloud of the pre-stored target object, so as to achieve recognition of a shape matching the model of the target object from the 3D point cloud of the real scene.
- the feature point is also called a key point or a point of interest, that is, a feature point provided with a specific shape.
- the feature points in the 3D point cloud may be acquired by using a fixed-scale method or an adaptive-scale method, or by using any other existing technology, which is not limited herein.
- the 3D local feature descriptor may be any local feature descriptor established based on the local reference frame of the present application, for example, any existing local feature descriptor based on the GA method, which is not limited in the present application.
- the method includes the basic technical features of the above embodiment, and on the basis of the above embodiment, the step of establishing the local reference frame for the first spherical neighborhood of the feature point may include:
- T i = W i (p′ i − p) + p
- the parameter W i in the feature transformation is determined by at least one of a first parameter w1 i , a second parameter w2 i , and a third parameter w3 i
- the first parameter w1 i is associated with the distance from the 3D point p i to the feature point p
- the second parameter w2 i is associated with the distance from the 3D point p i to the projected point p′ i
- the x axis of the local reference frame should be a coordinate axis that makes the point set P′ more stable in the x axis direction; therefore, the local reference frame acquired by the above method is more robust.
- by performing planar projection and feature transformation on the neighborhood points within the neighborhood of the feature point of the 3D point cloud, a point distribution T with a larger variance in a certain direction than the projected point set P′ is acquired, and the local reference frame established by analyzing this point distribution T is repeatable, robust and noise-resistant.
- the first parameter w1 i associated with the distance from the 3D point p i to the feature point p may be used to reduce the influence of occlusion and clutter on the projected point set P′
- the second parameter w2 i associated with the distance from the 3D point p i to the projected point p′ i may be used to make the point distribution of the projected point set P′ more characteristic
- the third parameter w3 i associated with the average distance L̄ from the 3D point p i to its 1-ring neighboring points may be used to reduce the influence of an outlier on the projected point set P′.
- the second parameter w2 i and the distance from the 3D point p i to the projected point p′ i are required to satisfy the following relationship:
- the third parameter w3 i and the average distance L̄ from the 3D point p i to its 1-ring neighboring points are required to satisfy the following relationship:
- there are r neighborhood points p i1 , p i2 , . . . , p ir of a certain 3D point p i in its 1-ring neighborhood.
- the number r of the 1-ring neighboring points may be 5, that is, the certain 3D point p i has neighboring points p i1 , p i2 , p i3 , p i4 , and p i5 in its 1-ring neighborhood.
- the constant s may be equal to 4.
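The three weighting parameters can be sketched as below. Only w1_i = R − ∥p_i − p∥ is given explicitly elsewhere in this text; the Gaussian form for w2 (with standard deviation σ, as mentioned in the description) and the inverse-power form for w3 (using the constant s) are assumptions consistent with the stated roles of the parameters, since the exact formulas are not reproduced in this excerpt.

```python
import numpy as np

def lrf_weights(points, p, proj, ring_avg, R, s=4):
    """Sketch of the three weighting parameters of the feature transformation.

    w1 follows the expression stated in the text; the forms of w2 and w3
    are assumptions (Gaussian of the point-to-plane distance h_i, and an
    inverse power of the 1-ring average distance L-bar, respectively).
    """
    w1 = R - np.linalg.norm(points - p, axis=1)   # stated: w1_i = R - ||p_i - p||
    h = np.linalg.norm(points - proj, axis=1)     # distance from p_i to its projection p'_i
    sigma = h.std() if h.std() > 0 else 1.0       # sigma: std of the Gaussian (per the text)
    w2 = np.exp(-h**2 / (2 * sigma**2))           # assumed Gaussian form
    w3 = 1.0 / (1.0 + ring_avg) ** s              # illustrative outlier down-weighting
    return w1, w2, w3, w1 * w2 * w3               # product of all three, as in the embodiment
```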
- the parameter W i in the feature transformation may be jointly determined by a product of any two of the first parameter w1 i , the second parameter w2 i , and the third parameter w3 i .
- the parameter W i in the feature transformation may be jointly determined by a product of the first parameter w1 i , the second parameter w2 i , and the third parameter w3 i .
- the more factors used to determine the point distribution T with the larger variance in the certain direction, the better the technical effect and the more robust the acquired local reference frame.
- the method includes the basic technical features of the foregoing embodiment, and on the basis of the foregoing embodiment, the step of determining the z axis of the local reference frame may include:
- n j is a normal vector of the 3D point q j .
- the calculation radius R z may be not equal to the support radius R, so that the z axis of the local reference frame is more robust to occlusion and clutter.
- in an embodiment, the calculation radius R z is equal to one third of the support radius R.
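The z-axis step, which aggregates the normal vectors n_j of the points q_j lying within the calculation radius R_z of the feature point, can be sketched as follows. A plain normalized mean of the normals is used here; the patent may weight the sum differently, so this aggregation rule is an assumption.

```python
import numpy as np

def z_axis_from_normals(q, normals, p, R_z):
    """Aggregate the normals n_j of points q_j within radius R_z of p
    into a unit z axis (unweighted mean; the exact weighting is assumed)."""
    mask = np.linalg.norm(q - p, axis=1) <= R_z   # points inside the R_z ball
    z = normals[mask].mean(axis=0)                # average normal
    return z / np.linalg.norm(z)
```

Using R_z smaller than the support radius R (one third of R in an embodiment) keeps the z axis away from the neighborhood boundary, which is what makes it robust to occlusion and clutter.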
- the larger the grid resolution, the larger the scale of the 3D point cloud, and the greater the number of 3D points on a surface of a scene or an object in the same space.
- the neighborhood points acquired in the real scene will be fewer than the neighborhood points of the model when the same radius is used.
- the performance of the 3D shape matching will be greatly degraded if the z axis of the local reference frame of the scene is calculated by using a relatively small neighborhood radius.
- the present application has proposed an adaptive scale factor which is used to determine the calculation radius R z , so that the acquired z axis is not only robust to occlusion, but also robust to different grid samplings.
- the method includes the basic technical features of the foregoing embodiment, and on the basis of the foregoing embodiment, the step of determining the calculation radius R z may include:
- the calculation radius used to calculate the z axis of the local reference frame is configured to be adaptively adjusted according to the grid resolution, so that the established local reference frame can be hardly affected by the grid resolution.
- the constant C may be equal to 3.
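The adaptive adjustment can be sketched as below. The average grid resolution (scene.mr, model.mr) is commonly computed as the mean nearest-neighbor distance of the point cloud; the exact formula for the radius scale factor ⍴ is not reproduced in this excerpt, so the ratio-based form here (with the constant C = 3 per the embodiment) is a hypothetical reconstruction.

```python
import numpy as np

def mesh_resolution(points):
    """Average grid resolution: mean nearest-neighbor distance
    (a common definition; brute force, for small clouds)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)        # ignore self-distances
    return d.min(axis=1).mean()

def adaptive_rz(scene_mr, model_mr, R, C=3.0):
    """Hypothetical adaptive calculation radius: rho grows with the ratio
    of scene to model resolution so that a simplified (coarser) scene
    still gathers enough neighbors. The exact formula for rho is assumed."""
    rho = max(1.0, scene_mr / model_mr)  # assumed form of the radius scale factor
    return rho * R / C                   # C = 3 per the embodiment
```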
- the method includes the basic technical features of the foregoing embodiment, and the method, on the basis of the foregoing embodiment, may further include the following steps before determining the calculation radius R z of the real scene:
- predetermining at least two radius scale factors and predetermining local reference frames and 3D local feature descriptors corresponding to the at least two radius scale factors
- the method includes the basic technical features of the foregoing embodiment, and the method, on the basis of the foregoing embodiment, may further include:
- an embodiment of the present application proposes a method for 3D shape matching based on a local reference frame, and the method may include:
- an origin of the first spherical neighborhood coincides with the feature point p and the first spherical neighborhood has a support radius of R
- an origin of the local reference frame coincides with the feature point p and the local reference frame has an orthogonal and normalized x axis, y axis, and z axis;
- the step of establishing the local reference frame for the first spherical neighborhood of the feature point may include:
- T i = W i (p′ i − p) + p
- the parameter W i in the feature transformation is determined by at least one of a first parameter w1 i , a second parameter w2 i , and a third parameter w3 i
- the first parameter w1 i is associated with the distance from the 3D point p i to the feature point p
- the second parameter w2 i is associated with the distance from the 3D point p i to the projected point p′ i
- the steps of the embodiments of the second aspect of the present application are similar to the steps of the embodiments of the first aspect, except that the 3D point cloud of the target object is pre-stored and the 3D point cloud of the scene may also be pre-stored after being acquired. That is to say, in this method, the 3D local surface information acquired by calculating the 3D point cloud of the pre-stored target object may be matched with the 3D local surface information acquired by calculating the 3D point cloud of the scene, so as to realize recognition of a shape matching the model of the target object from the 3D point cloud of the scene.
- for the technical features of the second aspect of the present application, reference may be made to the technical features in the specific embodiments of the first aspect of the present application, which will not be repeated herein.
- a device for 3D shape matching based on a local reference frame may include an acquisition apparatus, a memory and a processor.
- the acquisition apparatus is configured to acquire a 3D point cloud of a real scene
- a computer program is stored in the memory
- the processor when executing the computer program, implements the operations of the method described in the first aspect of the present application except for acquiring the 3D point cloud of the real scene.
- the acquisition apparatus may be a 3D scanning apparatus, a laser scanning apparatus, an acquisition apparatus using structured light, or any other apparatus that can acquire the 3D point cloud of the real scene
- the memory may be any storage apparatus with a software storage function
- the processor may be any processor that may execute the computer program and instruct a certain execution subject to perform related operations.
- the 3D point cloud data acquired by the acquisition apparatus may be directly or indirectly stored in the memory, or may be accessed by the memory or the processor.
- the processor may directly or indirectly control the acquisition apparatus to acquire the 3D point cloud data.
- an embodiment proposes a device for 3D shape matching based on a local reference frame, which includes a memory and a processor.
- a computer program is stored in the memory, and the processor, when executing the computer program, implements the embodiments of the methods described in the first aspect or the second aspect of the present application.
Description
T i =W i(p′ i −p)+p,
where the parameter Wi in the feature transformation is determined by at least one of a first parameter w1i, a second parameter w2i, and a third parameter w3i, here the first parameter w1i is associated with a distance from the 3D point pi to the feature point p, the second parameter w2i is associated with a distance from the 3D point pi to the projected point p′i, and the third parameter w3i is associated with an average distance
and performing sign disambiguation on the eigenvector v′ corresponding to the maximum eigenvalue according to the following definition to determine the x axis of the local reference frame:
where nj is a normal vector of the 3D point qj.
where C is a constant;
T i =W i(p′ i −p)+p,
where the parameter Wi in the feature transformation is determined by at least one of a first parameter w1i, a second parameter w2i, and a third parameter w3i, where the first parameter w1i is associated with the distance from the 3D point pi to the feature point p, the second parameter w2i is associated with the distance from the 3D point pi to the projected point p′i, and the third parameter w3i is associated with the average distance
and performing sign disambiguation on the eigenvector v′ corresponding to the maximum eigenvalue according to the following definition to determine the x axis of the local reference frame:
w1 i = R − ∥p i − p∥.
where H={hi}, and σ represents a standard deviation of the above Gaussian function.
where r is the number of the 1-ring neighboring points, and s is a constant.
where nj is a normal vector of the 3D point qj.
where C is a constant;
T i =W i(p′ i −p)+p,
where the parameter Wi in the feature transformation is determined by at least one of a first parameter w1i, a second parameter w2i, and a third parameter w3i, where the first parameter w1i is associated with the distance from the 3D point pi to the feature point p, the second parameter w2i is associated with the distance from the 3D point pi to the projected point p′i, and the third parameter w3i is associated with the average distance
and performing sign disambiguation on the eigenvector v′ corresponding to the maximum eigenvalue according to the following definition to determine the x axis of the local reference frame:
Claims (20)
T i =W i(p′ i −p)+p,
w1 i = R − ∥p i − p∥.
Applications Claiming Priority (1)
| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2019/124037 WO2021114026A1 (en) | 2019-12-09 | 2019-12-09 | 3d shape matching method and apparatus based on local reference frame |
Publications (2)
| Publication Number | Publication Date |
|---|---|
| US20220343105A1 US20220343105A1 (en) | 2022-10-27 |
| US11625454B2 true US11625454B2 (en) | 2023-04-11 |
Family
ID=76329287
Family Applications (1)
| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| US17/042,417 Active US11625454B2 (en) | 2019-12-09 | 2019-12-09 | Method and device for 3D shape matching based on local reference frame |
Country Status (3)
| Country | Link |
|---|---|
| US (1) | US11625454B2 (en) |
| CN (1) | CN113168729B (en) |
| WO (1) | WO2021114026A1 (en) |
Families Citing this family (6)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN113723917B (en) * | 2021-08-24 | 2024-03-29 | 中国人民解放军32382部队 | Method and equipment for constructing association between instrument management standard and instrument technical standard |
| US11810249B2 (en) * | 2022-01-03 | 2023-11-07 | Signetron Inc. | 3D point cloud processing |
| CN115170735A (en) * | 2022-07-05 | 2022-10-11 | 西安工业大学 | Three-dimensional local surface feature description method based on deviation angle statistics |
| CN115311473B (en) * | 2022-08-09 | 2025-09-19 | 安徽大学 | Method, system and medium for creating three-dimensional feature descriptor based on local curved surface change information |
| CN115984803B (en) * | 2023-03-10 | 2023-12-12 | 安徽蔚来智驾科技有限公司 | Data processing methods, equipment, driving equipment and media |
| CN116168056B (en) * | 2023-03-28 | 2024-09-20 | 武汉理工大学 | A method, device, equipment and storage medium for extracting target object contour point cloud |
Citations (11)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| KR20090092486A (en) | 2008-02-27 | 2009-09-01 | 성균관대학교산학협력단 | A self-modeling method for 3D cylindrical objects and system thereof |
| CN105160344A (en) | 2015-06-18 | 2015-12-16 | 北京大学深圳研究生院 | Method and device for extracting local features of three-dimensional point cloud |
| US9390552B1 (en) * | 2013-05-23 | 2016-07-12 | Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences | Method and device for extracting skeleton from point cloud |
| CN106780459A (en) | 2016-12-12 | 2017-05-31 | 华中科技大学 | A kind of three dimensional point cloud autoegistration method |
| US20170243397A1 (en) * | 2016-02-24 | 2017-08-24 | Vangogh Imaging, Inc. | Shape-based registration for non-rigid objects with large holes |
| CN107274423A (en) | 2017-05-26 | 2017-10-20 | 中北大学 | A kind of point cloud indicatrix extracting method based on covariance matrix and projection mapping |
| US20170316597A1 (en) * | 2016-04-29 | 2017-11-02 | Adobe Systems Incorporated | Texturing a three-dimensional scanned model with localized patch colors |
| CN109215129A (en) | 2017-07-05 | 2019-01-15 | 中国科学院沈阳自动化研究所 | A kind of method for describing local characteristic based on three-dimensional point cloud |
| US10186049B1 (en) * | 2017-03-06 | 2019-01-22 | URC Ventures, Inc. | Determining changes in object structure over time using mobile device images |
| CN110211163A (en) | 2019-05-29 | 2019-09-06 | 西安财经学院 | A kind of point cloud matching algorithm based on EPFH feature |
| CN110335297A (en) | 2019-06-21 | 2019-10-15 | 华中科技大学 | A kind of point cloud registration method based on feature extraction |
Family Cites Families (4)
| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| CN101794461B (en) * | 2010-03-09 | 2011-12-14 | 深圳大学 | Three-dimensional modeling method and system |
| US8274508B2 (en) * | 2011-02-14 | 2012-09-25 | Mitsubishi Electric Research Laboratories, Inc. | Method for representing objects with concentric ring signature descriptors for detecting 3D objects in range images |
| CN106096503A (en) * | 2016-05-30 | 2016-11-09 | 东南大学 | A kind of based on key point with the three-dimensional face identification method of local feature |
| EP3457357B1 (en) * | 2017-09-13 | 2021-07-07 | Tata Consultancy Services Limited | Methods and systems for surface fitting based change detection in 3d point-cloud |
- 2019
- 2019-12-09 CN CN201980002893.4A patent/CN113168729B/en active Active
- 2019-12-09 US US17/042,417 patent/US11625454B2/en active Active
- 2019-12-09 WO PCT/CN2019/124037 patent/WO2021114026A1/en not_active Ceased
Non-Patent Citations (1)
| Title |
|---|
| International Search Report and Written Opinion of the International Searching Authority, issued in PCT/CN2019/124037, dated Aug. 24, 2020; ISA/CN. |
Also Published As
| Publication number | Publication date |
|---|---|
| US20220343105A1 (en) | 2022-10-27 |
| CN113168729A (en) | 2021-07-23 |
| CN113168729B (en) | 2023-06-30 |
| WO2021114026A1 (en) | 2021-06-17 |
Similar Documents
| Publication | Title |
|---|---|
| US11625454B2 (en) | Method and device for 3D shape matching based on local reference frame |
| CN119919749B (en) | Interaction method and system based on deep learning |
| Hu et al. | An automatic 3D registration method for rock mass point clouds based on plane detection and polygon matching |
| CN113033270B (en) | 3D object partial surface description method, device and storage medium using auxiliary axes |
| CN112017233A (en) | Reaction force cone topography measurement method, device, computer equipment and system |
| US12307738B2 (en) | 3D shape matching method and device based on 3D local feature description using SGHS |
| Ji et al. | Adaptive denoising-enhanced LiDAR odometry for degeneration resilience in diverse terrains |
| CN120894432B (en) | A rapid method for locating screw holes in charger housings based on 3D point cloud recognition |
| CN117930254A (en) | Unmanned aerial vehicle laser radar repositioning method and system based on structured information assistance |
| Gou et al. | A visual SLAM with tightly coupled integration of multiobject tracking for production workshop |
| CN115205462B (en) | Three-dimensional reconstruction method based on semi-ordered point cloud aiming at optical cut type |
| CN119152040B (en) | Pose estimation method and device and electronic equipment |
| CN114387351A (en) | Monocular vision calibration method and computer readable storage medium |
| CN117422771A (en) | An external parameter calibration method and system for camera and lidar sensor equipment |
| Sa et al. | Depth grid-based local description for 3D point clouds |
| Dierenbach et al. | Next-Best-View method based on consecutive evaluation of topological relations |
| CN115239899A (en) | Pose map generation method, high-precision map generation method and device |
| Cheng et al. | SVM-LO: An accurate, robust, real-time LiDAR odometry with segmentation voxel map for autonomous vehicles |
| Rink et al. | Monte Carlo registration and its application with autonomous robots |
| CN118657743B (en) | Industrial parts detection method and system based on tensor voting |
| CN118691685B (en) | Multi-criteria method for extracting surface traces of jointed rock mass at tunnel face |
| Magnier et al. | Highly specific pose estimation with a catadioptric omnidirectional camera |
| CN119762429B (en) | Steel pipe identification position judging and positioning method based on 3D vision technology |
| Sun et al. | EA2D-LSLAM: Environment Analysis-Based Adaptive Downsampling for Point Clouds in LiDAR SLAM |
| Zhang et al. | A Study on Streamlining and Denoising Point Clouds for Aero-engine Blade Surface Reconstruction |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | AS | Assignment | Owner name: SHENZHEN UNIVERSITY, CHINA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; ASSIGNORS: LI, DONG; AO, SHENG; TIAN, JINDONG; AND OTHERS; REEL/FRAME: 053901/0682. Effective date: 20200922 |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO UNDISCOUNTED (ORIGINAL EVENT CODE: BIG.); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | FEPP | Fee payment procedure | Free format text: ENTITY STATUS SET TO SMALL (ORIGINAL EVENT CODE: SMAL); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
| | STPP | Information on status: patent application and granting procedure in general | Free format text: NON FINAL ACTION MAILED |
| | STCF | Information on status: patent grant | Free format text: PATENTED CASE |