CN105261060A - Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method - Google Patents
- Publication number
- CN105261060A (application CN201510437252.5A)
- Authority
- CN
- China
- Prior art keywords
- point cloud
- point
- compression
- carrier
- inertial navigation
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Processing Or Creating Images (AREA)
- Navigation (AREA)
Abstract
The invention relates to a PFH and inertial navigation based mobile-context real-time three-dimensional reconstruction method. The method is divided into three stages: a first stage of designing a PFH-based point cloud compression algorithm; a second stage of designing an inertial-navigation-based point cloud registration algorithm; and a third stage of completing point cloud fusion. The method achieves a relatively high compression ratio and effectively reduces the number of input points for the ICP algorithm, thereby saving computation time in point cloud registration and point cloud fusion. The number of ICP iterations is reduced and the convergence rate increased, which improves the real-time performance of the whole reconstruction algorithm.
Description
Technical field
The present invention relates to a method for real-time three-dimensional reconstruction in a mobile context based on point cloud compression and inertial navigation, and belongs to the fields of computer vision and augmented reality.
Background art
Three-dimensional reconstruction is one of the hot research directions in the field of computer vision. It mainly studies how to obtain the three-dimensional information of objects in space and faithfully reproduce the objective environment in a computer, providing accurate 3D scene information for machines and humans.
The main research directions of three-dimensional reconstruction include scene reconstruction, map construction, and model reconstruction. 3D information about the surrounding environment helps a robot clearly understand its own situation and interact smoothly with its environment, and plays an important role in fields such as robot navigation, archaeological exploration, and disaster relief. 3D scene information is also used in virtual reality, military simulation, urban planning, cultural relic protection, and similar fields. Applications such as CAD machining of industrial parts, mold making, plastic surgery, face modeling and recognition, and computer 3D animation frequently require three-dimensional models of objects, which likewise places demands on three-dimensional reconstruction technology. Owing to its important applications in these fields, three-dimensional scene reconstruction is increasingly favored by researchers and has gradually become a research hotspot across many disciplines.
Current three-dimensional reconstruction methods for 3D scene point cloud models fall into two broad classes: methods based on multiple images and methods based on infrared or laser scanning. The former recover the three-dimensional information of building surfaces from two-dimensional images taken with digital video cameras and the like, as in the patent with publication number CN101398937, entitled "Three-dimensional reconstruction method based on random photograph collections of the same scene". Although such methods are inexpensive, highly automated, and yield models rich in texture information, they must process a large amount of image data and run slowly. They suit the modeling of small, individual buildings, but are unsuitable for large-scale scenes with complex topography because of the limits of computer processing time.
Three-dimensional reconstruction based on infrared or laser scanning has therefore become a research hotspot in recent years. Infrared and laser scanners obtain scene depth directly and offer high speed, high precision, and insensitivity to surface complexity. Laser scanning can also effectively recover three-dimensional models with accurate geometric information and photographic realism. Such accurate models not only support environment visualization and virtual roaming, but also meet higher-level demands such as data archiving, measurement, and analysis.
Summary of the invention
The object of the present invention is to provide a method in which a sensor moves through an indoor scene, continually obtains point clouds under different viewing angles, and reconstructs the three-dimensional model of the whole scene in real time.
To achieve the above object, the technical scheme of the present invention provides a mobile-context real-time three-dimensional reconstruction method based on point cloud compression and inertial navigation, characterized in that it comprises the following steps:
The first step: for a scene S, use the camera of a Kinect somatosensory controller to successively acquire the point sets M1, ..., Mk, ..., Mn from the viewing angles V1, ..., Vk, ..., Vn, where Mk is the point cloud obtained at time k from view Vk;
The second step: compress the point sets {M1, ..., Mk, ..., Mn} into {P1, ..., Pk, ..., Pn}, where each Pk is a subset of Mk satisfying the compression constraint defined by the preset sampling ratio ρ, the feature-point quality metric function Fc(·), and the set feature-point retention rate δ;
The third step: use inertial navigation to obtain the position and rotation offset of the carrier in scene S at the viewing angles V1, ..., Vk, ..., Vn, and convert them into the initial rotation and translation matrices required by the ICP algorithm;
The fourth step: use the initial transformation matrices to coarsely register the compressed point clouds {P1, ..., Pk, ..., Pn}, and then determine the transformation matrices Rk and Tk based on the ICP algorithm to complete fine registration;
The fifth step: as the sensor of the Kinect somatosensory controller roams through scene S, continually splice and fuse each newly obtained point cloud with the partial model until the complete three-dimensional scene model is reconstructed.
Preferably, the third step comprises:
Step 3.1: place the depth camera of the Kinect somatosensory controller, an accelerometer, and a gyroscope on a carrier in scene S;
Step 3.2: establish a fixed coordinate system with the motion starting position as the origin and the initial direction of the depth camera coinciding with the positive X-axis; use the accelerometer and gyroscope to measure the acceleration of the carrier along the X, Y, and Z axes; integrate the acceleration once to obtain the velocity of the carrier and twice to obtain its position, thereby localizing the carrier; meanwhile, obtain the rotation angle of the carrier from the gyroscope;
Step 3.3: convert the relative displacement and rotation angle of the carrier between poses into the initial transformation matrices between the two point clouds.
Preferably, the fourth step comprises:
Step 4.1: take the compressed point clouds Pk and Pk-1 under two different viewing angles as the input sets of the ICP algorithm, and take the matrices obtained in the third step as the initial transformation matrices of the ICP algorithm;
Step 4.2: rotate and translate the compressed reference point cloud Pk with the initial transformation matrices to obtain Pk';
Step 4.3: screen matching point pairs: use the nearest-point algorithm to find, in the compressed target point cloud Pk-1, the matching feature points with the smallest Euclidean distance to the feature points in Pk', forming a set of feature-point pairs;
Step 4.4: compute the error: calculate the sum of squared distances between the two groups of matched feature points after the transformation, denoted dk;
Step 4.5: use the matching point pairs to solve the rotation and translation matrices of the spatial coordinate transformation based on the quaternion method, apply them to the reference point cloud to obtain Pk'', then rescreen the matching point pairs and update dk;
Step 4.6: iterate the operations of step 4.5 until convergence, that is, until dk falls below a threshold or the set number of iterations is reached.
The innovations of this method are:
(1) A PFH-feature-based point cloud compression algorithm is used: it achieves a high compression ratio and effectively reduces the size of the ICP input sets, saving computation time in point cloud registration and fusion, while retaining abundant feature points that facilitate registration.
(2) A point cloud registration method based on inertial navigation is proposed: the essence of point cloud registration is computing the rotation and translation matrices R and T between two point clouds under different viewing angles. The ICP algorithm is widely used to solve this problem, but the accuracy of traditional ICP depends on the given initial value. In the present experiments, the Kinect camera moves through an indoor scene with the carrier, acquiring point clouds under different viewing angles. If the position of the Kinect camera on the carrier is fixed, R and T can equally be represented by the rotation and translation of the carrier between viewing angles; the position and rotation offsets supplied by the inertial sensors therefore provide ICP with an accurate approximate initial value, reducing the number of ICP iterations, accelerating convergence, and improving the real-time performance of the whole reconstruction algorithm.
Brief description of the drawings
Fig. 1 is a block diagram of the three-dimensional reconstruction modules;
Fig. 2 is the flow chart of the three-dimensional reconstruction algorithm;
Fig. 3 is the flow chart of the point cloud compression algorithm;
Fig. 4 is the flow chart of the point cloud registration algorithm.
Embodiment
To make the present invention clearer, it is described in detail below with a preferred embodiment and the accompanying drawings.
The present invention studies how a sensor moving through an indoor scene can continually obtain point clouds under different viewing angles and reconstruct the three-dimensional model of the whole scene in real time.
Description of the problem:
For a scene S, the Kinect camera successively acquires the point sets {M1, ..., Mk, ..., Mn} from the viewing angles {V1, ..., Vk, ..., Vn}. The goal of mobile three-dimensional reconstruction is to use the point cloud data obtained under different viewing angles to compute, under a reconstruction algorithm R(·), S' = R(M1, ..., Mk, ..., Mn) such that S and S' satisfy the constraint of a similarity measure function E(·):
|E(S) - E(S')| < ε
Analysis of the problem:
Because the number of points in the scene cloud is enormous, achieving real-time performance first requires preprocessing the acquired point clouds, including denoising and compression. Meanwhile, the most important step in the reconstruction algorithm R(·) is computing the transformation matrices Rk and Tk required to register Mk with Mk-1, and the registration process depends on the feature-point pairs in Mk and Mk-1. The compression algorithm must therefore not only achieve a high compression ratio but also retain abundant feature points.
The ICP algorithm is the classic method for point cloud registration, but because traditional ICP lacks a registration initial value, or has one of poor precision, it easily falls into a local optimum, which slows iterative convergence. A good registration initial value is therefore the precondition for ICP-based point cloud registration.
Based on the above analysis, the present invention proposes a mobile-context real-time three-dimensional reconstruction method based on PFH and inertial navigation, comprising the following steps:
Step 1: start the system, reset the sensor information, and establish a fixed coordinate system;
Step 2: obtain the point cloud under the initial pose (i.e., with the carrier at the coordinate origin) and, after compression, use it to build a partial point cloud model of the scene;
Step 3: obtain the pose information (the position and angle of the carrier) in real time from the inertial unit (accelerometer and gyroscope); when the pose change exceeds a threshold, obtain the point cloud under the new viewing angle and compress it;
Step 4: compute the initial transformation matrices between the current point cloud and the partial scene point cloud in the fixed coordinate system from the change in position;
Step 5: perform point cloud registration based on the improved ICP algorithm (with the initial transformation matrices as the ICP initial values);
Step 6: fuse the two registered point clouds; repeat steps 3 to 6 until reconstruction is judged complete, then stop.
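The loop of steps 1 to 6 can be sketched as follows. This is a minimal illustration only: every function name here (`read_pose`, `acquire_and_compress`, `register_and_fuse`) is a hypothetical stub standing in for the sensor and processing interfaces, not part of the original disclosure.

```python
import numpy as np

def read_pose(t):
    """Step 3 stub: pose (position, yaw) as if integrated from the
    accelerometer and gyroscope; here the carrier moves along X."""
    return np.array([0.5 * t, 0.0, 0.0]), 0.1 * t

def acquire_and_compress(t):
    """Steps 2-3 stub: grab a depth frame and PFH-compress it
    (here, just a synthetic 50-point cloud)."""
    rng = np.random.default_rng(t)
    return rng.uniform(-1.0, 1.0, size=(50, 3))

def register_and_fuse(model, cloud, pose):
    """Steps 4-6 stub: coarse-align with the inertial pose, refine with
    ICP (reduced here to a pure translation), and append to the model."""
    position, _yaw = pose
    return np.vstack([model, cloud + position])

POSE_THRESHOLD = 0.4  # metres of travel before grabbing a new frame

model = acquire_and_compress(0)          # step 2: initial partial model
last_pos, _ = read_pose(0)
for t in range(1, 10):                   # steps 3-6, repeated
    pos, yaw = read_pose(t)
    if np.linalg.norm(pos - last_pos) < POSE_THRESHOLD:
        continue                         # pose change below threshold
    cloud = acquire_and_compress(t)
    model = register_and_fuse(model, cloud, (pos, yaw))
    last_pos = pos
```

With the stub motion above, each of the nine pose updates exceeds the threshold, so the partial model grows by one compressed frame per step.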
As shown in Fig. 3, the PFH-based mobile-context point cloud compression algorithm corresponds to step 2 of the reconstruction flow. The concrete operations are as follows:
(1) Receive the point cloud Mk from the point cloud acquisition device.
(2) Compute the PFH feature descriptor of each point.
(3) Compute the standard deviation of each PFH feature histogram. Observation and analysis show that points with a small standard deviation are corner points and points with a large standard deviation are planar points, so the standard deviation is used as the classification criterion for simplification.
(4) Determine the standard deviation threshold required for compression. For each point in the cloud, compare its PFH standard deviation with the threshold: if it is less than or equal to the threshold, retain the point; otherwise, discard it. Repeat until all points in the cloud have been processed, and store the retained points in the set Pk, the compressed point cloud.
With the above compression algorithm, the point cloud compression rate can exceed 95%, saving a great deal of computation time in the subsequent registration and fusion and improving the feasibility of real-time reconstruction. Meanwhile, the marked variation of the PFH descriptor across locations preserves the feature points needed to complete scene reconstruction, facilitating subsequent registration. The implementation and theoretical basis of the compression algorithm are described in the authors' earlier invention patent, "Mobile-context point cloud compression algorithm based on point feature histograms".
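The threshold test of step (4) can be sketched as below. The PFH standard deviations are synthetic stand-ins (a real implementation would compute each point's PFH histogram from its neighborhood first); with a uniformly distributed stand-in and a 0.05 threshold, roughly 95% of the points are discarded, matching the compression rate claimed above.

```python
import numpy as np

def compress_by_pfh_std(points, pfh_std, threshold):
    """Keep the points whose PFH-histogram standard deviation is at or
    below the threshold (corner-like points, per step (3)); discard the
    rest (planar points)."""
    mask = pfh_std <= threshold
    return points[mask]

# Synthetic stand-in for a real PFH computation: 1000 points, each with
# a precomputed histogram standard deviation.
rng = np.random.default_rng(0)
cloud = rng.uniform(-1.0, 1.0, size=(1000, 3))
std = rng.uniform(0.0, 1.0, size=1000)

compressed = compress_by_pfh_std(cloud, std, threshold=0.05)
rate = 1.0 - compressed.shape[0] / cloud.shape[0]  # ~0.95 here
```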
The following describes step 4 of the three-dimensional reconstruction flow: the procedure and principle for obtaining the initial transformation matrices from inertial sensing information.
(1) Use the inertial unit (accelerometer and gyroscope) to obtain the relative displacement and rotation angle between different poses.
In this experiment a trolley is deployed in the scene, carrying the information-acquiring sensors: a Kinect depth camera, an accelerometer, and a gyroscope.
First, the sensor information is reset and a fixed coordinate system is established, with the motion starting position as the origin and the initial camera direction coinciding with the positive X-axis. The inertial unit measures the carrier's acceleration along the X, Y, and Z axes. As shown in formulas (1) and (2), one integration yields the velocity and a second integration yields the position of the carrier, thereby localizing it; meanwhile, the rotation angle of the carrier is obtained from the gyroscope:
v(t) = v0 + ∫ a(t) dt    (1)
p(t) = p0 + ∫ v(t) dt    (2)
where a, v, and p are the acceleration, velocity, and position, respectively.
Although the information obtained after long-term integration accumulates a certain error, the relative displacement and rotation angle of the trolley between any two closely spaced poses can be obtained with very small error. Moreover, the equipment composing the positioning system is entirely mounted on the carrier; it neither relies on external information during operation nor radiates energy outward, and so is not easily interfered with.
(2) Convert the relative displacement and rotation angle of the trolley between poses into the initial transformation matrices between the two point clouds.
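Formulas (1) and (2) can be realized numerically with cumulative sums over the sampled accelerations. The sketch below assumes uniformly spaced samples and simple rectangular integration; it is an illustration of the dead-reckoning principle, not the patent's implementation.

```python
import numpy as np

def dead_reckon(accel, dt, v0=None, p0=None):
    """Integrate (N, 3) body-frame acceleration samples at spacing dt
    once to velocity (formula (1)) and twice to position (formula (2)),
    using rectangular integration."""
    accel = np.asarray(accel, dtype=float)
    v0 = np.zeros(3) if v0 is None else np.asarray(v0, dtype=float)
    p0 = np.zeros(3) if p0 is None else np.asarray(p0, dtype=float)
    v = v0 + np.cumsum(accel, axis=0) * dt   # (1) v = v0 + integral of a
    p = p0 + np.cumsum(v, axis=0) * dt       # (2) p = p0 + integral of v
    return v, p

# Constant 1 m/s^2 acceleration along X for 1 s, sampled at 100 Hz:
a = np.tile([1.0, 0.0, 0.0], (100, 1))
v, p = dead_reckon(a, dt=0.01)
# Final velocity is 1 m/s; final position is ~0.5 m
# (0.505 m with this step size, due to discretization error).
```

The discretization error visible here is one face of the drift discussed above: it stays negligible between two closely spaced poses, which is exactly the regime in which the initial transformation matrices are computed.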
As shown in Fig. 4, the inertial-navigation-based point cloud registration algorithm corresponds to step 5 of the reconstruction flow:
(1) Take the compressed point clouds Pk and Pk-1 under two different viewing angles as the input sets of the ICP algorithm, and take the matrices obtained above as the initial transformation matrices of the ICP algorithm.
(2) Rotate and translate the compressed reference point cloud Pk with the initial transformation matrices to obtain Pk'.
(3) Screen matching point pairs: use the nearest-point algorithm to find, in the compressed target point cloud Pk-1, the matching feature points with the smallest Euclidean distance to the feature points in Pk', forming a set of feature-point pairs.
(4) Compute the error: calculate the sum of squared distances between the two groups of matched feature points after the transformation, denoted dk.
(5) Use the matching point pairs to solve the rotation and translation matrices of the spatial coordinate transformation based on the quaternion method, apply them to the reference point cloud to obtain Pk'', then rescreen the matching point pairs and update dk.
(6) Iterate the operations of step (5) until convergence, that is, until dk falls below a threshold or the set number of iterations is reached.
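Steps (1) to (6) can be sketched as the seeded ICP loop below. Two simplifications are assumed for brevity: the rotation is solved with the SVD (Kabsch) method, which yields the same least-squares optimum as the quaternion method named in the text, and the nearest-point search is brute force rather than an accelerated structure such as a k-d tree.

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst
    (Kabsch/SVD solution; equivalent to the quaternion method)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, R0, t0, max_iter=30, tol=1e-8):
    """ICP seeded with the inertial initial guess (R0, t0): transform,
    match nearest points, re-solve, and iterate until d_k converges."""
    cur = src @ R0.T + t0                    # step (2): apply initial guess
    R_total, t_total, d_prev = R0, t0, np.inf
    for _ in range(max_iter):
        # step (3): brute-force nearest-point matching
        dists = np.linalg.norm(cur[:, None, :] - dst[None, :, :], axis=2)
        nn = dists.argmin(axis=1)
        d_k = ((cur - dst[nn]) ** 2).sum()   # step (4): error d_k
        if abs(d_prev - d_k) < tol:          # step (6): convergence test
            break
        d_prev = d_k
        R, t = best_rigid_transform(cur, dst[nn])  # step (5)
        cur = cur @ R.T + t
        R_total, t_total = R @ R_total, R @ t_total + t
    return R_total, t_total, d_k

# Toy check: recover a known rotation about Z plus a translation,
# seeding ICP with a slightly perturbed (inertial-style) guess.
g = np.arange(4) * 0.5
P = np.array([[x, y, z] for x in g for y in g for z in g])  # 64-pt grid
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0.0],
                   [np.sin(th),  np.cos(th), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, -0.1, 0.05])
Q = P @ R_true.T + t_true
th0 = 0.29                                   # imperfect inertial estimate
R0 = np.array([[np.cos(th0), -np.sin(th0), 0.0],
               [np.sin(th0),  np.cos(th0), 0.0],
               [0.0, 0.0, 1.0]])
t0 = np.array([0.19, -0.09, 0.05])
R_est, t_est, err = icp(P, Q, R0, t0)
```

Because the seed already lies close to the true transform, the first nearest-point screening pairs every point correctly and the loop converges in a couple of iterations, which is precisely the benefit claimed for the inertial initial value.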
Claims (3)
1. A mobile-context real-time three-dimensional reconstruction method based on point cloud compression and inertial navigation, characterized in that it comprises the following steps:
the first step: for a scene S, use the camera of a Kinect somatosensory controller to successively acquire the point sets M1, ..., Mk, ..., Mn from the viewing angles V1, ..., Vk, ..., Vn, where Mk is the point cloud obtained at time k from view Vk;
the second step: compress the point sets {M1, ..., Mk, ..., Mn} into {P1, ..., Pk, ..., Pn}, where each Pk is a subset of Mk satisfying the compression constraint defined by the preset sampling ratio ρ, the feature-point quality metric function Fc(·), and the set feature-point retention rate δ;
the third step: use inertial navigation to obtain the position and rotation offset of the carrier in scene S at the viewing angles V1, ..., Vk, ..., Vn, and convert them into the initial rotation and translation matrices required by the ICP algorithm;
the fourth step: use the initial transformation matrices to coarsely register the compressed point clouds {P1, ..., Pk, ..., Pn}, and then determine the transformation matrices Rk and Tk based on the ICP algorithm to complete fine registration;
the fifth step: as the sensor of the Kinect somatosensory controller roams through scene S, continually splice and fuse each newly obtained point cloud with the partial model until the complete three-dimensional scene model is reconstructed.
2. The mobile-context real-time three-dimensional reconstruction method based on point cloud compression and inertial navigation of claim 1, characterized in that the third step comprises:
step 3.1: place the depth camera of the Kinect somatosensory controller, an accelerometer, and a gyroscope on a carrier in scene S;
step 3.2: establish a fixed coordinate system with the motion starting position as the origin and the initial direction of the depth camera coinciding with the positive X-axis; use the accelerometer and gyroscope to measure the acceleration of the carrier along the X, Y, and Z axes; integrate the acceleration once to obtain the velocity of the carrier and twice to obtain its position, thereby localizing the carrier; meanwhile, obtain the rotation angle of the carrier from the gyroscope;
step 3.3: convert the relative displacement and rotation angle of the carrier between poses into the initial transformation matrices between the two point clouds.
3. The mobile-context real-time three-dimensional reconstruction method based on point cloud compression and inertial navigation of claim 1, characterized in that the fourth step comprises:
step 4.1: take the compressed point clouds Pk and Pk-1 under two different viewing angles as the input sets of the ICP algorithm, and take the matrices obtained in the third step as the initial transformation matrices of the ICP algorithm;
step 4.2: rotate and translate the compressed reference point cloud Pk with the initial transformation matrices to obtain Pk';
step 4.3: screen matching point pairs: use the nearest-point algorithm to find, in the compressed target point cloud Pk-1, the matching feature points with the smallest Euclidean distance to the feature points in Pk', forming a set of feature-point pairs;
step 4.4: compute the error: calculate the sum of squared distances between the two groups of matched feature points after the transformation, denoted dk;
step 4.5: use the matching point pairs to solve the rotation and translation matrices of the spatial coordinate transformation based on the quaternion method, apply them to the reference point cloud to obtain Pk'', then rescreen the matching point pairs and update dk;
step 4.6: iterate the operations of step 4.5 until convergence, that is, until dk falls below a threshold or the set number of iterations is reached.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201510437252.5A CN105261060A (en) | 2015-07-23 | 2015-07-23 | Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN105261060A true CN105261060A (en) | 2016-01-20 |
Family
ID=55100731
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201510437252.5A Pending CN105261060A (en) | 2015-07-23 | 2015-07-23 | Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN105261060A (en) |
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056664A (en) * | 2016-05-23 | 2016-10-26 | 武汉盈力科技有限公司 | Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision |
CN106504275A (en) * | 2016-10-12 | 2017-03-15 | 杭州深瞳科技有限公司 | A kind of inertial positioning and the real-time three-dimensional method for reconstructing of point cloud registering coupling and complementing |
CN106529394A (en) * | 2016-09-19 | 2017-03-22 | 广东工业大学 | Indoor scene and object simultaneous recognition and modeling method |
CN106796728A (en) * | 2016-11-16 | 2017-05-31 | 深圳市大疆创新科技有限公司 | Generate method, device, computer system and the mobile device of three-dimensional point cloud |
CN106898022A (en) * | 2017-01-17 | 2017-06-27 | 徐渊 | A kind of hand-held quick three-dimensional scanning system and method |
CN107194964A (en) * | 2017-05-24 | 2017-09-22 | 电子科技大学 | A kind of VR social intercourse systems and its method based on real-time body's three-dimensional reconstruction |
CN107507177A (en) * | 2017-08-30 | 2017-12-22 | 广东工业大学 | Processing of robots object localization method and device based on 3-D scanning |
CN107610219A (en) * | 2017-08-29 | 2018-01-19 | 武汉大学 | The thick densification method of Pixel-level point cloud that geometry clue perceives in a kind of three-dimensional scenic reconstruct |
CN107633518A (en) * | 2017-09-26 | 2018-01-26 | 南昌航空大学 | A kind of product design detection method based on Kinect |
WO2018049843A1 (en) * | 2016-09-14 | 2018-03-22 | 杭州思看科技有限公司 | Three-dimensional sensor system and three-dimensional data acquisition method |
CN108280866A (en) * | 2016-12-30 | 2018-07-13 | 乐视汽车(北京)有限公司 | Road Processing Method of Point-clouds and system |
CN108345005A (en) * | 2018-02-22 | 2018-07-31 | 重庆大学 | The real-time continuous autonomous positioning orientation system and navigation locating method of tunnelling machine |
CN108847121A (en) * | 2018-07-04 | 2018-11-20 | 深圳地平线机器人科技有限公司 | The method and apparatus for constructing high-precision map |
WO2019033882A1 (en) * | 2017-08-16 | 2019-02-21 | 北京京东尚科信息技术有限公司 | Data processing method, apparatus and system, and computer readable storage medium |
CN109931923A (en) * | 2017-12-15 | 2019-06-25 | 阿里巴巴集团控股有限公司 | A kind of navigation guide map generalization method and apparatus |
CN110345936A (en) * | 2019-07-09 | 2019-10-18 | 上海有个机器人有限公司 | The track data processing method and its processing system of telecontrol equipment |
CN111028247A (en) * | 2019-12-12 | 2020-04-17 | 长春工业大学 | Disc type element identification method and system based on point cloud loss |
CN111133476A (en) * | 2017-09-18 | 2020-05-08 | 苹果公司 | Point cloud compression |
CN111784835A (en) * | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN111784766A (en) * | 2020-06-08 | 2020-10-16 | 易思维(杭州)科技有限公司 | Method for calculating pose of threaded target object |
CN114565689A (en) * | 2022-02-28 | 2022-05-31 | 燕山大学 | Axial symmetry three-dimensional model data compression reconstruction method |
CN114820604A (en) * | 2022-06-27 | 2022-07-29 | 四川大学 | Blade profile data splicing method and device based on nearest point distance loss |
US11663744B2 (en) | 2018-07-02 | 2023-05-30 | Apple Inc. | Point cloud compression with adaptive filtering |
US11676309B2 (en) | 2017-09-18 | 2023-06-13 | Apple Inc | Point cloud compression using masks |
US11683525B2 (en) | 2018-07-05 | 2023-06-20 | Apple Inc. | Point cloud compression with multi-resolution video encoding |
CN116342671A (en) * | 2023-05-23 | 2023-06-27 | 第六镜科技(成都)有限公司 | Point cloud and CAD model registration method, device, electronic equipment and storage medium |
US11748916B2 (en) | 2018-10-02 | 2023-09-05 | Apple Inc. | Occupancy map block-to-patch information compression |
US11798196B2 (en) | 2020-01-08 | 2023-10-24 | Apple Inc. | Video-based point cloud compression with predicted patches |
US11818401B2 (en) | 2017-09-14 | 2023-11-14 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables |
US11895307B2 (en) | 2019-10-04 | 2024-02-06 | Apple Inc. | Block-based predictive coding for point cloud compression |
US11935272B2 (en) | 2017-09-14 | 2024-03-19 | Apple Inc. | Point cloud compression |
US11948338B1 (en) | 2021-03-29 | 2024-04-02 | Apple Inc. | 3D volumetric content encoding using 2D videos and simplified 3D meshes |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101236660A (en) * | 2008-03-06 | 2008-08-06 | 张利群 | Three-dimensional scanners and its three-dimensional model reconfiguration method |
CN102467753A (en) * | 2010-11-04 | 2012-05-23 | 中国科学院深圳先进技术研究院 | Method and system for reconstructing time-varying point cloud based on framework registration |
US20120194516A1 (en) * | 2011-01-31 | 2012-08-02 | Microsoft Corporation | Three-Dimensional Environment Reconstruction |
CN103092897A (en) * | 2011-11-08 | 2013-05-08 | 南京理工大学常熟研究院有限公司 | Quick K neighbor searching method for point cloud data processing |
Non-Patent Citations (1)
Title |
---|
HUANG Junjun: "Research on real-time three-dimensional reconstruction of mobile scenes based on PFH and information fusion: simplification algorithm", China Master's Theses Full-text Database, Information Science and Technology |
Cited By (48)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106056664B (en) * | 2016-05-23 | 2018-09-21 | 武汉盈力科技有限公司 | A kind of real-time three-dimensional scene reconstruction system and method based on inertia and deep vision |
CN106056664A (en) * | 2016-05-23 | 2016-10-26 | 武汉盈力科技有限公司 | Real-time three-dimensional scene reconstruction system and method based on inertia and depth vision |
US10309770B2 (en) | 2016-09-14 | 2019-06-04 | Hangzhou Scantech Co., Ltd | Three-dimensional sensor system and three-dimensional data acquisition method |
WO2018049843A1 (en) * | 2016-09-14 | 2018-03-22 | 杭州思看科技有限公司 | Three-dimensional sensor system and three-dimensional data acquisition method |
CN106529394B (en) * | 2016-09-19 | 2019-07-19 | 广东工业大学 | A kind of indoor scene object identifies simultaneously and modeling method |
CN106529394A (en) * | 2016-09-19 | 2017-03-22 | 广东工业大学 | Indoor scene and object simultaneous recognition and modeling method |
CN106504275A (en) * | 2016-10-12 | 2017-03-15 | 杭州深瞳科技有限公司 | A kind of inertial positioning and the real-time three-dimensional method for reconstructing of point cloud registering coupling and complementing |
CN106504275B (en) * | 2016-10-12 | 2019-03-05 | 杭州深瞳科技有限公司 | A kind of real-time three-dimensional method for reconstructing of inertial positioning and point cloud registering coupling and complementing |
CN106796728A (en) * | 2016-11-16 | 2017-05-31 | 深圳市大疆创新科技有限公司 | Generate method, device, computer system and the mobile device of three-dimensional point cloud |
US11004261B2 (en) | 2016-11-16 | 2021-05-11 | SZ DJI Technology Co., Ltd. | Method, device, computer system, and mobile apparatus for generating three-dimensional point cloud |
CN108280866A (en) * | 2016-12-30 | 2018-07-13 | 乐视汽车(北京)有限公司 | Road Processing Method of Point-clouds and system |
CN108280866B (en) * | 2016-12-30 | 2021-07-27 | 法法汽车(中国)有限公司 | Road point cloud data processing method and system |
CN106898022A (en) * | 2017-01-17 | 2017-06-27 | 徐渊 | Hand-held rapid three-dimensional scanning system and method |
CN107194964B (en) * | 2017-05-24 | 2020-10-09 | 电子科技大学 | VR social contact system based on real-time human body three-dimensional reconstruction and method thereof |
CN107194964A (en) * | 2017-05-24 | 2017-09-22 | 电子科技大学 | VR social system based on real-time human body three-dimensional reconstruction and method thereof |
WO2019033882A1 (en) * | 2017-08-16 | 2019-02-21 | 北京京东尚科信息技术有限公司 | Data processing method, apparatus and system, and computer readable storage medium |
CN107610219A (en) * | 2017-08-29 | 2018-01-19 | 武汉大学 | Pixel-level point cloud densification method for sensing geometric clues in three-dimensional scene reconstruction |
CN107610219B (en) * | 2017-08-29 | 2020-03-10 | 武汉大学 | Pixel-level point cloud densification method for sensing geometric clues in three-dimensional scene reconstruction |
CN107507177A (en) * | 2017-08-30 | 2017-12-22 | 广东工业大学 | Robot machining object localization method and device based on three-dimensional scanning |
US11935272B2 (en) | 2017-09-14 | 2024-03-19 | Apple Inc. | Point cloud compression |
US11818401B2 (en) | 2017-09-14 | 2023-11-14 | Apple Inc. | Point cloud geometry compression using octrees and binary arithmetic encoding with adaptive look-up tables |
US11922665B2 (en) | 2017-09-18 | 2024-03-05 | Apple Inc. | Point cloud compression |
CN111133476B (en) * | 2017-09-18 | 2023-11-10 | 苹果公司 | System, apparatus and method for compression and decompression of a point cloud comprising a plurality of points |
CN111133476A (en) * | 2017-09-18 | 2020-05-08 | 苹果公司 | Point cloud compression |
US11676309B2 (en) | 2017-09-18 | 2023-06-13 | Apple Inc | Point cloud compression using masks |
CN107633518A (en) * | 2017-09-26 | 2018-01-26 | 南昌航空大学 | Product design detection method based on Kinect |
CN109931923A (en) * | 2017-12-15 | 2019-06-25 | 阿里巴巴集团控股有限公司 | Navigation guide map generation method and apparatus |
CN108345005B (en) * | 2018-02-22 | 2020-02-07 | 重庆大学 | Real-time continuous autonomous positioning and orienting system and navigation positioning method of tunnel boring machine |
CN108345005A (en) * | 2018-02-22 | 2018-07-31 | 重庆大学 | Real-time continuous autonomous positioning and orienting system and navigation positioning method of tunnel boring machine |
US11663744B2 (en) | 2018-07-02 | 2023-05-30 | Apple Inc. | Point cloud compression with adaptive filtering |
CN108847121A (en) * | 2018-07-04 | 2018-11-20 | 深圳地平线机器人科技有限公司 | Method and apparatus for constructing a high-precision map |
US11683525B2 (en) | 2018-07-05 | 2023-06-20 | Apple Inc. | Point cloud compression with multi-resolution video encoding |
US11748916B2 (en) | 2018-10-02 | 2023-09-05 | Apple Inc. | Occupancy map block-to-patch information compression |
CN110345936B (en) * | 2019-07-09 | 2021-02-09 | 上海有个机器人有限公司 | Track data processing method and processing system of motion device |
CN110345936A (en) * | 2019-07-09 | 2019-10-18 | 上海有个机器人有限公司 | Track data processing method and processing system of a motion device |
US11895307B2 (en) | 2019-10-04 | 2024-02-06 | Apple Inc. | Block-based predictive coding for point cloud compression |
CN111028247A (en) * | 2019-12-12 | 2020-04-17 | 长春工业大学 | Disc type element identification method and system based on point cloud loss |
US11798196B2 (en) | 2020-01-08 | 2023-10-24 | Apple Inc. | Video-based point cloud compression with predicted patches |
CN111784766A (en) * | 2020-06-08 | 2020-10-16 | 易思维(杭州)科技有限公司 | Method for calculating pose of threaded target object |
CN111784766B (en) * | 2020-06-08 | 2024-05-24 | 易思维(杭州)科技股份有限公司 | Method for calculating pose of threaded target object |
CN111784835A (en) * | 2020-06-28 | 2020-10-16 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
CN111784835B (en) * | 2020-06-28 | 2024-04-12 | 北京百度网讯科技有限公司 | Drawing method, drawing device, electronic equipment and readable storage medium |
US11948338B1 (en) | 2021-03-29 | 2024-04-02 | Apple Inc. | 3D volumetric content encoding using 2D videos and simplified 3D meshes |
CN114565689B (en) * | 2022-02-28 | 2024-02-02 | 燕山大学 | Axisymmetric three-dimensional model data compression reconstruction method |
CN114565689A (en) * | 2022-02-28 | 2022-05-31 | 燕山大学 | Axisymmetric three-dimensional model data compression reconstruction method |
CN114820604A (en) * | 2022-06-27 | 2022-07-29 | 四川大学 | Blade profile data splicing method and device based on nearest point distance loss |
CN116342671B (en) * | 2023-05-23 | 2023-08-08 | 第六镜科技(成都)有限公司 | Point cloud and CAD model registration method, device, electronic equipment and storage medium |
CN116342671A (en) * | 2023-05-23 | 2023-06-27 | 第六镜科技(成都)有限公司 | Point cloud and CAD model registration method, device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN105261060A (en) | Point cloud compression and inertial navigation based mobile context real-time three-dimensional reconstruction method | |
CN109272537B (en) | Panoramic point cloud registration method based on structured light | |
CN105469405B (en) | Simultaneous localization and mapping method based on visual ranging | |
CN111156998B (en) | Mobile robot positioning method based on RGB-D camera and IMU information fusion | |
CN100388319C (en) | Multi-viewpoint attitude estimating and self-calibrating method for three-dimensional active vision sensor | |
CN105913489B (en) | Indoor three-dimensional scene reconstruction method using planar features | |
Fathi et al. | Automated sparse 3D point cloud generation of infrastructure using its distinctive visual features | |
CN109166149A (en) | Positioning and three-dimensional wireframe reconstruction method and system fusing a binocular camera and an IMU | |
CN112001926B (en) | RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping | |
CN101435732B (en) | Method for estimating the rotation axis and centroid of a space target based on binocular optical flow | |
CN107240129A (en) | Object and indoor small-scale scene recovery and modeling method based on RGB-D camera data | |
CN106446815A (en) | Simultaneous positioning and map building method | |
CN107917710B (en) | Indoor real-time positioning and three-dimensional map construction method based on single line laser | |
CN103994765B (en) | Positioning method of inertial sensor | |
JP6483832B2 (en) | Method and system for scanning an object using an RGB-D sensor | |
CN102848389A (en) | Realization method for mechanical arm calibrating and tracking system based on visual motion capture | |
Li et al. | Leveraging planar regularities for point line visual-inertial odometry | |
Cheng et al. | Mobile robot indoor dual Kalman filter localisation based on inertial measurement and stereo vision | |
Xu et al. | 3D Reconstruction system for collaborative scanning based on multiple RGB-D cameras | |
CN113870366A (en) | Calibration method and calibration system of three-dimensional scanning system based on pose sensor | |
Wang et al. | Online extrinsic parameter calibration for robotic camera–encoder system | |
CN112833892A (en) | Semantic mapping method based on track alignment | |
TW202238449A (en) | Indoor positioning system and indoor positioning method | |
Luo et al. | Multisensor integrated stair recognition and parameters measurement system for dynamic stair climbing robots | |
Liu et al. | A real-time stereo visual-inertial SLAM system based on point-and-line features |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20160120 |