CN103413352A - Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion - Google Patents
- Publication number
- CN103413352A
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a scene three-dimensional reconstruction method based on RGBD multi-sensor fusion. The method solves the technical problem that existing scene three-dimensional reconstruction methods reconstruct slowly. In the technical scheme, the SURF algorithm first replaces the SIFT algorithm for feature matching; then the division standard of the sub-maps is improved according to an observation-center method, so that the image pyramid obtained after multi-layer graph abstraction better reflects the topological structure of the environment, which further helps to speed up the construction of large-scale scene three-dimensional maps. Meanwhile, in the map-generation part, a classical space-volume-based multi-frame fusion method reduces redundant points and raises the reconstruction speed. Testing shows that the reconstruction speed of the method is three times that of the background art.
Description
Technical field
The present invention relates to a scene three-dimensional reconstruction method, and in particular to a scene three-dimensional reconstruction method based on RGBD multi-sensor fusion.
Background technology
In the field of autonomous mobile robots, the exploration and mapping of unknown environments has always been a research hotspot. Existing scene three-dimensional reconstruction methods fall mainly into two large classes: those based on infrared laser sensors and those based on vision sensors.
The document "RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments, 12th ISER, 2010, 20: p22-25" discloses a TORO graph-optimization algorithm based on an RGB-D sensor. The method uses the SIFT algorithm for feature matching to obtain an initial estimate of the current frame's pose with respect to the first frame, and then uses the ICP algorithm for point cloud matching to optimize the initial estimate; when a loop closure is detected, all input frames are added to a pose graph for global optimization, yielding a high-precision map. Regarding time cost, however, feature matching uses the computationally slow SIFT algorithm, which cannot satisfy the speed requirement of real-time three-dimensional reconstruction; a faster algorithm should therefore be adopted for feature matching to improve real-time performance. Regarding map optimization, because the trajectory of the mobile device builds the map centered on a single position, blank areas appear and a complete map cannot be established. Moreover, when the 3D point cloud is output, the point cloud data of the frames are simply added together, which increases the number of redundant points; this not only lowers computation speed but also greatly interferes with subsequent processing.
Summary of the invention
To overcome the low reconstruction speed of existing scene three-dimensional reconstruction methods, the invention provides a scene three-dimensional reconstruction method based on RGBD multi-sensor fusion. The method first adopts the SURF algorithm instead of the SIFT algorithm for feature matching. Second, the division standard of the sub-maps is improved according to the observation-center method, so that the image pyramid obtained after multi-layer graph abstraction better reflects the topological structure of the environment, further helping to speed up the construction of large-scale scene three-dimensional maps. Meanwhile, in the map-generation part, a classical space-volume-based multi-frame fusion method is proposed to reduce redundant points, which not only improves the reconstruction speed but also provides more accurate data for subsequent operations.
The technical solution adopted by the present invention to solve the technical problem is a scene three-dimensional reconstruction method based on RGBD multi-sensor fusion, characterized by comprising the following steps:
Step 1: Use the SURF algorithm for feature extraction and matching. Feature extraction determines the positions of feature points from the extrema of an approximated Hessian matrix determinant. Given an input image I, for each pixel x = (x, y), use formula (1) to compute its Hessian matrix H(x, σ) at scale σ.
Wherein, L_xx(x, σ) is the convolution of the Gaussian second-order partial derivative with the image I at point x. Then compute the determinant of each H:
det(H_approx) = D_xx·D_yy − (0.9·D_xy)² (2)
Wherein, D_xx is the convolution of the box template with the image, and D_xx approximately replaces L_xx. Then judge the sign of determinant (2): if the sign is negative, the point is not a local extremum; if the sign is positive, the point is classified as an extreme point.
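As an illustrative sketch (not part of the claimed method), the box-filter responses D_xx, D_yy, D_xy that feed formula (2) are computed in constant time per box from an integral image; the function names below are hypothetical, and only the determinant expression itself comes from the text:

```python
import numpy as np

def integral_image(img):
    """Integral image with a zero first row/column, so box sums need no bounds checks."""
    ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.float64)
    ii[1:, 1:] = np.cumsum(np.cumsum(img, axis=0), axis=1)
    return ii

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using four lookups in the integral image."""
    return ii[r1, c1] - ii[r0, c1] - ii[r1, c0] + ii[r0, c0]

def det_hessian_approx(Dxx, Dyy, Dxy):
    """Formula (2): det(H_approx) = Dxx*Dyy - (0.9*Dxy)**2."""
    return Dxx * Dyy - (0.9 * Dxy) ** 2
```

Box filters of different sizes (built from `box_sum` regions) then yield one determinant response image per scale.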
Next, construct the scale space of the image: SURF obtains the response image of the Hessian matrix determinant from box filter templates of different sizes and the integral image, and then applies 3D non-maximum suppression on the response images to find blobs at each scale. Non-maximum suppression is carried out in three dimensions with a 3 × 3 × 3 template: a point is chosen as a feature point when its response h exceeds the preset Hessian threshold H and is larger than the corresponding values of all 26 neighboring points; three-dimensional linear interpolation then refines its position.
To compute the dominant orientation, within a circle of radius 6s centered on the feature point (s being the feature point's scale), compute Haar wavelet responses over the image; design a sector-shaped sliding window with a subtended angle of π/3 centered on a candidate direction; rotate the window in steps of 0.2 radians; accumulate all Haar responses inside the window; the principal direction is the direction corresponding to the maximum accumulated Haar response.
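The orientation assignment above can be sketched as follows. The inputs (per-response angles plus the horizontal/vertical Haar responses dx, dy) and the vector-summation criterion follow standard SURF practice but are assumptions here, since the text does not spell them out:

```python
import numpy as np

def dominant_orientation(angles, dx, dy, window=np.pi / 3, step=0.2):
    """Rotate a sector window of the given subtended angle in `step`-radian
    increments; in each position sum the Haar responses whose angle falls
    inside the window; the principal direction is the angle of the longest
    summed response vector."""
    best_len, best_dir = -1.0, 0.0
    for centre in np.arange(0.0, 2 * np.pi, step):
        # angular distance of each response to the window centre, wrapped to [-pi, pi]
        diff = (angles - centre + np.pi) % (2 * np.pi) - np.pi
        mask = np.abs(diff) < window / 2
        sx, sy = dx[mask].sum(), dy[mask].sum()
        length = sx * sx + sy * sy
        if length > best_len:
            best_len, best_dir = length, np.arctan2(sy, sx)
    return best_dir
```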
Then carry out feature point matching.
1) While computing the determinant of the H matrix, also compute its trace. By brightness difference, feature points divide into two kinds: (a) the feature point and its small surrounding neighborhood are brighter than the background, and the trace of H is positive; (b) the feature point and its small surrounding neighborhood are darker than the background, and the trace of H is negative. First compare the traces of the H matrices of two feature points: the same sign indicates that the two feature points have the same contrast; opposite signs indicate different contrast, and the similarity measurement of step 2) is skipped;
2) On the basis of step 1), measure the similarity of two feature point descriptors with the Euclidean distance:
d(X_i, X_j) = ( Σ_{k=1}^{n} (X_ik − X_jk)² )^{1/2} (3)
Wherein, X_ik denotes the k-th element of the i-th feature descriptor in the image to be registered, X_jk denotes the k-th element of the j-th feature descriptor in the reference image, and n denotes the dimension of the feature vector.
For each feature point in the image to be registered, compute its Euclidean distance to all feature points in the reference image, obtaining a distance set. Comparing over the distance set yields the minimum Euclidean distance and the second-minimum Euclidean distance. Set a threshold: when the ratio of the minimum to the second-minimum Euclidean distance is less than this threshold, the feature point is considered to match the feature point corresponding to the minimum Euclidean distance; otherwise, it has no match.
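A minimal sketch of this ratio-test matching (the threshold value 0.8 is the one the embodiment suggests; the row-per-descriptor array layout is an assumption):

```python
import numpy as np

def ratio_test_match(desc_query, desc_ref, ratio=0.8):
    """Match each descriptor of the image to be registered against all
    descriptors of the reference image by Euclidean distance; accept a match
    only when d_min < ratio * d_second (nearest / second-nearest test)."""
    matches = []
    for i, d in enumerate(desc_query):
        dists = np.linalg.norm(desc_ref - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```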
Step 2: For the point cloud registration problem of three-dimensional reconstruction pose estimation and optimization, first use the RANSAC registration algorithm for coarse matching and combine depth information for the initial pose estimation; take the computed result as the initial value of the ICP registration algorithm; then use depth information again to run the ICP algorithm, optimizing the initial pose and achieving coarse-to-fine registration.
1) Use the RANSAC algorithm to separate outliers from inliers, reject abnormal matches, and randomly build point sets from the obtained inliers for coarse matching. Let the two resulting point sets be S and T. Randomly draw three non-collinear points {S_a, S_b, S_c} from the source data set S and search for the three corresponding points {T_a, T_b, T_c} in the target point set T; use the three pairs of control points to estimate the Euclidean transformation matrix H_c. Then compute the degree of consistency between the two point sets under the estimated transformation H_c and the error threshold δ, and use the depth information of the corresponding points to obtain the initial pose estimate;
2) Use the result of step 1) as the input of the ICP algorithm to avoid convergence to a local minimum. Estimate the registration parameters
H_k = [R_k, T_k; 0, 1]
by minimizing the objective function, and compute the estimation error d_k. Use the transformation matrix H_k to update the data set: P_{k+1} = H_k(P_0) = R_k·P_0 + T_k. Iterate, and stop when the change in estimation error between two iterations is less than the threshold τ, i.e. ‖d_k − d_{k+1}‖ < τ;
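The two estimation stages can be sketched together (outside the patent text). The closed-form Euclidean transformation from point correspondences, used both for the three RANSAC control-point pairs and inside each ICP iteration, is computed here with the SVD-based Kabsch method; that choice and the brute-force nearest-neighbour search are assumed implementation details:

```python
import numpy as np

def estimate_rigid(P, Q):
    """Least-squares Euclidean transformation (R, T) with Q ~ R @ P + T,
    via the SVD-based Kabsch method; P and Q are 3xN corresponding points."""
    cp = P.mean(axis=1, keepdims=True)
    cq = Q.mean(axis=1, keepdims=True)
    U, _, Vt = np.linalg.svd((Q - cq) @ (P - cp).T)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])  # guard against reflection
    R = U @ D @ Vt
    return R, cq - R @ cp

def icp(P, Q, tau=1e-8, max_iter=50):
    """Minimal ICP: match every point of P to its nearest neighbour in Q
    (brute force), re-estimate (R, T), apply it, and stop when the change
    in mean point-to-point error between two iterations is below tau."""
    P = P.copy()
    d_prev = np.inf
    for _ in range(max_iter):
        d2 = ((P.T[:, None, :] - Q.T[None, :, :]) ** 2).sum(axis=2)
        nn = d2.argmin(axis=1)
        d_k = np.sqrt(d2.min(axis=1)).mean()  # current error d_k
        R, T = estimate_rigid(P, Q[:, nn])
        P = R @ P + T
        if abs(d_prev - d_k) < tau:           # ||d_k - d_{k+1}|| < tau
            break
        d_prev = d_k
    return P
```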
Step 3: After the optimized initial pose is obtained, take it as a node and add its pose relations to adjacent frames as edges to the graph; then use the HOG-Man graph-optimization algorithm for global optimization of the map, obtaining a high-precision three-dimensional map.
Using the distance to the camera's observation center as the standard, divide the original graph into several subgraphs and represent each subgraph by one node, obtaining a one-layer abstraction of the original graph; abstracting the resulting graph again and again yields a multi-layer graph structure that extracts the topological structure of the original graph.
Wherein, O is the camera center, IP is the center of the interest point, and d is the distance from IP to O.
When a new node joins the graph, it first joins the original graph; then check whether the subgraph division changes. If it changes, the higher-level graph must be updated and the top layer optimized at the same time. Only when the topological structure of the top layer changes greatly is the change propagated back to the bottom graph, i.e. the top-level graph updates the bottom graph.
When the multi-frame point cloud data are fused, model a space volume containing all frames, then process each frame in turn, revising the voxel values. The zero-valued voxels of the final space volume are the points on the final surface, and the surface so obtained has the least-squares property. A weight function is used when superposing the distance values; the final superposition equations are formulas (5) and (6):
W_{i+1}(x) = W_i(x) + W_{i−1}(x) (6)
Err(dist) = dist² × 0.0035 (8)
According to the error equation (8) of the Kinect sensor, the weight function is chosen as formula (7). Wherein, x* is the distance from a point on the single-frame surface to the camera center. Finally, the three-dimensional reconstruction rendering of the whole scene is obtained.
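A sketch of the error model and of a weighted running average for one voxel; since formula (7) is not reproduced in the text, the inverse-error weight below is an assumption, not the claimed weight function:

```python
import numpy as np

def kinect_error(dist):
    """Kinect error model, formula (8): Err(dist) = dist**2 * 0.0035."""
    return dist ** 2 * 0.0035

def fuse(distances):
    """Weighted running average of per-frame distance values for one voxel:
    each frame is weighted by the inverse of its expected sensor error
    (assumed weight choice), so near, more reliable measurements dominate."""
    d_fused, w_total = 0.0, 0.0
    for d in distances:
        w = 1.0 / kinect_error(d)
        d_fused = (d_fused * w_total + d * w) / (w_total + w)
        w_total += w  # accumulated weight, cf. formula (6)
    return d_fused
```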
The beneficial effects of the invention are as follows: the method first adopts the SURF algorithm instead of the SIFT algorithm for feature matching; second, the division standard of the sub-maps is improved according to the observation-center method, so that the image pyramid obtained after multi-layer graph abstraction better reflects the topological structure of the environment, further helping to speed up the construction of large-scale scene three-dimensional maps. Meanwhile, in the map-generation part, a classical space-volume-based multi-frame fusion method is proposed to reduce redundant points, improving the reconstruction speed. Testing shows that the reconstruction speed of the method of the invention is nearly twice that of the background art.
The present invention is elaborated below in conjunction with an embodiment.
Embodiment
The concrete steps of the scene three-dimensional reconstruction method based on RGBD multi-sensor fusion of the present invention are as follows:
1. Feature extraction and matching.
Use the SURF algorithm for feature extraction and matching. Feature extraction determines the positions of feature points from the extrema of an approximated Hessian matrix determinant. Given an input image I, for each pixel x = (x, y), use formula (1) to compute its Hessian matrix H(x, σ) at scale σ.
Wherein, L_xx(x, σ) is the convolution of the Gaussian second-order partial derivative with the image I at point x. Then compute the determinant of each H:
det(H_approx) = D_xx·D_yy − (0.9·D_xy)² (2)
Wherein, D_xx is the convolution of the box template with the image, and D_xx approximately replaces L_xx. Then judge the sign of determinant (2): if the sign is negative, the point is not a local extremum; if the sign is positive, the point is classified as an extreme point.
Next, construct the scale space of the image, i.e. the representation of the image at different resolutions. Contrary to the pyramid-building principle of SIFT, SURF obtains the response image of the Hessian matrix determinant from box filter templates of different sizes and the integral image, and then applies 3D non-maximum suppression on the response images to find blobs at each scale. Non-maximum suppression is carried out in three dimensions with a 3 × 3 × 3 template: a point is chosen as a feature point when its response h exceeds the preset Hessian threshold H and is larger than the corresponding values of all 26 neighboring points; three-dimensional linear interpolation then refines its position.
To guarantee that the feature vector is rotation invariant, each feature point must be assigned a principal direction. Within a circle of radius 6s centered on the feature point (s being the feature point's scale), compute Haar wavelet responses over the image; design a sector-shaped sliding window with a subtended angle of π/3 centered on a candidate direction; rotate the window in steps of 0.2 radians; accumulate all Haar responses inside the window; the principal direction is the direction corresponding to the maximum accumulated Haar response.
After feature point extraction, feature point matching proceeds in the following steps:
1) While computing the determinant of the H matrix, also compute its trace. By brightness difference, feature points divide into two kinds: (a) the feature point and its small surrounding neighborhood are brighter than the background, and the trace of H is positive; (b) the feature point and its small surrounding neighborhood are darker than the background, and the trace of H is negative. According to this characteristic, first compare the traces of the H matrices of two feature points: the same sign indicates that the two feature points have the same contrast; opposite signs indicate different contrast, and the similarity measurement of 2) is skipped;
2) On the basis of 1), measure the similarity of two feature point descriptors with the Euclidean distance:
d(X_i, X_j) = ( Σ_{k=1}^{n} (X_ik − X_jk)² )^{1/2} (3)
Wherein, X_ik denotes the k-th element of the i-th feature descriptor in the image to be registered, X_jk denotes the k-th element of the j-th feature descriptor in the reference image, and n denotes the dimension of the feature vector.
For each feature point in the image to be registered, compute its Euclidean distance to all feature points in the reference image, obtaining a distance set. Comparing over the distance set yields the minimum Euclidean distance and the second-minimum Euclidean distance. Set a threshold, generally 0.8: when the ratio of the minimum to the second-minimum Euclidean distance is less than this threshold, the feature point is considered to match the feature point corresponding to the minimum Euclidean distance; otherwise, it has no match. The smaller the threshold, the more stable the matching, but the fewer the extreme points.
2. Pose estimation and optimization.
For the point cloud registration problem of three-dimensional reconstruction pose estimation and optimization, to prevent the ICP algorithm from converging to a local minimum and producing a wrong registration result, first use the RANSAC registration algorithm for coarse matching and combine depth information for the initial pose estimation; take the computed result as the initial value of the ICP registration algorithm; then use depth information again to run the ICP algorithm, optimizing the initial pose, realizing the coarse-to-fine registration strategy, reducing the number of iterations, and reaching a satisfactory convergence result.
Concrete steps are as follows:
1) Use the RANSAC algorithm to separate "outliers" from "inliers", reject abnormal matches, and randomly build point sets from the obtained inliers for coarse matching. Let the two resulting point sets be S and T. Randomly draw three non-collinear points {S_a, S_b, S_c} from the source data set S and search for the three corresponding points {T_a, T_b, T_c} in the target point set T; use the three pairs of control points to estimate the Euclidean transformation matrix H_c. Then compute the degree of consistency between the two point sets under the estimated transformation H_c and the error threshold δ, and use the depth information of the corresponding points to obtain the initial pose estimate;
2) Use the result computed in 1) as the input of the ICP algorithm to avoid convergence to a local minimum. Estimate the registration parameters
H_k = [R_k, T_k; 0, 1]
by minimizing the objective function, and compute the estimation error d_k. Use the transformation matrix H_k to update the data set: P_{k+1} = H_k(P_0) = R_k·P_0 + T_k. Iterate, and stop when the change in estimation error between two iterations is less than the threshold τ, i.e. ‖d_k − d_{k+1}‖ < τ.
3. HOG-Man map optimization.
After the optimized initial pose is obtained, take it as a node and add its pose relations to adjacent frames as edges to the graph; then use the HOG-Man graph-optimization algorithm for global optimization of the map, obtaining a high-precision three-dimensional map.
To establish the complete information of a region, more points must be gathered around it, i.e. a kind of surrounding acquisition within that region. Therefore, the core idea is to divide the original graph into several subgraphs using the distance to the camera's observation center (as in formula (5)) as the standard, and to represent each subgraph by one node, obtaining a one-layer abstraction of the original graph; abstracting the resulting graph again and again yields a multi-layer graph structure that extracts the topological structure of the original graph.
Wherein, let O be the camera center, IP the center of the interest point, and d the distance from IP to O, taken here as 2 m. Suppose the pose of the current camera is expressed by a 4 × 4 matrix P; the coordinate of IP is then obtained from P. Using the observation center instead of the camera center as the subgraph division standard better reflects the node distribution when a complete map is built.
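A sketch of the observation-center idea: the point IP at d = 2 m in front of the camera, plus a greedy subgraph assignment. The +z viewing axis and the clustering radius are assumptions, since the text gives neither:

```python
import numpy as np

def observation_center(P, d=2.0):
    """Observation centre IP of a camera with 4x4 pose matrix P: the point at
    distance d in front of the optical centre, assuming the viewing axis is +z
    in the camera frame."""
    return (P @ np.array([0.0, 0.0, d, 1.0]))[:3]

def assign_subgraph(centers, radius=1.0):
    """Greedy division into sub-maps: a node joins the first subgraph whose
    representative observation centre lies within `radius` (assumed parameter),
    otherwise it starts a new subgraph."""
    reps, labels = [], []
    for c in centers:
        for k, r in enumerate(reps):
            if np.linalg.norm(c - r) <= radius:
                labels.append(k)
                break
        else:
            reps.append(c)
            labels.append(len(reps) - 1)
    return labels
```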
When a new node joins the graph, it first joins the original graph; then check whether the subgraph division changes. If it changes, the higher-level graph must be updated and the top layer optimized at the same time. Only when the topological structure of the top layer changes greatly is the change propagated back to the bottom graph, i.e. the top-level graph updates the bottom graph. This guarantees real-time optimization.
According to the obtained optimization result, directly superposing all frames to build the map would introduce redundant points; to remove them, a space-volume-based point cloud fusion method is adopted. Its main idea: when the multi-frame point cloud data are fused, model a space volume containing all frames and divide it into voxels at a certain resolution; each voxel records the distance to an object in the environment; then process each frame in turn, revising the voxel values. The zero-valued voxels of the final space volume are the points on the final surface, and the surface so obtained has the least-squares property. To reflect the characteristics of every collected frame, a weight function is applied when superposing the distance values; the final superposition equations are formulas (5) and (6).
W_{i+1}(x) = W_i(x) + W_{i−1}(x) (6)
Err(dist) = dist² × 0.0035 (8)
In the experiment a Kinect camera is used as the RGBD camera; therefore, according to the error equation (8) of the Kinect sensor, the weight function is chosen as formula (7). Wherein, x* is the distance from a point on the single-frame surface to the camera center. Finally, the 3D point cloud, i.e. the three-dimensional reconstruction rendering of the whole scene, is obtained.
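The space-volume fusion can be sketched along a single ray of voxels (outside the patent text); the per-frame scalar weights and the linear zero-crossing interpolation are simplifying assumptions:

```python
import numpy as np

def fuse_frames(frame_sdfs, frame_weights):
    """Fuse several frames along one ray of voxels: the fused signed distance
    at each voxel is the weighted average over frames, so the zero set is a
    least-squares compromise between the per-frame surfaces."""
    frame_sdfs = np.asarray(frame_sdfs, dtype=float)
    frame_weights = np.asarray(frame_weights, dtype=float)
    return (frame_weights[:, None] * frame_sdfs).sum(axis=0) / frame_weights.sum()

def zero_crossing(fused, voxel_size=1.0):
    """Sub-voxel surface position: linear interpolation between the two voxels
    where the fused signed distance changes sign (the zero-valued locus)."""
    for i in range(len(fused) - 1):
        if fused[i] > 0 >= fused[i + 1]:
            t = fused[i] / (fused[i] - fused[i + 1])
            return (i + t) * voxel_size
    return None
```

For example, two frames that each see the surface at slightly different depths fuse to a surface at the weighted mean depth.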
Claims (1)
1. A scene three-dimensional reconstruction method based on RGBD multi-sensor fusion, characterized by comprising the following steps:
Step 1: Use the SURF algorithm for feature extraction and matching; feature extraction determines the positions of feature points from the extrema of an approximated Hessian matrix determinant; given an input image I, for each pixel x = (x, y), use formula (1) to compute its Hessian matrix H(x, σ) at scale σ;
Wherein, L_xx(x, σ) is the convolution of the Gaussian second-order partial derivative with the image I at point x; then compute the determinant of each H:
det(H_approx) = D_xx·D_yy − (0.9·D_xy)² (2)
Wherein, D_xx is the convolution of the box template with the image, and D_xx approximately replaces L_xx; then judge the sign of determinant (2): if the sign is negative, the point is not a local extremum; if the sign is positive, the point is classified as an extreme point;
Next, construct the scale space of the image: SURF obtains the response image of the Hessian matrix determinant from box filter templates of different sizes and the integral image, and then applies 3D non-maximum suppression on the response images to find blobs at each scale; non-maximum suppression is carried out in three dimensions with a 3 × 3 × 3 template: a point is chosen as a feature point when its response h exceeds the preset Hessian threshold H and is larger than the corresponding values of all 26 neighboring points; three-dimensional linear interpolation then refines its position;
To compute the dominant orientation, within a circle of radius 6s centered on the feature point (s being the feature point's scale), compute Haar wavelet responses over the image; design a sector-shaped sliding window with a subtended angle of π/3 centered on a candidate direction; rotate the window in steps of 0.2 radians; accumulate all Haar responses inside the window; the principal direction is the direction corresponding to the maximum accumulated Haar response;
Then carry out feature point matching;
1) While computing the determinant of the H matrix, also compute its trace; by brightness difference, feature points divide into two kinds: (a) the feature point and its small surrounding neighborhood are brighter than the background, and the trace of H is positive; (b) the feature point and its small surrounding neighborhood are darker than the background, and the trace of H is negative; first compare the traces of the H matrices of two feature points: the same sign indicates that the two feature points have the same contrast; opposite signs indicate different contrast, and the similarity measurement of step 2) is skipped;
2) On the basis of step 1), measure the similarity of two feature point descriptors with the Euclidean distance:
d(X_i, X_j) = ( Σ_{k=1}^{n} (X_ik − X_jk)² )^{1/2} (3)
Wherein, X_ik denotes the k-th element of the i-th feature descriptor in the image to be registered, X_jk denotes the k-th element of the j-th feature descriptor in the reference image, and n denotes the dimension of the feature vector;
For each feature point in the image to be registered, compute its Euclidean distance to all feature points in the reference image, obtaining a distance set; comparing over the distance set yields the minimum Euclidean distance and the second-minimum Euclidean distance; set a threshold: when the ratio of the minimum to the second-minimum Euclidean distance is less than this threshold, the feature point is considered to match the feature point corresponding to the minimum Euclidean distance; otherwise, it has no match;
Step 2: For the point cloud registration problem of three-dimensional reconstruction pose estimation and optimization, first use the RANSAC registration algorithm for coarse matching and combine depth information for the initial pose estimation; take the computed result as the initial value of the ICP registration algorithm; then use depth information again to run the ICP algorithm, optimizing the initial pose and achieving coarse-to-fine registration;
1) Use the RANSAC algorithm to separate outliers from inliers, reject abnormal matches, and randomly build point sets from the obtained inliers for coarse matching; let the two resulting point sets be S and T; randomly draw three non-collinear points {S_a, S_b, S_c} from the source data set S and search for the three corresponding points {T_a, T_b, T_c} in the target point set T; use the three pairs of control points to estimate the Euclidean transformation matrix H_c; then compute the degree of consistency between the two point sets under the estimated transformation H_c and the error threshold δ, and use the depth information of the corresponding points to obtain the initial pose estimate;
2) Use the result of step 1) as the input of the ICP algorithm to avoid convergence to a local minimum; estimate the registration parameters
H_k = [R_k, T_k; 0, 1]
by minimizing the objective function, and compute the estimation error d_k; use the transformation matrix H_k to update the data set: P_{k+1} = H_k(P_0) = R_k·P_0 + T_k; iterate, and stop when the change in estimation error between two iterations is less than the threshold τ, i.e. ‖d_k − d_{k+1}‖ < τ;
Step 3: After the optimized initial pose is obtained, take it as a node and add its pose relations to adjacent frames as edges to the graph; then use the HOG-Man graph-optimization algorithm for global optimization of the map, obtaining a high-precision three-dimensional map;
Using the distance to the camera's observation center as the standard, divide the original graph into several subgraphs and represent each subgraph by one node, obtaining a one-layer abstraction of the original graph; abstracting the resulting graph again and again yields a multi-layer graph structure that extracts the topological structure of the original graph;
Wherein, O is the camera center, IP is the center of the interest point, and d is the distance from IP to O;
When a new node joins the graph, it first joins the original graph; then check whether the subgraph division changes; if it changes, the higher-level graph must be updated and the top layer optimized at the same time; only when the topological structure of the top layer changes greatly is the change propagated back to the bottom graph, i.e. the top-level graph updates the bottom graph;
When the multi-frame point cloud data are fused, model a space volume containing all frames, then process each frame in turn, revising the voxel values; the zero-valued voxels of the final space volume are the points on the final surface, and the surface so obtained has the least-squares property; a weight function is used when superposing the distance values; the final superposition equations are formulas (5) and (6);
W_{i+1}(x) = W_i(x) + W_{i−1}(x) (6)
Err(dist) = dist² × 0.0035 (8)
According to the error equation (8) of the Kinect sensor, the weight function is chosen as formula (7); wherein, x* is the distance from a point on the single-frame surface to the camera center; finally, the three-dimensional reconstruction rendering of the whole scene is obtained.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2013103234393A CN103413352A (en) | 2013-07-29 | 2013-07-29 | Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion |
Publications (1)
Publication Number | Publication Date |
---|---|
CN103413352A (en) | 2013-11-27
Family
ID=49606357
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN2013103234393A Pending CN103413352A (en) | 2013-07-29 | 2013-07-29 | Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103413352A (en) |
Cited By (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104240297A (en) * | 2014-09-02 | 2014-12-24 | 东南大学 | Rescue robot three-dimensional environment map real-time construction method |
CN104299260A (en) * | 2014-09-10 | 2015-01-21 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
CN104318198A (en) * | 2014-02-28 | 2015-01-28 | 郑州金惠计算机系统工程有限公司 | Identification method and device suitable for substation robot patrolling |
CN104463953A (en) * | 2014-11-11 | 2015-03-25 | 西北工业大学 | Three-dimensional reconstruction method based on inertial measurement unit and RGB-D sensor |
CN104715504A (en) * | 2015-02-12 | 2015-06-17 | 四川大学 | Robust large-scene dense three-dimensional reconstruction method |
CN104778748A (en) * | 2015-04-03 | 2015-07-15 | 四川大学 | High-precision three-dimensional reconstruction method for uncalibrated images |
CN105469103A (en) * | 2014-09-11 | 2016-04-06 | 清华大学 | Scene recovery method and device based on low-quality RGB-D data |
CN105513128A (en) * | 2016-01-13 | 2016-04-20 | 中国空气动力研究与发展中心低速空气动力研究所 | Kinect-based three-dimensional data fusion processing method |
CN105608467A (en) * | 2015-12-16 | 2016-05-25 | 西北工业大学 | Kinect-based non-contact type student physical fitness evaluation method |
CN105825544A (en) * | 2015-11-25 | 2016-08-03 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN106529838A (en) * | 2016-12-16 | 2017-03-22 | 湖南拓视觉信息技术有限公司 | Virtual assembling method and device |
CN106846479A (en) * | 2017-02-15 | 2017-06-13 | 山东大学 | Three-dimensional visialization of tunnel system and method based on depth camera |
CN106952338A (en) * | 2017-03-14 | 2017-07-14 | 网易(杭州)网络有限公司 | Method, system and the readable storage medium storing program for executing of three-dimensional reconstruction based on deep learning |
CN107063258A (en) * | 2017-03-07 | 2017-08-18 | 重庆邮电大学 | A kind of mobile robot indoor navigation method based on semantic information |
CN107409146A (en) * | 2015-03-20 | 2017-11-28 | 英特尔公司 | Sensing data visualization device and method |
CN107564012A (en) * | 2017-08-01 | 2018-01-09 | 中国科学院自动化研究所 | Towards the augmented reality method and device of circumstances not known |
CN107833253A (en) * | 2017-09-22 | 2018-03-23 | 北京航空航天大学青岛研究院 | A kind of camera pose refinement method towards the generation of RGBD three-dimensional reconstructions texture |
CN107862735A (en) * | 2017-09-22 | 2018-03-30 | 北京航空航天大学青岛研究院 | A kind of RGBD method for reconstructing three-dimensional scene based on structural information |
CN108527359A (en) * | 2017-03-03 | 2018-09-14 | 欧姆龙株式会社 | control system, setting device, setting method and storage device |
CN108648224A (en) * | 2018-05-18 | 2018-10-12 | 杭州电子科技大学 | A method of the real-time scene layout identification based on artificial neural network and reconstruction |
CN108731648A (en) * | 2018-03-15 | 2018-11-02 | 广州市城市规划勘测设计研究院 | 2000 coordinate system parameter acquiring methods, device and computer readable storage medium |
CN109483409A (en) * | 2018-11-21 | 2019-03-19 | 无锡荣恩科技有限公司 | The paint removal method that aviation components fill spray automatically |
CN109993828A (en) * | 2019-04-02 | 2019-07-09 | 深圳市掌网科技股份有限公司 | Three-dimensional reconstruction system and method |
CN110070607A (en) * | 2019-04-02 | 2019-07-30 | 深圳市掌网科技股份有限公司 | Three-dimensional reconstruction system and method |
CN110349247A (en) * | 2018-04-08 | 2019-10-18 | 哈尔滨工业大学 | A kind of indoor scene CAD 3D method for reconstructing based on semantic understanding |
CN111174799A (en) * | 2019-12-24 | 2020-05-19 | Oppo广东移动通信有限公司 | Map construction method and device, computer readable medium and terminal equipment |
US11604078B2 (en) * | 2017-06-27 | 2023-03-14 | Continental Automotive Gmbh | Method and device for generating digital map models |
CN116433829A (en) * | 2023-06-09 | 2023-07-14 | 北京全路通信信号研究设计院集团有限公司 | Scene visual monitoring method, device, equipment and storage medium |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102880866A (en) * | 2012-09-29 | 2013-01-16 | 宁波大学 | Method for extracting face features |
CN103106688A (en) * | 2013-02-20 | 2013-05-15 | 北京工业大学 | Indoor three-dimensional scene rebuilding method based on double-layer rectification method |
Non-Patent Citations (6)
Title |
---|
YUNCHU ZHANG ET AL.: "Visual navigation for a power transmission line inspection robot", 《LECTURE NOTES IN COMPUTER SCIENCE》, vol. 4114, 31 December 2006 (2006-12-31), pages 887 - 892 * |
BU KE: "Research on Image Registration and Mosaic Technology Based on SURF", 《China Masters' Theses Full-text Database, Information Science and Technology》, 15 July 2010 (2010-07-15) *
ZHU XIAOXIAO ET AL.: "Real-time Creation of 3D Environment Maps Based on RGB-D Sensors", 《Sciencepaper Online》, 22 January 2013 (2013-01-22) *
WANG DAWEI: "Recovering Three-dimensional Shape from Motion Based on Point Features", 《China Doctoral and Masters Theses Full-text Database (Masters), Information Science and Technology》, 15 September 2004 (2004-09-15), pages 138 - 604 *
XIAO NANFENG: "Service Robots", 31 January 2013, article "Military Robots" *
YUAN LIANG: "Research on Point Cloud Registration Algorithms in Three-dimensional Reconstruction", 《Wanfang Dissertation Database》, 30 July 2010 (2010-07-30) *
Cited By (44)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104318198A (en) * | 2014-02-28 | 2015-01-28 | 郑州金惠计算机系统工程有限公司 | Identification method and device suitable for substation robot patrolling |
CN104240297A (en) * | 2014-09-02 | 2014-12-24 | 东南大学 | Rescue robot three-dimensional environment map real-time construction method |
CN104299260B (en) * | 2014-09-10 | 2017-05-17 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
CN104299260A (en) * | 2014-09-10 | 2015-01-21 | 西南交通大学 | Contact network three-dimensional reconstruction method based on SIFT and LBP point cloud registration |
CN105469103B (en) * | 2014-09-11 | 2018-10-16 | 清华大学 | Scene restoration methods based on low quality RGB-D data and device |
CN105469103A (en) * | 2014-09-11 | 2016-04-06 | 清华大学 | Scene recovery method and device based on low-quality RGB-D data |
CN104463953A (en) * | 2014-11-11 | 2015-03-25 | 西北工业大学 | Three-dimensional reconstruction method based on inertial measurement unit and RGB-D sensor |
CN104463953B (en) * | 2014-11-11 | 2017-06-16 | 西北工业大学 | Three-dimensional rebuilding method based on Inertial Measurement Unit Yu RGB D sensors |
CN104715504A (en) * | 2015-02-12 | 2015-06-17 | 四川大学 | Robust large-scene dense three-dimensional reconstruction method |
CN107409146B (en) * | 2015-03-20 | 2022-10-28 | 英特尔公司 | Sensor data visualization apparatus and method |
CN107409146A (en) * | 2015-03-20 | 2017-11-28 | 英特尔公司 | Sensing data visualization device and method |
CN104778748A (en) * | 2015-04-03 | 2015-07-15 | 四川大学 | High-precision three-dimensional reconstruction method for uncalibrated images |
CN105825544A (en) * | 2015-11-25 | 2016-08-03 | 维沃移动通信有限公司 | Image processing method and mobile terminal |
CN105825544B (en) * | 2015-11-25 | 2019-08-20 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN105608467A (en) * | 2015-12-16 | 2016-05-25 | 西北工业大学 | Kinect-based non-contact type student physical fitness evaluation method |
CN105608467B (en) * | 2015-12-16 | 2019-03-22 | 西北工业大学 | Non-contact type physique constitution of students assessment method based on Kinect |
CN105513128A (en) * | 2016-01-13 | 2016-04-20 | 中国空气动力研究与发展中心低速空气动力研究所 | Kinect-based three-dimensional data fusion processing method |
CN106529838A (en) * | 2016-12-16 | 2017-03-22 | 湖南拓视觉信息技术有限公司 | Virtual assembling method and device |
CN106846479A (en) * | 2017-02-15 | 2017-06-13 | 山东大学 | Three-dimensional visialization of tunnel system and method based on depth camera |
CN106846479B (en) * | 2017-02-15 | 2018-11-30 | 山东大学 | Three-dimensional visialization of tunnel system and method based on depth camera |
CN108527359A (en) * | 2017-03-03 | 2018-09-14 | 欧姆龙株式会社 | control system, setting device, setting method and storage device |
CN108527359B (en) * | 2017-03-03 | 2021-04-30 | 欧姆龙株式会社 | Control system, setting device, setting method, and storage device |
CN107063258A (en) * | 2017-03-07 | 2017-08-18 | 重庆邮电大学 | A kind of mobile robot indoor navigation method based on semantic information |
CN106952338A (en) * | 2017-03-14 | 2017-07-14 | 网易(杭州)网络有限公司 | Method, system and the readable storage medium storing program for executing of three-dimensional reconstruction based on deep learning |
CN106952338B (en) * | 2017-03-14 | 2020-08-14 | 网易(杭州)网络有限公司 | Three-dimensional reconstruction method and system based on deep learning and readable storage medium |
US11604078B2 (en) * | 2017-06-27 | 2023-03-14 | Continental Automotive Gmbh | Method and device for generating digital map models |
CN107564012B (en) * | 2017-08-01 | 2020-02-28 | 中国科学院自动化研究所 | Augmented reality method and device for unknown environment |
CN107564012A (en) * | 2017-08-01 | 2018-01-09 | 中国科学院自动化研究所 | Towards the augmented reality method and device of circumstances not known |
CN107862735A (en) * | 2017-09-22 | 2018-03-30 | 北京航空航天大学青岛研究院 | A kind of RGBD method for reconstructing three-dimensional scene based on structural information |
CN107862735B (en) * | 2017-09-22 | 2021-03-05 | 北京航空航天大学青岛研究院 | RGBD three-dimensional scene reconstruction method based on structural information |
CN107833253B (en) * | 2017-09-22 | 2020-08-04 | 北京航空航天大学青岛研究院 | RGBD three-dimensional reconstruction texture generation-oriented camera attitude optimization method |
CN107833253A (en) * | 2017-09-22 | 2018-03-23 | 北京航空航天大学青岛研究院 | A kind of camera pose refinement method towards the generation of RGBD three-dimensional reconstructions texture |
CN108731648A (en) * | 2018-03-15 | 2018-11-02 | 广州市城市规划勘测设计研究院 | 2000 coordinate system parameter acquiring methods, device and computer readable storage medium |
CN108731648B (en) * | 2018-03-15 | 2020-12-22 | 广州市城市规划勘测设计研究院 | 2000 independent coordinate system parameter obtaining method, device and computer readable storage medium |
CN110349247A (en) * | 2018-04-08 | 2019-10-18 | 哈尔滨工业大学 | A kind of indoor scene CAD 3D method for reconstructing based on semantic understanding |
CN108648224B (en) * | 2018-05-18 | 2021-07-13 | 杭州电子科技大学 | Real-time scene layout recognition and reconstruction method based on artificial neural network |
CN108648224A (en) * | 2018-05-18 | 2018-10-12 | 杭州电子科技大学 | A method of the real-time scene layout identification based on artificial neural network and reconstruction |
CN109483409A (en) * | 2018-11-21 | 2019-03-19 | 无锡荣恩科技有限公司 | The paint removal method that aviation components fill spray automatically |
CN110070607A (en) * | 2019-04-02 | 2019-07-30 | 深圳市掌网科技股份有限公司 | Three-dimensional reconstruction system and method |
CN109993828A (en) * | 2019-04-02 | 2019-07-09 | 深圳市掌网科技股份有限公司 | Three-dimensional reconstruction system and method |
CN111174799A (en) * | 2019-12-24 | 2020-05-19 | Oppo广东移动通信有限公司 | Map construction method and device, computer readable medium and terminal equipment |
CN111174799B (en) * | 2019-12-24 | 2023-02-17 | Oppo广东移动通信有限公司 | Map construction method and device, computer readable medium and terminal equipment |
CN116433829A (en) * | 2023-06-09 | 2023-07-14 | 北京全路通信信号研究设计院集团有限公司 | Scene visual monitoring method, device, equipment and storage medium |
CN116433829B (en) * | 2023-06-09 | 2023-08-18 | 北京全路通信信号研究设计院集团有限公司 | Scene visual monitoring method, device, equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103413352A (en) | Scene three-dimensional reconstruction method based on RGBD multi-sensor fusion | |
CN109597087B (en) | Point cloud data-based 3D target detection method | |
CN109949375B (en) | Mobile robot target tracking method based on depth map region of interest | |
CN106097304B (en) | A kind of unmanned plane real-time online ground drawing generating method | |
CN102722697B (en) | Unmanned aerial vehicle autonomous navigation landing visual target tracking method | |
CN113269837B (en) | Positioning navigation method suitable for complex three-dimensional environment | |
CN110221603A (en) | A kind of long-distance barrier object detecting method based on the fusion of laser radar multiframe point cloud | |
CN109544636A (en) | A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method | |
CN106940704A (en) | A kind of localization method and device based on grating map | |
CN112734765B (en) | Mobile robot positioning method, system and medium based on fusion of instance segmentation and multiple sensors | |
CN110070025B (en) | Monocular image-based three-dimensional target detection system and method | |
CN107862735B (en) | RGBD three-dimensional scene reconstruction method based on structural information | |
CN110533716B (en) | Semantic SLAM system and method based on 3D constraint | |
CN111998862B (en) | BNN-based dense binocular SLAM method | |
CN114424250A (en) | Structural modeling | |
CN114419147A (en) | Rescue robot intelligent remote human-computer interaction control method and system | |
CN104517289A (en) | Indoor scene positioning method based on hybrid camera | |
Qian et al. | Robust visual-lidar simultaneous localization and mapping system for UAV | |
CN112833892B (en) | Semantic mapping method based on track alignment | |
CN116182837A (en) | Positioning and mapping method based on visual laser radar inertial tight coupling | |
CN105513094A (en) | Stereo vision tracking method and stereo vision tracking system based on 3D Delaunay triangulation | |
CN115479598A (en) | Positioning and mapping method based on multi-sensor fusion and tight coupling system | |
CN109459759A (en) | City Terrain three-dimensional rebuilding method based on quadrotor drone laser radar system | |
Tian et al. | Research on multi-sensor fusion SLAM algorithm based on improved gmapping | |
CN111812978A (en) | Cooperative SLAM method and system for multiple unmanned aerial vehicles |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C02 | Deemed withdrawal of patent application after publication (patent law 2001) | ||
WD01 | Invention patent application deemed withdrawn after publication |
Application publication date: 20131127 |