CN103106688A - Indoor three-dimensional scene reconstruction method based on a double-layer registration method (Google Patents)
Description
Technical field
The invention belongs to the cross-disciplinary field of computer vision and intelligent robotics and relates to three-dimensional reconstruction of indoor environments, in particular to a method for reconstructing large-scale indoor scenes based on a double-layer registration method.
Background technology
In recent years, with the development of information technology, demand for three-dimensional scene reconstruction has grown steadily, and fast, economical methods for reconstructing indoor three-dimensional scenes have become a key technical problem that many fields urgently need to solve. In the field of home service robots, the market demand for intelligent home service robots driven by population aging is increasingly strong. At present, because most service robots on the market cannot perceive the three-dimensional environment, they can only provide single, simple services in specific scenes; this problem is seriously restricting the development of the home service robot industry.
Three-dimensional scene reconstruction is one of the research hotspots in fields such as computer vision, intelligent robotics, and virtual reality. Traditional three-dimensional reconstruction methods can be divided into two classes according to how the three-dimensional data are acquired: methods based on laser scanning and methods based on vision. Existing methods still have significant limitations for the problem of large-scale indoor three-dimensional scene reconstruction.
Laser-based three-dimensional reconstruction methods obtain depth data or range images of a scene with a laser scanner and align each frame with the global data by registering the depth data. This yields the geometric information of the scene; to obtain its texture information, an additional camera must be added and its images mapped onto the reconstructed geometric model, which requires solving a photo-to-geometry mapping problem. Although laser-based methods can produce 3D geometric models of high precision, texture mapping is difficult, so generating realistic three-dimensional models is hard, and laser equipment is expensive. Such methods are generally applied in fields such as digital archaeology, topographic reconnaissance, and digital museums, and are difficult to popularize in large-scale civil applications.
Vision-based three-dimensional reconstruction methods use computer vision to reconstruct object models: a digital camera serves as the image sensor, techniques such as image processing and visual computation are combined to perform non-contact three-dimensional measurement, and the three-dimensional information of the object is obtained by computer programs. Their advantages are that they are not limited by object shape, reconstruction is fast, and fully or semi-automatic modeling can be achieved; they are an important development direction for three-dimensional reconstruction. According to the number of cameras used, they can be divided into monocular, binocular, trinocular, and multi-view methods. The monocular method uses a single camera and derives depth information from two-dimensional image features such as shading, texture, focus, and contour. Its advantages are a simple device structure and the ability to reconstruct a three-dimensional model from a single image or a few images; however, it usually requires fairly idealized conditions, actual scenes are rarely so ideal, and the reconstruction quality is mediocre. The binocular method, also called stereo vision, converts binocular disparity into depth information. Its advantages are that the method is mature and can stably obtain good reconstruction results; its shortcomings are that the computational load is still large and the reconstruction quality drops when the baseline is large. The basic idea of the multi-view method is to provide extra constraints by adding cameras, thereby avoiding the problems of binocular vision. Its reconstruction quality is better than that of the binocular method, but the device structure is more complex, the cost is higher, and it is harder to control.
In recent years, with the development of RGB-D (color and depth) sensor technology, devices such as Microsoft's Kinect have provided new options for three-dimensional scene reconstruction. Research on Kinect-based three-dimensional reconstruction has achieved some results for single objects, but research on indoor scene reconstruction is still at an early stage. Richard A. Newcombe et al. use Kinect to acquire environmental information and apply the ICP method to realize three-dimensional reconstruction of the environment. Because their method runs on GPU hardware, it places high demands on the GPU configuration and, limited by GPU memory, can only reconstruct a volume of 3 m x 3 m x 3 m, which cannot satisfy the demand for creating large-scale indoor three-dimensional scenes.
Summary of the invention
In order to overcome the problems of the above three-dimensional reconstruction methods, the invention provides a fast and economical indoor three-dimensional scene reconstruction method based on a double-layer registration method.
The technical solution used in the present invention is as follows:
Kinect is used to obtain RGB and depth image information of the environment. SURF feature points are extracted from the RGB image, and the feature-point matches serve as association data. Combining the random sample consensus (RANSAC) method and the iterative closest point (ICP) method, a double-layer registration method for three-dimensional data is proposed. The method mainly comprises the following: first, the RANSAC method is used to obtain the rotation-translation transformation matrix between two adjacent frames (frame-to-frame) of three-dimensional data, and these results are accumulated to obtain the relative change of the Kinect's pose. A threshold is set, and when the change of the Kinect's pose exceeds a certain magnitude, a new frame of data is added and designated a key frame (KeyFrame), completing the initial registration. Second, the ICP method is used to obtain the precise transformation matrix between adjacent key frames (KeyFrame-to-KeyFrame), completing the fine registration. The KeyFrame data obtained by the double-layer registration method, together with the transformation matrices between adjacent KeyFrames, are used to complete the reconstruction of the three-dimensional environment.
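The key-frame selection logic of the first registration layer can be sketched as follows. This is a minimal illustration, not the patent's implementation: the transform accumulation order and the rotation-angle extraction are assumptions, while the thresholds (0.4 for translation, 40 degrees for rotation) are the values stated in the embodiment below.

```python
import numpy as np

def pose_delta(T):
    """Translation magnitude and rotation angle (degrees) of a 4x4 rigid transform."""
    t = np.linalg.norm(T[:3, 3])
    # rotation angle recovered from the trace of the rotation block
    cos_a = np.clip((np.trace(T[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
    return t, np.degrees(np.arccos(cos_a))

def select_keyframes(frame_transforms, trans_thresh=0.4, rot_thresh=40.0):
    """Accumulate frame-to-frame transforms; declare a KeyFrame whenever the
    accumulated motion since the last KeyFrame exceeds either threshold."""
    keyframes = [0]
    acc = np.eye(4)
    for i, T in enumerate(frame_transforms, start=1):
        acc = T @ acc          # accumulate relative motion since last KeyFrame
        t, a = pose_delta(acc)
        if t > trans_thresh or a > rot_thresh:
            keyframes.append(i)
            acc = np.eye(4)    # reset accumulation at the new KeyFrame
    return keyframes
```

For example, a camera translating 0.15 per frame crosses the 0.4 threshold on its third frame, which then becomes a KeyFrame.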
The indoor three-dimensional scene reconstruction method based on the double-layer registration method comprises the following steps:
Step 1: Kinect calibration.
In image measurement and machine vision applications, in order to determine the relationship between the three-dimensional position of a point on the surface of a space object and its corresponding point in the image, a geometric model of camera imaging must be established; the parameters of this geometric model constitute the camera parameters. Under most conditions these parameters can only be obtained through experiment and calculation, and the process of solving for them is called camera calibration. In image measurement and machine vision applications, the calibration of the camera parameters is a crucial step, and the precision and stability of the calibration results directly affect the accuracy of the final results.
Kinect is an Xbox 360 motion-sensing peripheral released by Microsoft that provides depth and color (RGB) image information simultaneously. The depth information is acquired actively with an infrared camera; each frame consists of 640 x 480 pixels, the measurable depth range is 0.5 to 4.0 m, the vertical field of view is 43 degrees, and the horizontal field of view is 57 degrees, so depth information can be obtained for objects within about 6 square meters. Kinect is also equipped with a 640 x 480 pixel RGB camera. Providing RGB information and depth information simultaneously is essential for three-dimensional reconstruction, since it facilitates aligning the depth information with the RGB information.
The calibration parameters of the Kinect sensor comprise three parts: the intrinsic parameters of the infrared camera (depth sensor), the intrinsic parameters of the RGB camera, and the extrinsic parameters between the infrared camera and the RGB camera. The present invention adopts Zhang Zhengyou's planar calibration method to calibrate the RGB camera. For the intrinsic parameters of the infrared camera and the extrinsics between the infrared camera and the RGB camera, the official data provided by Microsoft are used.
Step 2: extraction and matching of feature points.
Feature extraction: image information is analyzed to determine whether each point in the image belongs to an image feature. The result of feature extraction divides the points of the image into different subsets, which often correspond to isolated points, continuous curves, or continuous regions. SURF (Speeded-Up Robust Features) is currently one of the most popular methods for computing image features; the features it extracts are scale- and rotation-invariant and are also invariant to illumination changes and to affine and perspective transformations. In repeatability, distinctiveness, and robustness, SURF matches or surpasses previously proposed methods of the same class, and it has a clear advantage in computation speed.
The present invention extracts SURF feature points of the RGB image, comprising two parts: feature point detection and feature point description. Feature points are matched with a nearest-neighbor algorithm based on Euclidean distance, using a KD-tree data structure for the search; whether a candidate match is accepted is decided by the ratio of the distances to the two nearest feature points.
Step 3: mapping matched image points to three-dimensional coordinates.
According to the calibration model of the Kinect, the conversion between the image plane and three-dimensional space coordinates is established, and the projection of a three-dimensional point onto the image plane is represented by the function:

u = π(p)

where p is a three-dimensional space point, u is its image-plane coordinate, and π(p) is the mapping function from the three-dimensional point to the image plane. Matching of image feature points yields corresponding point pairs on the image plane; the projection model from three-dimensional space to the image plane then gives the three-dimensional coordinates corresponding to the image feature points, and thus the pairs of three-dimensional points corresponding to the two frames of data.
Step 4: double-layer registration of three-dimensional points based on RANSAC and ICP.
Registration refers to matching images of the same area obtained by different imaging means into a common geographic coordinate frame; it comprises geometric correction, projective transformation, and unification of scale. The registration result is expressed as the matrix

T_cw = [R_cw, t_cw]

where the subscript "cw" denotes the transformation from the world coordinate system to the current Kinect coordinate system, R_cw is the rotation matrix from the world frame to the current frame, and t_cw is the translation from the world frame to the current frame. T_cw describes the rotation-translation relation of the Kinect in the world coordinate system. A point p_c in the Kinect coordinate system is related to the world coordinate p_w by:

p_c = T_cw · p_w
Because three-dimensional point cloud registration methods have high complexity and a large computational load, the present invention proposes a double-layer registration method based on RANSAC and ICP, comprising an initial registration and a fine registration. The initial registration uses RANSAC to obtain the KeyFrames and their relative transformation matrices; on this basis, ICP performs the fine registration, aligning the three-dimensional data points and providing accurate three-dimensional transformation information for updating the three-dimensional scene.
Step 5: scene update.
Each frame of three-dimensional data obtained from the Kinect contains about 250,000 points, and adjacent frames are highly redundant. To improve the clarity of the reconstruction, give the generated three-dimensional map a concise description, and reduce the memory burden on the system, the present invention updates the three-dimensional scene with KeyFrame data only.
The beneficial effects of the invention are as follows: environmental data are acquired with a Kinect and, tailored to the characteristics of the Kinect sensor, a double-layer registration method based on RANSAC and ICP is proposed that realizes fast, large-scale indoor three-dimensional scene reconstruction. It effectively addresses the cost and real-time problems of three-dimensional reconstruction methods and improves reconstruction precision.
Description of drawings
Fig. 1 is a block diagram of the indoor three-dimensional scene reconstruction method based on Kinect;
Fig. 2 is a schematic diagram of the Kinect coordinate system;
Fig. 3 is a flowchart of the double-layer registration method based on RANSAC and ICP;
Fig. 4 is a schematic diagram of the actual environment used to create a three-dimensional scene with the present invention: (a) is the real experimental scene, (b) is a two-dimensional geometric sketch of the experimental environment;
Fig. 5 is a schematic diagram of the result of creating a three-dimensional scene with the present invention.
Embodiment
The present invention is described in further detail below with reference to the accompanying drawings. As shown in Fig. 1, the present invention comprises the following steps:
Step 1: Kinect calibration. The concrete method is as follows:
(1) Print a chessboard template. The present invention uses a sheet of A4 paper with a chessboard spacing of 0.25 cm.
(2) Photograph the chessboard from multiple angles. When shooting, the chessboard should fill the frame as much as possible while remaining entirely in view from every angle; 8 template pictures are taken in total.
(3) Detect the feature points in the images, i.e., the corner crossings of the chessboard.
(4) Obtain the Kinect calibration parameters.
The intrinsic matrix K_ir of the infrared camera:

K_ir = [ f_uIR  0      u_IR
         0      f_vIR  v_IR
         0      0      1    ]

where (f_uIR, f_vIR) is the focal length of the infrared camera, with value (5, 5), and (u_IR, v_IR) is the image-plane center of the infrared camera, with value (320, 240).
The intrinsic matrix K_c of the RGB camera:

K_c = [ f_u  0    u_0
        0    f_v  v_0
        0    0    1   ]

where (f_u, f_v) is the focal length of the RGB camera and (u_0, v_0) is the image-plane center of the RGB camera.
The extrinsic parameters between the infrared camera and the RGB camera are:

T = [R_IRc, t_IRc]

where R_IRc is the rotation matrix and t_IRc is the translation vector; the parameters officially provided by Microsoft are used directly:

t_IRc = [0.075  0  0]^T
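The pinhole projection u = π(p) implied by the intrinsic matrices above can be sketched as follows. The numeric focal length and principal point used here are illustrative placeholders, not the patent's calibrated values.

```python
import numpy as np

# Illustrative intrinsic matrix (placeholder values, not the patent's calibration)
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])

def project(K, p):
    """Pinhole projection u = pi(p): 3D camera-frame point -> pixel coordinates."""
    uvw = K @ p
    return uvw[:2] / uvw[2]

# A point on the optical axis projects to the image-plane center
u = project(K, np.array([0.0, 0.0, 2.0]))
```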
In the present invention, the Kinect coordinate system is as shown in Fig. 2: upward is the positive y-axis, forward is the positive z-axis, and rightward is the positive x-axis. The initial position of the Kinect is taken as the origin of the world coordinate system, and the X, Y, Z directions of the world frame coincide with the x, y, z directions of the Kinect at its initial position.
Step 2: extraction and matching of feature points. The method is as follows:
(1) Compute the integral image. The integral image stores, for each pixel, the accumulated sum of all pixels of the given gray-scale image above and to the left of it; for a point X = (x, y) in the image, the integral I(X) is:

I(X) = Σ_{i≤x} Σ_{j≤y} I(i, j)
With the integral image, the sum of the gray values of any rectangular area can be computed with three additions/subtractions, independent of the area of the rectangle. As will be seen in the later steps, the convolution templates used in SURF feature extraction are box-shaped templates, so this greatly improves efficiency.
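The constant-time rectangle sum over the integral image can be sketched as follows (a minimal illustration of the three-addition/subtraction lookup):

```python
import numpy as np

def integral_image(img):
    """I(x, y) = sum of all pixels above and to the left of (x, y), inclusive."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1+1, x0:x1+1] using at most 3 additions/subtractions on
    the integral image ii, independent of the rectangle's area."""
    s = ii[y1, x1]
    if x0 > 0:
        s -= ii[y1, x0 - 1]
    if y0 > 0:
        s -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0:
        s += ii[y0 - 1, x0 - 1]
    return s
```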
(2) Compute the approximate Hessian matrix H_approx. For a point X = (x, y) in image I, the Hessian matrix H(X, s) at scale s is defined as:

H(X, s) = [ L_xx(X, s)  L_xy(X, s)
            L_xy(X, s)  L_yy(X, s) ]

where L_xx(X, s), L_xy(X, s), and L_yy(X, s) denote the convolutions of the Gaussian second-order partial derivatives with the image I at X. Box filters are used in place of the second-order Gaussian filters to approximate the Hessian matrix. The values of the box filter templates convolved with the image are denoted D_xx, D_yy, and D_xy; replacing L_xx, L_yy, and L_xy with them yields the approximate Hessian matrix H_approx, whose determinant is:

det(H_approx) = D_xx · D_yy − (w · D_xy)²

where w is a weight coefficient, set to 0.9 in the implementation of the present invention.
(3) Locate the feature points. SURF feature detection is based on the Hessian matrix: feature point positions are located at the local maxima of the determinant of the Hessian matrix.
The original image is processed with box filters of different sizes to obtain a scale-image pyramid, and the extrema of the scale images at (X, s) are obtained from H_approx.
Box filters are used to build the scale space; in each octave, 4 layers of scale images are selected. The construction parameters of the 4 octaves are listed in Table 1.
Table 1: sizes (unit: s) of the 16 templates in the first four octaves of the scale space
Extrema are obtained with the H_approx matrix: in the 3-dimensional (X, s) scale space, non-maximum suppression is applied to every 3 x 3 x 3 local region (the maximum is kept; other values are set to 0). A point whose response is greater than those of its 26 neighbors is selected as a feature point. Each feature point is then localized precisely with a quadratic fitting function D(X):

D(X) = D + (∂D/∂X)^T X + (1/2) X^T (∂²D/∂X²) X

At this point, the position and scale information (X, s) of the feature points have been obtained.
(4) Determine the dominant orientation of each feature point. A circular neighborhood is processed with Haar wavelet filters to obtain the x- and y-direction responses of each point in the neighborhood. The responses are weighted with a Gaussian centered at the feature point (σ = 2s, where s is the scale of the feature point), and the vector of maximum total length is sought; its direction is the orientation of the feature point.
(5) Construct the feature description vector. A square neighborhood centered at the feature point is taken with side length 20s, its y-axis aligned with the feature point's orientation. The square is divided into 4 x 4 sub-regions, and each sub-region is processed with Haar wavelet filters (Haar template size 2s x 2s). Let d_x denote the horizontal Haar wavelet response and d_y the vertical one. All d_x and d_y are weighted with a Gaussian centered at the feature point with σ = 3.3s. In each sub-region, d_x, d_y, |d_x|, and |d_y| are summed, giving the 4-dimensional vector V = (Σd_x, Σd_y, Σ|d_x|, Σ|d_y|). Concatenating the vectors of the 4 x 4 sub-regions yields a 64-dimensional vector; this vector is rotation- and scale-invariant, and after normalization it is also invariant to illumination. This gives the feature vector describing the feature point.
(6) Feature matching. A nearest-neighbor method based on Euclidean distance is adopted: a KD-tree is used to search the image to be matched for the two feature points closest in Euclidean distance to each feature point of the reference image; if the nearest distance divided by the second-nearest distance is less than the set ratio threshold (0.7), the pair of matched points is accepted.
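The distance-ratio acceptance rule can be sketched as follows. Brute-force nearest-neighbour search stands in for the KD-tree here; the 0.7 ratio is the threshold stated above.

```python
import numpy as np

def ratio_test_match(desc_a, desc_b, ratio=0.7):
    """Nearest-neighbour descriptor matching with a distance-ratio test:
    accept a match only if d(nearest) < ratio * d(second nearest).
    Brute-force search; a KD-tree would be used at scale."""
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, int(best)))
    return matches
```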
Step 3: mapping matched image points to three-dimensional coordinates.
According to the calibration parameters, points of the Kinect depth image and the RGB image are mapped as follows:
The coordinate P_3D = (x, y, z) in the Kinect coordinate system of a depth-image point p = (x_d, y_d) is:

x = (x_d − u_IR) · depth(x_d, y_d) / f_uIR
y = (y_d − v_IR) · depth(x_d, y_d) / f_vIR
z = depth(x_d, y_d)

where depth(x_d, y_d) is the depth value of the point p in the depth image.
From this, the 3D coordinate corresponding to a pixel of the RGB image is derived, and hence its coordinate (x_rgb, y_rgb) in the RGB image. The computation is:

x_rgb = x′ · f_u / z′ + u_0
y_rgb = y′ · f_v / z′ + v_0

where P′_3D = (x′, y′, z′)^T = R_IRc · P_3D + t_IRc.
Using the above conversion relations, the matching point pairs obtained in Step 2 are converted into pairs of three-dimensional space points.
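The depth-pixel-to-3D and 3D-to-RGB-pixel mappings described above can be sketched as follows. All calibration numbers are illustrative placeholders, and the IR-to-RGB rotation is assumed to be identity for simplicity; these are not the patent's official values.

```python
import numpy as np

# Placeholder calibration values for illustration only
F_IR, C_IR = (580.0, 580.0), (320.0, 240.0)    # IR (depth) camera intrinsics
F_RGB, C_RGB = (525.0, 525.0), (320.0, 240.0)  # RGB camera intrinsics
R_IRC = np.eye(3)                              # assumed IR -> RGB rotation
T_IRC = np.array([0.075, 0.0, 0.0])            # IR -> RGB translation (m)

def depth_pixel_to_3d(xd, yd, depth):
    """Back-project a depth pixel into the Kinect (IR camera) frame."""
    x = (xd - C_IR[0]) * depth / F_IR[0]
    y = (yd - C_IR[1]) * depth / F_IR[1]
    return np.array([x, y, depth])

def point_to_rgb_pixel(p3d):
    """Transform a 3D point into the RGB camera frame and project it."""
    p = R_IRC @ p3d + T_IRC
    x_rgb = p[0] * F_RGB[0] / p[2] + C_RGB[0]
    y_rgb = p[1] * F_RGB[1] / p[2] + C_RGB[1]
    return x_rgb, y_rgb
```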
Step 4: double-layer registration of the three-dimensional points based on RANSAC and ICP. As shown in Fig. 3, the method comprises the following steps:
(1) Initial registration. The corresponding point pairs obtained by feature matching contain many mismatches. In the initial registration stage, RANSAC is used to remove mismatched three-dimensional point pairs, iteratively finding the largest inlier set that satisfies the transformation model and estimating the transformation matrix T. The relative transformation matrices from the KeyFrame to each subsequent frame are accumulated to obtain the transformation of the current Kinect pose relative to the KeyFrame. From this matrix, the translation and the magnitude of the rotation angle of the Kinect are computed and compared with the set thresholds to decide whether the current frame is chosen as a KeyFrame. In the embodiment, the translation threshold is set to 0.4 and the angle threshold to 40 degrees.
The detailed process of RANSAC is as follows:
1) From the initial N three-dimensional matching point pairs between the reference point set A and the point set B to be registered, randomly choose 7 pairs;
2) From the 7 chosen pairs, compute the transformation matrix T^AB between the reference point set and the point set to be registered, using the 7-pair minimal-configuration solution;
3) Using T^AB, transform the remaining N − 7 three-dimensional points of the point set B to be registered (which contains N points) into the coordinate system of the reference point cloud;
4) Compute the coordinate errors between the transformed points and the reference point set;
5) From the N pairs, count the matching pairs whose coordinate error is within a set threshold; denote this number i;
6) Repeat steps 1) to 5) n times (n is set by the user; in the present embodiment the number of iterations is set to 50). The set that maximizes i is taken as the largest inlier set: its i pairs are the inliers, and the remaining N − i pairs are mismatches (outliers). The least-squares solution of the transformation model is estimated from the largest inlier set and taken as the transformation matrix T between the current two adjacent frames.
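The RANSAC loop above can be sketched as follows. Note one substitution: instead of the 7-pair minimal-configuration solver described in the patent, this sketch fits each sample with a standard SVD-based (Kabsch) rigid-transform estimate on 3 pairs, a common minimal solver for 3D-to-3D registration; the iteration count follows the embodiment (50), while the inlier threshold is illustrative.

```python
import numpy as np

def rigid_transform(A, B):
    """Least-squares rigid transform (R, t) with B_i ≈ R @ A_i + t (Kabsch/SVD)."""
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:   # fix an improper (reflected) solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, cb - R @ ca

def ransac_rigid(A, B, iters=50, thresh=0.05, sample=3, rng=None):
    """RANSAC: sample minimal point pairs, fit, count inliers, then refit by
    least squares on the largest consensus (inlier) set."""
    if rng is None:
        rng = np.random.default_rng(0)
    best = np.zeros(len(A), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(A), sample, replace=False)
        R, t = rigid_transform(A[idx], B[idx])
        err = np.linalg.norm((A @ R.T + t) - B, axis=1)
        inliers = err < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return rigid_transform(A[best], B[best])
```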
(2) Fine registration. The initial registration yields the KeyFrames and their relative transformation matrices. The present invention uses ICP to compute the precise transformation matrices of the KeyFrames: with the initial registration result as the prior transformation, the precise KeyFrame-to-KeyFrame transformation matrix is obtained.
In the depth map of the Kinect, regions whose pixel value is 0 contain no valid measurement. To conveniently distinguish valid from invalid information in the depth map, the following function is defined:

M(u) = 1 if the depth value at u is valid (non-zero), and M(u) = 0 otherwise

where u is an image-plane coordinate point.
To obtain the pose T_k of the Kinect at time k, the following energy function is established according to the conversion relation between the Kinect coordinate system and the world coordinate system:

E(T_k) = Σ_{p^k ∈ Ω} ‖ p^k − T_k · p_w ‖²

where p_w is a point in the world coordinate system, p^k is the corresponding point in the current coordinate system, and Ω is the set of points whose pixels in the image plane at time k have valid depth values, that is:

Ω = { p^k | u = π(p^k) and M(u) = 1 }
The energy function established above is the mathematical description of three-dimensional ICP. In the ICP algorithm, the pose of the Kinect at time k in the world coordinate system is obtained by minimizing the energy function. Usually, ICP assumes a relative pose, repeatedly establishes correspondences between the point clouds, and iterates by optimizing the corresponding-point error. The initial relative pose is therefore crucial in the solution procedure: an inappropriate initial pose causes ICP to fall into a local optimum and fail to obtain the correct result. Moreover, for ICP on unordered point clouds, the space and time complexity of the algorithm grow sharply with the number of points, greatly reducing its efficiency. The initial relative pose is thus the prerequisite for establishing correspondences between the point clouds and plays a vital role in the ICP iteration. This method uses the relative transformation obtained by the initial registration as the initial relative pose, so as to obtain the optimal estimate of the current KeyFrame.
Suppose the offset of the Kinect between time k−1 and time k consists of rotations (α, β, γ) about the x, y, z axes and translations (t_x, t_y, t_z) along the three directions. When these two vectors are small enough, the transform can be expanded with the first-order Taylor formula. Letting x = (α, β, γ, t_x, t_y, t_z), the incremental transform is approximated by

T(x) ≈ [ 1   −γ    β   t_x
         γ    1   −α   t_y
        −β    α    1   t_z
         0    0    0    1  ]

Let p̃_w denote the coordinate of a world point p_w in the camera frame at time k−1. Projecting the spatial points of time k into the camera frame of time k−1, the energy function becomes

E(x) = Σ_{p^k ∈ Ω} ‖ p^k − T(x) · p̃_w ‖²,  Ω = { p^k | u = π(p^k) and M(u) = 1 }

where p_w and p^k are corresponding points and p̃_w is the coordinate of p_w in the camera frame at time k−1. Substituting the linearized T(x) makes the energy function linear in x, yielding its final expression, which can be minimized in closed form by least squares; the minimizing x is the incremental pose between times k−1 and k, from which the pose of the Kinect at time k is updated.
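The small-angle incremental transform and one linear least-squares step for x = (α, β, γ, t_x, t_y, t_z) can be sketched as follows. This is a generic linearized point-to-point ICP step under the stated small-motion assumption, not the patent's exact derivation.

```python
import numpy as np

def small_angle_transform(x):
    """Incremental rigid transform from x = (alpha, beta, gamma, tx, ty, tz)
    under the first-order (small-angle) approximation R ≈ I + [omega]_x."""
    a, b, g, tx, ty, tz = x
    return np.array([[1.0,  -g,   b,  tx],
                     [  g, 1.0,  -a,  ty],
                     [ -b,   a, 1.0,  tz],
                     [0.0, 0.0, 0.0, 1.0]])

def solve_increment(P, Q):
    """One linear least-squares step: find x minimizing sum ||T(x) P_i - Q_i||^2.
    For a point p the linearized residual is p + omega x p + t - q."""
    A, b = [], []
    for p, q in zip(P, Q):
        px, py, pz = p
        # Jacobian of the residual w.r.t. (alpha, beta, gamma, tx, ty, tz)
        J = np.array([[ 0.0,  pz, -py, 1.0, 0.0, 0.0],
                      [-pz, 0.0,  px, 0.0, 1.0, 0.0],
                      [ py, -px, 0.0, 0.0, 0.0, 1.0]])
        A.append(J)
        b.append(q - p)
    A, b = np.vstack(A), np.concatenate(b)
    return np.linalg.lstsq(A, b, rcond=None)[0]
```

Because the residual is exactly linear in x under this model, a single least-squares solve recovers the increment for data generated by `small_angle_transform`.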
Step 5: scene update.
The scene is updated in two cases. The first time the scene is updated, the position of the Kinect is set as the origin of the world coordinate system and the currently acquired scene data are added. When a new KeyFrame is added, its data are transformed into the world coordinate system according to the formula p_w = T_cw^{-1} · p_c, completing the update of the scene data.
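The transformation of newly added KeyFrame data into the world coordinate system can be sketched as follows, using p_w = T_cw^{-1} · p_c, which follows from the relation p_c = T_cw · p_w of Step 4 (a minimal sketch; the point-cloud bookkeeping is an assumption):

```python
import numpy as np

def update_scene(scene_pts, keyframe_pts, T_cw):
    """Map a new KeyFrame's points from the current Kinect frame into the
    world frame via p_w = inv(T_cw) @ p_c, then append them to the scene."""
    T_wc = np.linalg.inv(T_cw)
    hom = np.hstack([keyframe_pts, np.ones((len(keyframe_pts), 1))])
    world = (hom @ T_wc.T)[:, :3]
    return np.vstack([scene_pts, world]) if len(scene_pts) else world
```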
An experimental embodiment of creating a three-dimensional environment in a real indoor environment with the method of the invention is given below.
The depth camera used in the experiment is a Kinect for Xbox 360; the RGB image resolution is 640 x 480 and the maximum frame rate is 30 fps. The indoor environment is shown in Fig. 4: Fig. 4(a) is the real experimental scene, and Fig. 4(b) is a two-dimensional geometric sketch of the experimental environment. During the experiment, the Kinect was hand-held and walked along a fixed route from the start point to the end point, with the global map generated incrementally and synchronously during the walk; the created map covers an indoor area of 9 m x 9 m. Fig. 5 is a schematic diagram of the reconstruction result.
The experimental results show that the method of the invention can be used to create large-scale indoor three-dimensional scenes with high precision and good real-time performance.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection; any modifications, equivalent replacements, and improvements made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.
Claims (2)
Priority Applications (1)
Application Number: CN201310053829.3A; Priority Date: 2013-02-20; Filing Date: 2013-02-20; Title: Indoor three-dimensional scene reconstruction method based on a double-layer registration method
Publications (2)
CN103106688A (application publication): 2013-05-15
CN103106688B (granted patent): 2016-04-27
Cited By (30)
Publication number  Priority date  Publication date  Assignee  Title 

CN103260015A (en) *  2013-06-03  2013-08-21  程志全  Three-dimensional visual monitoring system based on RGB-Depth camera 
CN103268729A (en) *  2013-05-22  2013-08-28  北京工业大学  Mobile robot cascading map creation method based on mixed features 
CN103325142A (en) *  2013-05-29  2013-09-25  南京大学  Computer three-dimensional model building method based on Kinect 
CN103413352A (en) *  2013-07-29  2013-11-27  西北工业大学  Scene three-dimensional reconstruction method based on RGB-D multi-sensor fusion 
CN103456038A (en) *  2013-08-19  2013-12-18  华中科技大学  Method for rebuilding a three-dimensional scene of a downhole environment 
CN104126989A (en) *  2014-07-30  2014-11-05  福州大学  Foot surface three-dimensional information acquisition method based on multiple RGB-D cameras 
CN104517287A (en) *  2014-12-10  2015-04-15  广州赛意信息科技有限公司  Image matching method and device 
CN104517289A (en) *  2014-12-12  2015-04-15  浙江大学  Indoor scene positioning method based on a hybrid camera 
CN105222789A (en) *  2015-10-23  2016-01-06  哈尔滨工业大学  Indoor building floor-plan construction method based on a laser ranging sensor 
CN105319991A (en) *  2015-11-25  2016-02-10  哈尔滨工业大学  Robot environment identification and operation control method based on Kinect visual information 
CN105335399A (en) *  2014-07-18  2016-02-17  联想(北京)有限公司  Information processing method and electronic device 
CN105513128A (en) *  2016-01-13  2016-04-20  中国空气动力研究与发展中心低速空气动力研究所  Kinect-based three-dimensional data fusion processing method 
CN105509748A (en) *  2015-12-29  2016-04-20  深圳先进技术研究院  Navigation method and apparatus for a robot 
CN105913489A (en) *  2016-04-19  2016-08-31  东北大学  Indoor three-dimensional scene reconstruction method employing plane features 
CN105987693A (en) *  2015-05-19  2016-10-05  北京蚁视科技有限公司  Visual positioning device and three-dimensional surveying and mapping system and method based on the device 
CN106091921A (en) *  2015-04-28  2016-11-09  三菱电机株式会社  Method for determining sizes in a scene 
CN106384383A (en) *  2016-09-08  2017-02-08  哈尔滨工程大学  RGB-D and SLAM scene reconstruction method based on FAST and FREAK feature matching algorithms 
CN106529838A (en) *  2016-12-16  2017-03-22  湖南拓视觉信息技术有限公司  Virtual assembly method and device 
CN106596557A (en) *  2016-11-07  2017-04-26  东南大学  Mobile three-dimensional scanning platform carrying a Kinect and method thereof 
CN106780297A (en) *  2016-11-30  2017-05-31  天津大学  High-accuracy image registration method under scene and illumination changes 
CN106780590A (en) *  2017-01-03  2017-05-31  成都通甲优博科技有限责任公司  Depth map acquisition method and system 
CN106803267A (en) *  2017-01-10  2017-06-06  西安电子科技大学  Kinect-based indoor scene three-dimensional reconstruction method 
CN106952299A (en) *  2017-03-14  2017-07-14  大连理工大学  Three-dimensional light field implementation method suitable for intelligent mobile devices 
CN107025661A (en) *  2016-01-29  2017-08-08  成都理想境界科技有限公司  Method, server, terminal and system for realizing augmented reality 
CN107123138A (en) *  2017-04-28  2017-09-01  电子科技大学  Point cloud registration algorithm based on a vanilla-R point-pair rejection strategy 
CN107274440A (en) *  2017-06-26  2017-10-20  赵红林  Image matching algorithm 
CN107577451A (en) *  2017-08-03  2018-01-12  中国科学院自动化研究所  Multi-Kinect human skeleton coordinate transformation method, processing device, and readable storage medium 
CN107610212A (en) *  2017-07-25  2018-01-19  深圳大学  Scene reconstruction method, device, computer equipment and computer-readable storage medium 
CN107833270A (en) *  2017-09-28  2018-03-23  浙江大学  Real-time three-dimensional object reconstruction method based on a depth camera 
CN108055456A (en) *  2017-12-07  2018-05-18  中煤航测遥感集团有限公司  Texture acquisition method and device 
Citations (2)
Publication number  Priority date  Publication date  Assignee  Title 

US20030137508A1 (en) *  2001-12-20  2003-07-24  Mirko Appel  Method for three dimensional image reconstruction 
CN101976455A (en) *  2010-10-08  2011-02-16  东南大学  Color image three-dimensional reconstruction method based on three-dimensional matching 

2013
 2013-02-20 CN CN201310053829.3A patent/CN103106688B/en active IP Right Grant
Patent Citations (2)
Publication number  Priority date  Publication date  Assignee  Title 

US20030137508A1 (en) *  2001-12-20  2003-07-24  Mirko Appel  Method for three dimensional image reconstruction 
CN101976455A (en) *  2010-10-08  2011-02-16  东南大学  Color image three-dimensional reconstruction method based on three-dimensional matching 
Non-Patent Citations (2)
Title

PETER HENRY et al.: "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments", 《The 12th International Symposium on Experimental Robotics》 * 
刘鑫 et al.: "Fast Object Reconstruction Based on GPU and Kinect" (基于GPU和Kinect的快速物体重建), 《自动化学报》 (Acta Automatica Sinica) * 
Cited By (43)
Publication number  Priority date  Publication date  Assignee  Title 

CN103268729A (en) *  2013-05-22  2013-08-28  北京工业大学  Mobile robot cascading map creation method based on mixed features 
CN103325142A (en) *  2013-05-29  2013-09-25  南京大学  Computer three-dimensional model building method based on Kinect 
CN103325142B (en) *  2013-05-29  2016-02-17  南京大学  Computer three-dimensional model building method based on Kinect 
CN103260015A (en) *  2013-06-03  2013-08-21  程志全  Three-dimensional visual monitoring system based on RGB-Depth camera 
CN103260015B (en) *  2013-06-03  2016-02-24  程志全  Three-dimensional visual monitoring system based on RGB-Depth camera 
CN103413352A (en) *  2013-07-29  2013-11-27  西北工业大学  Scene three-dimensional reconstruction method based on RGB-D multi-sensor fusion 
CN103456038A (en) *  2013-08-19  2013-12-18  华中科技大学  Method for rebuilding a three-dimensional scene of a downhole environment 
CN105335399B (en) *  2014-07-18  2019-03-29  联想(北京)有限公司  Information processing method and electronic device 
CN105335399A (en) *  2014-07-18  2016-02-17  联想(北京)有限公司  Information processing method and electronic device 
CN104126989A (en) *  2014-07-30  2014-11-05  福州大学  Foot surface three-dimensional information acquisition method based on multiple RGB-D cameras 
CN104517287A (en) *  2014-12-10  2015-04-15  广州赛意信息科技有限公司  Image matching method and device 
CN104517289B (en) *  2014-12-12  2017-08-08  浙江大学  Indoor scene localization method based on a hybrid camera 
CN104517289A (en) *  2014-12-12  2015-04-15  浙江大学  Indoor scene positioning method based on a hybrid camera 
CN106091921B (en) *  2015-04-28  2019-06-18  三菱电机株式会社  Method for determining sizes in a scene 
CN106091921A (en) *  2015-04-28  2016-11-09  三菱电机株式会社  Method for determining sizes in a scene 
CN105987693B (en) *  2015-05-19  2019-04-30  北京蚁视科技有限公司  Visual positioning device and three-dimensional mapping system and method based on the device 
WO2016184255A1 (en) *  2015-05-19  2016-11-24  北京蚁视科技有限公司  Visual positioning device and three-dimensional mapping system and method based on same 
CN105987693A (en) *  2015-05-19  2016-10-05  北京蚁视科技有限公司  Visual positioning device and three-dimensional surveying and mapping system and method based on the device 
CN105222789A (en) *  2015-10-23  2016-01-06  哈尔滨工业大学  Indoor building floor-plan construction method based on a laser ranging sensor 
CN105319991A (en) *  2015-11-25  2016-02-10  哈尔滨工业大学  Robot environment identification and operation control method based on Kinect visual information 
CN105509748A (en) *  2015-12-29  2016-04-20  深圳先进技术研究院  Navigation method and apparatus for a robot 
CN105513128A (en) *  2016-01-13  2016-04-20  中国空气动力研究与发展中心低速空气动力研究所  Kinect-based three-dimensional data fusion processing method 
CN107025661A (en) *  2016-01-29  2017-08-08  成都理想境界科技有限公司  Method, server, terminal and system for realizing augmented reality 
CN105913489A (en) *  2016-04-19  2016-08-31  东北大学  Indoor three-dimensional scene reconstruction method employing plane features 
CN105913489B (en) *  2016-04-19  2019-04-23  东北大学  Indoor three-dimensional scene reconstruction method using plane features 
CN106384383B (en) *  2016-09-08  2019-08-06  哈尔滨工程大学  RGB-D and SLAM scene reconstruction method based on FAST and FREAK feature matching algorithms 
CN106384383A (en) *  2016-09-08  2017-02-08  哈尔滨工程大学  RGB-D and SLAM scene reconstruction method based on FAST and FREAK feature matching algorithms 
CN106596557A (en) *  2016-11-07  2017-04-26  东南大学  Mobile three-dimensional scanning platform carrying a Kinect and method thereof 
CN106780297A (en) *  2016-11-30  2017-05-31  天津大学  High-accuracy image registration method under scene and illumination changes 
CN106780297B (en) *  2016-11-30  2019-10-25  天津大学  High-accuracy image registration method under scene and illumination changes 
CN106529838A (en) *  2016-12-16  2017-03-22  湖南拓视觉信息技术有限公司  Virtual assembly method and device 
CN106780590A (en) *  2017-01-03  2017-05-31  成都通甲优博科技有限责任公司  Depth map acquisition method and system 
CN106780590B (en) *  2017-01-03  2019-12-24  成都通甲优博科技有限责任公司  Method and system for acquiring a depth map 
CN106803267A (en) *  2017-01-10  2017-06-06  西安电子科技大学  Kinect-based indoor scene three-dimensional reconstruction method 
CN106952299A (en) *  2017-03-14  2017-07-14  大连理工大学  Three-dimensional light field implementation method suitable for intelligent mobile devices 
CN106952299B (en) *  2017-03-14  2019-07-16  大连理工大学  Three-dimensional light field implementation method suitable for intelligent mobile devices 
CN107123138B (en) *  2017-04-28  2019-07-30  电子科技大学  Point cloud registration method based on a vanilla-R point-pair rejection strategy 
CN107123138A (en) *  2017-04-28  2017-09-01  电子科技大学  Point cloud registration algorithm based on a vanilla-R point-pair rejection strategy 
CN107274440A (en) *  2017-06-26  2017-10-20  赵红林  Image matching algorithm 
CN107610212A (en) *  2017-07-25  2018-01-19  深圳大学  Scene reconstruction method, device, computer equipment and computer-readable storage medium 
CN107577451A (en) *  2017-08-03  2018-01-12  中国科学院自动化研究所  Multi-Kinect human skeleton coordinate transformation method, processing device, and readable storage medium 
CN107833270A (en) *  2017-09-28  2018-03-23  浙江大学  Real-time three-dimensional object reconstruction method based on a depth camera 
CN108055456A (en) *  2017-12-07  2018-05-18  中煤航测遥感集团有限公司  Texture acquisition method and device 
Also Published As
Publication number  Publication date 

CN103106688B (en)  2016-04-27 
Similar Documents
Publication  Publication Date  Title 

Kar et al.  Learning a multi-view stereo machine  
Sinha et al.  Efficient high-resolution stereo matching using local plane sweeps  
Wöhler  3D computer vision: efficient methods and applications  
KR101554241B1 (en)  A method for depth map quality enhancement of defective pixel depth data values in a three-dimensional image  
Klingner et al.  Street view motion-from-structure-from-motion  
CN105205858B (en)  Indoor scene three-dimensional reconstruction method based on a single depth vision sensor  
Henry et al.  RGB-D mapping: Using depth cameras for dense 3D modeling of indoor environments  
Lhuillier et al.  A quasi-dense approach to surface reconstruction from uncalibrated images  
Agarwal et al.  A survey of planar homography estimation techniques  
Zhao et al.  Alignment of continuous video onto 3D point clouds  
Borrmann et al.  Globally consistent 3D mapping with scan matching  
Košecká et al.  Extraction, matching, and pose recovery based on dominant rectangular structures  
Fitzgibbon et al.  Automatic 3D model acquisition and generation of new images from video sequences  
Pizarro et al.  Toward large-area mosaicing for underwater scientific applications  
CN103247075B (en)  Indoor environment three-dimensional reconstruction method based on a variational mechanism  
Xiao et al.  Uncalibrated perspective reconstruction of deformable structures  
Coughlan et al.  Manhattan world: Orientation and outlier detection by Bayesian inference  
CN100559398C (en)  Automatic depth image registration method  
Liebowitz  Camera calibration and reconstruction of geometry from images  
CN105245841B (en)  Panoramic video monitoring system based on CUDA  
US7509241B2 (en)  Method and apparatus for automatically generating a site model  
EP2731075B1 (en)  Backfilling points in a point cloud  
CN103345736B (en)  Virtual viewpoint rendering method  
Teller et al.  Calibrated, registered images of an extended urban area  
Quan  Image-based modeling 
Legal Events
Date  Code  Title  Description 

C06  Publication  
PB01  Publication  
C10  Entry into substantive examination  
SE01  Entry into force of request for substantive examination  
C14  Grant of patent or utility model  
GR01  Patent grant 