CN103247075B - Indoor environment three-dimensional reconstruction method based on a variational mechanism - Google Patents


Info

Publication number
CN103247075B
CN103247075B (application CN201310173608.XA)
Authority
CN
China
Prior art keywords
camera
formula
current
point
algorithm
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201310173608.XA
Other languages
Chinese (zh)
Other versions
CN103247075A (en)
Inventor
贾松敏
王可
李雨晨
李秀智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201310173608.XA priority Critical patent/CN103247075B/en
Publication of CN103247075A publication Critical patent/CN103247075A/en
Application granted granted Critical
Publication of CN103247075B publication Critical patent/CN103247075B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the intersection of computer vision and intelligent robotics, and discloses a large-scale indoor scene reconstruction method based on a variational mechanism, comprising: step 1, obtaining the calibration parameters of the camera and establishing a distortion correction model; step 2, establishing the camera pose description and the camera projection model; step 3, estimating the camera pose with a monocular SLAM algorithm based on SFM; step 4, establishing a depth-map estimation model based on the variational mechanism and solving this model; step 5, establishing a keyframe selection mechanism to update the three-dimensional scene. The invention uses an RGB camera to acquire environmental data and, building on a high-precision monocular localization algorithm, proposes a depth-map generation method based on a variational mechanism, achieving fast large-scale indoor 3-D scene reconstruction and effectively addressing the cost and real-time problems of three-dimensional reconstruction algorithms.

Description

Indoor environment three-dimensional reconstruction method based on a variational mechanism
Technical field
The invention belongs to the intersection of computer vision and intelligent robotics and relates to indoor-environment three-dimensional reconstruction, in particular to a large-scale indoor scene reconstruction method based on a variational mechanism.
Technical background
As research on simultaneous localization and mapping (Simultaneous Localization And Mapping, SLAM) deepens, three-dimensional modeling of the surrounding environment has gradually become a research focus in this field and attracted the attention of many scholars. In 2007, G. Klein et al. first proposed the concept of Parallel Tracking and Mapping (PTAM) in the field of augmented reality (AR), to solve the problem of real-time environment modeling. PTAM divides camera localization and map generation into two independent threads: while detecting and updating feature points with the FAST corner method, it applies optimal local and global bundle adjustment (Bundle Adjustment, BA) to continuously update the camera pose and the three-dimensional feature-point map. The method builds a three-dimensional map of the environment from a sparse point cloud, but such a map lacks an intuitive three-dimensional description of the environment. Pollefeys et al. achieved three-dimensional reconstruction of large-scale outdoor scenes through multi-sensor fusion, but the method suffers from high computational complexity and sensitivity to noise. Some tentative progress has also been made in real-time tracking and dense environment-model reconstruction, but it remains confined to the reconstruction of simple objects and achieves high precision only under particular constraints. Richard A. Newcombe et al. use a SLAM algorithm based on SFM (Structure from Motion) to obtain a sparse spatial feature point cloud, apply multi-scale radial-basis interpolation together with the implicit-surface polygonization method from computer graphics to construct an initial three-dimensional grid map, and update the mesh vertex coordinates by combining a scene-flow constraint with a high-precision TV-L1 optical-flow algorithm so as to approximate the real scene. This algorithm obtains a high-precision environmental model, but its complexity is high: even with two graphics processing units (GPUs) for acceleration, processing one image frame still takes several seconds.
Summary of the invention
To address the above problems in the prior art, the invention provides a fast three-dimensional reconstruction method based on a variational mechanism, to achieve three-dimensional modeling in complex indoor environments. The method reduces the amount of data to be processed while preserving environmental information, enables fast large-scale indoor 3-D scene reconstruction, effectively addresses the cost and real-time problems of three-dimensional reconstruction algorithms, and improves reconstruction precision.
The technical solution used in the present invention is as follows:
The PTAM algorithm is used as the camera pose estimation means; at each keyframe a suitable image sequence is chosen to construct a depth-map estimation energy function based on the variational pattern, and a primal-dual algorithm is used to optimize this energy function, yielding the environment depth map at the current keyframe. Because the algorithm constructs the energy function from neighboring frames, effectively exploits the correlation between the coordinate systems of specific viewpoints, and translates it into the camera perspective-projection relation, the data term incorporates multi-view imaging constraints, reducing the computational complexity of solving the model. Within a unified computation framework, the invention uses graphics accelerator hardware to parallelize the algorithm, effectively improving its real-time performance.
A method for indoor-environment three-dimensional reconstruction based on a variational mechanism is characterized by comprising the following steps:
Step 1: obtain the calibration parameters of the camera and establish the distortion correction model.
In computer-vision applications, the geometric model of camera imaging establishes the mapping between pixels in the image and three-dimensional points in space. The geometric parameters of the camera model must be obtained by experiment and computation; the process of solving these parameters is called camera calibration. Calibration is a key link in the invention: the precision of the calibration parameters directly affects the accuracy of the final three-dimensional map.
The detailed process of camera calibration is:
(1) Print a chessboard template. The invention uses an A4 sheet with a chessboard spacing of 0.25 cm.
(2) Photograph the chessboard from multiple angles. When shooting, the chessboard should fill the screen as far as possible, with every corner of the chessboard inside the screen; six template pictures are taken in total.
(3) Detect the feature points in the images, i.e. the black crossing points of the chessboard.
(4) Solve for the camera's intrinsic parameters, as follows:
The RGB camera calibration parameters are mainly the camera intrinsics. The intrinsic matrix K of the camera is:

$$K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

In the formula, u, v are the image-plane coordinate axes, (u_0, v_0) is the principal point of the image plane, and (f_u, f_v) are the focal lengths of the camera.
According to the calibration parameters, the mapping between a point in the RGB image and a three-dimensional point is as follows: the coordinate P_3D = (x, y, z) of image point p = (u, v) in the camera coordinate system is:

$$x = (u - u_0)\cdot z / f_u, \quad y = (v - v_0)\cdot z / f_v, \quad z = d$$

In the formula, d is the depth value at point p of the depth image.
In the invention, the camera coordinate system is shown in Fig. 2: downward is the positive y-axis, forward is the positive z-axis, and rightward is the positive x-axis. The initial camera position is set as the origin of the world coordinate system, whose X, Y, Z directions coincide with those of the camera.
The FOV (Field of View) camera distortion-correction model is:

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u$$

$$r_d = \frac{1}{\omega} \arctan\left(2 r_u \tan\frac{\omega}{2}\right)$$

$$r_u = \frac{\tan(r_d\, \omega)}{2 \tan\frac{\omega}{2}}$$

In the formulas, x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the FOV camera distortion coefficient.
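As a sketch, the pixel-to-3-D mapping and the FOV radius maps above can be written directly from the formulas; the intrinsic values below are illustrative placeholders, not the patent's calibrated parameters:

```python
import numpy as np

# Illustrative intrinsics (placeholder values, not the patent's calibration).
fu, fv, u0, v0 = 525.0, 525.0, 319.5, 239.5

def backproject(u, v, d):
    """Pixel (u, v) with depth d -> 3-D point (x, y, z) in camera coordinates."""
    return np.array([(u - u0) * d / fu, (v - v0) * d / fv, d])

def fov_distort(r_u, omega):
    """FOV model: undistorted radius r_u -> distorted radius r_d."""
    return np.arctan(2.0 * r_u * np.tan(omega / 2.0)) / omega

def fov_undistort(r_d, omega):
    """FOV model: distorted radius r_d -> undistorted radius r_u."""
    return np.tan(r_d * omega) / (2.0 * np.tan(omega / 2.0))
```

The two radius maps are exact inverses of each other, so the correction model can be applied in either direction.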
Step 2: establish the camera pose description and the camera projection model.
In the established world coordinate system, the camera pose can be expressed as the matrix:

$$T_{cw} = \begin{bmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{bmatrix}$$

In the formula, "cw" denotes the transformation from the world coordinate system to the current camera coordinate system, and T_cw ∈ SE(3), where SE(3) is the space of rigid-body rotation-translation transformations. T_cw can be represented by the six-tuple μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), that is:

$$T_{cw} = \exp(\hat{\mu})$$

$$\hat{\mu} = \begin{bmatrix} 0 & -\mu_6 & \mu_5 & \mu_1 \\ \mu_6 & 0 & -\mu_4 & \mu_2 \\ -\mu_5 & \mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

In the formula, μ_1, μ_2, μ_3 are the translation components of the Kinect in the global coordinate system, and μ_4, μ_5, μ_6 are the rotation amounts about the coordinate axes of the local coordinate system.
The camera pose T_cw establishes the transformation from world coordinates p_w to point-cloud coordinates p_c in the current coordinate system, that is:

p_c = T_cw p_w
In the current camera coordinate system, the projection of a three-dimensional point cloud onto the z = 1 plane is defined as:

π(p) = (x/z, y/z)^T

In the formula, p ∈ R³ is a three-dimensional point and x, y, z are its coordinates. Given the depth value d of a current image coordinate, the back-projection method determines the current three-dimensional point coordinate p; the coordinate relation can be expressed as:

π⁻¹(u, d) = d K⁻¹ u

where u is taken in homogeneous pixel coordinates.
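The six-tuple pose representation and the projection π can be sketched directly from the formulas above; the series-based matrix exponential is illustrative, not the optimized closed form a real implementation would use:

```python
import numpy as np

def hat(mu):
    """Map a six-vector mu = (t1, t2, t3, w1, w2, w3) to its 4x4 se(3) matrix."""
    t, w = mu[:3], mu[3:]
    return np.array([[0.0, -w[2], w[1], t[0]],
                     [w[2], 0.0, -w[0], t[1]],
                     [-w[1], w[0], 0.0, t[2]],
                     [0.0, 0.0, 0.0, 0.0]])

def exp_se3(mu, terms=30):
    """T = exp(mu^): truncated power series of the matrix exponential."""
    A = hat(np.asarray(mu, dtype=float))
    T, P = np.eye(4), np.eye(4)
    for k in range(1, terms):
        P = P @ A / k          # A^k / k!
        T = T + P
    return T

def project(p):
    """pi(p): perspective projection of a 3-D point onto the z = 1 plane."""
    return np.array([p[0] / p[2], p[1] / p[2]])
```

A pure translation gives an identity rotation block, and a rotation about z by 90° maps the x-axis onto the y-axis, which makes the representation easy to sanity-check.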
Step 3: estimate the camera pose with a monocular SLAM algorithm based on SFM.
At present, monocular visual SLAM algorithms mainly comprise filtering-based SLAM and SFM (Structure from Motion)-based SLAM. The invention adopts the PTAM algorithm to localize the camera. This algorithm is a monocular visual SLAM method based on SFM that divides the system into two independent threads, camera tracking and map building. In the camera-tracking thread, the system acquires the texture information of the current environment with the camera, builds a four-level Gaussian image pyramid, extracts feature information from the current image with the FAST-10 corner detection algorithm, and establishes data association between corner features by block matching. On this basis, according to the current projection error, a pose estimation model is established to localize the camera accurately, and the current three-dimensional point-cloud map is generated by combining the feature-matching information with a triangulation algorithm. The detailed process of camera pose estimation is:
(1) Initialization of the sparse map
The PTAM algorithm uses a standard stereo-camera algorithm model to build the initial map of the current environment and then continuously updates the three-dimensional map with newly added keyframes. During map initialization, two separate keyframes are selected manually; using the FAST corner-matching relations between the images, a five-point algorithm based on Random Sample Consensus (RANSAC) estimates the essential matrix between the keyframes and computes the three-dimensional coordinates of the current feature points. At the same time, suitable spatial points chosen by the RANSAC algorithm establish the current consensus plane, which determines the global world coordinate system and completes the initialization of the map.
(2) Camera pose estimation
The system acquires the texture information of the current environment with the camera, builds a four-level Gaussian image pyramid, extracts feature information from the current image with the FAST-10 corner detection algorithm, and establishes data association between corner features by block matching. On this basis, according to the current projection error, the pose estimation model is established; its mathematical description is:

$$\xi = \arg\min_{\xi} \sum_j \mathrm{Obj}\left(\frac{|e_j|}{\sigma_j}, \sigma_T\right)$$

$$e_j = \begin{bmatrix} u_i \\ v_i \end{bmatrix} - K\pi(\exp(\hat{\xi})\, p)$$

In the formula, e_j is the projection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is the unbiased estimate of the standard deviation of the feature-point matches, ξ is the six-tuple representation of the current pose, and ξ̂ is the matrix formed from ξ as in step 2.
According to the above pose estimation model, 50 matched feature points on the top level of the image pyramid are chosen to obtain an initial estimate of the camera pose. Then, combining this initial pose, the algorithm performs an epipolar search to establish sub-pixel-precision matches between corner features in the image pyramid, and feeds these matches into the pose estimation model to relocalize the camera accurately.
(3) Camera pose optimization
After initialization, the map-building thread waits for new keyframes. When the number of image frames between the camera and the current keyframe exceeds a threshold and the camera tracking quality is good, a keyframe-addition process runs automatically. The system then applies the Shi-Tomasi score to all FAST corners in the new keyframe to obtain the corners with salient features, selects the nearest keyframe, establishes feature-point mappings by epipolar search and block matching, relocalizes the camera accurately with the pose estimation model, and simultaneously projects the matched points into space to generate the current global three-dimensional map of the environment.
To maintain the global map, while the map-building thread waits for new keyframes, the system applies local and global Levenberg-Marquardt bundle adjustment to optimize the consistency of the current map. The mathematical description of the bundle adjustment is:

$$\{\{\xi_1 \dots \xi_N\}, \{p_1 \dots p_M\}\} = \arg\min_{\{\xi\},\{p\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\left(\frac{|e_{ji}|}{\sigma_{ji}}, \sigma_T\right)$$

In the formula, σ_ji is the unbiased estimate of the standard deviation of the FAST feature-point matches in the i-th keyframe, ξ_i is the six-tuple representation of the pose of the i-th keyframe, and p_j is a point in the global map.
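The robust pose objective can be sketched as follows; the exact Tukey biweight form and the saturation constant are a common convention, assumed here rather than taken from the patent:

```python
import numpy as np

def tukey_biweight(r, c):
    """Tukey biweight loss: approximately quadratic near zero, saturating at c^2/6."""
    r = abs(r)
    if r >= c:
        return c * c / 6.0
    return (c * c / 6.0) * (1.0 - (1.0 - (r / c) ** 2) ** 3)

def reprojection_error(K, T, p_world, uv):
    """e_j = (u, v)^T - K*pi(T p): pixel residual of one map point under pose T."""
    p_cam = (T @ np.append(p_world, 1.0))[:3]
    proj = K @ (p_cam / p_cam[2])          # K * (x/z, y/z, 1)^T
    return np.asarray(uv) - proj[:2]
```

The saturation is what makes the objective robust: residuals beyond the threshold c contribute a constant cost, so gross mismatches cannot dominate the pose estimate.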
Step 4: establish the depth-map estimation model based on the variational mechanism and solve the model.
Given the accurate pose estimate from PTAM, the invention uses a multi-view reconstruction approach and the variational mechanism to build the depth-solving model. Based on the brightness-constancy and depth-map smoothness assumptions, an L1-type data penalty term and a variational regularization term are established: the data term is built under the brightness-constancy assumption, while the regularization term ensures the smoothness of the current depth map. The mathematical model is:

$$E_d = \int_{\Omega} (E_{data} + \lambda E_{reg})\, dx$$

In the formula, λ is the weight coefficient between the data penalty term E_data and the variational regularization term E_reg, and Ω is the domain of the depth map.
Choosing the current keyframe as the reference frame I_r of the depth estimation algorithm and using its adjacent image sequence I = {I_1, I_2, …, I_n}, the data penalty term E_data is established in conjunction with the projection model; its mathematical description is:

$$E_{data} = \frac{1}{|I(r)|} \sum_{I_i \in I} |I_r(x) - I_i(x')|$$

In the formula, |I(r)| is the number of image frames in the current sequence that share overlapping information with the reference frame, and x' is the projection into I_i of the reference-frame pixel x at depth d, that is:

$$x' = K\pi\left(T_{ri}\, \pi^{-1}(x, d)\right)$$
Under the depth-map smoothness assumption, and in order to preserve discontinuities at image boundaries, a weighted Huber operator is introduced to build the variational regularization term; its mathematical description is:

$$E_{reg} = g(u)\, \|\nabla d(u)\|_{\alpha}$$

In the formula, ∇d is the gradient of the depth map, g(u) is the pixel gradient weight coefficient, and the Huber operator ‖x‖_α has the mathematical description:

$$\|x\|_{\alpha} = \begin{cases} \dfrac{\|x\|^2}{2\alpha}, & \|x\| \le \alpha \\[4pt] \|x\| - \dfrac{\alpha}{2}, & \text{otherwise} \end{cases}$$

In the formula, α is a constant.
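A minimal sketch of the Huber operator above, assuming the Euclidean norm:

```python
import numpy as np

def huber_norm(x, alpha):
    """Huber operator ||x||_alpha: quadratic for ||x|| <= alpha, linear otherwise."""
    n = np.linalg.norm(x)
    if n <= alpha:
        return n * n / (2.0 * alpha)
    return n - alpha / 2.0
```

Both branches meet at ‖x‖ = α with value α/2, so the penalty and its derivative are continuous there: small depth gradients are smoothed quadratically, while genuine depth edges are penalized only linearly.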
According to the Legendre-Fenchel transform, the energy function can be expressed as:

$$g\|\nabla d\|_{\alpha} = \langle g\nabla d, q\rangle - \delta(q) - \frac{\alpha}{2}\|q\|^2$$

In the formula,

$$\delta(q) = \begin{cases} 0, & \|q\| \le 1 \\ \infty, & \text{otherwise} \end{cases}$$

The introduction of the Huber operator provides a smoothness guarantee for the three-dimensional reconstruction while still allowing discontinuous boundaries in the depth map, improving the quality of the created three-dimensional map.
Because the above mathematical model is complex to solve and computationally heavy, an auxiliary variable h is introduced to build a convex optimization model, and alternating descent is used to optimize it. The detailed process is as follows:
(1) Fix h and solve:

$$\arg\max_q \; \arg\min_d \; E_{d,q}$$

$$E_{d,q} = \int_{\Omega} \left(\langle g\nabla d, q\rangle + \frac{1}{2\theta}(d - h)^2 - \delta(q) - \frac{\alpha}{2}\|q\|^2\right) dx$$

In the formula, θ is the coefficient of the quadratic coupling term and g is the gradient weight coefficient of the variational regularization term.
According to the Lagrangian extremum method, the conditions for the energy function to reach an extremum are:

$$\frac{\partial E_{d,q}}{\partial q} = g\nabla d - \alpha q = 0$$

$$\frac{\partial E_{d,q}}{\partial d} = g\,\mathrm{div}\,q - \frac{1}{\theta}(d - h) = 0$$

In the formula, div q is the divergence of q.
Combining a discretized description of the partial derivatives, the above extremum conditions can be expressed as:

$$\frac{q^{n+1} - q^n}{\varepsilon_q} = g\nabla d - \alpha q^{n+1}$$

$$\frac{d^{n+1} - d^n}{\varepsilon_d} = g\,\mathrm{div}\,q^{n+1} - \frac{1}{\theta}\left(d^{n+1} - h\right)$$
A primal-dual algorithm can now be adopted to iteratively optimize the energy function, that is:

$$q^{n+1} = \frac{(q^n + \varepsilon_q\, g\nabla d^n)/(1 + \varepsilon_q \alpha)}{\max\left(1, \left\|(q^n + \varepsilon_q\, g\nabla d^n)/(1 + \varepsilon_q \alpha)\right\|\right)}$$

$$d^{n+1} = \frac{d^n + \varepsilon_d\left(g\,\mathrm{div}\,q^{n+1} + h^n/\theta\right)}{1 + \varepsilon_d/\theta}$$

In the formula, ε_q and ε_d are constant step sizes for the gradient ascent (maximization) and gradient descent (minimization), respectively.
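The coupled updates can be sketched on a one-dimensional signal; the forward-difference gradient, the discrete divergence, and all parameter values below are illustrative assumptions, not the patent's GPU implementation:

```python
import numpy as np

def grad(d):
    """Forward differences with Neumann boundary (last entry zero)."""
    g = np.zeros_like(d)
    g[:-1] = d[1:] - d[:-1]
    return g

def div(q):
    """Discrete divergence paired with the forward-difference gradient."""
    out = np.empty_like(q)
    out[0] = q[0]
    out[1:] = q[1:] - q[:-1]
    return out

def primal_dual(h, g, theta=0.2, alpha=0.1, eps_q=0.4, eps_d=0.4, iters=300):
    """Iterate the semi-implicit q- and d-updates shown above."""
    d = h.copy()
    q = np.zeros_like(h)
    for _ in range(iters):
        q = (q + eps_q * g * grad(d)) / (1.0 + eps_q * alpha)
        q = q / np.maximum(1.0, np.abs(q))      # reproject onto ||q|| <= 1
        d = (d + eps_d * (g * div(q) + h / theta)) / (1.0 + eps_d / theta)
    return d
```

The dual step is a resolvent followed by reprojection onto the unit ball (enforcing δ(q)), and the primal step is the semi-implicit solve in d, matching the two closed-form updates above.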
(2) Fix d and solve:

$$\arg\min_h E_h$$

$$E_h = \int_{\Omega} \left(\frac{1}{2\theta}(d - h)^2 + \frac{\lambda}{|I(r)|} \sum_{i=0}^{n} |I_i(x) - I_{ref}(x, h)|\right) dx$$

In solving the above energy function, in order to effectively reduce the complexity of the algorithm while preserving local detail in the reconstruction, the invention divides the depth range [d_min, d_max] into S sample planes and obtains the optimal solution of the current energy function by exhaustive search. The sampling is chosen as:
$$d^k = \frac{S\, d_{min}\, d_{max}}{(S - k)\, d_{min} + k\, d_{max}}$$

In the formula, d^k is the depth of the k-th sample plane; the interval between the k-th and (k−1)-th planes varies so that the planes are spaced uniformly in inverse depth.
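Under the reading that the sample planes are spaced uniformly in inverse depth, the exhaustive search can be sketched as follows; the plane count and the cost function are illustrative assumptions:

```python
import numpy as np

def depth_planes(d_min, d_max, S):
    """S+1 depth samples spaced uniformly in inverse depth over [d_min, d_max]."""
    return 1.0 / np.linspace(1.0 / d_max, 1.0 / d_min, S + 1)

def best_depth(cost, d_min, d_max, S):
    """Exhaustive search over the sample planes for the minimum-cost depth."""
    planes = depth_planes(d_min, d_max, S)
    return planes[int(np.argmin([cost(d) for d in planes]))]
```

Inverse-depth spacing places more planes close to the camera, where a fixed pixel disparity corresponds to a smaller depth change, which is why dense multi-view methods commonly sample this way.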
Step 5: establish the keyframe selection mechanism and update the three-dimensional scene.
Considering the elimination of redundant information in the system, and in order to improve the clarity and real-time performance of the reconstruction while reducing the computational burden, the invention estimates the three-dimensional scene only at keyframes, and updates and maintains the generated three-dimensional scene. When new keyframe data arrive, they are transformed into the world coordinate system according to the formula, completing the update of the scene data.
Using the data penalty term of the depth estimation model established in step 4, the information-overlap evaluation function between the current frame and the keyframe is established, that is:

$$N = \sum_{x \in R^2} c(x)$$

In the formula, c(x) is a constant-valued summand.
If N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
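The keyframe test then reduces to a threshold on the overlap measure; the 0/1 overlap mask standing in for c(x) below is an assumed form:

```python
import numpy as np

def is_new_keyframe(overlap_mask, ratio=0.7):
    """Declare a new keyframe when N = sum(c(x)) drops below ratio * image size."""
    return overlap_mask.sum() < ratio * overlap_mask.size
```

A fully overlapping frame is rejected, while a frame that shares only half its pixels with the keyframe triggers a new keyframe.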
The beneficial effects of the invention are as follows: the invention uses an RGB camera to acquire environmental data and, building on a high-precision monocular localization algorithm, proposes a depth-map generation method based on a variational mechanism, achieving fast large-scale indoor 3-D scene reconstruction and effectively addressing the cost and real-time problems of three-dimensional reconstruction algorithms.
Brief description of the drawings
Fig. 1 is a flow chart of the indoor three-dimensional scene reconstruction method based on the variational model;
Fig. 2 is a schematic diagram of the camera coordinate system;
Fig. 3 shows the three-dimensional reconstruction results of an application example of the invention.
Embodiment
Fig. 1 is a flow chart of the indoor three-dimensional scene reconstruction method based on the variational model, comprising the following steps:
Step 1: obtain the calibration parameters of the camera and establish the distortion correction model.
Step 2: establish the camera pose description and the camera projection model.
Step 3: estimate the camera pose with the monocular SLAM algorithm based on SFM.
Step 4: establish the depth-map estimation model based on the variational mechanism and solve the model.
Step 5: establish the keyframe selection mechanism and update the three-dimensional scene.
An application example of the invention is given below.
The RGB camera used in this example is a Point Grey Flea2; the image resolution is 640 × 480, the maximum frame rate is 30 fps, the horizontal field of view is 65°, and the focal length is approximately 3.5 mm. The PC is equipped with a GTS 450 GPU and a quad-core i5 CPU.
In the experiment, environment depth information is obtained with the color camera, and the camera pose estimation algorithm achieves accurate self-localization. When a new keyframe is created, the 20 image frames around the keyframe are selected as the input of the depth estimation algorithm described herein. The depth estimation algorithm is initialized with d_0 = h_0 and q_0 = 0 to obtain the initial current depth map, and E_{d,q} and E_h are optimized iteratively until convergence. Meanwhile, during the iterations θ is continuously reduced, which increases the weight of the quadratic term and effectively improves the convergence speed of the algorithm. The final experimental results are shown in Fig. 3; the experiments show that the method effectively achieves dense three-dimensional reconstruction of the environment, further demonstrating its feasibility.

Claims (3)

1. A method for indoor-environment three-dimensional reconstruction based on a variational mechanism, characterized by comprising the following steps:
Step 1: obtain the calibration parameters of the camera and establish a distortion correction model;
The detailed process of camera calibration is:
(1) print a chessboard template;
(2) photograph the chessboard from multiple angles; the chessboard should fill the screen as far as possible, with every corner of the chessboard inside the screen, and six template pictures are taken in total;
(3) detect the feature points in the images, i.e. the black crossing points of the chessboard;
(4) solve for the intrinsic parameters, as follows:
The RGB camera calibration parameters are mainly the camera intrinsics, and the intrinsic matrix K of the camera is:

$$K = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

In the formula, u, v are the image-plane coordinate axes, (u_0, v_0) is the principal point of the image plane, and (f_u, f_v) are the focal lengths of the camera;
According to the calibration parameters, the mapping between a point in the RGB image and a three-dimensional point is as follows: the coordinate P_3D = (x, y, z) of image point p = (u, v) in the camera coordinate system is:

$$x = (u - u_0)\cdot z / f_u, \quad y = (v - v_0)\cdot z / f_v, \quad z = d$$

In the formula, d is the depth value at point p of the depth image;
The camera coordinate system takes downward as the positive y-axis, forward as the positive z-axis, and rightward as the positive x-axis; the initial camera position is set as the origin of the world coordinate system, whose X, Y, Z directions coincide with those of the camera;
The FOV camera correction model is:

$$u_d = \begin{bmatrix} u_0 \\ v_0 \end{bmatrix} + \begin{bmatrix} f_u & 0 \\ 0 & f_v \end{bmatrix} \frac{r_d}{r_u}\, x_u$$

$$r_d = \frac{1}{\omega} \arctan\left(2 r_u \tan\frac{\omega}{2}\right)$$

$$r_u = \frac{\tan(r_d\, \omega)}{2 \tan\frac{\omega}{2}}$$

In the formulas, x_u is the pixel coordinate on the z = 1 plane, u_d is the pixel coordinate in the original image, and ω is the FOV camera distortion coefficient;
Step 2: establish the camera pose description and the camera projection model, as follows:
In the established world coordinate system, the camera pose can be expressed as the matrix:

$$T_{cw} = \begin{bmatrix} R_{cw} & t_{cw} \\ 0 & 1 \end{bmatrix}$$

In the formula, cw denotes the transformation from the world coordinate system to the current camera coordinate system, T_cw ∈ SE(3), and SE(3) is the space of rigid-body rotation-translation transformations; T_cw can be represented by the six-tuple μ = (μ_1, μ_2, μ_3, μ_4, μ_5, μ_6), that is:

$$T_{cw} = \exp(\hat{\mu})$$

$$\hat{\mu} = \begin{bmatrix} 0 & -\mu_6 & \mu_5 & \mu_1 \\ \mu_6 & 0 & -\mu_4 & \mu_2 \\ -\mu_5 & \mu_4 & 0 & \mu_3 \\ 0 & 0 & 0 & 0 \end{bmatrix}$$

In the formula, μ_1, μ_2, μ_3 are the translation components of the Kinect in the global coordinate system, and μ_4, μ_5, μ_6 are the rotation amounts about the coordinate axes of the local coordinate system;
The camera pose T_cw establishes the transformation from world coordinates p_w to point-cloud coordinates p_c in the current coordinate system, that is:

p_c = T_cw p_w
In the current camera coordinate system, the projection of a three-dimensional point cloud onto the z = 1 plane is defined as:

π(p) = (x/z, y/z)^T

In the formula, p ∈ R³ is a three-dimensional point and x, y, z are its coordinates; given the depth value d of the current image coordinate, the back-projection method determines the current three-dimensional point coordinate p, whose coordinate relation can be expressed as:

π⁻¹(u, d) = d K⁻¹ u

where u is taken in homogeneous pixel coordinates;
Step 3: estimate the camera pose with a monocular SLAM algorithm based on SFM;
Step 4: establish the depth-map estimation model based on the variational mechanism and solve the model;
Step 5: establish the keyframe selection mechanism and update the three-dimensional scene, as follows:
Estimate the three-dimensional scene at keyframes, and update and maintain the generated three-dimensional scene; when new keyframe data arrive, they are transformed into the world coordinate system according to the formula, completing the update of the scene data;
Using the data penalty term of the depth estimation model established in step 4, the information-overlap evaluation function between the current frame and the keyframe is established, that is:

$$N = \sum_{x \in R^2} c(x)$$

In the formula, c(x) is a constant-valued summand;
If N is less than 0.7 times the image size, the current frame is determined to be a new keyframe.
2. the method for a kind of indoor environment three-dimensional reconstruction based on variation mechanism according to claim 1, is characterized in that, the method that step 3 utilizes the monocular SLAM algorithm realization camera pose based on SFM to estimate is further comprising the steps of:
(1) initialization of sparse map
PTAM algorithm utilizes standard stereo camera algorithm model to set up current environment initialization map, and combination newly increases key frame continuous renewal three-dimensional map on this basis; In the initialization procedure of map, by artificially selecting two independent key frames, utilize FAST corners Matching relation in image, adopt the estimation realizing the important matrix F between above-mentioned key frame based on the conforming five-spot of stochastic sampling, and calculate the three-dimensional coordinate at current signature point place, meanwhile, set up current consistance plane in conjunction with the suitable spatial point of RANSAC algorithm picks, to determine overall world coordinate system, realize the initialization of map;
(2) Camera pose estimation
The system acquires the texture information of the current environment with the camera, builds a four-level Gaussian image pyramid, extracts feature information from the current image with the FAST-10 corner detection algorithm, and establishes data association between corner features by block matching; on this basis, according to the current projection error, the pose estimation model is established; its mathematical description is:

$$\xi = \arg\min_{\xi} \sum_j \mathrm{Obj}\left(\frac{|e_j|}{\sigma_j}, \sigma_T\right)$$

$$e_j = \begin{bmatrix} u_i \\ v_i \end{bmatrix} - K\pi(\exp(\hat{\xi})\, p)$$

In the formula, e_j is the projection error, Obj(·, σ_T) is the Tukey biweight objective function, σ_T is the unbiased estimate of the standard deviation of the feature-point matches, ξ is the six-tuple representation of the current pose, and ξ̂ is the matrix formed from ξ as in step 2;
According to the above pose estimation model, 50 matched feature points on the top level of the image pyramid are chosen to obtain an initial estimate of the camera pose; then, combining this initial pose, the algorithm performs an epipolar search to establish sub-pixel-precision matches between corner features in the image pyramid, and feeds these matches into the pose estimation model to relocalize the camera accurately;
(3) Camera pose optimization
After initialization, the map-building thread waits for new keyframes; when the number of image frames between the camera and the current keyframe exceeds a threshold and the camera tracking quality is good, a keyframe-addition process runs automatically; the system then applies the Shi-Tomasi score to all FAST corners in the newly added keyframe to obtain the corner features with salient characteristics, selects the nearest keyframe, establishes feature-point mappings by epipolar search and block matching, relocalizes the camera accurately with the pose estimation model, and simultaneously projects the matched points into space to generate the current global three-dimensional map of the environment;
In order to realize the maintenance of global map, in the process that the key frame that map building thread waits is new enters, system utilizes local and the Levenberg-Marquardt boundling adjustment algorithm of the overall situation to realize the global coherency optimization of current map; The mathematical description of this boundling adjustment algorithm is:
$$\{\{\xi_2 \ldots \xi_N\}, \{p_1 \ldots p_M\}\} = \mathop{\arg\min}_{\{\{\xi\},\{p\}\}} \sum_{i=1}^{N} \sum_{j \in S_i} \mathrm{Obj}\!\left(\frac{|e_{ji}|}{\sigma_{ji}},\ \sigma_T\right)$$
In the formula, σ_ji is the unbiased estimate of the matching standard deviation of FAST feature point j in the i-th key frame, ξ_i is the 6-vector representation of the i-th key frame pose, and p_j is a point in the global map.
3. The indoor environment three-dimensional reconstruction method based on the variational mechanism according to claim 1, characterized in that the depth map estimation model based on the variational mechanism in step 4 is established and solved as follows:
The depth map estimation model based on the variational mechanism establishes a data penalty term under the brightness constancy assumption, and a variational regularization term ensures the smoothness of the current depth map. Its mathematical model is as follows:
$$E_d = \int_\Omega \left( E_{data} + \lambda E_{reg} \right) dx$$
In the formula, λ is the weight coefficient between the data penalty term E_data and the variational regularization term E_reg, and Ω is the domain of the depth map;
The current key frame is chosen as the reference frame I_r of the depth map estimation algorithm. Using its adjacent image sequence I = {I_1, I_2, ..., I_n} in conjunction with the projection model, the data penalty term E_data is established. Its mathematical description is:
$$E_{data} = \frac{1}{|I(r)|} \sum_{I_i \in I} \left| I_r(x) - I_i(x') \right|$$
In the formula, |I(r)| is the number of image frames in the current sequence that share overlapping information with the reference frame, and x' is the projection into I_i of the reference-frame pixel x at depth d, that is:
$$x' = \pi^{-1}\!\left( K\, T_r^i\, \pi(x, d) \right)$$
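The data term above can be sketched as an average absolute photometric error over the overlapping views. The warp functions stand in for the projection x' above and are passed in as callables; nearest-neighbour sampling is used here for simplicity, an assumption since the patent does not state the interpolation scheme.

```python
import numpy as np

def data_term(I_ref, views, warps, x, d):
    """Average absolute photometric error of reference pixel x at hypothesised
    depth d over all views in which the projection x' falls inside the image."""
    errs = []
    for I_i, warp in zip(views, warps):
        xp = warp(x, d)                                  # projected coordinate x' in view i
        xi, yi = int(round(xp[0])), int(round(xp[1]))    # nearest-neighbour sample
        if 0 <= yi < I_i.shape[0] and 0 <= xi < I_i.shape[1]:
            errs.append(abs(float(I_ref[x[1], x[0]]) - float(I_i[yi, xi])))
    return sum(errs) / len(errs) if errs else np.inf
```

Evaluating this term over a range of depth hypotheses gives the per-pixel cost volume that the exhaustive search in step (2) below minimizes.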
Under the depth map smoothness assumption, in order to preserve the discontinuities at object boundaries in the image, a weighted Huber operator is introduced to build the variational regularization term. Its mathematical description is:
$$E_{reg} = g(u)\, \|\nabla d(u)\|_\alpha$$
In the formula, ∇d is the gradient of the depth map, g(u) is the per-pixel gradient weight coefficient with g(u) = exp(−a‖∇I_r(u)‖), and the Huber operator ‖x‖_α is defined as:
$$\|x\|_\alpha = \begin{cases} \dfrac{\|x\|^2}{2\alpha}, & \|x\| \le \alpha \\[4pt] \|x\| - \dfrac{\alpha}{2}, & \text{otherwise} \end{cases}$$
In the formula, α is a constant;
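The Huber operator is straightforward to sketch; note that the two branches agree (both equal α/2) at ‖x‖ = α, so the operator is continuously differentiable.

```python
import numpy as np

def huber(x, alpha):
    """Huber operator ||x||_alpha: quadratic near zero, linear in the tail."""
    n = np.linalg.norm(x)
    return n * n / (2 * alpha) if n <= alpha else n - alpha / 2
```

The quadratic branch smooths small gradients (noise), while the linear tail penalizes large gradients only mildly, which is what lets depth discontinuities survive the regularization.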
According to the Legendre-Fenchel transform, the energy function is transformed into:
$$g\|\nabla d\|_\alpha = \langle g \nabla d,\, q \rangle - \delta(q) - \frac{\alpha}{2}\|q\|^2$$
In the formula, δ(q) is the indicator function of the unit ball:
$$\delta(q) = \begin{cases} 0, & \|q\| \le 1 \\ \infty, & \text{otherwise} \end{cases}$$
Since the above mathematical model is complex to solve and computationally expensive, an auxiliary variable h is introduced to build a convex optimization model, and an alternating descent method is adopted to optimize the model. The detailed process is as follows:
(1) Fixing h, solve:
$$\mathop{\arg\max}_{q}\ \mathop{\arg\min}_{d}\ E_{d,q}$$
$$E_{d,q} = \int_\Omega \left( \langle g \nabla d,\, q \rangle + \frac{1}{2\theta}(d - h)^2 - \delta(q) - \frac{\alpha}{2}\|q\|^2 \right) dx$$
In the formula, g is the gradient weight coefficient of the variational regularization term, and θ is the coefficient of the quadratic coupling term;
According to the Lagrangian extremum method, the conditions for the above energy function to reach an extremum are:
$$\frac{\partial E_{d,q}}{\partial q} = g \nabla d - \alpha q = 0$$
$$\frac{\partial E_{d,q}}{\partial d} = g\, \mathrm{div}\, q + \frac{1}{\theta}(d - h) = 0$$
In the formula, div q is the divergence of q;
Combining the discretized description of the partial derivatives, the above extremum conditions can be expressed as:
$$\frac{q^{n+1} - q^n}{\varepsilon_q} = g \nabla d - \alpha q^{n+1}$$
$$\frac{d^{n+1} - d^n}{\varepsilon_d} = g\, \mathrm{div}\, q + \frac{1}{\theta}(d^{n+1} - h)$$
A primal-dual algorithm is adopted to perform the iterative optimization of the energy function, that is:
$$q^{n+1} = \frac{\left(q^n + \varepsilon_q\, g \nabla d^n\right)/\left(1 + \varepsilon_q \alpha\right)}{\max\!\left(1,\ \left\|\left(q^n + \varepsilon_q\, g \nabla d^n\right)/\left(1 + \varepsilon_q \alpha\right)\right\|\right)}$$
$$d^{n+1} = \frac{d^n + \varepsilon_d \left( g\, \mathrm{div}\, q^{n+1} + h^n/\theta \right)}{1 + \varepsilon_d/\theta}$$
In the formula, ε_q and ε_d are constants denoting the gradient step sizes of the maximization and minimization updates, respectively;
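The primal-dual updates above can be sketched on a discrete pixel grid as follows. The forward/backward difference grid operators are standard choices, assumed here since the patent gives only the update equations.

```python
import numpy as np

def grad(d):
    """Forward-difference gradient of a 2-D field, zero at the far border."""
    gx = np.zeros_like(d); gy = np.zeros_like(d)
    gx[:, :-1] = d[:, 1:] - d[:, :-1]
    gy[:-1, :] = d[1:, :] - d[:-1, :]
    return gx, gy

def div(qx, qy):
    """Backward-difference divergence (adjoint of grad for q vanishing on the far border)."""
    dx = np.zeros_like(qx); dy = np.zeros_like(qy)
    dx[:, 0] = qx[:, 0]; dx[:, 1:] = qx[:, 1:] - qx[:, :-1]
    dy[0, :] = qy[0, :]; dy[1:, :] = qy[1:, :] - qy[:-1, :]
    return dx + dy

def primal_dual_step(d, qx, qy, h, g, alpha, theta, eps_q, eps_d):
    """One iteration: dual ascent on q (with projection onto ||q|| <= 1),
    then primal descent on d, matching the two update equations above."""
    gx, gy = grad(d)
    qx = (qx + eps_q * g * gx) / (1 + eps_q * alpha)
    qy = (qy + eps_q * g * gy) / (1 + eps_q * alpha)
    norm = np.maximum(1.0, np.sqrt(qx ** 2 + qy ** 2))  # reprojection onto the unit ball
    qx, qy = qx / norm, qy / norm
    d = (d + eps_d * (g * div(qx, qy) + h / theta)) / (1 + eps_d / theta)
    return d, qx, qy
```

A constant depth map with h = d and q = 0 is a fixed point of the iteration, which is a quick way to verify the update algebra.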
(2) Fixing d, solve:
$$\mathop{\arg\min}_{h}\ E_h$$
$$E_h = \int_\Omega \left( \frac{1}{2\theta}(d - h)^2 + \frac{\lambda}{|I(r)|} \sum_{i=0}^{n} \left| I_i(x) - I_{ref}(x, h) \right| \right) dx$$
In the solution of the above energy function, in order to effectively reduce the complexity of the algorithm while preserving local detail in the reconstruction, the depth range [d_min, d_max] is divided into S sample planes, and an exhaustive search is adopted to obtain the optimal solution of the current energy function. The step length is chosen as:
$$d_{inc}^{k} = \frac{S\, d_{min}\, d_{max}}{(S - k)\, d_{min} + d_{max}}$$
In the formula, d_inc^k is the interval between the k-th and (k−1)-th sample planes.
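The step-length formula spaces the sample planes more finely near the camera, consistent with sampling uniformly in inverse depth. The following sketch adopts that inverse-depth spacing as an assumption and performs the exhaustive search over the S planes.

```python
import numpy as np

def best_depth(cost, d_min, d_max, S):
    """Exhaustive search over S depth planes sampled uniformly in inverse
    depth (finer spacing near the camera); returns the minimiser of `cost`."""
    inv = np.linspace(1.0 / d_min, 1.0 / d_max, S)   # uniform in inverse depth
    planes = 1.0 / inv
    costs = np.array([cost(d) for d in planes])
    return planes[np.argmin(costs)]
```

In the full algorithm, `cost` would be the per-pixel energy E_h evaluated at each hypothesised depth; here a toy quadratic is enough to verify that the search recovers the minimum.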
CN201310173608.XA 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism Expired - Fee Related CN103247075B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310173608.XA CN103247075B (en) 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism


Publications (2)

Publication Number Publication Date
CN103247075A CN103247075A (en) 2013-08-14
CN103247075B true CN103247075B (en) 2015-08-19

Family

ID=48926580

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310173608.XA Expired - Fee Related CN103247075B (en) 2013-05-13 2013-05-13 Based on the indoor environment three-dimensional rebuilding method of variation mechanism

Country Status (1)

Country Link
CN (1) CN103247075B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701811A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction method based on RGB-IR camera
CN108645398A (en) * 2018-02-09 2018-10-12 深圳积木易搭科技技术有限公司 A kind of instant positioning and map constructing method and system based on structured environment

Families Citing this family (72)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104427230B (en) * 2013-08-28 2017-08-25 北京大学 The method of augmented reality and the system of augmented reality
US9367922B2 (en) * 2014-03-06 2016-06-14 Nec Corporation High accuracy monocular moving object localization
CN103914874B (en) * 2014-04-08 2017-02-01 中山大学 Compact SFM three-dimensional reconstruction method without feature extraction
CN103942832B (en) * 2014-04-11 2016-07-06 浙江大学 A kind of indoor scene real-time reconstruction method based on online structural analysis
CN103901891A (en) * 2014-04-12 2014-07-02 复旦大学 Dynamic particle tree SLAM algorithm based on hierarchical structure
WO2016078728A1 (en) 2014-11-21 2016-05-26 Metaio Gmbh Method and system for determining spatial coordinates of a 3d reconstruction of at least part of a real object at absolute spatial scale
CN104463962B (en) * 2014-12-09 2017-02-22 合肥工业大学 Three-dimensional scene reconstruction method based on GPS information video
CN104537709B (en) * 2014-12-15 2017-09-29 西北工业大学 It is a kind of that method is determined based on the real-time three-dimensional reconstruction key frame that pose changes
CN105825520A (en) * 2015-01-08 2016-08-03 北京雷动云合智能技术有限公司 Monocular SLAM (Simultaneous Localization and Mapping) method capable of creating large-scale map
CN105869136A (en) * 2015-01-22 2016-08-17 北京雷动云合智能技术有限公司 Collaborative visual SLAM method based on multiple cameras
CN104881029B (en) * 2015-05-15 2018-01-30 重庆邮电大学 Mobile Robotics Navigation method based on a point RANSAC and FAST algorithms
CN105654492B (en) * 2015-12-30 2018-09-07 哈尔滨工业大学 Robust real-time three-dimensional method for reconstructing based on consumer level camera
CN105678754B (en) * 2015-12-31 2018-08-07 西北工业大学 A kind of unmanned plane real-time map method for reconstructing
CN105513083B (en) * 2015-12-31 2019-02-22 新浪网技术(中国)有限公司 A kind of PTAM video camera tracking method and device
CN105678842A (en) * 2016-01-11 2016-06-15 湖南拓视觉信息技术有限公司 Manufacturing method and device for three-dimensional map of indoor environment
CN105686936B (en) * 2016-01-12 2017-12-29 浙江大学 A kind of acoustic coding interactive system based on RGB-IR cameras
CN105928505B (en) * 2016-04-19 2019-01-29 深圳市神州云海智能科技有限公司 The pose of mobile robot determines method and apparatus
CN105856230B (en) * 2016-05-06 2017-11-24 简燕梅 A kind of ORB key frames closed loop detection SLAM methods for improving robot pose uniformity
CN106052674B (en) * 2016-05-20 2019-07-26 青岛克路德机器人有限公司 A kind of SLAM method and system of Indoor Robot
CN105955273A (en) * 2016-05-25 2016-09-21 速感科技(北京)有限公司 Indoor robot navigation system and method
CN106097304B (en) * 2016-05-31 2019-04-23 西北工业大学 A kind of unmanned plane real-time online ground drawing generating method
CN106127739B (en) * 2016-06-16 2021-04-27 华东交通大学 Monocular vision combined RGB-D SLAM method
CN106289099B (en) * 2016-07-28 2018-11-20 汕头大学 A kind of single camera vision system and the three-dimensional dimension method for fast measuring based on the system
CN106485744B (en) * 2016-10-10 2019-08-20 成都弥知科技有限公司 A kind of synchronous superposition method
CN106780576B (en) * 2016-11-23 2020-03-17 北京航空航天大学 RGBD data stream-oriented camera pose estimation method
CN106780588A (en) * 2016-12-09 2017-05-31 浙江大学 A kind of image depth estimation method based on sparse laser observations
CN106595601B (en) * 2016-12-12 2020-01-07 天津大学 Accurate repositioning method for camera pose with six degrees of freedom without hand-eye calibration
CN106529838A (en) * 2016-12-16 2017-03-22 湖南拓视觉信息技术有限公司 Virtual assembling method and device
CN106875437B (en) * 2016-12-27 2020-03-17 北京航空航天大学 RGBD three-dimensional reconstruction-oriented key frame extraction method
CN106940186B (en) * 2017-02-16 2019-09-24 华中科技大学 A kind of robot autonomous localization and navigation methods and systems
CN106875446B (en) * 2017-02-20 2019-09-20 清华大学 Camera method for relocating and device
CN106803275A (en) * 2017-02-20 2017-06-06 苏州中科广视文化科技有限公司 Estimated based on camera pose and the 2D panoramic videos of spatial sampling are generated
CN106997614B (en) * 2017-03-17 2021-07-20 浙江光珀智能科技有限公司 Large-scale scene 3D modeling method and device based on depth camera
CN108629843B (en) * 2017-03-24 2021-07-13 成都理想境界科技有限公司 Method and equipment for realizing augmented reality
US10482632B2 (en) 2017-04-28 2019-11-19 Uih America, Inc. System and method for image reconstruction
CN107481279B (en) * 2017-05-18 2020-07-07 华中科技大学 Monocular video depth map calculation method
CN107292949B (en) * 2017-05-25 2020-06-16 深圳先进技术研究院 Three-dimensional reconstruction method and device of scene and terminal equipment
WO2018214086A1 (en) * 2017-05-25 2018-11-29 深圳先进技术研究院 Method and apparatus for three-dimensional reconstruction of scene, and terminal device
CN107160395B (en) * 2017-06-07 2020-10-16 中国人民解放军装甲兵工程学院 Map construction method and robot control system
EP3418976A1 (en) * 2017-06-22 2018-12-26 Thomson Licensing Methods and devices for encoding and reconstructing a point cloud
CN109254579B (en) * 2017-07-14 2022-02-25 上海汽车集团股份有限公司 Binocular vision camera hardware system, three-dimensional scene reconstruction system and method
CN107506040A (en) * 2017-08-29 2017-12-22 上海爱优威软件开发有限公司 A kind of space path method and system for planning
CN107657640A (en) * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol inspection management method based on ORB SLAM
CN107909643B (en) * 2017-11-06 2020-04-24 清华大学 Mixed scene reconstruction method and device based on model segmentation
CN107862720B (en) * 2017-11-24 2020-05-22 北京华捷艾米科技有限公司 Pose optimization method and pose optimization system based on multi-map fusion
CN107818592B (en) * 2017-11-24 2022-04-01 北京华捷艾米科技有限公司 Method, system and interactive system for collaborative synchronous positioning and map construction
CN107833245B (en) * 2017-11-28 2020-02-07 北京搜狐新媒体信息技术有限公司 Monocular visual feature point matching-based SLAM method and system
CN108171787A (en) * 2017-12-18 2018-06-15 桂林电子科技大学 A kind of three-dimensional rebuilding method based on the detection of ORB features
CN108062537A (en) * 2017-12-29 2018-05-22 幻视信息科技(深圳)有限公司 A kind of 3d space localization method, device and computer readable storage medium
CN108242079B (en) * 2017-12-30 2021-06-25 北京工业大学 VSLAM method based on multi-feature visual odometer and graph optimization model
CN108154531B (en) * 2018-01-03 2021-10-08 深圳北航新兴产业技术研究院 Method and device for calculating area of body surface damage region
CN108447116A (en) * 2018-02-13 2018-08-24 中国传媒大学 The method for reconstructing three-dimensional scene and device of view-based access control model SLAM
CN110555883B (en) * 2018-04-27 2022-07-22 腾讯科技(深圳)有限公司 Repositioning method and device for camera attitude tracking process and storage medium
CN108898669A (en) * 2018-07-17 2018-11-27 网易(杭州)网络有限公司 Data processing method, device, medium and calculating equipment
CN109191526B (en) * 2018-09-10 2020-07-07 杭州艾米机器人有限公司 Three-dimensional environment reconstruction method and system based on RGBD camera and optical encoder
CN110966917A (en) * 2018-09-29 2020-04-07 深圳市掌网科技股份有限公司 Indoor three-dimensional scanning system and method for mobile terminal
CN109870118B (en) * 2018-11-07 2020-09-11 南京林业大学 Point cloud collection method for green plant time sequence model
CN109697753B (en) * 2018-12-10 2023-10-03 智灵飞(北京)科技有限公司 Unmanned aerial vehicle three-dimensional reconstruction method based on RGB-D SLAM and unmanned aerial vehicle
CN109739079B (en) * 2018-12-25 2022-05-10 九天创新(广东)智能科技有限公司 Method for improving VSLAM system precision
CN110059651B (en) * 2019-04-24 2021-07-02 北京计算机技术及应用研究所 Real-time tracking and registering method for camera
CN112634371B (en) * 2019-09-24 2023-12-15 阿波罗智联(北京)科技有限公司 Method and device for outputting information and calibrating camera
CN111145238B (en) * 2019-12-12 2023-09-22 中国科学院深圳先进技术研究院 Three-dimensional reconstruction method and device for monocular endoscopic image and terminal equipment
CN111340864B (en) * 2020-02-26 2023-12-12 浙江大华技术股份有限公司 Three-dimensional scene fusion method and device based on monocular estimation
CN111652901B (en) * 2020-06-02 2021-03-26 山东大学 Texture-free three-dimensional object tracking method based on confidence coefficient and feature fusion
CN112221132A (en) * 2020-10-14 2021-01-15 王军力 Method and system for applying three-dimensional weiqi to online game
CN112348868A (en) * 2020-11-06 2021-02-09 养哇(南京)科技有限公司 Method and system for recovering monocular SLAM scale through detection and calibration
CN112348869A (en) * 2020-11-17 2021-02-09 的卢技术有限公司 Method for recovering monocular SLAM scale through detection and calibration
CN112614185B (en) * 2020-12-29 2022-06-21 浙江商汤科技开发有限公司 Map construction method and device and storage medium
CN112597334B (en) * 2021-01-15 2021-09-28 天津帕克耐科技有限公司 Data processing method of communication data center
CN113034606A (en) * 2021-02-26 2021-06-25 嘉兴丰鸟科技有限公司 Motion recovery structure calculation method
CN113902847B (en) * 2021-10-11 2024-04-16 岱悟智能科技(上海)有限公司 Monocular depth image pose optimization method based on three-dimensional feature constraint
CN117214860B (en) * 2023-08-14 2024-04-19 北京科技大学顺德创新学院 Laser radar odometer method based on twin feature pyramid and ground segmentation

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07182541A (en) * 1993-12-21 1995-07-21 Nec Corp Preparing method for three-dimensional model
CN101369348A (en) * 2008-11-07 2009-02-18 上海大学 Novel sight point reconstruction method for multi-sight point collection/display system of convergence type camera
CN102800127A (en) * 2012-07-18 2012-11-28 清华大学 Light stream optimization based three-dimensional reconstruction method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Taguchi, Y., et al. SLAM using both points and planes for hand-held 3D sensors. Mixed and Augmented Reality (ISMAR), 2012 IEEE International Symposium on. 2012, pp. 321-322. *
Liu Xin, et al. Fast object reconstruction based on GPU and Kinect. Acta Automatica Sinica. 2012, pp. 1288-1297. *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105701811A (en) * 2016-01-12 2016-06-22 浙江大学 Sound coding interaction method based on RGB-IR camera
CN105701811B (en) * 2016-01-12 2018-05-22 浙江大学 A kind of acoustic coding exchange method based on RGB-IR cameras
CN108645398A (en) * 2018-02-09 2018-10-12 深圳积木易搭科技技术有限公司 A kind of instant positioning and map constructing method and system based on structured environment

Also Published As

Publication number Publication date
CN103247075A (en) 2013-08-14

Similar Documents

Publication Publication Date Title
CN103247075B (en) Based on the indoor environment three-dimensional rebuilding method of variation mechanism
CN111968129B (en) Instant positioning and map construction system and method with semantic perception
CN109003325B (en) Three-dimensional reconstruction method, medium, device and computing equipment
CN107564061B (en) Binocular vision mileage calculation method based on image gradient joint optimization
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN109272537B (en) Panoramic point cloud registration method based on structured light
CN103400409B (en) A kind of coverage 3D method for visualizing based on photographic head attitude Fast estimation
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
CN105096386B (en) A wide range of complicated urban environment geometry map automatic generation method
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
Turner et al. Fast, automated, scalable generation of textured 3D models of indoor environments
CN110189399B (en) Indoor three-dimensional layout reconstruction method and system
CN106485675B (en) A kind of scene flows estimation method smooth based on 3D local stiffness and depth map guidance anisotropy
CN112001926B (en) RGBD multi-camera calibration method, system and application based on multi-dimensional semantic mapping
CN104376596B (en) A kind of three-dimensional scene structure modeling and register method based on single image
CN108053437B (en) Three-dimensional model obtaining method and device based on posture
CN107680159B (en) Space non-cooperative target three-dimensional reconstruction method based on projection matrix
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN106960442A (en) Based on the infrared night robot vision wide view-field three-D construction method of monocular
Honegger et al. Embedded real-time multi-baseline stereo
WO2013112749A1 (en) 3d body modeling, from a single or multiple 3d cameras, in the presence of motion
Yang The study and improvement of Augmented reality based on feature matching
Yang et al. Reactive obstacle avoidance of monocular quadrotors with online adapted depth prediction network
CN107481313A (en) A kind of dense three-dimensional object reconstruction method based on study available point cloud generation
CN102722697A (en) Unmanned aerial vehicle autonomous navigation landing visual target tracking method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150819

Termination date: 20200513