CN103106688B - Indoor three-dimensional scene reconstruction method based on a two-layer registration method - Google Patents

Indoor three-dimensional scene reconstruction method based on a two-layer registration method

Info

Publication number: CN103106688B
Application number: CN201310053829.3A
Authority: CN (China)
Prior art keywords: point, Kinect, image, matrix, coordinate system
Prior art date: 2013-02-20
Legal status: Expired - Fee Related (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Other languages: Chinese (zh)
Other versions: CN103106688A
Inventors: 贾松敏 (Jia Songmin), 郭兵 (Guo Bing), 王可 (Wang Ke), 李秀智 (Li Xiuzhi)
Assignee (current and original): Beijing University of Technology
Filing history: application filed by Beijing University of Technology; priority to CN201310053829.3A; publication of CN103106688A; application granted; publication of CN103106688B

Abstract

The invention belongs to the intersection of computer vision and intelligent robotics and relates to a method for reconstructing large-scale indoor scenes based on a two-layer registration method. It addresses the problems of existing indoor scene reconstruction methods: expensive equipment, high computational complexity, and poor real-time performance. The method comprises: Kinect calibration; SURF feature point extraction and matching; mapping of matched feature point pairs to three-dimensional point pairs; two-layer registration of three-dimensional points based on RANSAC and ICP; and scene updating. The invention uses a Kinect to acquire environmental data and proposes a two-layer registration method based on RANSAC and ICP, achieving economical and fast indoor three-dimensional scene reconstruction and effectively improving the real-time performance and reconstruction accuracy of the algorithm. The method is applicable to the service robot field and to other computer vision fields related to three-dimensional scene reconstruction.

Description

Indoor three-dimensional scene reconstruction method based on a two-layer registration method
Technical field
The invention belongs to the intersection of computer vision and intelligent robotics, relates to three-dimensional reconstruction of indoor environments, and particularly relates to a method for reconstructing large-scale indoor scenes based on a two-layer registration method.
Background art
In recent years, with the development of information technology, demand for three-dimensional scene reconstruction has grown steadily, and economical, fast indoor three-dimensional scene reconstruction has become a key technical problem urgently awaiting solution in many fields. In the home service robot field, the market demand for intelligent home service robots driven by population aging is increasingly strong. At present, most service robots on the market can only provide a single, simple service in a specific scene because they cannot perceive the three-dimensional environment, and this problem severely restricts the development of the home service robot industry.
Three-dimensional scene reconstruction is one of the research hotspots in fields such as computer vision, intelligent robotics, and virtual reality. Traditional three-dimensional reconstruction methods can be divided into two classes according to how the three-dimensional data are acquired: methods based on laser scanning and methods based on vision. For the problem of large-scale indoor three-dimensional scene reconstruction, existing methods still have considerable limitations.
Laser-based three-dimensional reconstruction acquires the depth data or range images of a scene with a laser scanner and aligns frame data with global data by registering the depth data. This yields only the geometric information of the scene; texture information must be captured with an additional camera and mapped onto the reconstructed geometric model, which requires solving a photo-to-geometry mapping problem. Although laser-based reconstruction can produce geometric models of fairly high accuracy, texturing them is difficult, so generating realistic three-dimensional models is hard. Laser equipment is also expensive; it is generally applied in fields such as digital heritage, topographic survey, and digital museums, and is difficult to popularize in large-scale civil applications.
Vision-based three-dimensional reconstruction uses computer vision methods to rebuild object models: a digital camera serves as the image sensor, image processing and vision computing techniques are combined to perform non-contact three-dimensional measurement, and a computer program recovers the three-dimensional information of the object. Its advantages are that it is not limited by object shape, reconstruction is fast, and fully or semi-automatic modeling is possible; it is an important development direction for three-dimensional reconstruction. According to the number of cameras used, it can be divided into monocular, binocular, trinocular, and multi-view methods. Monocular methods use a single camera and derive depth from two-dimensional image cues such as shading, texture, focus, and contours. Their advantage is a simple setup: a three-dimensional model can be reconstructed from one or a few images. However, they usually require idealized conditions that real scenes rarely satisfy, and the reconstruction quality is mediocre. Binocular methods, also called stereo vision, convert binocular disparity into depth. Their advantage is maturity: they can stably produce good reconstructions. Unfortunately the computational load remains high, and the success rate drops when the baseline is large. The basic idea of multi-view methods is to add cameras to provide extra constraints and thereby avoid the problems of binocular vision. They reconstruct better than binocular methods, but the setup is more complex, the cost is higher, and control is more difficult.
In recent years, the development of RGB-D (color plus depth) sensor technology, such as the Kinect released by Microsoft, has offered a new option for three-dimensional scene reconstruction. Research on Kinect-based reconstruction has achieved results for single objects, while research on indoor scene reconstruction is still at an early stage. Richard A. Newcombe et al. use a Kinect to acquire environmental information and apply the ICP method to reconstruct it in three dimensions. Because their method runs on GPU hardware, it places high demands on the GPU configuration and, constrained by GPU memory, can only reconstruct a volume of 3 m × 3 m × 3 m, which cannot meet the needs of large-scale indoor scene creation.
Summary of the invention
To overcome the problems in the above three-dimensional reconstruction methods, the invention provides an economical and fast indoor three-dimensional scene reconstruction method based on a two-layer registration method.
The technical solution adopted by the invention is as follows:
A Kinect is used to acquire RGB and depth image information of the environment. SURF feature points are extracted from the RGB images, and feature point matches serve as association data. Combining the Random Sample Consensus (RANSAC) method with the Iterative Closest Point (ICP) method, a two-layer registration method for three-dimensional data is proposed. The method mainly comprises the following: first, RANSAC is used to obtain the rotation-translation transformation matrix between two adjacent frames (frame-to-frame) of three-dimensional data, and these results are accumulated to track the relative pose change of the Kinect. By setting thresholds, a frame is added whenever the Kinect pose change exceeds a given size; that frame is designated a key frame (KeyFrame), completing the initial registration. Second, ICP is used to obtain the precise transformation matrix between adjacent key frames (KeyFrame-to-KeyFrame), completing the fine registration. The KeyFrame data and the transformation matrices between adjacent KeyFrames obtained by the two-layer registration are then used to reconstruct the three-dimensional environment.
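For illustration, a minimal Python sketch of this two-layer pipeline follows, assuming hypothetical helpers ransac_transform() and icp_refine() that implement the two layers detailed below; the thresholds mirror the values given in the embodiment.

```python
import numpy as np

# Sketch of the two-layer registration pipeline. frames are (RGB, depth)
# pairs, poses are 4x4 homogeneous matrices; ransac_transform() and
# icp_refine() are assumed helpers standing in for steps described later.
TRANS_THRESH, ANGLE_THRESH = 0.4, 40.0   # key-frame thresholds from the text

def reconstruct(frames, ransac_transform, icp_refine):
    keyframes = [frames[0]]
    scene = [(frames[0], np.eye(4))]     # global model, built from key frames only
    T_acc = np.eye(4)                    # accumulated pose w.r.t. the last key frame
    for prev, cur in zip(frames, frames[1:]):
        T_acc = ransac_transform(prev, cur) @ T_acc        # layer 1: frame-to-frame
        trans = np.linalg.norm(T_acc[:3, 3])
        cos_a = np.clip((np.trace(T_acc[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        if trans > TRANS_THRESH or np.degrees(np.arccos(cos_a)) > ANGLE_THRESH:
            T_fine = icp_refine(keyframes[-1], cur, init=T_acc)  # layer 2: ICP
            keyframes.append(cur)
            scene.append((cur, T_fine))  # update the scene from the new key frame
            T_acc = np.eye(4)
    return scene, keyframes
```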
The indoor three-dimensional scene reconstruction method based on the two-layer registration method comprises the following steps:
Step 1: Kinect calibration.
In image measurement and machine vision applications, a geometric model of camera imaging must be established to determine the relationship between the three-dimensional position of a point on an object surface and its corresponding point in the image; the parameters of this geometric model are the camera parameters. In most cases these parameters must be obtained through experiment and computation, a process known as camera calibration. Calibration of the camera parameters is a crucial step: the accuracy and stability of the calibration results directly affect the accuracy of the final output.
The Kinect is a motion-sensing peripheral for the Xbox 360 released by Microsoft that provides depth and color (RGB) image information simultaneously. Depth is acquired actively with an infrared camera; each frame consists of 640 × 480 pixels, the effective depth range is 0.5 to 4.0 m, the vertical field of view is 43°, and the horizontal field of view is 57°, allowing depth to be measured over an area of about 6 square meters. The Kinect is also equipped with a 640 × 480 pixel RGB camera. Providing RGB and depth information simultaneously is essential for three-dimensional reconstruction, since it makes it easy to align the depth information with the RGB information.
The calibration parameters of the Kinect sensor comprise three parts: the intrinsics of the infrared camera (depth sensor), the intrinsics of the RGB camera, and the extrinsics between the infrared camera and the RGB camera. The invention calibrates the RGB camera with Zhang Zhengyou's planar calibration method; the infrared intrinsics and the infrared-to-RGB extrinsics use the data provided officially by Microsoft.
Step 2: Feature point extraction and matching.
Feature extraction analyzes image information to decide whether each point in the image belongs to an image feature. Its result partitions the points of the image into subsets that typically correspond to isolated points, continuous curves, or continuous regions. SURF (Speeded-Up Robust Features) is currently among the most popular image feature methods: the extracted features are scale- and rotation-invariant and are also robust to illumination changes and to affine and perspective transformations. SURF matches or surpasses earlier comparable methods in repeatability, distinctiveness, and robustness, and has a clear advantage in computation speed.
The invention extracts SURF feature points from the RGB images, comprising two parts: feature point detection and feature point description. Feature point matching uses a nearest-neighbor search based on Euclidean distance over a K-D tree data structure, accepting or rejecting a match according to the distance ratio of the two nearest feature points.
Step 3: Mapping matched image points to three-dimensional coordinates.
According to the Kinect calibration model, the relationship between the image plane and three-dimensional point coordinates is established, and the projection model from a three-dimensional point to the image plane is determined, expressed by the function:
$$u = \pi(p)$$
where p is a three-dimensional point, u is an image-plane coordinate, and π(p) is the mapping function from a three-dimensional point to the image plane. Corresponding point pairs on the image plane are obtained by matching image feature points; the projection model then yields the three-dimensional coordinates corresponding to the matched feature points, giving the three-dimensional point pairs of the two frames.
Step 4: Two-layer registration of the three-dimensional points based on RANSAC and ICP.
Registration refers to matching data of the same region acquired by different imaging means to a common coordinate frame; it involves geometric correction, projective transformation, and unification of scale. The registration result is expressed as the matrix:
$$T_{cw} = [R_{cw}, t_{cw}]$$
where the subscript "cw" denotes the transformation from the world coordinate system to the current Kinect coordinate system, $R_{cw}$ is the rotation matrix from the world frame to the current frame, and $t_{cw}$ is the translation from the world frame to the current frame. $T_{cw}$ describes the rotation and translation of the Kinect in the world coordinate system. A point $p_c$ in the Kinect coordinate system is related to the world coordinate $p_w$ by:
$$p_c = T_{cw} \, p_w$$
To address the high complexity and heavy computation of three-dimensional point cloud registration, the invention proposes a two-layer registration method based on RANSAC and ICP, consisting of two parts: an initial registration and a fine registration. The initial registration uses RANSAC to obtain the KeyFrames and their relative transformation matrices; the fine registration uses ICP which, building on the initial registration, aligns the three-dimensional data points and provides accurate three-dimensional transformations for updating the scene.
Step 5: Scene update.
Each frame of three-dimensional data acquired by the Kinect contains roughly 250,000 points, and adjacent frames are highly redundant. To improve the clarity of the reconstruction, provide a concise description of the generated three-dimensional map, and reduce the memory burden on the system, the invention updates the three-dimensional scene from KeyFrame data only.
The beneficial effects of the invention are: a Kinect acquires the environmental data and, tailored to the characteristics of the Kinect sensor, a two-layer registration method based on RANSAC and ICP is proposed, achieving fast large-scale indoor three-dimensional scene reconstruction. This effectively addresses the cost and real-time problems of three-dimensional reconstruction methods and improves reconstruction accuracy.
Brief description of the drawings
Fig. 1 is a block diagram of the Kinect-based indoor three-dimensional scene reconstruction method;
Fig. 2 is a schematic diagram of the Kinect coordinate system;
Fig. 3 is a flow chart of the two-layer registration method based on RANSAC and ICP;
Fig. 4 shows the real environment used to create a three-dimensional scene with the invention: (a) the real experimental scene, (b) a two-dimensional geometric sketch of the experimental environment;
Fig. 5 shows the result of creating the three-dimensional scene with the invention.
Embodiment
The invention is described in further detail below with reference to the accompanying drawings. As shown in Fig. 1, the invention comprises the following steps:
Step 1: Kinect calibration. The specific method is as follows:
(1) Print a checkerboard template. The invention uses a sheet of A4 paper with a checkerboard spacing of 0.25 cm.
(2) Photograph the checkerboard from multiple angles. When shooting, the checkerboard should fill the screen as much as possible while every corner of the board remains visible; 8 template pictures are taken in total.
(3) Detect the feature points in the images, i.e. the corner points of the checkerboard.
(4) Obtain the Kinect calibration parameters.
The intrinsic matrix $K_{ir}$ of the infrared camera:

$$K_{ir} = \begin{bmatrix} f_u^{IR} & 0 & u_{IR} \\ 0 & f_v^{IR} & v_{IR} \\ 0 & 0 & 1 \end{bmatrix}$$

where $(f_u^{IR}, f_v^{IR})$ is the focal length of the infrared camera, with value (5, 5), and $(u_{IR}, v_{IR})$ is the image-plane center coordinate of the infrared camera, with value (320, 240).
The intrinsic matrix $K_c$ of the RGB camera:

$$K_c = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $(f_u, f_v)$ is the focal length of the RGB camera and $(u_0, v_0)$ is the center coordinate of the RGB camera's image plane.
The extrinsic parameters between the infrared camera and the RGB camera are:

$$T = [R_{IRc}, t_{IRc}]$$

where $R_{IRc}$ is the rotation matrix and $t_{IRc}$ is the translation vector; the parameters provided officially by Microsoft are used directly:

$$R_{IRc} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad t_{IRc} = [0.075 \ \ 0 \ \ 0]^{T}$$
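For illustration, the RGB-camera calibration with Zhang's planar method can be sketched with OpenCV as follows; the inner-corner count and the image file names are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

# Sketch of Zhang's planar calibration for the RGB camera (step 1).
pattern = (7, 7)          # inner corners of the printed checkerboard (assumed)
square = 0.0025           # square size in metres (0.25 cm, as stated above)
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * square

obj_pts, img_pts = [], []
for name in ["board%d.png" % i for i in range(8)]:   # the 8 template pictures
    gray = cv2.imread(name, cv2.IMREAD_GRAYSCALE)
    ok, corners = cv2.findChessboardCorners(gray, pattern)
    if ok:
        obj_pts.append(objp)
        img_pts.append(corners)

# K_c is the RGB intrinsic matrix [[f_u, 0, u_0], [0, f_v, v_0], [0, 0, 1]]
rms, K_c, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_pts, img_pts, (640, 480), None, None)
```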
In the invention the Kinect coordinate system is as shown in Fig. 2: the positive y-axis points up, the positive z-axis points forward, and the positive x-axis points to the right. The initial position of the Kinect is set as the origin of the world coordinate system, and the X, Y, Z directions of the world coordinate system coincide with the x, y, z directions of the Kinect at its initial position.
Step 2: Feature point extraction and matching. The method is as follows:
(1) Compute the integral image. The integral image is the cumulative sum of all pixels of a given gray-scale image; for a point X = (x, y) in the image, the integral $I_{\Sigma}(X)$ is:

$$I_{\Sigma}(X) = \sum_{i=0}^{i \le x} \sum_{j=0}^{j \le y} I(i, j)$$

where I(i, j) is the pixel value at pixel coordinate (i, j).
With the integral image, the sum of gray values over any rectangular area can be computed with 3 additions/subtractions, independent of the rectangle's area. As the following steps show, the convolution templates used in SURF feature extraction are box templates, so this greatly improves efficiency.
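A minimal sketch of the integral image and the three-operation rectangle sum:

```python
import numpy as np

# Integral image and constant-time rectangle sum (step 2(1)). The sum over any
# axis-aligned rectangle needs only three additions/subtractions, independent
# of its area, which is what makes the box filters below cheap.
def integral_image(img):
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, x0, y0, x1, y1):
    """Sum of img[y0:y1+1, x0:x1+1], computed from the integral image ii."""
    s = ii[y1, x1]
    if x0 > 0: s -= ii[y1, x0 - 1]
    if y0 > 0: s -= ii[y0 - 1, x1]
    if x0 > 0 and y0 > 0: s += ii[y0 - 1, x0 - 1]
    return s
```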
(2) Compute the approximate Hessian matrix $H_{approx}$. For a point X = (x, y) in image I, the Hessian matrix H(X, s) at X at scale s is defined as:

$$H(X, s) = \begin{bmatrix} L_{xx}(X, s) & L_{xy}(X, s) \\ L_{xy}(X, s) & L_{yy}(X, s) \end{bmatrix}$$

where $L_{xx}(X, s)$, $L_{xy}(X, s)$, $L_{yy}(X, s)$ are the convolutions of the Gaussian second-order partial derivatives with image I at X. Box filters are used to approximate the second-order Gaussian filters in the Hessian matrix. Denoting by $D_{xx}$, $D_{yy}$, $D_{xy}$ the values of the box filter templates convolved with the image, and substituting them for $L_{xx}$, $L_{yy}$, $L_{xy}$, the approximate Hessian matrix $H_{approx}$ is obtained, with determinant:

$$\det(H_{approx}) = D_{xx} D_{yy} - (w D_{xy})^2$$
where w is a weight coefficient, set to 0.9 in the implementation of the invention.
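Given box-filter response maps (for example computed with rect_sum above), the determinant response is then:

```python
import numpy as np

# Approximate Hessian response (step 2(2)): Dxx, Dyy, Dxy are arrays of
# box-filter responses of equal shape; w = 0.9 as stated in the text.
def hessian_response(Dxx, Dyy, Dxy, w=0.9):
    return Dxx * Dyy - (w * Dxy) ** 2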
(3) Locate the feature points. SURF detects feature points from the Hessian matrix, locating them at the local maxima of the Hessian matrix determinant.
Box filters of different sizes are applied to the original image to obtain a scale-image pyramid, and the extrema of the scale images at (X, s) are found from $H_{approx}$.
Box filters are used to build the scale space, selecting 4 layers of scale images in each octave; the construction parameters of the 4 octaves are given in Table 1.
Table 1: Sizes (unit: s) of the 16 templates in the first four octaves of the scale space
Extrema are found from the $H_{approx}$ matrix, and non-maximum suppression (keeping the maximum and setting other values to 0) is applied to each 3 × 3 × 3 local region of the 3-dimensional (X, s) scale space. A point whose response exceeds all 26 neighborhood values is selected as a feature point. The feature point is then localized precisely with a quadratic fit; the fitting function D(X) is:

$$D(X) = D + \frac{\partial D}{\partial X}^{T} X + \frac{1}{2} X^{T} \frac{\partial^{2} D}{\partial X^{2}} X$$

This yields the position and scale information (X, s) of the feature point.
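The 3 × 3 × 3 non-maximum suppression over the (X, s) response stack can be sketched as follows; the response threshold is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import maximum_filter

# 3x3x3 non-maximum suppression over the (scale, y, x) response stack
# (step 2(3)). `responses` is a 3-D array of det(H_approx) values.
def detect_extrema(responses, threshold=1e-3):
    local_max = maximum_filter(responses, size=3)
    peaks = (responses == local_max) & (responses > threshold)
    return np.argwhere(peaks)   # rows of (scale_index, y, x)
```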
(4) Determine the dominant orientation of the feature point. A circular neighborhood is processed with Haar wavelet filters to obtain the x- and y-direction responses at each point of the neighborhood. These responses are weighted with a Gaussian centered at the feature point (σ = 2s, where s is the scale of the feature point), and the longest resulting vector is found; its direction is the orientation of the feature point.
(5) Construct the feature description vector. A square neighborhood with side length 20s is centered at the feature point, with its y-axis aligned to the feature point orientation. The square is divided into 4 × 4 subregions, each processed with Haar wavelet filters (template size 2s × 2s). Let $d_x$ denote the horizontal Haar wavelet response and $d_y$ the vertical response. All $d_x$, $d_y$ are weighted with a Gaussian centered at the feature point, with σ = 3.3s. In each subregion, $d_x$, $d_y$, $|d_x|$, $|d_y|$ are summed, giving the 4-dimensional vector $V = (\Sigma d_x, \Sigma d_y, \Sigma |d_x|, \Sigma |d_y|)$. Concatenating the vectors of the 4 × 4 subregions yields a 64-dimensional vector that is rotation- and scale-invariant and, after normalization, invariant to illumination. This is the feature vector describing the feature point.
(6) Feature matching. The nearest-neighbor method based on Euclidean distance is adopted, searching the image to be matched with a K-D tree to find the two feature points closest in Euclidean distance to each feature point in the reference image; if the nearest distance divided by the second-nearest distance is below the set ratio threshold (0.7), the pair of matched points is accepted.
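An OpenCV sketch of SURF extraction and ratio-test matching follows; SURF ships only in the opencv-contrib "non-free" build, and the file names and Hessian threshold are illustrative assumptions.

```python
import cv2

# SURF detection and ratio-test matching (sub-steps (1)-(6)). If SURF is
# unavailable, another float descriptor such as cv2.SIFT_create() can stand in.
img1 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)  # illustrative file names
img2 = cv2.imread("frame2.png", cv2.IMREAD_GRAYSCALE)

surf = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
kp1, des1 = surf.detectAndCompute(img1, None)
kp2, des2 = surf.detectAndCompute(img2, None)

# FLANN's K-D tree index plays the role of the K-D tree search in the text.
flann = cv2.FlannBasedMatcher(dict(algorithm=1, trees=5), dict(checks=50))
matches = flann.knnMatch(des1, des2, k=2)

# Ratio test: accept if nearest distance < 0.7 x second-nearest distance.
good = [m for m, n in matches if m.distance < 0.7 * n.distance]
```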
Step 3: Mapping matched image points to three-dimensional coordinates.
From the calibration parameters, the mapping between points of the Kinect depth image and the RGB image is computed as follows:
For a point $p = (x_d, y_d)$ in the depth image, its coordinate $P_{3D} = (x, y, z)$ in the Kinect coordinate system is:

$$\begin{aligned} P_{3D}.x &= (x_d - u_{IR}) \cdot P_{3D}.z \,/\, f_u^{IR} \\ P_{3D}.y &= (y_d - v_{IR}) \cdot P_{3D}.z \,/\, f_v^{IR} \\ P_{3D}.z &= depth(x_d, y_d) \end{aligned}$$

where $P_{3D}.x$, $P_{3D}.y$, $P_{3D}.z$ are the coordinates x, y, z of $P_{3D} = (x, y, z)$, and $depth(x_d, y_d)$ is the depth value of point p in the depth image.
From the 3D coordinate corresponding to a pixel, the coordinate $(x_{rgb}, y_{rgb})$ in the RGB image is then derived. The computing formulas are:

$$\begin{aligned} x_{rgb} &= (P'_{3D}.x \cdot f_u \,/\, P'_{3D}.z) + u_0 \\ y_{rgb} &= (P'_{3D}.y \cdot f_v \,/\, P'_{3D}.z) + v_0 \end{aligned}$$

where $P'_{3D} = R_{IRc} \cdot P_{3D} + t_{IRc}$.
With these conversion relations, the matching point pairs obtained in step 2 are converted into three-dimensional point pairs.
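A sketch of this depth-to-RGB mapping follows; the numeric intrinsics are placeholders standing in for real calibration output.

```python
import numpy as np

# Depth pixel -> 3-D point -> RGB pixel, implementing the two formula groups
# above. All numeric values below are illustrative placeholders except the
# extrinsics, which are the Microsoft values stated in step 1.
f_ir = (580.0, 580.0)        # (f_u_IR, f_v_IR), placeholder focal lengths
c_ir = (320.0, 240.0)        # (u_IR, v_IR), as stated in step 1
f_c = (525.0, 525.0)         # (f_u, f_v) of the RGB camera, placeholder
c_c = (320.0, 240.0)         # (u_0, v_0), placeholder
R_IRc = np.eye(3)
t_IRc = np.array([0.075, 0.0, 0.0])

def depth_pixel_to_rgb(x_d, y_d, depth):
    """Back-project a depth pixel to 3-D and reproject it into the RGB image."""
    z = float(depth[y_d, x_d])
    p3d = np.array([(x_d - c_ir[0]) * z / f_ir[0],
                    (y_d - c_ir[1]) * z / f_ir[1],
                    z])
    p = R_IRc @ p3d + t_IRc                 # P'_3D, in the RGB camera frame
    x_rgb = p[0] * f_c[0] / p[2] + c_c[0]
    y_rgb = p[1] * f_c[1] / p[2] + c_c[1]
    return p3d, (x_rgb, y_rgb)
```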
Step 4: Two-layer registration of the three-dimensional points based on RANSAC and ICP. As shown in Fig. 3, it comprises the following steps:
(1) Initial registration. The corresponding point pairs produced by feature matching contain many mismatches. In the initial registration stage, RANSAC is applied to remove mismatched three-dimensional point pairs, iteratively finding the largest inlier set consistent with the transformation model and estimating the transformation matrix T. Accumulating the relative transformation matrices from the KeyFrame to the current data yields the transformation of the current Kinect relative to the KeyFrame. From this matrix, the translation and the rotation-angle magnitude of the Kinect are computed and compared with the set thresholds to decide whether the frame is chosen as a KeyFrame. In the embodiment, the translation threshold is set to 0.4 and the angle threshold to 40 degrees.
The detailed procedure of RANSAC is as follows (a code sketch follows the list):
1) From the initial N three-dimensional matched point pairs formed from the reference point set A and the point set B to be registered, randomly select 7 pairs of data;
2) Using the minimal-configuration 7-point method for solving the fundamental matrix, compute from the 7 selected pairs the transformation matrix $T_{AB}$ between the reference point set and the point set to be registered;
3) Using $T_{AB}$, transform the remaining N-7 three-dimensional points of the feature point set of the image to be registered (the point set B to be registered, containing N points) into the reference point cloud coordinate system;
4) Compute the coordinate errors between the transformed point set $P'_{N-7}$ and the reference point set;
5) Among the N matched point pairs, count the feature point pairs whose coordinate error is within a given threshold; denote this number m;
6) Repeat steps 1) to 5) n times (n is set by the user; in this embodiment the number of iterations is set to 50). The set achieving the largest m is the largest inlier set, whose members are the inliers; the remaining N-m pairs are mismatches, i.e. outliers. The least-squares solution of the transformation model is estimated over the largest inlier set and taken as the transformation matrix T of the current pair of adjacent frames.
(2) Fine registration. The initial registration yields the KeyFrames and their relative transformation matrices; the invention then uses ICP to compute the precise transformation matrices of the KeyFrames, taking the initial registration result as the prior transformation and obtaining the accurate KeyFrame-to-KeyFrame transformation matrices.
In the Kinect depth map, regions with pixel value 0 carry no valid measurement. To describe the valid and invalid information in the depth map, the following function is defined (reconstructed from the usage below, where M(u) = 1 marks a valid depth):

$$M(X) = \begin{cases} 1, & depth(X) \neq 0 \\ 0, & depth(X) = 0 \end{cases}$$

where X is an image-plane coordinate point.
To obtain the Kinect pose at time k from the relation between the Kinect coordinate system and the world coordinate system, the following energy function is established:

$$E = \min \sum_{p_k \in \Omega} \left\| T_{cw}^{k} p_w - p_k \right\|$$

where $p_w$ is a point in the world coordinate system, $p_k$ is the corresponding point in the current coordinate system, and Ω is the set of pixels in the image plane at time k that have valid depth values, that is:

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$
The energy function above is the mathematical description of the three-dimensional ICP method: minimizing it yields the Kinect pose at time k in the world coordinate system. The usual ICP algorithm assumes a relative pose, repeatedly establishes correspondences between the point clouds, and iterates by optimizing the corresponding-point error. The setting of the initial relative pose is therefore crucial: an inappropriate initial pose traps ICP in a local optimum and prevents a correct result. Moreover, for ICP on unordered point clouds, the space and time complexity grow sharply with the number of points, greatly reducing efficiency. The initial relative pose is thus the prerequisite for establishing correspondences between point clouds and plays a vital role in the ICP iteration. This method uses the relative transformation obtained by the initial registration as the initial relative pose, in order to obtain the optimal estimate of the current KeyFrame.
Let the offset of the Kinect between time k-1 and time k be $T_{inc}^{k}$; then

$$T_{cw}^{k} = T_{inc}^{k} \, T_{cw}^{k-1}$$

Let the rotation of the Kinect about the x, y, z axes be (α, β, γ) and its translation along the three directions be $(t_x, t_y, t_z)$. When these two vectors are small enough, a first-order Taylor expansion applies; setting $x = (\alpha, \beta, \gamma, t_x, t_y, t_z)$:

$$T_{inc}^{k} = \exp(x) = \begin{bmatrix} 1 & \gamma & -\beta & t_x \\ -\gamma & 1 & \alpha & t_y \\ \beta & -\alpha & 1 & t_z \end{bmatrix} = [R_{inc} \mid t_{inc}]$$
For a spatial point with world coordinate $p_w$ observed at time k, projecting it into the Kinect coordinate system at time k-1 gives $p_w^{k-1} = T_{cw}^{k-1} p_w$, so the energy function becomes:

$$E = \min \sum_{p_k \in \Omega} \left\| T_{cw}^{k} p_w - p_k \right\| = \min \sum_{p_k \in \Omega} \left\| T_{inc}^{k} T_{cw}^{k-1} p_w - p_k \right\| = \min \sum_{p_k \in \Omega} \left\| T_{inc}^{k} p_w^{k-1} - p_k \right\|$$

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$

where $p_w$ and $p_k$ are corresponding points and $p_w^{k-1}$ is the coordinate of $p_w$ in the camera coordinate system at time k-1.
From

$$T_{inc}^{k} p_w^{k-1} = R_{inc} \, p_w^{k-1} + t_{inc} = G(p_w^{k-1}) \, x + p_w^{k-1}$$

the final expression of the energy function is obtained:

$$\min_{x \in \mathfrak{se}(3)} \sum_{\Omega} \left\| G(p_w^{k-1}) \, x + p_w^{k-1} - p_k \right\|$$

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$
where $G(p_w^{k-1}) = [\, [p_w^{k-1}]_{\times} \mid I_{3 \times 3} \,]$ and $[p_w^{k-1}]_{\times}$ is the skew-symmetric matrix formed from $p_w^{k-1}$. The threshold of the energy function is set to 0.05. A Cholesky decomposition yields the six-tuple solution $x = (\alpha, \beta, \gamma, t_x, t_y, t_z)$, which is mapped into the special Euclidean (rigid-motion) group SE(3) of the Lie group; combined with the Kinect pose at time k-1, the current Kinect pose is obtained.
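One linearized update of this energy minimization, solved with a Cholesky decomposition as described, can be sketched as follows; the correspondence sets P and Q are assumed given.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def skew(p):
    """Skew-symmetric matrix [p]_x of a 3-vector p."""
    return np.array([[0.0, -p[2], p[1]],
                     [p[2], 0.0, -p[0]],
                     [-p[1], p[0], 0.0]])

# One linearised ICP update: minimise sum || G(p) x + p - q ||^2 over
# x = (alpha, beta, gamma, tx, ty, tz), with G(p) = [ [p]_x | I_3 ].
# P holds the points p_w^{k-1}; Q holds the corresponding points p_k.
def icp_update(P, Q):
    A = np.zeros((6, 6))
    b = np.zeros(6)
    for p, q in zip(P, Q):
        G = np.hstack([skew(p), np.eye(3)])
        A += G.T @ G
        b -= G.T @ (p - q)
    return cho_solve(cho_factor(A), b)   # Cholesky solve of the normal equations
```

The six-vector x is then assembled into $T_{inc}$ via the small-angle matrix above and composed with the pose at time k-1.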
Step 5: Scene update.
The scene is updated in two situations. The first is the initial update, in which the position of the Kinect is set as the origin of the world coordinate system and the currently acquired scene data are added. The second is the arrival of a new frame of KeyFrame data: according to the formula $p_w = T_{cw}^{-1} p_c$ (following from the relation $p_c = T_{cw} p_w$ above), the newly added KeyFrame data are transformed into the world coordinate system, completing the update of the scene data.
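A sketch of the update step, mapping key-frame points into the world frame and appending them to the global cloud:

```python
import numpy as np

# Scene update (step 5): key-frame points are mapped into the world frame with
# the inverse of the key frame's pose T_cw and appended to the global cloud.
def update_scene(scene_points, keyframe_points, T_cw):
    T_wc = np.linalg.inv(T_cw)                    # world <- current
    pts_h = np.c_[keyframe_points, np.ones(len(keyframe_points))]
    world = (pts_h @ T_wc.T)[:, :3]               # p_w = T_cw^{-1} p_c
    return np.vstack([scene_points, world])
```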
An experimental embodiment of three-dimensional environment creation with the method of the invention in a real indoor environment is given below.
The depth camera used in the experiment is a Kinect for Xbox 360; the RGB image resolution is 640 × 480 and the maximum frame rate is 30 fps. The indoor environment is shown in Fig. 4: Fig. 4(a) is the real experimental scene and Fig. 4(b) a two-dimensional geometric sketch of the experimental environment. During the experiment the Kinect was hand-held and walked along a fixed route from the start point to the end point, with the global map incrementally generated along the way; the created map covers an indoor area of 9 m × 9 m. Fig. 5 shows the reconstruction result.
The experimental results show that the method of the invention can be used for large-scale indoor three-dimensional scene creation with high accuracy and good real-time performance.
The above is only a preferred embodiment of the invention and is not intended to limit its scope of protection; any modifications, equivalent replacements, and improvements made within the spirit and principles of the invention shall be included within the scope of protection of the invention.

Claims (2)

1. An indoor three-dimensional scene reconstruction method based on a two-layer registration method, characterized by comprising the following steps:
Step 1: Kinect calibration, as follows:
(1) print a checkerboard template;
(2) photograph the checkerboard from multiple angles;
(3) detect the feature points in the images, i.e. the corner points of the checkerboard;
(4) obtain the Kinect calibration parameters:
the intrinsic matrix $K_{ir}$ of the infrared camera:

$$K_{ir} = \begin{bmatrix} f_u^{IR} & 0 & u_{IR} \\ 0 & f_v^{IR} & v_{IR} \\ 0 & 0 & 1 \end{bmatrix}$$

where $(f_u^{IR}, f_v^{IR})$ is the focal length of the infrared camera, with value (5, 5), and $(u_{IR}, v_{IR})$ is the image-plane center coordinate of the infrared camera, with value (320, 240);
the intrinsic matrix $K_c$ of the RGB camera:

$$K_c = \begin{bmatrix} f_u & 0 & u_0 \\ 0 & f_v & v_0 \\ 0 & 0 & 1 \end{bmatrix}$$

where $(f_u, f_v)$ is the focal length of the RGB camera and $(u_0, v_0)$ is the center coordinate of the RGB camera's image plane;
the extrinsic parameters between the infrared camera and the RGB camera are:

$$T = [R_{IRc}, t_{IRc}]$$

where $R_{IRc}$ is the rotation matrix and $t_{IRc}$ is the translation vector, using the parameters provided officially by Microsoft directly:

$$R_{IRc} = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix}, \qquad t_{IRc} = [0.075 \ \ 0 \ \ 0]^{T}$$

the Kinect coordinate system has the positive y-axis pointing up, the positive z-axis pointing forward, and the positive x-axis pointing to the right; the initial position of the Kinect is set as the origin of the world coordinate system, whose X, Y, Z directions coincide with the x, y, z directions of the Kinect at its initial position;
Step 2: feature point extraction and matching, as follows:
(1) compute the integral image: for a point X = (x, y) in the image, the integral $I_{\Sigma}(X)$ is:

$$I_{\Sigma}(X) = \sum_{i=0}^{x} \sum_{j=0}^{y} I(i, j)$$

where I(i, j) is the pixel value at pixel coordinate (i, j);
with the integral image, the sum of gray values over a rectangular area is computed with 3 additions/subtractions, independent of the rectangle's area;
(2) compute the approximate Hessian matrix $H_{approx}$: for a point X = (x, y) in image I, the Hessian matrix H(X, s) at X at scale s is defined as:

$$H(X, s) = \begin{bmatrix} L_{xx}(X, s) & L_{xy}(X, s) \\ L_{xy}(X, s) & L_{yy}(X, s) \end{bmatrix}$$

where $L_{xx}(X, s)$, $L_{xy}(X, s)$, $L_{yy}(X, s)$ are the convolutions of the Gaussian second-order partial derivatives with image I at X; box filters are used to approximate the second-order Gaussian filters in the Hessian matrix, the values of the box filter templates convolved with the image are $D_{xx}$, $D_{yy}$, $D_{xy}$, and substituting them for $L_{xx}$, $L_{yy}$, $L_{xy}$ gives the approximate Hessian matrix $H_{approx}$, with determinant:

$$\det(H_{approx}) = D_{xx} D_{yy} - (w D_{xy})^2$$

where w is a weight coefficient;
(3) locate the feature points: SURF feature detection is based on the Hessian matrix, locating feature points at the local maxima of the Hessian matrix determinant;
box filters of different sizes are applied to the original image to obtain a scale-image pyramid, and the extrema of the scale images at (X, s) are found from $H_{approx}$;
box filters are used to build the scale space, selecting 4 layers of scale images per octave; extrema are found from the $H_{approx}$ matrix, and non-maximum suppression is applied to each 3 × 3 × 3 local region of the 3-dimensional (X, s) scale space; a point whose response exceeds its 26 neighborhood values is selected as a feature point; the feature point is precisely localized with a quadratic fit, the fitting function D(X) being:

$$D(X) = D + \frac{\partial D}{\partial X}^{T} X + \frac{1}{2} X^{T} \frac{\partial^{2} D}{\partial X^{2}} X$$

thereby obtaining the position and scale information (X, s) of the feature point;
(4) determine the orientation of the feature point: a circular neighborhood is processed with Haar wavelet filters to obtain the x- and y-direction responses at each point of the neighborhood; the responses are weighted with a Gaussian centered at the feature point, with σ = 2s where s is the scale of the feature point, and the direction of the longest resulting vector is the orientation of the feature point;
(5) construct the feature description vector: a square neighborhood of side length 20s is centered at the feature point, with its y-axis aligned to the feature point orientation; the square is divided into 4 × 4 subregions, each processed with Haar wavelet filters of template size 2s × 2s; $d_x$ denotes the horizontal Haar wavelet response and $d_y$ the vertical response, and all $d_x$, $d_y$ are weighted with a Gaussian centered at the feature point with σ = 3.3s; in each subregion $d_x$, $d_y$, $|d_x|$, $|d_y|$ are summed, giving the 4-dimensional vector $V = (\Sigma d_x, \Sigma d_y, \Sigma |d_x|, \Sigma |d_y|)$; concatenating the vectors of the 4 × 4 subregions yields a 64-dimensional vector that is rotation- and scale-invariant and, after normalization, illumination-invariant; this vector is the feature vector describing the feature point;
(6) feature matching: the nearest-neighbor method based on Euclidean distance is adopted, searching the image to be matched with a K-D tree for the two feature points closest in Euclidean distance to each feature point in the reference image; if the nearest distance divided by the second-nearest distance is below the set ratio threshold, the pair of matched points is accepted;
Step 3: mapping matched image points to three-dimensional coordinates, as follows:
compute the coordinate $P_{3D} = (x, y, z)$ in the Kinect coordinate system of a point $p = (x_d, y_d)$ in the depth image:

$$\begin{aligned} P_{3D}.x &= (x_d - u_{IR}) \cdot P_{3D}.z \,/\, f_u^{IR} \\ P_{3D}.y &= (y_d - v_{IR}) \cdot P_{3D}.z \,/\, f_v^{IR} \\ P_{3D}.z &= depth(x_d, y_d) \end{aligned}$$

where $P_{3D}.x$, $P_{3D}.y$, $P_{3D}.z$ are the coordinates x, y, z of $P_{3D} = (x, y, z)$, and $depth(x_d, y_d)$ is the depth value of point p in the depth image;
from the 3D coordinate corresponding to an RGB image pixel, obtain the coordinate $(x_{rgb}, y_{rgb})$ in the RGB image:

$$\begin{aligned} x_{rgb} &= (P'_{3D}.x \cdot f_u \,/\, P'_{3D}.z) + u_0 \\ y_{rgb} &= (P'_{3D}.y \cdot f_v \,/\, P'_{3D}.z) + v_0 \end{aligned}$$

where $P'_{3D} = R_{IRc} \cdot P_{3D} + t_{IRc}$;
with these conversion relations, the matching point pairs obtained in step 2 are converted into three-dimensional point pairs;
Step 4: two-layer registration of the three-dimensional points based on RANSAC and ICP, as follows:
(1) initial registration: in the initial registration stage, RANSAC is applied to remove mismatched three-dimensional point pairs, iteratively finding the largest inlier set consistent with the transformation model and estimating the transformation matrix T'; accumulating the relative transformation matrices from the KeyFrame to the current data yields the transformation of the current Kinect relative to the KeyFrame; from this matrix the translation and rotation-angle magnitude of the Kinect are computed and compared with the set thresholds to decide whether the current data is chosen as a KeyFrame;
(2) fine registration: to obtain the Kinect pose at time k from the relation between the Kinect coordinate system and the world coordinate system, the following energy function is established:

$$E = \min \sum_{p_k \in \Omega} \left\| T_{cw}^{k} p_w - p_k \right\|$$

where $p_w$ is a point in the world coordinate system, $p_k$ is the point in the current coordinate system, and Ω is the set of pixels in the image plane at time k with valid depth values, that is:

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$

where M(X) is the function describing the valid and invalid information in the acquired depth map:

$$M(X) = \begin{cases} 1, & depth(X) \neq 0 \\ 0, & depth(X) = 0 \end{cases}$$

where X is an image-plane coordinate point;
let the offset of the Kinect between time k-1 and time k be $T_{inc}^{k}$; then:

$$T_{cw}^{k} = T_{inc}^{k} \, T_{cw}^{k-1}$$

when the rotation (α, β, γ) of the Kinect about the x, y, z axes and its translation $(t_x, t_y, t_z)$ along the three directions are small enough, a first-order Taylor expansion applies; setting $x = (\alpha, \beta, \gamma, t_x, t_y, t_z)$:

$$T_{inc}^{k} = \exp(x) = \begin{bmatrix} 1 & \gamma & -\beta & t_x \\ -\gamma & 1 & \alpha & t_y \\ \beta & -\alpha & 1 & t_z \end{bmatrix} = [R_{inc} \mid t_{inc}]$$

for a spatial point with world coordinate $p_w$ observed at time k, projecting it into the Kinect coordinate system at time k-1 gives $p_w^{k-1} = T_{cw}^{k-1} p_w$, and the energy function becomes:

$$E = \min \sum_{p_k \in \Omega} \left\| T_{cw}^{k} p_w - p_k \right\| = \min \sum_{p_k \in \Omega} \left\| T_{inc}^{k} T_{cw}^{k-1} p_w - p_k \right\| = \min \sum_{p_k \in \Omega} \left\| T_{inc}^{k} p_w^{k-1} - p_k \right\|$$

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$

where $p_w$ and $p_k$ are corresponding points and $p_w^{k-1}$ is the coordinate of $p_w$ in the camera coordinate system at time k-1;
from

$$T_{inc}^{k} p_w^{k-1} = R_{inc} \, p_w^{k-1} + t_{inc} = G(p_w^{k-1}) \, x + p_w^{k-1}$$

the final expression of the energy function is obtained:

$$\min_{x \in \mathfrak{se}(3)} \sum_{\Omega} \left\| G(p_w^{k-1}) \, x + p_w^{k-1} - p_k \right\|$$

$$\Omega = \{ p_k \mid u = \pi(p_k) \ \text{and} \ M(u) = 1 \}$$

where $G(p_w^{k-1}) = [\, [p_w^{k-1}]_{\times} \mid I_{3 \times 3} \,]$, and $[p_w^{k-1}]_{\times}$ is the skew-symmetric matrix formed from $p_w^{k-1}$;
the threshold of the energy function is set; a Cholesky decomposition yields the six-tuple solution $x = (\alpha, \beta, \gamma, t_x, t_y, t_z)$, which is mapped into the special Euclidean group SE(3) of the Lie group and, combined with the Kinect pose at time k-1, the current Kinect pose is obtained;
Step 5: scene update, as follows:
the scene is updated in two situations: the first is the initial update, in which the position of the Kinect is set as the origin of the world coordinate system and the currently acquired scene data are added; the second is the arrival of a new frame of KeyFrame data, which is transformed into the world coordinate system according to the formula $p_w = T_{cw}^{-1} p_c$, completing the update of the scene data.
2. The indoor three-dimensional scene reconstruction method based on a two-layer registration method according to claim 1, characterized in that the method for obtaining the transformation matrix T by applying RANSAC described in step 4 is as follows:
(1) from the initial N three-dimensional matched point pairs formed from the reference point set A and the point set B to be registered, randomly select 7 pairs of data;
(2) using the minimal-configuration 7-point method for solving the fundamental matrix, compute from the 7 selected pairs the transformation matrix $T_{AB}$ between the reference point set and the point set to be registered;
(3) using $T_{AB}$, transform the remaining N-7 three-dimensional points of the feature point set of the image to be registered into the reference point cloud coordinate system;
(4) compute the coordinate errors between the transformed point set $P'_{N-7}$ and the reference point set;
(5) among the N matched point pairs, count the feature point pairs whose coordinate error is within a given threshold; denote this number m;
(6) repeat (1) to (5) n times; the set achieving the largest m is the largest inlier set, whose members are the inliers, the remaining N-m pairs being mismatches, i.e. outliers; the least-squares solution of the transformation model is estimated over the largest inlier set and taken as the transformation matrix T' of the current pair of adjacent frames.
CN201310053829.3A 2013-02-20 2013-02-20 Indoor three-dimensional scene reconstruction method based on a two-layer registration method Expired - Fee Related CN103106688B (en)

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN201310053829.3A | 2013-02-20 | 2013-02-20 | Indoor three-dimensional scene reconstruction method based on a two-layer registration method (CN103106688B)


Publications (2)

Publication Number | Publication Date
CN103106688A | 2013-05-15
CN103106688B | 2016-04-27

Family

Family ID: 48314513

Family Applications (1)

Application Number | Title | Priority Date | Filing Date
CN201310053829.3A (Expired - Fee Related) | Indoor three-dimensional scene reconstruction method based on a two-layer registration method | 2013-02-20 | 2013-02-20

Country Status (1)

Country | Link
CN | CN103106688B (en)



Family Cites Families (1)

* Cited by examiner, † Cited by third party
US6965386B2 * | 2001-12-20 | 2005-11-15 | Siemens Corporate Research, Inc. | Method for three dimensional image reconstruction

Patent Citations (1)

* Cited by examiner, † Cited by third party
CN101976455A * | 2010-10-08 | 2011-02-16 | Southeast University (东南大学) | Color image three-dimensional reconstruction method based on stereo matching

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Peter Henry et al.; "RGB-D Mapping: Using Depth Cameras for Dense 3D Modeling of Indoor Environments"; The 12th International Symposium on Experimental Robotics; 2010; pp. 1-15 *
Liu Xin (刘鑫) et al.; "Fast object reconstruction based on GPU and Kinect" (基于GPU和Kinect的快速物体重建); Acta Automatica Sinica (自动化学报); August 2012; vol. 38, no. 8; pp. 1288-1297 *

Also Published As

Publication Number | Publication Date
CN103106688A | 2013-05-15


Legal Events

- C06 / PB01: Publication
- C10 / SE01: Entry into substantive examination (entry into force of request for substantive examination)
- C14 / GR01: Grant of patent or utility model (patent grant)
- CF01: Termination of patent right due to non-payment of annual fee (granted publication date: 2016-04-27; termination date: 2020-02-20)