CN105339981A - Method for registering data using set of primitives - Google Patents

Method for registering data using set of primitives

Info

Publication number
CN105339981A
CN105339981A CN201480034631.3A
Authority
CN
China
Prior art keywords
plane
point
coordinate system
primitive
group
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201480034631.3A
Other languages
Chinese (zh)
Other versions
CN105339981B (en)
Inventor
田口裕一
E·阿塔埃尔-坎斯佐古力
S·拉姆阿里加姆
T·W·加拉斯
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Mitsubishi Electric Corp
Original Assignee
Mitsubishi Electric Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/921,296 external-priority patent/US9420265B2/en
Application filed by Mitsubishi Electric Corp filed Critical Mitsubishi Electric Corp
Publication of CN105339981A publication Critical patent/CN105339981A/en
Application granted granted Critical
Publication of CN105339981B publication Critical patent/CN105339981B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10028Range image; Depth image; 3D point clouds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30244Camera pose

Abstract

A method registers data using a set of primitives including points and planes. First, the method selects a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane. A transformation is predicted from the first coordinate system to a second coordinate system. The first set of primitives is transformed to the second coordinate system using the transformation. A second set of primitives is determined according to the first set of primitives transformed to the second coordinate system. Then, the second coordinate system is registered with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system. The registration can be used to track a pose of a camera acquiring the data.

Description

Method for registering data using a set of primitives
Technical field
The present invention relates generally to computer vision and, more particularly, to estimating the pose of a camera.
Background technology
Systems and methods that track the pose of a camera while reconstructing the 3D structure of a scene are widely used in augmented reality (AR) visualization, robot navigation, scene modeling, and other computer vision applications. Such processing is commonly called simultaneous localization and mapping (SLAM). Real-time SLAM systems can use conventional cameras that acquire two-dimensional (2D) images, depth cameras that acquire three-dimensional (3D) point clouds (sets of 3D points), or red, green, blue and depth (RGB-D) cameras that acquire both 2D images and 3D point clouds. Tracking refers to the process of sequentially estimating the camera pose using a prediction of the camera motion, whereas relocalization refers to the process of recovering from tracking failures using feature-based global registration.
SLAM systems using 2D cameras are generally successful for textured scenes, but are likely to fail in textureless regions. Systems using depth cameras rely on the geometric variations in the scene, such as curved surfaces and depth boundaries, with the help of iterative closest point (ICP) methods. However, ICP-based systems often fail when the geometric variation is small, for example in planar scenes. Systems using RGB-D cameras can exploit both texture and geometric features, but they still require distinctive textures.
Many methods do not explicitly address the difficulties of reconstructing 3D models larger than a single room. Extending those methods to larger scenes requires good memory-management techniques, but memory limitations are not the only challenge. Typically, room-scale scenes contain many objects with texture and geometric features. To scale to larger scenes, the camera pose must be tracked through regions with limited texture and insufficient geometric variation, such as corridors.
Camera tracking
Systems that use 3D sensors to acquire 3D point clouds cast the tracking problem as a registration problem, given some 3D correspondences. ICP methods start from an initial pose estimate given by a camera motion prediction and iteratively locate point-to-point or point-to-plane correspondences. ICP has been widely used with line-scan 3D sensors in mobile robotics (where it is also referred to as scan matching), and with depth cameras and 3D sensors that produce full 3D point clouds. U.S. 2012/0194516 applies a point-to-plane ICP method to camera pose tracking. There, the map is represented as a set of voxels, each storing a truncated signed distance function that encodes the distance to the closest surface point. The method does not extract planes from the 3D data; instead, point-to-plane correspondences are established by using local neighborhoods to determine the normals of 3D points. Such ICP-based methods require the scene to have sufficient geometric variation for accurate registration.
Another approach extracts features from RGB images and performs descriptor-based point matching to determine point-to-point correspondences and estimate the camera pose, which is then refined with an ICP method. This approach uses both texture (RGB) and geometric (depth) features in the scene. However, using only point features remains problematic in textureless regions and in regions with repeated textures.
SLAM using planes
Plane features have been used in many SLAM systems. Determining a camera pose from planes alone requires at least three planes whose normals span R^3. Consequently, using only planes leads to many degeneracy issues, particularly when the field of view (FOV) or the range of the sensor is small. Combining a large-FOV line-scan 3D sensor with a small-FOV depth camera can avoid the degeneracy, at additional system cost.
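As an illustrative sketch (the function and variable names are mine, not from the patent), the degeneracy condition above amounts to a rank test on the stacked plane normals:

```python
import numpy as np

def planes_constrain_pose(normals, tol=1e-6):
    """Return True if the plane normals span R^3, i.e. the planes alone
    can fix the camera orientation and translation; rank < 3 signals the
    degeneracy discussed above (e.g. a floor plus two parallel walls)."""
    return bool(np.linalg.matrix_rank(np.asarray(normals, dtype=float), tol=tol) >= 3)
```

For example, a floor and two parallel walls give normals of rank 2, so pose estimation from those planes alone is degenerate.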
The method described in the related application uses point-plane SLAM, which exploits both points and planes to avoid the failure modes common to methods that use only one of these primitives. That system does not use any camera motion prediction; instead, it performs relocalization for every frame by globally matching point and plane correspondences. As a result, the system can process only about three frames per second, and it encounters some failures due to repeated textures that confuse descriptor-based point matching.
The method described in the related application also registers 3D data in different coordinate systems using point-to-point and plane-to-plane correspondences.
Summary of the invention
Planes are dominant in indoor and outdoor scenes containing man-made structures. Embodiments of the present invention provide a system and method for tracking an RGB-D camera that use both points and planes as primitive features. By fitting planes, the method implicitly handles the noise in depth data that is characteristic of 3D sensors. The tracking process is supported by relocalization and bundle adjustment processes, yielding a real-time simultaneous localization and mapping (SLAM) system for hand-held or robot-mounted RGB-D cameras.
An object of the invention is to enable fast and accurate registration while minimizing the degeneracy issues that cause registration failures. The method uses camera motion prediction to locate point and plane correspondences, providing a tracker based on a prediction-and-correction framework. The method combines the tracker with relocalization and bundle adjustment processes that use both points and planes, to recover from tracking failures and to continuously refine the camera pose estimates.
In particular, a method registers data using a set of primitives that includes points and planes. First, the method selects a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane.
A transformation from the first coordinate system to a second coordinate system is predicted. Using this transformation, the first set of primitives is transformed to the second coordinate system. A second set of primitives is determined according to the first set of primitives transformed to the second coordinate system.
Then, the second coordinate system is registered with the first coordinate system using the first set of primitives in the first coordinate system and the second set of primitives in the second coordinate system. The registration can be used to track the pose of the camera acquiring the data.
Brief description of the drawings
Fig. 1 is a flow diagram of a method for tracking the pose of a camera according to embodiments of the invention; and
Fig. 2 is a schematic of a process for establishing point-to-point and plane-to-plane correspondences between the current frame and a map using the predicted pose of the camera, according to embodiments of the invention.
Detailed description
Embodiments of the invention provide a system and method for tracking the pose of a camera. The invention extends the embodiments described in our related U.S. patent application Sn. 13/539,060 by using camera motion prediction for faster correspondence search and registration. We use point-to-point and plane-to-plane correspondences established between the current frame and a map. The map includes points and planes from previously registered frames in a global coordinate system. Here, we focus on using camera motion prediction to establish plane-to-plane correspondences, as well as the mixed case of establishing both point-to-point and plane-to-plane correspondences.
System overview
In the preferred system, an RGB-D camera 102, for example an Xtion PRO LIVE, acquires a sequence of frames 101. We use a keyframe-based SLAM system: we select several representative frames as keyframes and store the registered keyframes in a map with a single global coordinate system. In contrast to prior-art SLAM systems that use only points, our system uses both points and planes as primitives in all of its processing. The points and planes in each frame are called measurements, and the measurements from keyframes are stored in the map as landmarks.
Given the map, we estimate the pose of the current frame using a prediction-and-correction framework: we predict the pose of the camera, use that pose to determine correspondences between the point and plane measurements and the point and plane landmarks, and then use those correspondences to determine the camera pose.
Tracking may fail because of incorrect or insufficient correspondences. After a predetermined number of consecutive tracking failures, we relocalize, using a global point and plane correspondence search between the current frame and the map. We also apply bundle adjustment using both points and planes to asynchronously refine the landmarks in the map.
Method overview
As shown in Fig. 1, a current frame 101 is acquired 110 by a red, green, blue and depth (RGB-D) camera 102 from a scene 103. The pose of the camera at the time the frame is acquired is predicted, and the predicted pose is used to locate 130 point and plane correspondences between the frame and a map 194. The point and plane correspondences are used in a random sample consensus (RANSAC) framework 140 to register the frame to the map. If the registration fails 150, the number of consecutive failures is counted 154; if the count is below a threshold (F), the method continues with the next frame; otherwise (T), the camera is relocalized 158 using a global registration method that does not use camera motion prediction.
If the RANSAC registration succeeds, the pose 160 estimated in the RANSAC framework is used as the pose of the frame. Next, it is determined 170 whether the current frame is a keyframe; if not, the method continues with the next frame at step 110. Otherwise, additional points and planes are extracted 180 in the current frame, the map 194 is updated 190, and the method continues with the next frame. The map is refined 198 asynchronously using bundle adjustment.
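The per-frame flow of Fig. 1 can be sketched as the following skeleton. All step functions are injected stand-ins with hypothetical names, not the patent's implementation; each corresponds to one box of the flow diagram:

```python
def track(frames, map_, predict, correspond, ransac, relocalize,
          is_keyframe, extract, max_failures=3):
    """Skeleton of the per-frame tracking loop: predict pose, locate
    correspondences, RANSAC-register, count failures, relocalize after
    too many, and extend the map at keyframes."""
    failures = 0
    poses = []
    for frame in frames:
        T_pred = predict(poses)                     # camera motion prediction
        corr = correspond(frame, map_, T_pred)      # point/plane correspondences
        T = ransac(corr)                            # RANSAC registration
        if T is None:                               # registration failed
            failures += 1
            if failures >= max_failures:            # fall back to global
                T = relocalize(frame, map_)         # relocalization
                failures = 0
            if T is None:
                continue                            # skip this frame
        else:
            failures = 0
        poses.append(T)
        if is_keyframe(T, map_):                    # extend map with new
            map_.append(extract(frame, T))          # landmarks from keyframe
    return poses
```

With trivial stubs, three consecutive failures trigger the relocalization branch exactly as in the figure.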
These steps can be performed in a processor connected to memory and input/output interfaces as known in the art.
Camera pose tracking
As described above, our tracker uses features that include both points and planes. The tracker is based on a prediction-and-correction scheme, summarized as follows. For each frame, we predict the pose using a camera motion model. Based on the predicted pose, we locate point and plane measurements in the frame that correspond to the point and plane landmarks in the map. We perform a RANSAC-based registration using the point and plane correspondences. If the resulting pose is sufficiently different from the pose of every keyframe currently stored in the map, we extract additional point and plane measurements and add the frame to the map as a new keyframe.
Camera motion prediction
We denote the pose of the k-th frame by

T_k = \begin{bmatrix} R_k & t_k \\ 0^T & 1 \end{bmatrix}, \qquad (1)

where R_k and t_k denote the rotation matrix and the translation vector, respectively. We use the first frame to define the coordinate system of the map; thus T_1 is the identity matrix, and T_k represents the pose of the k-th frame with respect to the map.
We predict the pose \hat{T}_k of the k-th frame using a constant-velocity assumption. Let \Delta T denote the previously estimated motion between the (k-1)-th and (k-2)-th frames, that is, \Delta T = T_{k-1} T_{k-2}^{-1}. Then the pose of the k-th frame is predicted as \hat{T}_k = \Delta T \, T_{k-1}.
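The pose of Eq. (1) and the constant-velocity prediction above can be sketched in a few lines of numpy (function names are mine):

```python
import numpy as np

def make_pose(R, t):
    """Assemble the 4x4 homogeneous pose of Eq. (1): T = [R t; 0^T 1]."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def predict_pose(T_km1, T_km2):
    """Constant-velocity prediction: Delta T = T_{k-1} T_{k-2}^{-1},
    and the predicted pose of frame k is Delta T @ T_{k-1}."""
    delta = T_km1 @ np.linalg.inv(T_km2)
    return delta @ T_km1
```

For a camera translating 1 unit along x per frame, the prediction extrapolates to 2 units, as expected.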
Locating point and plane correspondences
As shown in Fig. 2, we use the predicted pose to locate point and plane measurements in the k-th frame that correspond to the landmarks in the map. Given the predicted pose 201 of the current frame, we locate correspondences between the point and plane landmarks in the map 202 and the point and plane measurements in the current frame 203. We first transform the landmarks in the map to the current frame using the predicted pose. Then, for each point, we perform a local search using an optical flow process, starting from the predicted pixel location in the current frame. For each plane, we first locate the parameters of the predicted plane. We then consider a set of reference points on the predicted plane and locate the pixels connected to each reference point that lie on the predicted plane. The reference point with the largest number of connected pixels is selected, and all of its connected pixels are used to refine the plane parameters.
Point correspondences: Let p_i = (x_i, y_i, z_i, 1)^T denote the i-th point landmark 210 in the map, represented as a homogeneous vector. The 2D image projection 220 of p_i in the current frame is predicted as

\hat{p}_i^k = \hat{T}_k p_i, \qquad \hat{u}_i^k = \mathrm{FP}(\hat{p}_i^k), \qquad (2)

where \hat{p}_i^k is the 3D point transformed to the coordinate system of the k-th frame, and the function \mathrm{FP}(\cdot) performs the forward projection of a 3D point onto the image plane using the internal camera calibration parameters. We locate the corresponding point measurement by applying the Lucas-Kanade optical flow method starting from the initial position \hat{u}_i^k. Let \Delta u_i^k denote the determined optical flow vector 230. Then the corresponding point measurement is

u_i^k = \hat{u}_i^k + \Delta u_i^k, \qquad p_i^k = \mathrm{BP}(u_i^k) \, D(u_i^k), \qquad (3)

where the function \mathrm{BP}(\cdot) back-projects a 2D image pixel to a 3D ray, and D(\cdot) denotes the depth value of the pixel. If the optical flow vector cannot be determined, or if the pixel location u_i^k has an invalid depth value, the feature is declared lost.
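The projection, flow correction, and back-projection of Eqs. (2)-(3) can be sketched as follows. FP and BP are modeled as an ideal pinhole camera; the flow field and depth image are passed in as hypothetical callables standing in for Lucas-Kanade flow and the sensor's depth map:

```python
import numpy as np

def forward_project(p_cam, K):
    """FP(.): pinhole projection of a 3D point in the camera frame to pixels."""
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2]

def back_project(u, K):
    """BP(.): pixel -> 3D ray with unit depth along the optical axis."""
    return np.linalg.inv(K) @ np.array([u[0], u[1], 1.0])

def point_measurement(p_world, T_pred, K, flow, depth):
    """Predict a landmark's pixel via Eq. (2), then correct it with the
    optical flow vector and back-project with the depth value, Eq. (3)."""
    p_hom = np.append(p_world, 1.0)
    p_cam = (T_pred @ p_hom)[:3]              # \hat{p}_i^k = \hat{T}_k p_i
    u_hat = forward_project(p_cam, K)         # \hat{u}_i^k = FP(\hat{p}_i^k)
    u = u_hat + flow(u_hat)                   # u_i^k = \hat{u}_i^k + \Delta u_i^k
    return u, back_project(u, K) * depth(u)   # p_i^k = BP(u_i^k) D(u_i^k)
```

In practice the flow lookup can fail and the depth can be invalid, in which case the feature would be declared lost as in the text.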
Plane correspondences: Instead of performing a time-consuming plane extraction process on each frame independently of the other frames (as in the prior art), we exploit the predicted pose to extract planes. This yields faster plane measurement extraction and directly provides the plane correspondences.
Let \pi_j = (a_j, b_j, c_j, d_j)^T denote the plane equation of the j-th plane landmark 240 in the map. We assume that a plane landmark and its corresponding measurement have some overlapping region in the image. To locate such a corresponding plane measurement, we randomly select several reference points 250, q_{j,r} (r = 1, \ldots, N), from the inliers of the j-th plane landmark, and transform them to the k-th frame as 255

\hat{q}_{j,r}^k = \hat{T}_k q_{j,r}, \quad (r = 1, \ldots, N). \qquad (4)

We also transform \pi_j to the k-th frame as 245

\hat{\pi}_j^k = \hat{T}_k^{-T} \pi_j. \qquad (5)

From each transformed reference point, we locate the connected pixels 260 lying on the plane \hat{\pi}_j^k, and select the reference point with the largest number of inliers. Those inliers are used to refine the plane equation, yielding the corresponding plane measurement \pi_j^k. If the number of inliers is smaller than a threshold, the plane landmark is declared lost. For example, we use N = 5 reference points, a point-to-plane distance threshold of 50 mm to determine the inliers of a plane, and 9000 as the threshold on the minimum number of inliers.
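Eq. (5) — mapping a plane across a rigid transform by the inverse transpose — can be checked numerically with a short sketch (names are mine). The defining property is that if x lies on plane \pi, then T x lies on T^{-T} \pi:

```python
import numpy as np

def transform_plane(pi, T):
    """Eq. (5): map a plane pi = (a, b, c, d)^T into the frame reached by
    the 4x4 rigid transform T. For any homogeneous point x on pi,
    (T^{-T} pi)^T (T x) = pi^T x = 0, so incidence is preserved."""
    return np.linalg.inv(T).T @ pi
```

For example, the plane z = 1 shifted by a translation of 1 along z becomes the plane z = 2.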
Landmark selection
Performing the above process using all of the landmarks in the map can be inefficient. We therefore use only the landmarks that appear in the single keyframe closest to the current frame. The closest keyframe is selected before the tracking process, using the pose T_{k-1} of the previous frame.
RANSAC registration
The prediction-based correspondence search provides candidates of point-to-point and plane-to-plane correspondences, which may include outliers. We therefore perform a RANSAC-based registration to determine the inliers and the camera pose. At least three correspondences are needed to determine the pose uniquely; if there are fewer than three candidate correspondences, we immediately declare a tracking failure. For accurate camera tracking, we also declare a tracking failure when there are only a small number of candidate correspondences.
If there is a sufficient number of candidates, we solve the registration problem using the mixed correspondences in closed form. The procedure prioritizes plane correspondences over point correspondences, because the number of planes is usually much smaller than the number of points, and because planes have less noise due to the support of many points. If RANSAC locates a sufficient number of inliers, for example 40% of the number of point and plane measurements, the tracking is considered successful. The procedure produces the corrected pose T_k of the k-th frame.
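A hedged sketch of the RANSAC registration loop, simplified to point-to-point correspondences only: the patent's closed-form solver also mixes plane correspondences and prioritizes them, so the Kabsch/Umeyama solver below is a stand-in for the minimal 3-correspondence case, not the patented method:

```python
import numpy as np

def rigid_from_points(P, Q):
    """Closed-form rigid transform (Kabsch, no scale) mapping the rows of
    P onto the rows of Q in the least-squares sense."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # fix reflection
    R = Vt.T @ D @ U.T
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = cQ - R @ cP
    return T

def ransac_register(P, Q, iters=200, thresh=0.05, seed=0):
    """RANSAC over 3-point minimal samples; returns (pose, inlier mask),
    or (None, None) when fewer than three correspondences exist, mirroring
    the immediate-failure rule in the text."""
    if len(P) < 3:
        return None, None
    rng = np.random.default_rng(seed)
    best_T, best_in = None, np.zeros(len(P), dtype=bool)
    for _ in range(iters):
        idx = rng.choice(len(P), 3, replace=False)
        T = rigid_from_points(P[idx], Q[idx])
        resid = np.linalg.norm(P @ T[:3, :3].T + T[:3, 3] - Q, axis=1)
        inl = resid < thresh
        if inl.sum() > best_in.sum():
            best_T, best_in = rigid_from_points(P[inl], Q[inl]), inl
    return best_T, best_in
```

On noise-free data the inlier refit recovers the ground-truth pose exactly; a real implementation would also apply the inlier-ratio success test quoted above.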
Map update
If the estimated pose T_k is sufficiently different from the poses of all existing keyframes in the map, we define the k-th frame as a keyframe. To check this condition, we can use, for example, thresholds of 100 mm in translation and 5° in rotation. For a new keyframe, the point and plane measurements located as inliers in the RANSAC-based registration are associated with the corresponding landmarks, while the point and plane measurements located as outliers are discarded. We then extract additional point and plane measurements that appear for the first time in the frame. Additional point measurements are extracted, using keypoint detectors such as the scale-invariant feature transform (SIFT) and speeded-up robust features (SURF), at pixels that are not close to any existing point measurement. Additional plane measurements are extracted by applying RANSAC-based plane fitting to the pixels that are not inliers of any existing plane measurement. The additional point and plane measurements are added to the map as new landmarks. In addition, we extract feature descriptors (e.g., SIFT and SURF) for the point measurements in the frame, for use in relocalization.
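The keyframe test with the quoted 100 mm / 5° thresholds can be sketched as follows, assuming poses are stored as 4x4 matrices with translations in meters (function names are mine):

```python
import numpy as np

def is_keyframe(T_new, keyframe_poses, t_thresh=0.1, r_thresh_deg=5.0):
    """Declare a frame a keyframe when its pose differs from every stored
    keyframe pose by more than 100 mm in translation OR 5 degrees in
    rotation; the relative rotation angle comes from the trace identity
    cos(theta) = (trace(R) - 1) / 2."""
    for T_kf in keyframe_poses:
        d = np.linalg.inv(T_kf) @ T_new          # relative motion
        trans = np.linalg.norm(d[:3, 3])
        cos_t = np.clip((np.trace(d[:3, :3]) - 1.0) / 2.0, -1.0, 1.0)
        angle = np.degrees(np.arccos(cos_t))
        if trans <= t_thresh and angle <= r_thresh_deg:
            return False                          # too close to this keyframe
    return True
```

A frame 50 mm from an existing keyframe is rejected, while a 200 mm translation or a 10° rotation qualifies it as new.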

Claims (15)

1. A method for registering data using a set of primitives, wherein the data have three dimensions (3D) and the primitives include points and planes, the method comprising the steps of:
selecting a first set of primitives from the data in a first coordinate system, wherein the first set of primitives includes at least three primitives and at least one plane;
predicting a transformation from the first coordinate system to a second coordinate system;
transforming the first set of primitives to the second coordinate system using the transformation;
determining a second set of primitives according to the first set of primitives transformed to the second coordinate system; and
registering the second coordinate system with the first coordinate system using the mutually corresponding first set of primitives in the first coordinate system and second set of primitives in the second coordinate system, wherein the steps are performed in a processor.
2. The method according to claim 1, wherein the first set of primitives includes at least one point and at least one plane in the first coordinate system, and the second set of primitives includes at least one point and at least one plane in the second coordinate system.
3. The method according to claim 1, wherein the data are acquired by a movable camera.
4. The method according to claim 1, wherein the data include textures and depths.
5. The method according to claim 1, wherein the registering uses random sample consensus (RANSAC).
6. The method according to claim 1, wherein the data are in the form of a sequence of frames acquired by a camera.
7. The method according to claim 6, further comprising:
selecting a set of frames from the sequence of frames as keyframes; and
storing the keyframes in a map, wherein the keyframes include the points and the planes, and the points and the planes are stored in the map as landmarks.
8. The method according to claim 7, further comprising:
predicting a pose of the camera for each frame; and
determining the pose of the camera for each frame according to the registering, to track the camera.
9. The method according to claim 1, wherein the registering is in real time.
10. The method according to claim 7, further comprising:
applying bundle adjustment using the points and the planes to refine the landmarks in the map.
11. The method according to claim 8, wherein the pose of a k-th frame is

T_k = \begin{bmatrix} R_k & t_k \\ 0^T & 1 \end{bmatrix},

where R_k and t_k denote a rotation matrix and a translation vector, respectively.
12. The method according to claim 8, wherein the predicting uses a constant-velocity assumption.
13. The method according to claim 6, wherein the points in the frames are located using an optical flow process.
14. The method according to claim 1, wherein correspondences of the planes take precedence over correspondences of the points.
15. The method according to claim 1, wherein the registering is used for simultaneous localization and mapping (SLAM).
CN201480034631.3A 2013-06-19 2014-05-30 Method for registering data using a set of primitives Active CN105339981B (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/921,296 US9420265B2 (en) 2012-06-29 2013-06-19 Tracking poses of 3D camera using points and planes
US13/921,296 2013-06-19
PCT/JP2014/065026 WO2014203743A1 (en) 2013-06-19 2014-05-30 Method for registering data using set of primitives

Publications (2)

Publication Number Publication Date
CN105339981A true CN105339981A (en) 2016-02-17
CN105339981B CN105339981B (en) 2019-04-12

Family

ID=50979838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201480034631.3A Active CN105339981B (en) Method for registering data using a set of primitives

Country Status (4)

Country Link
JP (1) JP6228239B2 (en)
CN (1) CN105339981B (en)
DE (1) DE112014002943T5 (en)
WO (1) WO2014203743A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780601A (en) * 2016-12-01 2017-05-31 北京未动科技有限公司 Spatial position tracking method and device, and smart device
CN108171733A (en) * 2016-12-07 2018-06-15 赫克斯冈技术中心 Scanner vis

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6775969B2 (en) * 2016-02-29 2020-10-28 キヤノン株式会社 Information processing equipment, information processing methods, and programs
EP3494447B1 (en) 2016-08-04 2021-05-19 Reification Inc. Methods for simultaneous localization and mapping (slam) and related apparatus and systems

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009237845A (en) * 2008-03-27 2009-10-15 Sony Corp Information processor, information processing method, and computer program
JP2010288112A (en) * 2009-06-12 2010-12-24 Nissan Motor Co Ltd Self-position estimation device, and self-position estimation method
CN102609942A (en) * 2011-01-31 2012-07-25 微软公司 Mobile camera localization using depth maps
CN103123727A (en) * 2011-11-21 2013-05-29 联想(北京)有限公司 Method and device for simultaneous positioning and map building

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5580164B2 (en) * 2010-10-18 2014-08-27 株式会社トプコン Optical information processing apparatus, optical information processing method, optical information processing system, and optical information processing program

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009237845A (en) * 2008-03-27 2009-10-15 Sony Corp Information processor, information processing method, and computer program
JP2010288112A (en) * 2009-06-12 2010-12-24 Nissan Motor Co Ltd Self-position estimation device, and self-position estimation method
CN102609942A (en) * 2011-01-31 2012-07-25 微软公司 Mobile camera localization using depth maps
CN103123727A (en) * 2011-11-21 2013-05-29 联想(北京)有限公司 Method and device for simultaneous positioning and map building

Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ANDREW J DAVISON ET AL: "MonoSLAM:Real-Time Single Camera SLAM", 《IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE》 *
BOLAN JIANG ET AL: "Camera tracking for augmented reality media", 《MULTIMEDIA AND EXPO,2000.ICME 2000.2000 IEEE INTERNATIONAL CONFEREN CE ON NEW YORK》 *
JAN WEINGARTEN ET AL: "3D SLAM using planar segments", 《IEEE/RSJ INTERNATIONAL CONFERENCE ON INTELLIGENT ROBOTS AND SYSTEMS》 *
P. HENRY ET AL: "RGB-D mapping: Using Kinect-style depth cameras for dense 3D modeling of indoor environments", 《THE INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH》 *
SEBASTIAN LIEBERKNECHT ET AL: "RGB-D camera-based parallel tracking and meshing", 《MIXED AND AUGMENTED REALITY(ISMAR),2011 10TH IEEE INTERNATIONAL SYMPOSIUM ON》 *
TAGUCHI YUICHI ET AL: "Point-plane SLAM for hand-held 3D sensors", 《2013 IEEE INTERNATIONAL CONFERENCE ON ROBOTICS AND AUTOMATION》 *
SUN FENG: "Research on Registration of 3D Depth Images", 《China Master's Theses Full-text Database, Information Science and Technology》 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106780601A (en) * 2016-12-01 2017-05-31 北京未动科技有限公司 Spatial position tracking method and device, and smart device
CN106780601B (en) * 2016-12-01 2020-03-27 北京未动科技有限公司 Spatial position tracking method and device and intelligent equipment
CN108171733A (en) * 2016-12-07 2018-06-15 赫克斯冈技术中心 Scanner vis
CN108171733B (en) * 2016-12-07 2022-04-19 赫克斯冈技术中心 Method of registering two or more three-dimensional 3D point clouds

Also Published As

Publication number Publication date
DE112014002943T5 (en) 2016-03-10
WO2014203743A1 (en) 2014-12-24
JP6228239B2 (en) 2017-11-08
JP2016527574A (en) 2016-09-08
CN105339981B (en) 2019-04-12

Similar Documents

Publication Publication Date Title
US9420265B2 (en) Tracking poses of 3D camera using points and planes
US10553026B2 (en) Dense visual SLAM with probabilistic surfel map
Hsiao et al. Keyframe-based dense planar SLAM
US7187809B2 (en) Method and apparatus for aligning video to three-dimensional point clouds
Lu et al. Visual navigation using heterogeneous landmarks and unsupervised geometric constraints
US9852238B2 (en) 4D vizualization of building design and construction modeling with photographs
Daftry et al. Building with drones: Accurate 3D facade reconstruction using MAVs
Sato et al. Dense 3-d reconstruction of an outdoor scene by hundreds-baseline stereo using a hand-held video camera
Cohen et al. Discovering and exploiting 3d symmetries in structure from motion
CN109544636A (en) A kind of quick monocular vision odometer navigation locating method of fusion feature point method and direct method
CN109472828B (en) Positioning method, positioning device, electronic equipment and computer readable storage medium
Chien et al. Visual odometry driven online calibration for monocular lidar-camera systems
CN112902953A (en) Autonomous pose measurement method based on SLAM technology
Taketomi et al. Real-time and accurate extrinsic camera parameter estimation using feature landmark database for augmented reality
CN110675455B (en) Natural scene-based self-calibration method and system for vehicle body looking-around camera
CN105339981A (en) Method for registering data using set of primitives
Bethmann et al. Object-based multi-image semi-global matching–concept and first results
Suttasupa et al. Plane detection for Kinect image sequences
CN113554102A (en) Aviation image DSM matching method for cost calculation dynamic programming
Tu et al. PanoVLM: Low-Cost and accurate panoramic vision and LiDAR fused mapping
Barrois et al. Resolving stereo matching errors due to repetitive structures using model information
Sourimant et al. Gps, gis and video fusion for urban modeling
Sato et al. 3-D modeling of an outdoor scene from multiple image sequences by estimating camera motion parameters
Ren An improved binocular LSD_SLAM method for object localization
Kim et al. Planar patch based 3D environment modeling with stereo camera

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant