CN108416385A - A simultaneous localization and mapping method based on an improved image matching strategy - Google Patents

A simultaneous localization and mapping method based on an improved image matching strategy

Info

Publication number
CN108416385A
Authority
CN
China
Prior art keywords
point
pose
image
characteristic points
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201810184701.3A
Other languages
Chinese (zh)
Inventor
贾松敏
郑泽玲
张祥银
李明爱
李秀智
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201810184701.3A priority Critical patent/CN108416385A/en
Publication of CN108416385A publication Critical patent/CN108416385A/en
Withdrawn legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/44Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components

Abstract

The present invention proposes a simultaneous localization and mapping method based on an improved image matching strategy. The method applies an improved ORB feature to each image region, performing Oriented FAST (Features from Accelerated Segment Test) corner detection and computing feature-point descriptors, and then carries out feature matching and screening with an improved RANSAC algorithm. The PnP (Perspective-n-Point) method is used to solve the robot's initial pose, from which a pose graph is generated; the pose is then refined with the Levenberg-Marquardt method and optimized with the g2o (General Graph Optimization) optimization library. Finally, the optimized poses are combined with the corresponding frames to generate a point cloud map. The method improves the efficiency of mismatch rejection, enhances the robustness of the tracking process, and yields an accurate map.

Description

A simultaneous localization and mapping method based on an improved image matching strategy
Technical field
The present invention relates to a simultaneous localization and mapping method based on an improved image matching strategy, and belongs to the fields of image matching and visual SLAM (Simultaneous Localization and Mapping).
Background technology
Simultaneous localization and mapping is the key to autonomous robot navigation; how a robot localizes itself and builds a map in an unknown environment remains a challenging problem. Traditional GPS positioning is usually applied outdoors and often fails in indoor environments. With the continuous development of computer vision research, vision-based SLAM in indoor environments has become a research hotspot. Cameras can acquire rich environmental information, and since the release of Microsoft's relatively inexpensive Kinect depth camera in recent years, RGB-D sensors have been applied to visual SLAM with good localization and mapping results.
A classical visual SLAM pipeline mainly comprises the following steps: (1) sensor data reading, chiefly the acquisition and preprocessing of image information; (2) visual odometry, which estimates the motion between adjacent images and builds a local map; (3) back-end optimization, which continually receives the poses measured by the visual odometry and refines them by nonlinear optimization to obtain a globally consistent trajectory and map; (4) loop closure detection, which judges whether the robot has returned to a previously visited place and, if so, passes the detection to the back end for processing; (5) mapping, which builds the corresponding map from the trajectory estimated after back-end optimization.
The visual odometry stage provides a preliminary estimate of the camera pose. It mainly comprises feature extraction, feature matching, mismatch rejection, and pose solving. The RANSAC (Random Sample Consensus) algorithm is generally applied to reject image mismatches; because the standard algorithm draws sample data randomly, the number of iterations grows sharply when the inlier ratio is low, making it inefficient. The original ORB (Oriented FAST and Rotated BRIEF) features tend to cluster, giving poor robustness during tracking. In addition, accumulated error in the visual odometry makes the built map inaccurate.
In view of the above problems, the present invention proposes a simultaneous localization and mapping method based on an improved image matching strategy. The method improves the efficiency of mismatch rejection, enhances the robustness of the tracking process, and obtains a more accurate map.
Summary of the invention
The present invention proposes a simultaneous localization and mapping method based on an improved image matching strategy. The method applies an improved ORB feature to each image region, performing Oriented FAST (Features from Accelerated Segment Test) corner detection and computing feature-point descriptors, and then carries out feature matching and screening with an improved RANSAC algorithm. The PnP (Perspective-n-Point) method is used to solve the robot's initial pose, from which a pose graph is generated; the pose is then refined with the Levenberg-Marquardt method and optimized with the g2o (General Graph Optimization) optimization library. Finally, the optimized poses are combined with the corresponding frames to generate a point cloud map.
Preparation:
First install a Linux operating system on the computer; the present invention uses Ubuntu 14.04. Then install the required packages: OpenCV 3.1.0 (an open-source image processing library), PCL (Point Cloud Library, the essential tool for point cloud processing), Eigen (a linear algebra library for matrix operations), g2o (a graph-based optimization library), and libfreenect2 (the open-source driver for the Kinect depth camera).
The main operational steps of the present invention are as follows:
Step 1, the acquisition of image data
The color image and depth image of the surrounding environment are acquired with a Microsoft Kinect camera; a program is written to implement the image acquisition function.
Step 2, feature extraction
Feature points are detected with the improved ORB feature; the steps of the improved algorithm are as follows:
1) Build the image pyramid with 8 levels and an inter-level scale factor of 1.2. Assign each level a number of feature points to extract; specifically, the count per level is computed by geometric-series summation.
2) Detect feature points on each pyramid level: divide each level's image into 14*20 regions, then run FAST corner detection in each region.
3) Organize the detected features in a quadtree; the number of quadtree nodes is determined by the number of features detected on each level within its divided image regions.
4) Examine each node after splitting; a node is no longer split if it contains exactly one feature point (keeping the best feature point inside each node).
5) Detection terminates when the number of nodes exceeds the preset total feature count (1000) or when every node contains only one feature point; otherwise splitting continues.
Step 3, feature matching and initial pose solving
The RANSAC algorithm is generally used to reject image mismatches; the relationship between the inlier ratio and the number of iterations is given by formula (1):
L = log(1 - p) / log(1 - γ^η)   (1)
where p is the confidence level, γ is the inlier ratio, η is the minimum number of feature points required to compute the model, and L is the number of iterations. The main steps of the improved RANSAC algorithm proposed by the present invention are as follows:
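Formula (1) is the standard RANSAC sample-count relation; a small sketch (with a hypothetical helper name) shows how sharply the iteration count L reacts to the inlier ratio γ, which is the inefficiency the text describes.

```python
import math

def ransac_iterations(p=0.99, gamma=0.5, eta=4):
    """Iterations L needed so that, with confidence p, at least one random
    sample of eta points is all-inliers, given inlier ratio gamma
    (the relation expressed by formula (1))."""
    return math.ceil(math.log(1 - p) / math.log(1 - gamma ** eta))

low_inliers = ransac_iterations(gamma=0.3)   # few inliers -> many iterations
high_inliers = ransac_iterations(gamma=0.8)  # many inliers -> few iterations
```

With η = 4 (the minimum for a homography), dropping γ from 0.8 to 0.3 raises L from single digits into the hundreds, motivating the pre-screening in steps 1)–3) below.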
1) Compute the feature descriptors, choosing the image points that record the environmental information around each feature point with a Gaussian sampling model. The coordinates (xi, yi) of the sampled points obey the Gaussian distribution N(0, S²/25), where S is the patch radius.
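The Gaussian sampling of descriptor test points in step 1) can be sketched as follows. This is a hypothetical reconstruction (the pair count of 256, the clamping policy, and all names are my assumptions; the variance S²/25 follows the common BRIEF convention):

```python
import random

def sample_pattern(n_pairs=256, patch_radius=15, seed=42):
    """Draw BRIEF-style test-point pairs whose coordinates follow
    N(0, (S/5)^2), clamped to the patch of radius S."""
    rng = random.Random(seed)
    sigma = patch_radius / 5.0

    def coord():
        # Clamp so every sampled point stays inside the patch.
        return max(-patch_radius, min(patch_radius, round(rng.gauss(0.0, sigma))))

    return [((coord(), coord()), (coord(), coord())) for _ in range(n_pairs)]

pattern = sample_pattern()
```

Sampling near the patch centre concentrates the binary tests on the pixels that carry the most stable local information.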
2) Compute the Euclidean distance between the descriptor of each feature point to be matched and the other descriptors, and record the smallest and second-smallest distances (distance represents similarity). The computation is given by formula (2):
L(p_i, p_j) = sqrt( Σ_k (Des_ik − Des_jk)² )   (2)
where p_i and p_j are two feature points, Des_ik and Des_jk are the k-th components of their descriptors, L(p_i, p_j) is the Euclidean distance between the descriptors to be matched, m indexes the descriptor pair being compared, and n is the total number of descriptor pairs.
3) Judge whether the ratio of the smallest to the second-smallest Euclidean distance from step 2) is below 0.7; the smaller the ratio, the greater the discrimination and the lower the probability of a mismatch. Feature points passing the test are retained as data samples; if the threshold condition is not met, the data point is not used when computing the homography matrix model.
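The distance computation and ratio test of steps 2) and 3) can be sketched together. This is an illustrative reconstruction, not code from the patent; the brute-force search, the toy float descriptors, and all names are my own.

```python
def match_with_ratio_test(desc_a, desc_b, ratio=0.7):
    """Brute-force descriptor matching with a ratio test: keep a match only
    if the nearest descriptor is clearly closer than the second nearest."""
    def dist(d1, d2):
        # Euclidean distance between descriptors, as in formula (2).
        return sum((x - y) ** 2 for x, y in zip(d1, d2)) ** 0.5

    matches = []
    for i, da in enumerate(desc_a):
        ranked = sorted((dist(da, db), j) for j, db in enumerate(desc_b))
        best, second = ranked[0], ranked[1]
        if second[0] > 0 and best[0] / second[0] < ratio:
            matches.append((i, best[1]))
    return matches

# Toy 2-D "descriptors" standing in for real 256-D ORB descriptors.
a = [(0.0, 0.0), (5.0, 5.0)]
b = [(0.1, 0.0), (5.0, 5.1), (9.0, 9.0)]
matches = match_with_ratio_test(a, b)
```

Only unambiguous matches survive, which is what lets the homography estimation in step 4) start from a cleaner sample set.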
4) After the raw data has been screened by step 3), compute the homography matrix between the two images to be matched, and use this model to reject the erroneous matches between them.
5) Solve the robot's initial pose with PnP, as given by formula (3):
s [x, y, 1]^T = [r1 r2 r3 r4; r5 r6 r7 r8; r9 r10 r11 r12] [X, Y, Z, 1]^T   (3)
where (x, y) is an image coordinate point, (X, Y, Z) is a space coordinate point, and r_t (t = 1, 2, 3, …, 11, 12) are the variables to be solved. Since r has 12 dimensions in total, at least 6 point pairs are needed for the computation.
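The 12-unknown linear system behind formula (3) can be illustrated by building the DLT equations, which is the standard formulation this step relies on; the helper name and the made-up correspondences below are my own, and solving the system is omitted for brevity.

```python
def dlt_rows(point3d, point2d):
    """Two rows of the homogeneous DLT system A r = 0 for one 3D-2D
    correspondence, where r = (r1..r12) stacks the 3x4 projection
    matrix row-wise (the linear form behind formula (3))."""
    X, Y, Z = point3d
    x, y = point2d
    row_x = [X, Y, Z, 1, 0, 0, 0, 0, -x * X, -x * Y, -x * Z, -x]
    row_y = [0, 0, 0, 0, X, Y, Z, 1, -y * X, -y * Y, -y * Z, -y]
    return [row_x, row_y]

# Six correspondences give 12 equations for the 12 unknowns r1..r12,
# matching the "at least 6 point pairs" requirement in the text.
pts3d = [(0, 0, 1), (1, 0, 1), (0, 1, 1), (1, 1, 2), (2, 1, 2), (1, 2, 3)]
pts2d = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (0.5, 0.5), (1.0, 0.5), (0.3, 0.7)]
A = [row for p3, p2 in zip(pts3d, pts2d) for row in dlt_rows(p3, p2)]
```

Each correspondence contributes two rows, so six pairs exactly determine the 12 unknowns (up to scale); in practice more pairs are used with least squares.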
Step 4: Pose optimization and mapping
In SLAM it is common practice to first obtain the robot's initial pose with the PnP method and then construct a nonlinear least-squares problem to adjust the initial value. Consider w three-dimensional points P and their projections p; the goal is to obtain the camera pose R, t (whose Lie algebra form is denoted ξ). Suppose a space point has coordinates P_a = [X_a, Y_a, Z_a]^T, the camera intrinsic matrix is K, the pixel coordinate of its projection is u_a = (u_a, v_a)^T, and s_a is the scale factor. The relationship between pixel position and space point position is given by formula (4):
s_a u_a = K exp(ξ^) P_a   (4)
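The pixel/space-point relation of formula (4) can be sketched as a plain projection function. This is an illustrative sketch, not code from the patent: it assumes an intrinsics tuple (fx, fy, cx, cy) and works with an explicit (R, t) rather than the Lie-algebra form ξ.

```python
def project(point, R, t, K):
    """Project a 3D point with pose (R, t) and intrinsics K:
    u = (1/Z') K (R P + t), i.e. the relation in formula (4),
    where the scale factor s is the camera-frame depth Z'."""
    X, Y, Z = point
    # Camera-frame coordinates P' = R P + t
    Pc = [sum(R[i][j] * (X, Y, Z)[j] for j in range(3)) + t[i] for i in range(3)]
    fx, fy, cx, cy = K
    s = Pc[2]  # depth acts as the scale factor s_a
    return (fx * Pc[0] / s + cx, fy * Pc[1] / s + cy)

identity = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
# A point on the optical axis projects to the principal point (cx, cy).
u, v = project((0.0, 0.0, 2.0), identity, [0.0, 0.0, 0.0],
               (525.0, 525.0, 319.5, 239.5))
```

The reprojection error of formula (5) is simply the difference between this predicted pixel and the measured one, summed over all observed points.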
The specific pose optimization and mapping steps are as follows:
1) Determine the error term: project the space points with the currently estimated pose to obtain their reprojected positions in the image (which do not coincide with the measured positions). This error is built into a nonlinear least-squares problem, as shown in formula (5):
ξ* = argmin_ξ (1/2) Σ_{i=1}^{b} ‖ u_i − (1/s_i) K exp(ξ^) P_i ‖²₂   (5)
where the summand is the reprojection error, b is the total number of space points, the operator ‖·‖²₂ denotes the squared 2-norm, and argmin denotes the argument at which the function attains its minimum.
2) Establish the pose graph: the present invention establishes the pose graph with g2o, defining the variables to be optimized as vertices and the error terms as edges.
3) Take the derivative of the error term with respect to the pose:
where f_x, f_y describe the camera focal lengths, X', Y', Z' are the 3D coordinates of the space point in the camera frame, and ξ is the pose in g2o.
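The derivative itself is not reproduced in the text above (it appeared as an image in the original patent). For reference, the standard 2×6 Jacobian of the reprojection error with respect to a pose perturbation δξ, in the translation-first convention used by g2o, is:

```latex
\frac{\partial e}{\partial \delta\xi} = -
\begin{bmatrix}
\dfrac{f_x}{Z'} & 0 & -\dfrac{f_x X'}{Z'^2} & -\dfrac{f_x X' Y'}{Z'^2} & f_x + \dfrac{f_x X'^2}{Z'^2} & -\dfrac{f_x Y'}{Z'} \\[2ex]
0 & \dfrac{f_y}{Z'} & -\dfrac{f_y Y'}{Z'^2} & -f_y - \dfrac{f_y Y'^2}{Z'^2} & \dfrac{f_y X' Y'}{Z'^2} & \dfrac{f_y X'}{Z'}
\end{bmatrix}
```

This is the textbook result for the pinhole projection of formula (4); whether the patent's missing figure used exactly this sign and column convention is an assumption on my part.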
4) Take the derivative of the error term with respect to the space point:
where P denotes the space point and R the rotation matrix.
5) Optimize the poses and space points with the pose graph established in the g2o graph model.
6) Generate the point cloud map from the pinhole camera model and the optimized poses. Let the pixel coordinate be [u, v, 1], the 3D point coordinate [X, Y, Z], the depth scale factor D, and the camera optical centre (c_x, c_y); the relationship between the two follows the camera imaging model, as shown in formula (8):
Z = d / D,  X = (u − c_x) Z / f_x,  Y = (v − c_y) Z / f_y   (8)
where d is the raw depth measurement.
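The back-projection of formula (8) can be sketched as follows. This is an illustrative sketch rather than the patent's code; the depth scale of 1000 (raw sensor units to metres) and the intrinsics values are assumptions typical of Kinect-style sensors, not stated in the text.

```python
def pixel_to_point(u, v, depth_raw, K, depth_scale=1000.0):
    """Back-project a depth pixel to a 3D point with the pinhole model
    (the inverse of projection; cf. formula (8))."""
    fx, fy, cx, cy = K
    Z = depth_raw / depth_scale        # raw depth to metric depth
    X = (u - cx) * Z / fx
    Y = (v - cy) * Z / fy
    return (X, Y, Z)

K = (525.0, 525.0, 319.5, 239.5)
# The principal-point pixel back-projects onto the optical axis.
p = pixel_to_point(319.5, 239.5, 2000, K)
```

Applying this to every valid depth pixel of a keyframe, then transforming the points by that frame's optimized pose, yields the point cloud map described in the text.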
Compared with the original method, the advantages of the present invention are as follows:
(1) The improved ORB features enhance robustness during pose tracking.
(2) The improved RANSAC algorithm used in mismatch rejection improves rejection efficiency, especially when the inlier ratio is low.
(3) Optimizing the pose with the graph optimization model yields a more accurate pose and map.
Description of the drawings
Fig. 1 Flow chart of the simultaneous localization and mapping algorithm based on the improved image matching strategy
Fig. 2 Flow chart of the improved ORB feature algorithm
Fig. 3 Flow chart of the improved RANSAC algorithm
Fig. 4 Feature point distribution maps
Fig. 5 Effect of the improved RANSAC algorithm
Fig. 6 Feature point re-projection results
Fig. 7 Three-dimensional reconstruction results
Specific implementation mode
The present invention addresses the poor robustness of original ORB feature tracking, the high RANSAC iteration count during feature matching when the inlier ratio is low, and the accumulation of drift during localization. The improved ORB features and improved RANSAC algorithm, combined with the g2o optimization library, complete the localization and mapping task. The overall flow of the present invention is shown in Figure 1 of the accompanying drawings; the specific implementation is divided into the following steps:
Step 1: Install the open-source Kinect driver libfreenect2 and write a program to acquire the color and depth images of the surrounding environment through the Kinect.
Step 2: Extract features from the images with the improved ORB features; see Figure 2 of the accompanying drawings.
Step 3: Match features between images. Mismatches arise during initial matching; the improved RANSAC algorithm proposed by the present invention rejects them and accelerates matching. The algorithm flow is shown in Figure 3 of the accompanying drawings.
Step 4: Solve the initial pose and generate 3D space points from the corresponding depth information; then take the obtained initial pose and the space points as vertices, define the error terms as edges, and build the corresponding pose graph.
Step 5: Optimize the poses and space points with the g2o graph optimization library, updating the poses and the space coordinate points.
Step 6: Finally, build the point cloud map from the updated poses combined with the 3D information of the space points.
An experimental embodiment of the present invention is given below.
(1) First install the Kinect2 driver libfreenect2 under the Linux operating system. Following the driver's documentation, write a program to display and save the depth and color images.
(2) Acquire the image data. This experiment uses a Microsoft Kinect2 depth camera, whose color image resolution is 1920*1080 and depth image resolution is 512*424; its detection range is 0.5-4.5 m, with a field of view of 70 degrees horizontally and 60 degrees vertically.
(3) Extract features from the collected images with the improved ORB features proposed in the present invention. Figure 4 of the accompanying drawings compares features extracted by the improved ORB and the original ORB: the left side shows the feature distribution after the improvement and the right side the original distribution; the dots denote feature points.
(4) Match features between consecutive frames; feature matching follows feature extraction to estimate the inter-frame motion. The rejection results are shown in Figure 5, where the left and right images are adjacent frames captured by the camera, circles denote the extracted feature points, and lines denote the matched pairs remaining after mismatch rejection. Recording the execution times of standard RANSAC and improved RANSAC, the standard algorithm took 0.003888 s and the improved algorithm 0.000643 s, a speed-up of nearly 6 times. The re-projection results are shown in Figure 6, with the original method on the left and the improved algorithm on the right; the improved algorithm yields visibly better re-projection.
(5) The motion estimation yields the homography matrix; the concrete result in this example is:
(6) Establish the graph optimization model according to the mathematical model of SLAM. In g2o, poses are represented by VertexSE3Expmap, space point positions by VertexSBAPointXYZ, and the projection equation edges by EdgeProjectXYZ2UV. Then, following the derivatives of the error term with respect to the pose and the space point given in the invention steps, the Jacobian matrices obtained by the derivation are implemented in the linearizeOplus function of EdgeProjectXYZ2UV.
(7) For two specific frames in this example, combined with their depth images, the results before and after optimization are given.
The rotation matrix and translation vector before this optimization are respectively:
t = [-0.01038603, -0.00524934, 0.01139507]
Optimization then proceeds according to the graph optimization theory in step (6); this optimization took 0.004514 s and converged after 11 iterations. The transformation matrix after optimization is:
(8) Using the graph model built for graph optimization, the poses and space points are solved iteratively with the Gauss-Newton method, and the map of the surrounding environment is built from the optimized poses, as shown in Figure 7.
The experimental results show that the method of the invention can estimate the robot's pose, improves the mismatch rejection efficiency of the feature matching stage, increases the robustness of pose tracking, and can build a globally consistent environment map.

Claims (2)

1. A simultaneous localization and mapping method based on an improved image matching strategy, characterized by comprising the following steps:
Step 1, the acquisition of image data
The color image and depth image of the surrounding environment are acquired with a Microsoft Kinect camera; a program is written to implement the image acquisition function;
Step 2, feature extraction
Feature points are detected with the improved ORB feature; the steps of the improved algorithm are as follows:
1) build the image pyramid with 8 levels and an inter-level scale factor of 1.2; assign each level a number of feature points to extract, the count per level being computed by geometric-series summation;
2) detect feature points on each pyramid level: divide each level's image into 14*20 regions, then run FAST corner detection in each region;
3) organize the detected features in a quadtree, the number of quadtree nodes being determined by the number of features detected on each level within its divided image regions;
4) examine each node after splitting; a node is no longer split if it contains exactly one feature point (keeping the best feature point inside each node);
5) detection terminates when the number of nodes exceeds the preset total feature count of 1000 or when every node contains only one feature point; otherwise splitting continues;
Step 3, feature matching and initial pose solving
The RANSAC algorithm is generally used to reject image mismatches; the relationship between the inlier ratio and the number of iterations is given by formula (1):
L = log(1 - p) / log(1 - γ^η)   (1)
where p is the confidence level, γ is the inlier ratio, η is the minimum number of feature points required to compute the model, and L is the number of iterations;
The steps of the improved RANSAC algorithm are as follows:
1) compute the feature descriptors, choosing the image points that record the environmental information around each feature point with a Gaussian sampling model, the coordinates (xi, yi) of the sampled points obeying the Gaussian distribution N(0, S²/25), where S is the patch radius;
2) compute the Euclidean distance between the descriptor of each feature point to be matched and the other descriptors, and record the smallest and second-smallest distances, as given by formula (2):
L(p_i, p_j) = sqrt( Σ_k (Des_ik − Des_jk)² )   (2)
where p_i and p_j are two feature points, Des_ik and Des_jk are the k-th components of their descriptors, L(p_i, p_j) is the Euclidean distance between the descriptors to be matched, m indexes the descriptor pair being compared, and n is the total number of descriptor pairs;
3) judge whether the ratio of the smallest to the second-smallest Euclidean distance from step 2) is below 0.7; if so, the feature points are retained as data samples; if the threshold condition is not met, the data point is not used when computing the homography matrix model;
4) after the raw data has been screened by step 3), compute the homography matrix between the two images to be matched, and use this model to reject the erroneous matches between them;
5) solve the robot's initial pose with PnP, as given by formula (3):
s [x, y, 1]^T = [r1 r2 r3 r4; r5 r6 r7 r8; r9 r10 r11 r12] [X, Y, Z, 1]^T   (3)
where (x, y) is an image coordinate point, (X, Y, Z) is a space coordinate point, and r_t (t = 1, 2, 3, …, 11, 12) are the variables to be solved; since r has 12 dimensions in total, at least 6 point pairs are needed for the computation;
Step 4: pose optimization and mapping
First obtain the robot's initial pose with the PnP method, then construct a nonlinear least-squares problem to adjust the initial value; consider w three-dimensional points P and their projections p, the goal being to obtain the camera pose R, t, whose Lie algebra form is denoted ξ; suppose a space point has coordinates P_a = [X_a, Y_a, Z_a]^T, the camera intrinsic matrix is K, the pixel coordinate of the projection is u_a = (u_a, v_a)^T, and s_a is the scale factor; the relationship between pixel position and space point position is given by formula (4):
s_a u_a = K exp(ξ^) P_a   (4)
2. The method according to claim 1, characterized in that the specific pose optimization and mapping steps are as follows:
1) determine the error term: project the space points with the currently estimated pose to obtain their reprojected positions in the image; build this error into a nonlinear least-squares problem, as shown in formula (5):
ξ* = argmin_ξ (1/2) Σ_{i=1}^{b} ‖ u_i − (1/s_i) K exp(ξ^) P_i ‖²₂   (5)
where the summand is the reprojection error, b is the total number of space points, the operator ‖·‖²₂ denotes the squared 2-norm, and argmin denotes the argument at which the function attains its minimum;
2) establish the pose graph: the pose graph is established with g2o, the variables to be optimized being defined as vertices and the error terms as edges;
3) take the derivative of the error term with respect to the pose:
where f_x, f_y describe the camera focal lengths, X', Y', Z' are the 3D coordinates of the space point in the camera frame, and ξ is the pose in g2o;
4) take the derivative of the error term with respect to the space point:
where P denotes the space point and R the rotation matrix;
5) optimize the poses and space points with the pose graph established in the g2o graph model;
6) generate the point cloud map from the pinhole camera model and the optimized poses: let the pixel coordinate be [u, v, 1], the 3D point coordinate [X, Y, Z], the depth scale factor D, and the camera optical centre (c_x, c_y); the relationship between the two follows the camera imaging model, as shown in formula (8):
Z = d / D,  X = (u − c_x) Z / f_x,  Y = (v − c_y) Z / f_y   (8)
CN201810184701.3A 2018-03-07 2018-03-07 A simultaneous localization and mapping method based on an improved image matching strategy Withdrawn CN108416385A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810184701.3A CN108416385A (en) 2018-03-07 2018-03-07 A simultaneous localization and mapping method based on an improved image matching strategy


Publications (1)

Publication Number Publication Date
CN108416385A true CN108416385A (en) 2018-08-17

Family

ID=63129981

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810184701.3A Withdrawn CN108416385A (en) 2018-03-07 2018-03-07 A simultaneous localization and mapping method based on an improved image matching strategy

Country Status (1)

Country Link
CN (1) CN108416385A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109509211A * 2018-09-28 2019-03-22 Peking University Feature point extraction and matching method and system in simultaneous localization and mapping technology
CN110414458A * 2019-08-01 2019-11-05 北京主线科技有限公司 Positioning method and device based on matching of planar labels and templates
CN110706280A * 2018-09-28 2020-01-17 成都家有为力机器人技术有限公司 Lightweight semantic-driven sparse reconstruction method based on 2D-SLAM
CN111098850A * 2018-10-25 2020-05-05 北京初速度科技有限公司 Automatic parking assistance system and automatic parking method
CN111540016A * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN111679661A * 2019-02-25 2020-09-18 北京奇虎科技有限公司 Semantic map construction method based on depth camera, and sweeping robot
CN115115708A * 2022-08-22 2022-09-27 荣耀终端有限公司 Image pose calculation method and system


Similar Documents

Publication Publication Date Title
CN108416385A (en) A simultaneous localization and mapping method based on an improved image matching strategy
US10977827B2 (en) Multiview estimation of 6D pose
US11244189B2 (en) Systems and methods for extracting information about objects from scene information
Park et al. Elastic lidar fusion: Dense map-centric continuous-time slam
US10832079B2 (en) Determining an architectural layout
US9483703B2 (en) Online coupled camera pose estimation and dense reconstruction from video
JP6011102B2 (en) Object posture estimation method
US7616807B2 (en) System and method for using texture landmarks for improved markerless tracking in augmented reality applications
Liu et al. A systematic approach for 2D-image to 3D-range registration in urban environments
Liu et al. Indoor localization and visualization using a human-operated backpack system
CN111462207A (en) RGB-D simultaneous positioning and map creation method integrating direct method and feature method
CN113537208A (en) Visual positioning method and system based on semantic ORB-SLAM technology
CN108776976B (en) Method, system and storage medium for simultaneously positioning and establishing image
EP2904545A2 (en) Systems and methods for relating images to each other by determining transforms without using image acquisition metadata
Pintore et al. Recovering 3D existing-conditions of indoor structures from spherical images
CN109613974B (en) AR home experience method in large scene
Alcantarilla et al. How to localize humanoids with a single camera?
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
Hou et al. A high-quality voxel 3D reconstruction system for large scenes based on the branch and bound method
CN111829522B (en) Instant positioning and map construction method, computer equipment and device
CN105339981B (en) Method for using one group of primitive registration data
Coorg Pose imagery and automated three-dimensional modeling of urban environments
Zaheer et al. Shape from angle regularity
Arevalo et al. Improving piecewise linear registration of high-resolution satellite images through mesh optimization
Pears et al. Mobile robot visual navigation using multiple features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20180817