CN108537848A - Two-stage pose optimization estimation method for indoor scene reconstruction - Google Patents

Two-stage pose optimization estimation method for indoor scene reconstruction

Info

Publication number
CN108537848A
Authority
CN
China
Prior art keywords
pose
camera
point
error
camera pose
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810352504.8A
Other languages
Chinese (zh)
Other versions
CN108537848B (en)
Inventor
孔德慧
李文超
王立春
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN201810352504.8A priority Critical patent/CN108537848B/en
Publication of CN108537848A publication Critical patent/CN108537848A/en
Application granted granted Critical
Publication of CN108537848B publication Critical patent/CN108537848B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/80 - Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention discloses a two-stage pose optimization estimation method for indoor scene reconstruction. On the basis of the feature-point method, it fuses the photometric information of the images to solve for the camera pose; at the same time, taking the estimated camera pose as an initial value, it further improves the accuracy of the camera pose solution through a local-to-global two-stage pose optimization strategy.

Description

Two-stage pose optimization estimation method for indoor scene reconstruction
Technical field
The invention belongs to the field of computer vision and studies a two-stage pose optimization estimation method for indoor scene reconstruction.
Background art
Simultaneous Localization And Mapping (SLAM) is a current focus and difficulty of research in robotics, and one of the key technologies for mobile robots to truly achieve autonomous navigation. With the development of industrial informatization, SLAM technology is ever more widely applied, playing a highly important role in indoor robots, augmented reality, drones, autonomous driving, and many other areas.
The core problem of SLAM is estimating the camera pose; through the estimated camera pose, the localization of the camera and the reconstruction of the scene are realized.
Existing camera pose estimation methods fall broadly into two classes: feature-point methods, which solve using feature points, and direct methods, which do not use feature points. There are three main ways to solve the camera pose using feature points:
(1) The 2D-2D epipolar geometry method. Suppose a pair of matched feature points has been obtained from the two images; this pair of feature points then satisfies the epipolar geometry constraint.
As shown in Figure 1, X is a point in three-dimensional space, and x and x' are the imaged points of X in the two frames. According to the epipolar geometry constraint:
x'^T K^-T E K^-1 x = 0   (1)
where K is the camera intrinsic matrix and E is the essential matrix. In the formula above only E is unknown; from the multiple pairs of matched feature points obtained earlier, the essential matrix E can be solved, and the camera pose can then be obtained by applying an SVD decomposition to E.
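The SVD-based recovery of the camera pose from the essential matrix mentioned above can be sketched as follows. This is a minimal illustration with synthetic data, not the patent's implementation; the function and variable names are our own, and in practice the four candidate poses are disambiguated by a cheirality (positive-depth) check, which is omitted here.

```python
import numpy as np

def skew(v):
    """Cross-product (skew-symmetric) matrix [v]_x."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def decompose_essential(E):
    """Standard SVD decomposition of an essential matrix into the four
    candidate (R, t) pairs: E = U diag(1,1,0) V^T, R = U W V^T or U W^T V^T,
    t = +/- U[:, 2] (translation recovered only up to scale)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:       # enforce proper rotations
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    R1, R2, t = U @ W @ Vt, U @ W.T @ Vt, U[:, 2]
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

# Synthetic check: build E = [t]_x R from a known pose and recover it.
a = 0.1
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a), np.cos(a), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, 0.0, 0.0])   # unit norm, matching |U[:, 2]| = 1
E = skew(t_true) @ R_true
candidates = decompose_essential(E)
errs = [np.linalg.norm(R - R_true) + np.linalg.norm(t - t_true)
        for R, t in candidates]      # one candidate is the true pose
```

In practice the remaining ambiguity is resolved by triangulating a point with each candidate and keeping the pose that puts it in front of both cameras.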
(2) The 2D-3D method. During three-dimensional reconstruction, the depth information of the feature points can be obtained with certain algorithms, or acquired directly with a depth sensor. Using the known depth information, the two-dimensional feature points of the first frame image can be converted into three-dimensional coordinates in the real world. These three-dimensional coordinates are projected onto the second frame image, and the camera pose can be obtained by minimizing the reprojection error. The objective function is:
T* = argmin_T Σ_{i=1}^{N} || x'_i - π(T X_i) ||²   (2)
where T is the camera pose finally solved; π(·) denotes the projection function; N is the number of matched feature points; X_i denotes the three-dimensional point coordinate corresponding to the feature point in the first frame image; and x'_i denotes the matched feature-point coordinate on the second frame image. This method needs at least three pairs of matched points to solve the camera pose, plus an additional pair of matched points for verification.
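The 2D-3D objective (minimizing reprojection error over the pose) can be sketched with a generic nonlinear least-squares solver. This is a toy example with synthetic points, assuming an axis-angle pose parameterization; the names `rodrigues` and `project` are our own, not from the patent.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    """Axis-angle vector w -> 3x3 rotation matrix (Rodrigues' formula)."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0.0, -k[2], k[1]],
                   [k[2], 0.0, -k[0]],
                   [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * Kx + (1.0 - np.cos(th)) * (Kx @ Kx)

def project(Kc, pose6, X):
    """pi(T X): rigidly transform 3-D points X (N,3) and project to pixels (N,2)."""
    R, t = rodrigues(pose6[:3]), pose6[3:]
    Xc = X @ R.T + t
    uv = Xc @ Kc.T
    return uv[:, :2] / uv[:, 2:3]

rng = np.random.default_rng(0)
Kc = np.array([[500.0, 0.0, 320.0],
               [0.0, 500.0, 240.0],
               [0.0, 0.0, 1.0]])                  # camera intrinsic matrix K
X = rng.uniform([-1, -1, 4], [1, 1, 8], (12, 3))  # 3-D points X_i in front of camera
pose_true = np.array([0.03, -0.02, 0.01, 0.1, -0.05, 0.2])
x_obs = project(Kc, pose_true, X)                 # matched 2-D observations x'_i

# Minimize the total reprojection error sum_i ||x'_i - pi(T X_i)||^2.
res = least_squares(lambda p: (project(Kc, p, X) - x_obs).ravel(), np.zeros(6))
pose_est = res.x
```

With noise-free synthetic observations the solver recovers the true pose to machine precision; real pipelines add a robust loss and outlier rejection.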
(3) The 3D-3D method. If the two-dimensional coordinates of every pair of matched feature points are converted into three-dimensional space coordinates, a set of 3D-3D coordinate correspondences is obtained, which can be solved with the ICP (Iterative Closest Point) algorithm. The objective function of the solution is:
T* = argmin_T Σ_{i=1}^{N} || X'_i - T X_i ||²   (3)
where X_i denotes the three-dimensional point coordinate corresponding to the feature point in the first frame image, and X'_i denotes the three-dimensional coordinate of the corresponding feature point on the second frame image.
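For the 3D-3D case with known correspondences, the objective has a closed-form SVD solution (the Kabsch alignment used inside each ICP iteration). A sketch with synthetic data; the helper names are ours, not the patent's.

```python
import numpy as np

def align_3d_3d(X, Xp):
    """Closed-form argmin over (R, t) of sum_i ||X'_i - (R X_i + t)||^2
    for known correspondences (Kabsch method, one ICP alignment step)."""
    cX, cXp = X.mean(axis=0), Xp.mean(axis=0)
    H = (X - cX).T @ (Xp - cXp)                # 3x3 cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T    # optimal rotation
    t = cXp - R @ cX                           # optimal translation
    return R, t

# Synthetic check: transform a point cloud by a known pose and recover it.
rng = np.random.default_rng(1)
a, b = 0.4, 0.2
Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
               [np.sin(a), np.cos(a), 0.0],
               [0.0, 0.0, 1.0]])
Rx = np.array([[1.0, 0.0, 0.0],
               [0.0, np.cos(b), -np.sin(b)],
               [0.0, np.sin(b), np.cos(b)]])
R_true, t_true = Rz @ Rx, np.array([0.5, -0.3, 1.0])
X = rng.normal(size=(20, 3))
Xp = X @ R_true.T + t_true
R_est, t_est = align_3d_3d(X, Xp)
```

Full ICP alternates this alignment step with re-estimating correspondences by nearest neighbors, which is why a good initial pose matters there too.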
The main idea of solving the camera pose with the direct method is not to extract feature points, but to compute the camera pose directly from the photometric difference between the two frames. Suppose there is a pixel p1 on the first frame image; according to the current camera pose, the corresponding pixel p2 is found on the second frame image. If the camera pose is not good enough, the appearances of p1 and p2 will differ obviously. To reduce this difference, the camera pose is optimized to find a p2 more similar to p1. The pose is then optimized by minimizing the photometric error, with the objective function:
T* = argmin_T Σ_{j=1}^{m} || I_1(y_j) - I_2(π(T Y_j)) ||²   (4)
where m denotes the total number of pixels in the first frame image; I_1(·) and I_2(·) take the brightness values of the first and second frame images respectively; y_j denotes a pixel position in the first frame image; Y_j denotes the three-dimensional point coordinate corresponding to that pixel in the first frame image; π(·) denotes the projection function; and T is the desired camera pose.
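The direct-method idea, finding the motion that makes the warped second image agree photometrically with the first, can be illustrated in a deliberately simplified setting where the "camera motion" is a pure integer pixel translation instead of a full pose T (an assumption made only for this sketch):

```python
import numpy as np

rng = np.random.default_rng(2)
I1 = rng.uniform(0.0, 1.0, size=(40, 40))   # first frame (random texture)
shift_true = (3, -2)                         # stand-in for the camera motion
I2 = np.roll(I1, shift_true, axis=(0, 1))    # second frame

def photometric_error(shift):
    """sum_j ||I1(y_j) - I2(warp(y_j))||^2 with warp = integer translation."""
    warped = np.roll(I2, (-shift[0], -shift[1]), axis=(0, 1))
    return float(np.sum((I1 - warped) ** 2))

# Brute-force search over candidate motions (stand-in for optimizing T):
# the photometric error is exactly zero at the true motion.
candidates = [(dy, dx) for dy in range(-5, 6) for dx in range(-5, 6)]
best = min(candidates, key=photometric_error)
```

In a real direct method the search is replaced by gradient-based optimization over T, which is why the image-gradient information discussed below matters.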
Through either the feature-point method or the direct method, an estimated camera pose can be obtained. However, for reasons such as sensor noise, the estimated camera pose still contains accumulated error; therefore, an optimization method must be applied to the estimated pose to eliminate the influence of the accumulated error. Currently common camera pose estimation methods perform loop-closure detection on keyframes to build a pose graph, and then optimize all camera poses based on the pose graph.
Solving the camera pose from feature-point correspondences depends for its accuracy on the number of matched feature points and the correctness of the feature matching. In many cases, not enough feature points can be extracted for the computation, and using only feature points ignores a large amount of other useful information in the image. The direct method does not need feature points and can therefore still work steadily when feature points are scarce, but its robustness is poor. As for pose optimization, using only a pose graph to optimize the camera poses cannot completely eliminate the accumulated error. Therefore, the present invention proposes a two-stage pose optimization estimation method for indoor scene reconstruction, obtaining accurate camera poses and improving the accuracy of the camera pose solution.
Summary of the invention
The present invention provides a two-stage pose optimization estimation method for indoor scene reconstruction. On the basis of the feature-point method, it fuses the photometric information of the images to solve for the camera pose; at the same time, taking the estimated camera pose as an initial value, it further improves the accuracy of the camera pose solution through a local-to-global two-stage pose optimization strategy.
Description of the drawings
Fig. 1: the epipolar geometry constraint;
Fig. 2: flow chart of the present invention;
Fig. 3: schematic diagram of the reprojection error;
Fig. 4: schematic diagram of the photometric error;
Fig. 5: local covisibility relationships of keyframes.
Detailed description of the embodiments
As shown in Fig. 2, the present invention provides a two-stage pose optimization estimation method for indoor scene reconstruction, comprising the following steps: camera pose solving, local optimization of the camera pose, and global optimization of the camera pose. The specific implementation of each step is explained below.
Step 1. Data preparation
The present invention uses aligned RGB-D image sequences and the camera intrinsic matrix K as input. Considering that the camera exhibits distortion when imaging, the RGB-D images are undistorted according to the camera imaging model; the distortions considered by the present invention are mainly radial distortion and tangential distortion.
In the feature extraction stage, the present invention extracts ORB features from the color images, and matches the extracted features using a KNN algorithm. To obtain reliable matched points, the present invention filters the feature matches using a minimum-threshold + RANSAC algorithm, thereby guaranteeing reliable matched feature points. The present invention denotes all matched feature points as the set F, and the number of matches in F by N.
To obtain the photometric information of the image without noticeably increasing the amount of computation, the present invention first extracts the pixels with obvious gradient change in the image, and then extracts their photometric information as the input of subsequent computation. Suppose the image is I; the gradients of a pixel (u, v) of I along the X axis and Y axis are respectively defined as:
g_x(u, v) = I(u+1, v) - I(u, v),  g_y(u, v) = I(u, v+1) - I(u, v)   (5)
Then, when
sqrt( g_x(u, v)² + g_y(u, v)² ) > δ   (6)
the pixel is considered a point of obvious gradient change. The present invention denotes all pixels with obvious gradient change as the set L, and the number of pixels in L by M.
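The selection of the gradient-obvious pixels (the set L) can be sketched as below. The forward-difference stencil and the threshold value are our assumptions; the patent does not fix a particular discretization of the image gradient here.

```python
import numpy as np

def select_gradient_pixels(I, delta):
    """Return the pixels whose gradient magnitude exceeds delta (the set L).
    Forward differences are one plausible discretization of the gradients
    along X and Y; the exact stencil is an assumption, not from the patent."""
    gx = np.zeros_like(I)
    gy = np.zeros_like(I)
    gx[:, :-1] = I[:, 1:] - I[:, :-1]   # gradient along X
    gy[:-1, :] = I[1:, :] - I[:-1, :]   # gradient along Y
    mag = np.sqrt(gx ** 2 + gy ** 2)
    ys, xs = np.nonzero(mag > delta)
    return list(zip(ys.tolist(), xs.tolist()))   # M = len(result)

# A flat image with one bright vertical edge: only the edge pixels qualify.
I = np.zeros((5, 6))
I[:, 3:] = 1.0
L = select_gradient_pixels(I, 0.5)   # [(0, 2), (1, 2), (2, 2), (3, 2), (4, 2)]
```

Restricting the photometric term to these pixels is what keeps the later optimization cheap while retaining the image regions that actually constrain the pose.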
Step 2. Camera pose solving
The camera pose is exactly the transformation from the camera coordinate system at each viewpoint to the world coordinate system. The present invention selects the camera coordinate system of the first frame image as the world coordinate system. To solve the camera pose, the present invention uses the feature-point correspondences and the photometric information of adjacent frames.
To avoid falling into a local optimum during the optimization, a good initial pose needs to be provided. The present invention estimates the initial pose of the camera for the current frame through a camera motion model. The motion model of the camera is defined as:
T = speed * T_pre   (7)
where speed denotes the speed of the camera motion, T_pre denotes the camera pose of the previous frame, and * denotes the multiplication of two matrices. Through formula (7), a rough camera pose is estimated, which facilitates the subsequent computation.
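The prediction of formula (7) can be sketched with 4x4 homogeneous pose matrices. Taking speed as the relative transform between the last two frames, speed = T_pre * T_prepre^-1 (a constant-velocity assumption), is our interpretation; the patent only states that speed is the camera's motion velocity.

```python
import numpy as np

def predict_pose(T_pre, T_prepre):
    """Formula (7): T = speed * T_pre, with the 'speed' transform estimated
    from the last two camera poses (constant-velocity assumption)."""
    speed = T_pre @ np.linalg.inv(T_prepre)
    return speed @ T_pre

def make_pose(angle, t):
    """Helper: 4x4 pose from a rotation about Z and a translation."""
    c, s = np.cos(angle), np.sin(angle)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = t
    return T

# If the camera repeats the same per-frame motion V, the prediction is V @ V.
V = make_pose(0.1, [0.2, 0.0, 0.0])   # per-frame motion
T0, T1 = np.eye(4), V                 # poses of the last two frames
T_pred = predict_pose(T1, T0)         # predicted rough pose of the current frame
```

This rough pose only seeds the nonlinear refinement below; its accuracy matters mainly for keeping the optimizer inside the right basin of convergence.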
For the feature-point information of the current frame and the previous frame, the present invention computes with the reprojection error. The principle of the reprojection error between two frames is shown in Fig. 3:
For a pair of matched feature points p1 and p2, the spatial point corresponding to p1 is P, and the point obtained after projecting P onto the second frame image is p3. If the camera pose is accurate, p2 and p3 should be the same point. If the camera pose is inaccurate, there is an error e between the projected point p3 and the true position p2; the camera pose is continuously optimized to minimize the error e, thereby obtaining the optimal camera pose.
As described above, the reprojection error between a pair of feature points is first defined:
e_i = p_i - (1/Z_i) K T P_i   (8)
where p_i denotes the true pixel coordinate of the feature point in the second frame image, P_i denotes the three-dimensional coordinate of the feature point in the world coordinate system, T denotes the camera pose to be solved, K denotes the camera intrinsic matrix, and Z_i denotes the Z-axis coordinate of P_i after projection.
Next, considering all matched feature-point information in the two frame images, the total error function is obtained as:
E_r = Σ_{i=1}^{N} || p_i - (1/Z_i) K T P_i ||²   (9)
where N denotes the number of all matched feature points in the two frames.
For the photometric information, the present invention considers the photometric error between the two frames. The principle of the photometric error is shown in Fig. 4:
Suppose a pixel v1 in the previous frame image whose corresponding spatial point is V. The pixel position obtained by projecting V onto the second frame image using the camera pose is v2. If the camera pose is accurate, the photometric values of v1 and v2 should be identical and the error is 0. If the camera pose is inaccurate, there is a photometric error between the two. By optimizing the camera pose so that the error between the two is minimized, the optimal camera pose can be obtained.
As described above, the photometric error of a pixel can be defined as:
e_j = I_1(v_j) - I_2((1/Z_j) K T V_j)   (10)
where v_j denotes the pixel coordinate in the first frame image, V_j denotes the spatial point coordinate of v_j, T denotes the camera pose to be solved, K denotes the camera intrinsic matrix, and Z_j denotes the Z-axis coordinate of V_j after projection; I_1(·) and I_2(·) take the photometric values of the corresponding pixels of the previous frame image and the current frame image.
The sum of the photometric error functions of multiple pixel pairs is:
E_p = Σ_{j=1}^{M} || I_1(v_j) - I_2((1/Z_j) K T V_j) ||²   (11)
where M denotes the total number of pixel pairs. As a compromise between computational efficiency and accuracy, the present invention selects the pixels with obvious gradient change to compute the photometric error.
Fusing the reprojection error of the matched feature points and the photometric error of the pixel pairs, the camera pose estimate can be solved as follows:
T* = argmin_T (E_r + λ E_p)   (12)
where λ is an adjustment coefficient used to adjust the weights of the reprojection error and the photometric error in the pose estimation.
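The fused objective above (reprojection error plus λ times photometric error) can be sketched as a single stacked least-squares problem. This is a toy setup: the second frame's intensity is modeled by an analytic function `I2` so the example stays self-contained, and all names and constants here are our assumptions, not the patent's.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(w):
    """Axis-angle vector -> 3x3 rotation matrix."""
    th = np.linalg.norm(w)
    if th < 1e-12:
        return np.eye(3)
    k = w / th
    Kx = np.array([[0.0, -k[2], k[1]], [k[2], 0.0, -k[0]], [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(th) * Kx + (1.0 - np.cos(th)) * (Kx @ Kx)

def project(Kc, p, X):
    """Transform 3-D points by the pose p and project them to pixels."""
    R, t = rodrigues(p[:3]), p[3:]
    Xc = X @ R.T + t
    uv = Xc @ Kc.T
    return uv[:, :2] / uv[:, 2:3]

def I2(uv):
    """Analytic stand-in for the second frame's intensity surface."""
    return np.sin(0.05 * uv[:, 0]) + np.cos(0.07 * uv[:, 1])

rng = np.random.default_rng(3)
Kc = np.array([[400.0, 0.0, 320.0], [0.0, 400.0, 240.0], [0.0, 0.0, 1.0]])
Xf = rng.uniform([-1, -1, 4], [1, 1, 8], (8, 3))    # matched feature points
Yp = rng.uniform([-1, -1, 4], [1, 1, 8], (30, 3))   # high-gradient points
pose_true = np.array([0.02, -0.01, 0.03, 0.1, 0.05, -0.1])
x_obs = project(Kc, pose_true, Xf)                  # observed feature pixels
I1_vals = I2(project(Kc, pose_true, Yp))            # first-frame intensities
lam = 0.1                                           # trade-off weight lambda

def residuals(p):
    r_feat = (project(Kc, p, Xf) - x_obs).ravel()   # reprojection term
    r_photo = np.sqrt(lam) * (I2(project(Kc, p, Yp)) - I1_vals)  # photometric term
    return np.concatenate([r_feat, r_photo])        # reprojection + lambda * photometric

pose_est = least_squares(residuals, np.zeros(6)).x
```

Scaling the photometric residuals by sqrt(λ) makes the squared-residual sum equal the weighted objective, which is the standard way to fold a weight into a least-squares solver.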
Step 3. Local optimization of the camera pose
The idea of local optimization is to use the keyframes that share a covisibility relationship with the current keyframe to refine the above camera pose estimate; its principle is shown in Fig. 5:
In Fig. 5, triangles denote the keyframe sequence. Suppose the keyframes sharing a common field of view with the current keyframe kf_i are kf_i-1, kf_i-2 and kf_i-3; the present invention calls them covisible keyframes. Circles in the figure denote local map points, which are the spatial three-dimensional points seen jointly by the current keyframe and its covisible keyframes.
Projecting each spatial three-dimensional point of the local map into each covisible keyframe yields one reprojection error. Considering all local map points, the overall error function is obtained:
{T_i**} = argmin_{{T_i*}} Σ_{i=1}^{H} Σ_{j=1}^{Q} θ_ij || p_ij - (1/Z_ij) K T_i* X_j ||²   (13)
where H denotes the number of covisible keyframes; Q denotes the number of local map points; θ_ij indicates whether map point X_j has a matched feature point after being projected into the i-th covisible keyframe, with value 1 if so and 0 otherwise; p_ij is the true pixel coordinate corresponding to the map point; Z_ij denotes the Z-axis coordinate of map point X_j projected into the i-th keyframe; and T_i* denotes the pose of each covisible keyframe.
Through the local optimization of the camera pose, the camera poses of the local keyframes can be refined simultaneously. After optimization, the more accurate camera poses T** are obtained.
Step 4. Global optimization of the camera pose
The covisibility-based local optimization refines the camera pose by reducing the locally accumulated matching error. On this basis, a global optimization based on loop-closure detection can further refine the camera pose by reducing the globally accumulated matching error, realizing two-stage pose optimization estimation.
Loop-closure detection is first performed with keyframes as the objects, and a pose graph is built. A vertex of the pose graph corresponds to a keyframe's pose matrix in the world coordinate system obtained by the local optimization, and an edge between vertices denotes the relative transformation matrix between the corresponding vertices. The method of the present invention performs loop-closure detection using a bag-of-words model and feature matching, i.e., if the number of matched feature points exceeds 30, a loop closure is formed. Correspondingly, an edge is added to the pose graph, and the pose graph is updated.
Denote the pose graph by G = <set-ver, set-edge>, where set-ver represents the vertex set of the pose graph and set-edge represents the edge set, and denote the two vertices connected by an edge W_{i,j} ∈ set-edge as T_i** and T_j**. The cost function corresponding to this edge can then be defined as:
e_{i,j} = ln( W_{i,j}^-1 (T_i**)^-1 T_j** )^∨   (15)
From this, the global cost function corresponding to the pose graph can be defined as:
E_G = Σ_{W_{i,j} ∈ set-edge} e_{i,j}^T e_{i,j}   (16)
By optimizing formula (16), the following camera pose global optimization result is obtained:
{T_i***} = argmin_{{T_i**}} Σ_{W_{i,j} ∈ set-edge} e_{i,j}^T e_{i,j}   (17)
As can be seen from the above procedure, the present invention uses not only feature-point information but also photometric information when solving the camera pose, so that the method can still compute the camera pose effectively when feature points are scarce. Meanwhile, the present invention fully considers the local covisibility relationships of keyframes and corrects the error of the camera pose through these relationships. Using the corrected camera poses, a further global optimization is carried out on the keyframe sequence, constituting a local-to-global two-stage pose optimization method. Through the present invention, high-precision camera poses can be obtained.
The present invention has experimentally verified the above method and achieved clear results. The RGB-D image sequences used in the present invention come from the TUM website; the data of each scene include RGB images, depth images, and the ground-truth camera pose corresponding to each frame image. To verify the effectiveness of the invention, the Absolute Trajectory Error (ATE) is used to assess the difference between the camera pose obtained by the present invention and the true pose; meanwhile, comparison with several classical SLAM methods shows that the present invention has better accuracy and robustness. The results are shown in Table 1:
Table 1. Absolute trajectory error of the proposed method
As can be seen from Table 1, PTAM achieves the best results on the fr1_xyz and fr2_xyz datasets, but its robustness is too poor to obtain final results on the other datasets. In the table, the row "ours + feature-point method" gives results computed using only the reprojection error to obtain the camera pose; by comparison, it can be found that the method proposed by the present invention clearly improves accuracy while also maintaining good robustness.

Claims (3)

1. A two-stage pose optimization estimation method for indoor scene reconstruction, characterized by comprising the following steps:
Step 1. Data preparation
using aligned RGB-D image sequences and the camera intrinsic matrix K as input, extracting ORB features from the color images, and matching the extracted features with a KNN algorithm;
Step 2. Camera pose solving
selecting the camera coordinate system of the first frame image as the world coordinate system, and using the feature-point correspondences and the photometric information of adjacent frames,
estimating the initial pose of the camera for the current frame through a camera motion model, defined as:
T = speed * T_pre   (7)
where speed denotes the speed of the camera motion, T_pre denotes the camera pose of the previous frame, and * denotes the multiplication of two matrices; through formula (7), a rough camera pose is estimated,
for the feature-point information of the current frame and the previous frame, computing with the reprojection error: for a pair of matched feature points p1 and p2, the spatial point corresponding to p1 is P, and the point obtained after projecting P onto the second frame image is p3; if the camera pose is accurate, p2 and p3 should be the same point; if the camera pose is inaccurate, there is an error e between the projected point p3 and the true position p2; the camera pose is continuously optimized to minimize the error e, thereby obtaining the optimal camera pose,
as described above, first defining the reprojection error between a pair of feature points:
e_i = p_i - (1/Z_i) K T P_i   (8)
where p_i denotes the true pixel coordinate of the feature point in the second frame image, P_i denotes the three-dimensional coordinate of the feature point in the world coordinate system, T denotes the camera pose to be solved, K denotes the camera intrinsic matrix, and Z_i denotes the Z-axis coordinate of P_i after projection,
according to all matched feature-point information in the two frame images, the total error function is obtained as:
E_r = Σ_{i=1}^{N} || p_i - (1/Z_i) K T P_i ||²   (9)
where N denotes the number of all matched feature points in the two frames;
for the photometric information, computing the photometric error between the two frames as follows:
suppose a pixel v1 in the previous frame image whose corresponding spatial point is V, and the pixel position obtained by projecting V onto the second frame image using the camera pose is v2; if the camera pose is accurate, the photometric values of v1 and v2 should be identical and the error is 0; if the camera pose is inaccurate, there is a photometric error between the two; by optimizing the camera pose so that the error between the two is minimized, the optimal camera pose can be obtained,
as described above, the photometric error of a pixel can be defined as:
e_j = I_1(v_j) - I_2((1/Z_j) K T V_j)   (10)
where v_j denotes the pixel coordinate in the first frame image, V_j denotes the spatial point coordinate of v_j, T denotes the camera pose to be solved, K denotes the camera intrinsic matrix, and Z_j denotes the Z-axis coordinate of V_j after projection; I_1(·) and I_2(·) take the photometric values of the corresponding pixels of the previous frame image and the current frame image,
the sum of the photometric error functions of multiple pixel pairs is:
E_p = Σ_{j=1}^{M} || I_1(v_j) - I_2((1/Z_j) K T V_j) ||²   (11)
where M denotes the total number of pixel pairs; as a compromise between computational efficiency and accuracy, the present invention selects the pixels with obvious gradient change to compute the photometric error,
fusing the reprojection error of the matched feature points and the photometric error of the pixel pairs, the camera pose estimate can be solved as follows:
T* = argmin_T (E_r + λ E_p)   (12)
where λ is an adjustment coefficient used to adjust the weights of the reprojection error and the photometric error in the pose estimation,
Step 3. Local optimization of the camera pose
using the keyframes that share a covisibility relationship with the current keyframe to refine the above camera pose estimate: projecting each spatial three-dimensional point of the local map into each covisible keyframe yields one reprojection error; considering all local map points, the overall error function is obtained:
{T_i**} = argmin_{{T_i*}} Σ_{i=1}^{H} Σ_{j=1}^{Q} θ_ij || p_ij - (1/Z_ij) K T_i* X_j ||²   (13)
where H denotes the number of covisible keyframes; Q denotes the number of local map points; θ_ij indicates whether map point X_j has a matched feature point after being projected into the i-th covisible keyframe, with value 1 if so and 0 otherwise; p_ij is the true pixel coordinate corresponding to the map point; Z_ij denotes the Z-axis coordinate of map point X_j projected into the i-th keyframe; T_i* denotes the pose of each covisible keyframe,
through the local optimization of the camera pose, the camera poses of the local keyframes can be refined simultaneously; after optimization, the more accurate camera poses T** are obtained,
Step 4. Global optimization of the camera pose
using a global optimization based on loop-closure detection to realize two-stage pose optimization estimation.
2. The two-stage pose optimization estimation method for indoor scene reconstruction according to claim 1, characterized in that step 4 is specifically:
first performing loop-closure detection with keyframes as the objects and building a pose graph, wherein a vertex of the pose graph corresponds to a keyframe's pose matrix in the world coordinate system obtained by the local optimization, and an edge between vertices denotes the relative transformation matrix between the corresponding vertices; performing loop-closure detection using a bag-of-words model and feature matching, i.e., if the number of matched feature points exceeds 30, a loop closure is formed; correspondingly, adding an edge to the pose graph and updating the pose graph,
denoting the pose graph by G = <set-ver, set-edge>, where set-ver represents the vertex set of the pose graph and set-edge represents the edge set, and denoting the two vertices connected by an edge W_{i,j} ∈ set-edge as T_i** and T_j**, the cost function corresponding to this edge can be defined as:
e_{i,j} = ln( W_{i,j}^-1 (T_i**)^-1 T_j** )^∨   (15)
from which the global cost function corresponding to the pose graph can be defined as:
E_G = Σ_{W_{i,j} ∈ set-edge} e_{i,j}^T e_{i,j}   (16)
by optimizing formula (16), the following camera pose global optimization result is obtained:
{T_i***} = argmin_{{T_i**}} Σ_{W_{i,j} ∈ set-edge} e_{i,j}^T e_{i,j}   (17)
3. The two-stage pose optimization estimation method for indoor scene reconstruction according to claim 1, characterized in that step 1 is specifically:
filtering the feature matches using a minimum-threshold + RANSAC algorithm, thereby guaranteeing reliable matched feature points; denoting all matched feature points as the set F, with the number of matches in F denoted by N,
first extracting the pixels with obvious gradient change in the image, then extracting their photometric information as the input of subsequent computation,
supposing the image is I, the gradients of a pixel (u, v) of I along the X axis and Y axis are respectively defined as:
g_x(u, v) = I(u+1, v) - I(u, v),  g_y(u, v) = I(u, v+1) - I(u, v)   (5)
then, when
sqrt( g_x(u, v)² + g_y(u, v)² ) > δ   (6)
the pixel is considered a point of obvious gradient change,
denoting all pixels with obvious gradient change as the set L, with the number of pixels in L denoted by M.
CN201810352504.8A 2018-04-19 2018-04-19 Two-stage pose optimization estimation method for indoor scene reconstruction Active CN108537848B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810352504.8A CN108537848B (en) 2018-04-19 2018-04-19 Two-stage pose optimization estimation method for indoor scene reconstruction


Publications (2)

Publication Number Publication Date
CN108537848A true CN108537848A (en) 2018-09-14
CN108537848B CN108537848B (en) 2021-10-15

Family

ID=63477768


Country Status (1)

Country Link
CN (1) CN108537848B (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583604A (en) * 2018-12-10 2019-04-05 国网浙江义乌市供电有限公司 A kind of transformer equipment fault flag method based on SLAM technology
CN109658449A (en) * 2018-12-03 2019-04-19 华中科技大学 A kind of indoor scene three-dimensional rebuilding method based on RGB-D image
CN109798888A (en) * 2019-03-15 2019-05-24 京东方科技集团股份有限公司 Posture determining device, method and the visual odometry of mobile device
CN109974743A (en) * 2019-03-14 2019-07-05 中山大学 A kind of RGB-D visual odometry optimized based on GMS characteristic matching and sliding window pose figure
CN110490967A (en) * 2019-04-12 2019-11-22 北京城市网邻信息技术有限公司 Image procossing and object-oriented modeling method and equipment, image processing apparatus and medium
CN110595479A (en) * 2019-09-23 2019-12-20 云南电网有限责任公司电力科学研究院 SLAM track evaluation method based on ICP algorithm
CN111768443A (en) * 2019-07-23 2020-10-13 北京京东尚科信息技术有限公司 Image processing method and device based on mobile camera
CN111932630A (en) * 2020-07-21 2020-11-13 清华大学 Personnel-oriented air supply regulation and control method and device based on image recognition
CN112053383A (en) * 2020-08-18 2020-12-08 东北大学 Method and device for real-time positioning of robot
CN112116661A (en) * 2019-06-20 2020-12-22 北京地平线机器人技术研发有限公司 High-precision map construction method and device
CN112530270A (en) * 2019-09-17 2021-03-19 北京初速度科技有限公司 Mapping method and device based on region allocation
CN112541423A (en) * 2020-12-09 2021-03-23 北京理工大学重庆创新中心 Synchronous positioning and map construction method and system
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN112862895A (en) * 2019-11-27 2021-05-28 杭州海康威视数字技术股份有限公司 Fisheye camera calibration method, device and system
CN112991515A (en) * 2021-02-26 2021-06-18 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105856230A (en) * 2016-05-06 2016-08-17 简燕梅 ORB keyframe loop-closure detection SLAM method for improving the consistency of robot pose
US20160364867A1 (en) * 2015-06-11 2016-12-15 Fujitsu Limited Camera pose estimation device and control method
CN107025668A (en) * 2017-03-30 2017-08-08 华南理工大学 A design method for a depth-camera-based visual odometer
CN107610175A (en) * 2017-08-04 2018-01-19 华南理工大学 A monocular visual SLAM algorithm optimized with a semi-direct method and a sliding window

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
吴玉香 et al., "Mobile robot SLAM based on the sparse direct method and graph optimization", Chinese Journal of Scientific Instrument *
张国良 et al., "Fast binocular SLAM algorithm fusing direct and feature-based methods", Robot *

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109658449B (en) * 2018-12-03 2020-07-10 华中科技大学 Indoor scene three-dimensional reconstruction method based on RGB-D image
CN109658449A (en) * 2018-12-03 2019-04-19 华中科技大学 An indoor scene 3D reconstruction method based on RGB-D images
CN109583604A (en) * 2018-12-10 2019-04-05 国网浙江义乌市供电有限公司 A substation equipment fault marking method based on SLAM technology
CN109583604B (en) * 2018-12-10 2021-08-24 国网浙江义乌市供电有限公司 Substation equipment fault marking method based on SLAM technology
CN109974743A (en) * 2019-03-14 2019-07-05 中山大学 An RGB-D visual odometry method based on GMS feature matching and sliding-window pose-graph optimization
CN109798888B (en) * 2019-03-15 2021-09-17 京东方科技集团股份有限公司 Pose determination device and method for a mobile device, and visual odometer
CN109798888A (en) * 2019-03-15 2019-05-24 京东方科技集团股份有限公司 Pose determination device and method for a mobile device, and visual odometer
CN110490967B (en) * 2019-04-12 2020-07-17 北京城市网邻信息技术有限公司 Image processing method and apparatus, object modeling method and apparatus, and medium
CN110490967A (en) * 2019-04-12 2019-11-22 北京城市网邻信息技术有限公司 Image processing and object modeling method and device, image processing apparatus, and medium
CN112116661A (en) * 2019-06-20 2020-12-22 北京地平线机器人技术研发有限公司 High-precision map construction method and device
CN111768443A (en) * 2019-07-23 2020-10-13 北京京东尚科信息技术有限公司 Image processing method and device based on mobile camera
CN112530270A (en) * 2019-09-17 2021-03-19 北京初速度科技有限公司 Mapping method and device based on region allocation
CN112530270B (en) * 2019-09-17 2023-03-14 北京初速度科技有限公司 Mapping method and device based on region allocation
CN110595479A (en) * 2019-09-23 2019-12-20 云南电网有限责任公司电力科学研究院 SLAM track evaluation method based on ICP algorithm
CN110595479B (en) * 2019-09-23 2023-11-17 云南电网有限责任公司电力科学研究院 SLAM track evaluation method based on ICP algorithm
CN112862895B (en) * 2019-11-27 2023-10-10 杭州海康威视数字技术股份有限公司 Fisheye camera calibration method, device and system
CN112862895A (en) * 2019-11-27 2021-05-28 杭州海康威视数字技术股份有限公司 Fisheye camera calibration method, device and system
CN111932630A (en) * 2020-07-21 2020-11-13 清华大学 Personnel-oriented air supply regulation and control method and device based on image recognition
CN112053383A (en) * 2020-08-18 2020-12-08 东北大学 Method and device for real-time positioning of robot
CN112053383B (en) * 2020-08-18 2024-04-26 东北大学 Method and device for positioning robot in real time
CN112541423A (en) * 2020-12-09 2021-03-23 北京理工大学重庆创新中心 Synchronous positioning and map construction method and system
CN112767481B (en) * 2021-01-21 2022-08-16 山东大学 High-precision positioning and mapping method based on visual edge features
CN112767481A (en) * 2021-01-21 2021-05-07 山东大学 High-precision positioning and mapping method based on visual edge features
CN112991515A (en) * 2021-02-26 2021-06-18 山东英信计算机技术有限公司 Three-dimensional reconstruction method, device and related equipment

Also Published As

Publication number Publication date
CN108537848B (en) 2021-10-15

Similar Documents

Publication Publication Date Title
CN108537848A (en) A two-stage pose optimization and estimation method for indoor scene reconstruction
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
CN109974707B (en) Indoor mobile robot visual navigation method based on improved point cloud matching algorithm
US9613420B2 (en) Method for locating a camera and for 3D reconstruction in a partially known environment
CN109544636A (en) A fast monocular visual odometry navigation and localization method fusing the feature-point method and the direct method
CN108776989B (en) Low-texture planar scene reconstruction method based on sparse SLAM framework
CN110070598B (en) Mobile terminal for 3D scanning reconstruction and 3D scanning reconstruction method thereof
US20090167843A1 (en) Two pass approach to three dimensional Reconstruction
CN110288712B (en) Sparse multi-view three-dimensional reconstruction method for indoor scene
CN109472820B (en) Monocular RGB-D camera real-time face reconstruction method and device
CN109754459B (en) Method and system for constructing human body three-dimensional model
CN111105460A (en) RGB-D camera pose estimation method for indoor scene three-dimensional reconstruction
CN116468786B (en) Semantic SLAM method based on point-line combination and oriented to dynamic environment
Yuan et al. 3D reconstruction of background and objects moving on ground plane viewed from a moving camera
CN116977596A (en) Three-dimensional modeling system and method based on multi-view images
CN113538569A (en) Weak texture object pose estimation method and system
CN114996814A (en) Furniture design system based on deep learning and three-dimensional reconstruction
CN109784297A (en) A 3D object recognition and optimal grasping method based on deep learning
CN115393519A (en) Three-dimensional reconstruction method based on infrared and visible light fusion image
CN111325828A (en) Three-dimensional face acquisition method and device based on three-eye camera
CN112634305B (en) Infrared visual odometer implementation method based on edge feature matching
CN116843753A (en) Robust 6D pose estimation method based on bidirectional matching and global attention network
CN111179327A (en) Depth map calculation method
Kim et al. Global convolutional neural networks with self-attention for fisheye image rectification
CN112767481B (en) High-precision positioning and mapping method based on visual edge features

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant