CN110503688A - A pose estimation method for a depth camera - Google Patents

A pose estimation method for a depth camera

Info

Publication number
CN110503688A
CN110503688A (application CN201910769449.7A)
Authority
CN
China
Prior art keywords
feature point
ORB feature
depth
depth information
camera
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910769449.7A
Other languages
Chinese (zh)
Other versions
CN110503688B (en)
Inventor
朱俊涛
陈强
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN201910769449.7A priority Critical patent/CN110503688B/en
Publication of CN110503688A publication Critical patent/CN110503688A/en
Application granted granted Critical
Publication of CN110503688B publication Critical patent/CN110503688B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G06T 7/33 — Image analysis; determination of transform parameters for the alignment of images (image registration) using feature-based methods
    • G06T 7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06T 2207/10004 — Image acquisition modality: still image; photographic image
    • G06T 2207/10024 — Image acquisition modality: color image
    • G06T 2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T 2207/30244 — Subject of image: camera pose

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the technical field of location tracking and discloses a pose estimation method for a depth camera. A depth camera arranged on a mobile mechanism captures depth maps and RGB color images, and ORB feature-point pairs are extracted from every two adjacent frames. From the N ORB feature-point pairs with missing depth information, the estimate ξ_P of the camera pose change is computed; from the M ORB feature-point pairs with complete depth information, the estimate ξ_Q is computed; the two are then combined into the total estimate ξ_0. A minimum reprojection-error model fusing the constraint information of the depth-missing and depth-complete ORB feature-point pairs is constructed, and the corresponding Jacobian matrix J is derived. Finally, from ξ_0, the minimum reprojection-error model, and J, a nonlinear optimization method computes the optimized total estimate ξ_k, completing the estimation of the camera pose change between the two adjacent frames.

Description

A pose estimation method for a depth camera
Technical field
The present invention relates to the technical field of location tracking, and in particular to a pose estimation method for a depth camera.
Background technique
Depth images captured by existing Kinect cameras on the market often suffer from missing depth regions caused by occlusion, absorption, speckle, reflection, and similar effects. As a result, when a traditional ICP algorithm iterates on the camera pose, feature points are sometimes lost, so the algorithm fails to converge or its error becomes excessive; gross outliers appear during point-cloud registration, reducing the number of matched point pairs. If the initial pose is poorly chosen, the iteration may leave the convergence basin, so the number of iterations grows, convergence is slow, and the convergence success rate is low.
Many methods have been studied for the Kinect depth-missing problem. The most common combines color-image texture to repair the depth image, e.g. (Le A V, Jung S W, Won C S. Directional joint bilateral filter for depth images [J]. Sensors, 2014, 14(7): 11362) or (Matsuo T, Fukushima N, Ishibashi Y. Weighted joint bilateral filter with slope depth compensation filter for depth map refinement [C]. International Conference on Computer Vision Theory and Applications, 2015). Although such methods can recover a complete depth map, the edges of the color image and the depth image largely fail to match and the error is large, the repair time is too long, and the accuracy gain after optimization is small, so they are unsuitable for real-time robot operation.
Summary of the invention
The present invention provides a pose estimation method for a depth camera, solving the problems of existing methods: the repair time for missing depth information is too long, the accuracy gain after optimization is small, and they are unsuitable for real-time robot operation.
The present invention can be achieved through the following technical solutions:
A pose estimation method for a depth camera, comprising the following steps:
Step 1: a depth camera arranged on a mobile mechanism captures images, yielding a depth map and an RGB color image; Oriented FAST and Rotated BRIEF (ORB) feature-point pairs are extracted from every two adjacent frames;
Step 2: using the N ORB feature-point pairs with missing depth information, compute the estimate ξ_P of the camera pose change; using the M ORB feature-point pairs with complete depth information, compute the estimate ξ_Q; then obtain the total estimate ξ_0;
Step 3: construct a minimum reprojection-error model fusing the constraint information of the depth-missing and depth-complete ORB feature-point pairs, and derive the corresponding Jacobian matrix J;
Step 4: from the total estimate ξ_0, the minimum reprojection-error model, and the Jacobian matrix J, use a nonlinear optimization method to compute the optimized total estimate ξ_k, completing the estimation of the camera pose change between the two adjacent frames.
Further, for the N depth-missing ORB feature-point pairs, the reprojection method is used in conjunction with the PnP algorithm to construct the corresponding error function e_P; for the M depth-complete ORB feature-point pairs, the reprojection method is used in conjunction with the ICP algorithm to construct the corresponding error function e_Q; a minimum reprojection-error model e fusing e_P and e_Q is then constructed, and the corresponding Jacobian matrix J is derived.
Further, the computation of the Jacobian matrix J comprises the following steps:
Step I: for the N ORB feature-point pairs with missing depth information, denote the homogeneous spatial coordinates corresponding to a matched pair as P_i = (x_i, y_i, z_i, 1)^T on the initial frame, and denote by p_i* = (u_i*, v_i*, 1)^T the homogeneous pixel coordinates of the matched ORB feature point on the adjacent frame, obtained through the transformation between the world coordinate system and the pixel coordinate system.
Using the reprojection method, compute the homogeneous pixel coordinates p_i = (u_i, v_i, 1)^T of the initial-frame ORB feature point P_i = (x_i, y_i, z_i, 1)^T projected onto the adjacent frame; the Lie-algebra relationship is
s_i p_i = K exp(ξ^) P_i,
where s_i denotes the depth of the initial-frame ORB feature point as projected onto the adjacent frame, K denotes the camera intrinsic matrix, and ^ denotes the skew-symmetric (antisymmetric) operator.
In conjunction with the PnP algorithm, the error function of the depth-missing ORB feature-point pairs is constructed as
e_P = Σ_{i=1}^{N} || p_i* − (1/s_i) K exp(ξ^) P_i ||²;
Step II: for the M ORB feature-point pairs with complete depth information, denote the spatial coordinates corresponding to a matched pair as Q_i = (X_i, Y_i, Z_i)^T on the initial frame and Q_i* = (X_i*, Y_i*, Z_i*)^T on the adjacent frame. Using the reprojection method, compute the spatial coordinates of the initial-frame ORB feature point Q_i = (X_i, Y_i, Z_i)^T projected onto the adjacent frame; the Lie-algebra relationship is
Q_i^proj = exp(ξ^) Q_i,
where ^ denotes the skew-symmetric operator.
In conjunction with the ICP algorithm, the error function of the depth-complete ORB feature-point pairs is constructed as
e_Q = Σ_{i=1}^{M} || Q_i* − exp(ξ^) Q_i ||²;
Step III: construct the minimum reprojection-error model e as
e(ξ) = Σ_{i=1}^{N} || p_i* − (1/s_i) K exp(ξ^) P_i ||² + Σ_{i=1}^{M} || Q_i* − exp(ξ^) Q_i ||²,
where δξ denotes the perturbation applied during linearization, f_x and f_y denote the x- and y-axis components of the camera focal length in the pixel coordinate system, point P_i' = (x_i', y_i', z_i', 1)^T denotes the initial-frame depth-missing ORB feature point P_i = (x_i, y_i, z_i, 1)^T transformed by the total estimate ξ_0 of the camera pose change, and point Q_i' = (X_i', Y_i', Z_i')^T denotes the initial-frame depth-complete ORB feature point Q_i = (X_i, Y_i, Z_i)^T transformed by ξ_0.
Further, the nonlinear optimization method is the Gauss-Newton method.
Further, the computation of the optimized total estimate ξ_k with the Gauss-Newton method comprises the following steps:
1) give the total estimate ξ_0;
2) for the k-th iteration, compute the current Jacobian matrix J and error e;
3) solve for the increment Δξ_k from H Δξ_k = g, where H = J^T J and g = −J^T e(ξ);
4) if Δξ_k is smaller than a threshold, stop and output the optimized total estimate ξ_k; otherwise set ξ_{k+1} = ξ_k + Δξ_k and return to step 2).
Further, for the N depth-missing ORB feature-point pairs, the estimate ξ_P is computed with the RANSAC algorithm; for the M depth-complete ORB feature-point pairs, the estimate ξ_Q is computed with the direct linear transformation method; the two are then fused to compute the total estimate ξ_0.
Further, every two adjacent frames captured by the depth camera share an overlapping region, and the ORB feature points are extracted from the overlapping region.
The beneficial technical effects of the present invention are as follows:
The estimates of the camera pose change are computed separately for the depth-missing and depth-complete ORB feature-point pairs and then fused by weighting into a total estimate. A minimum reprojection-error model fusing the constraint information of both kinds of feature-point pairs is constructed and the corresponding Jacobian matrix derived; on this basis, a nonlinear optimization method such as Gauss-Newton iterates and solves for the optimized total estimate. This greatly increases the number of successfully matched feature points used in the computation, improves the accuracy of the pose-change estimate, greatly reduces the number of iterations, raises the convergence success rate of the nonlinear optimization, and improves the robustness of the odometry. Compared with the traditional ICP algorithm it converges faster: the whole odometry algorithm takes roughly 10 ms, with accuracy no worse than the traditional pipeline that first spends great effort on depth repair and then optimizes iteratively. It therefore improves the control accuracy of the mobile mechanism with efficient real-time performance and meets the real-time requirements of its motion control. In addition, the present invention is simple, reliable, easy to operate, easy to implement, and convenient to popularize and apply.
Detailed description of the invention
Fig. 1 is a flow diagram of the present invention;
Fig. 2 is a schematic comparison of the motion-trajectory estimates of the mobile robot obtained with different methods, where the x-axis denotes travel distance and the y-axis denotes error.
Specific embodiment
Specific embodiments of the present invention are described in detail below with reference to the accompanying drawings and preferred embodiments.
As shown in Fig. 1, which is the flow diagram of the present invention, the present invention provides a pose estimation method for a depth camera. For two adjacent frames, the ORB feature-point pairs with missing depth information and those with complete depth information are processed separately to compute the corresponding estimates of the camera pose change, which are then combined into a total estimate. A minimum reprojection-error model fusing the constraint information of both is used to obtain the corresponding Jacobian matrix J, and finally a nonlinear optimization method computes the optimized total estimate ξ_k, completing the estimation of the camera pose change between the two adjacent frames. Specifically, the method comprises the following steps:
Step 1: a depth camera arranged on the mobile mechanism captures the current scene, yielding a depth map and an RGB color image; matched Oriented FAST and Rotated BRIEF (ORB) feature-point pairs are extracted from every two adjacent frames. The ORB features may be extracted by the method disclosed in Rublee E, Rabaud V, Konolige K, et al. ORB: An efficient alternative to SIFT or SURF [C] // IEEE International Conference on Computer Vision. Piscataway, USA: IEEE, 2011: 2564-2571.
To ensure computational accuracy and speed, the lighting of the current scene should not be too strong, and the two adjacent frames must share an overlapping region. They need not be a strictly consecutive previous frame and next frame: any two frames with sufficient overlap will do, depending mainly on the computing capability of the system hardware.
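As a concrete illustration of Step 1, the sketch below (Python with OpenCV and NumPy; the function name, the depth == 0 convention for missing depth, and the cross-check matching are assumptions of this sketch rather than requirements of the patent) extracts and matches ORB features between two adjacent frames and splits the matches into the depth-complete and depth-missing sets used in the following steps:

```python
import cv2
import numpy as np

def extract_orb_pairs(rgb1, rgb2, depth1, depth2, max_features=1000):
    """Match ORB features between two adjacent frames and split the matched
    pairs into depth-complete and depth-missing sets (depth == 0 is treated
    as the 'missing depth' marker, as in Kinect-style depth maps)."""
    orb = cv2.ORB_create(max_features)
    kp1, des1 = orb.detectAndCompute(rgb1, None)
    kp2, des2 = orb.detectAndCompute(rgb2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    complete, missing = [], []
    for m in matches:
        (u1, v1) = kp1[m.queryIdx].pt
        (u2, v2) = kp2[m.trainIdx].pt
        z1 = depth1[int(v1), int(u1)]
        z2 = depth2[int(v2), int(u2)]
        pair = ((u1, v1, z1), (u2, v2, z2))
        (complete if z1 > 0 and z2 > 0 else missing).append(pair)
    return complete, missing  # M depth-complete pairs, N depth-missing pairs
```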
Step 2: using the N ORB feature-point pairs with missing depth information, compute the estimate ξ_P of the camera pose change; using the M ORB feature-point pairs with complete depth information, compute the estimate ξ_Q, e.g. with the direct linear transformation method; the two estimates are then fused into the total estimate ξ_0. This greatly increases the number of successfully matched feature points used in the computation and improves the accuracy of the pose-change estimate.
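The exact fusion equation is the patent's own; as a hedged illustration only, a count-weighted average of the two Lie-algebra estimates is one natural instantiation of the weighted fusion described here (the weighting scheme is an assumption of this sketch):

ξ_0 = (N ξ_P + M ξ_Q) / (N + M),

so that the set contributing more matched pairs dominates the initial guess handed to the nonlinear optimization.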
For the ORB feature-point pairs whose depth information s is missing, the estimate ξ_P may be computed with the RANSAC algorithm. Denote the homogeneous spatial coordinates of a point on the initial frame as P = (x, y, z, 1)^T and the normalized homogeneous pixel coordinates of its projection on the adjacent frame as p = (u, v, 1)^T. The camera pose change, i.e. the rotation matrix and translation [R | t], is unknown at this point; define it as the 3x4 augmented matrix T containing the rotation and translation information. The transformation relationship is then
s p = T P,
where s denotes the depth information of the ORB feature point. For brevity, define the rows of T as
T_1 = (t_1, t_2, t_3, t_4)^T, T_2 = (t_5, t_6, t_7, t_8)^T, T_3 = (t_9, t_10, t_11, t_12)^T.
Eliminating s with the last row yields two constraints:
u = T_1^T P / T_3^T P, v = T_2^T P / T_3^T P.
Each ORB feature point thus provides two linear constraints. Assuming N ORB feature-point pairs, the following homogeneous linear system can be written, with one row pair per point:
T_1^T P_i − u_i T_3^T P_i = 0, T_2^T P_i − v_i T_3^T P_i = 0.
Since T has 12 unknowns in total, six ORB feature-point pairs suffice for a linear solution. After the augmented matrix T is obtained, the remaining ORB feature-point pairs are tested against it: inliers are selected and outliers rejected by a given threshold, improving the accuracy of the augmented matrix. When enough inliers are found, the optimization over the ORB feature-point pairs is complete and the estimate ξ_P for the depth-missing case is obtained, where ξ_P is the Lie-algebra representation.
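A minimal sketch of this linear solve wrapped in a RANSAC loop follows (Python/NumPy; the iteration count, inlier threshold, and the SVD-based cleanup of the rotation block are illustrative assumptions, not fixed by the patent):

```python
import numpy as np

def dlt_pose(P_h, uv):
    """Solve s*p = T*P for the 3x4 augmented matrix T from >= 6 pairs.
    P_h: (n, 4) homogeneous 3D points; uv: (n, 2) normalized pixel coords."""
    rows = []
    for Pw, (u, v) in zip(P_h, uv):
        rows.append(np.concatenate([Pw, np.zeros(4), -u * Pw]))
        rows.append(np.concatenate([np.zeros(4), Pw, -v * Pw]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    T = Vt[-1].reshape(3, 4)            # null-space solution, defined up to scale
    U, S, Vr = np.linalg.svd(T[:, :3])  # project the 3x3 block onto a rotation
    R = U @ Vr
    if np.linalg.det(R) < 0:            # resolve the global sign ambiguity
        R, T = -R, -T
    t = (3.0 / S.sum()) * T[:, 3]       # undo the scale absorbed by the solve
    return R, t

def ransac_pose(P_h, uv, iters=100, thresh=0.01):
    """RANSAC over minimal 6-point samples; keeps the model with most inliers."""
    best, best_inliers = None, -1
    for _ in range(iters):
        idx = np.random.choice(len(P_h), 6, replace=False)
        R, t = dlt_pose(P_h[idx], uv[idx])
        cam = P_h[:, :3] @ R.T + t               # points in the adjacent frame
        err = np.linalg.norm(cam[:, :2] / cam[:, 2:3] - uv, axis=1)
        inliers = int((err < thresh).sum())
        if inliers > best_inliers:
            best, best_inliers = (R, t), inliers
    return best
```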
For the ORB feature-point pairs with complete depth information, building the parametric model is much easier. Let the spatial coordinates of an ORB feature point on the initial frame be Q_1 = (X_1, Y_1, Z_1) and those of the matched ORB feature point on the adjacent frame be Q_2 = (X_2, Y_2, Z_2); we seek a Euclidean transformation R, t such that:
Q_1 = R Q_2 + t
Similar to the depth-missing model, R and t together have 12 unknowns, but each depth-complete ORB feature-point pair provides three constraints, so only four pairs are needed for a linear solution. Assuming M point pairs, the estimate ξ_Q corresponding to the depth-complete ORB feature-point pairs is obtained.
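Because the correspondences are already fixed by ORB matching, the ICP problem here reduces to a single closed-form alignment step. Below is a sketch of the standard SVD-based (Kabsch) solution under that assumption, using the convention Q_1 = R Q_2 + t from above:

```python
import numpy as np

def align_3d_3d(Q2, Q1):
    """Closed-form R, t with Q1 ≈ R @ Q2 + t (Kabsch, no scale).
    Q1: (m, 3) initial-frame points; Q2: (m, 3) matched adjacent-frame points."""
    mu1, mu2 = Q1.mean(axis=0), Q2.mean(axis=0)
    W = (Q2 - mu2).T @ (Q1 - mu1)                     # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(W)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                # reflection-safe rotation
    t = mu1 - R @ mu2
    return R, t
```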
Step 3: construct a minimum reprojection-error model fusing the constraint information of the depth-missing and depth-complete ORB feature-point pairs, and derive the corresponding Jacobian matrix J. Specifically, for the N depth-missing ORB feature-point pairs, the reprojection method is used in conjunction with the PnP algorithm to construct the corresponding error function e_P; for the M depth-complete ORB feature-point pairs, the reprojection method is used in conjunction with the ICP algorithm to construct the corresponding error function e_Q; then a minimum reprojection-error model e fusing e_P and e_Q is constructed, and the corresponding Jacobian matrix J is derived.
The details are as follows:
Step I: for the N ORB feature-point pairs with missing depth information, denote the homogeneous spatial coordinates corresponding to a matched pair as P_i = (x_i, y_i, z_i, 1)^T on the initial frame. The matched ORB feature point on the adjacent frame yields, through the transformation between the world coordinate system and the pixel coordinate system, the corresponding homogeneous pixel coordinates p_i* = (u_i*, v_i*, 1)^T:
u* = f_x X/Z + c_x, v* = f_y Y/Z + c_y,
where c_x, c_y denote the coordinates of the camera optical center in the pixel coordinate system, f_x denotes the x-axis component of the camera focal length in the pixel coordinate system, f_y denotes the y-axis component, and
K = [ f_x 0 c_x ; 0 f_y c_y ; 0 0 1 ]
denotes the camera intrinsic matrix.
Using the reprojection method, compute the homogeneous pixel coordinates p_i = (u_i, v_i, 1)^T of the initial-frame ORB feature point P_i = (x_i, y_i, z_i, 1)^T projected onto the adjacent frame; the Lie-algebra relationship is
s_i p_i = K exp(ξ^) P_i,
where s_i denotes the depth of the initial-frame ORB feature point as projected onto the adjacent frame, K denotes the camera intrinsic matrix, and ^ denotes the skew-symmetric operator.
In conjunction with the PnP algorithm, the error function of the depth-missing ORB feature-point pairs is constructed as
e_P = Σ_{i=1}^{N} || p_i* − (1/s_i) K exp(ξ^) P_i ||².
Step II: for the M ORB feature-point pairs with complete depth information, denote the spatial coordinates corresponding to a matched pair as Q_i = (X_i, Y_i, Z_i)^T on the initial frame and Q_i* = (X_i*, Y_i*, Z_i*)^T on the adjacent frame. Using the reprojection method, compute the spatial coordinates of the initial-frame ORB feature point Q_i = (X_i, Y_i, Z_i)^T projected onto the adjacent frame; the Lie-algebra relationship is
Q_i^proj = exp(ξ^) Q_i,
where ^ denotes the skew-symmetric operator.
In conjunction with the ICP algorithm, the error function of the depth-complete ORB feature-point pairs is constructed as
e_Q = Σ_{i=1}^{M} || Q_i* − exp(ξ^) Q_i ||².
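A sketch of the two residuals just defined (Python/NumPy/SciPy; the translation-then-rotation ordering of ξ and the p_i*, Q_i* measurement names follow the notation above and are assumptions of the sketch):

```python
import numpy as np
from scipy.linalg import expm

def hat(phi):
    """Skew-symmetric (antisymmetric) matrix of a 3-vector: the ^ operator."""
    return np.array([[0.0, -phi[2], phi[1]],
                     [phi[2], 0.0, -phi[0]],
                     [-phi[1], phi[0], 0.0]])

def se3_exp(xi):
    """exp(ξ^) as a 4x4 homogeneous transform; xi = (rho, phi) with
    translation rho first and rotation phi second (ordering assumed)."""
    M = np.zeros((4, 4))
    M[:3, :3] = hat(xi[3:])
    M[:3, 3] = xi[:3]
    return expm(M)

def residual_P(xi, P, p_star, K):
    """One e_P term: measured pixel p* minus the projection of exp(ξ^) P."""
    Pc = se3_exp(xi) @ P                     # P homogeneous, shape (4,)
    u = K[0, 0] * Pc[0] / Pc[2] + K[0, 2]
    v = K[1, 1] * Pc[1] / Pc[2] + K[1, 2]
    return p_star[:2] - np.array([u, v])

def residual_Q(xi, Q, Q_star):
    """One e_Q term: measured point Q* minus the transformed point exp(ξ^) Q."""
    T = se3_exp(xi)
    return Q_star - (T[:3, :3] @ Q + T[:3, 3])
```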
Step III: construct the minimum reprojection-error model e as
e(ξ) = Σ_{i=1}^{N} || p_i* − (1/s_i) K exp(ξ^) P_i ||² + Σ_{i=1}^{M} || Q_i* − exp(ξ^) Q_i ||².
The model e fuses the constraint information of the depth-complete and depth-missing ORB feature-point pairs, greatly increasing the number of successfully matched feature points and improving the accuracy of the subsequent pose optimization.
Computing the Jacobian matrix J of the minimum reprojection-error model e requires the error functions of the depth-complete and depth-missing ORB feature-point pairs, from which the corresponding Jacobian matrices J_P and J_Q are derived separately.
For the depth-missing ORB feature-point pairs, let P_i' = (x_i', y_i', z_i')^T denote the initial-frame ORB feature point P_i = (x_i, y_i, z_i)^T transformed by the total estimate ξ_0 of the camera pose change, i.e. P' = (x', y', z')^T = (exp(ξ^) P)_{1:3}. Left-multiplying exp(ξ^) by the perturbation δξ and applying the chain rule, J_P can be written as
J_P = ∂e_P/∂δξ = (∂e_P/∂P') (∂P'/∂δξ),
where ∂e_P/∂δξ denotes the left-multiplied perturbation in the Lie algebra. The first factor on the right is the derivative of the error with respect to the projected point, obtained from the camera projection model:
∂e_P/∂P' = − [ f_x/z'  0  −f_x x'/z'² ; 0  f_y/z'  −f_y y'/z'² ],
and the second factor is ∂P'/∂δξ = [I, −P'^]. Multiplying the two factors yields the Jacobian matrix J_P:
J_P = − [ f_x/z'  0  −f_x x'/z'²  −f_x x'y'/z'²  f_x + f_x x'²/z'²  −f_x y'/z' ; 0  f_y/z'  −f_y y'/z'²  −f_y − f_y y'²/z'²  f_y x'y'/z'²  f_y x'/z' ].
The derivation of J_Q is similar. For the depth-complete ORB feature-point pairs, let Q_i' = (X_i', Y_i', Z_i')^T denote the initial-frame ORB feature point Q_i = (X_i, Y_i, Z_i)^T transformed by the total estimate ξ_0; the Lie-algebra perturbation model gives
J_Q = ∂e_Q/∂δξ = − [ I, −Q'^ ].
In summary, the Jacobian matrix J of the minimum reprojection-error model e formed from the N depth-missing feature-point pairs and the M depth-complete feature-point pairs stacks the 2N rows of the J_P blocks over the 3M rows of the J_Q blocks:
J = [ J_P,1 ; … ; J_P,N ; J_Q,1 ; … ; J_Q,M ],
where δξ denotes the perturbation, f_x and f_y denote the x- and y-axis components of the camera focal length in the pixel coordinate system, point P_i' = (x_i', y_i', z_i', 1)^T denotes the initial-frame depth-missing ORB feature point P_i = (x_i, y_i, z_i, 1)^T transformed by the total estimate ξ_0 of the camera pose change, and point Q_i' = (X_i', Y_i', Z_i')^T denotes the initial-frame depth-complete ORB feature point Q_i = (X_i, Y_i, Z_i)^T transformed by ξ_0.
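The per-point blocks J_P (2x6) and J_Q (3x6) can be coded directly from the expressions above; a sketch, again assuming the translation-then-rotation ordering of δξ:

```python
import numpy as np

def jacobian_P(P_prime, fx, fy):
    """J_P (2x6) at the transformed point P' = (x', y', z'),
    perturbation ordered as [translation, rotation]."""
    x, y, z = P_prime
    return -np.array([
        [fx / z, 0.0, -fx * x / z**2,
         -fx * x * y / z**2, fx + fx * x**2 / z**2, -fx * y / z],
        [0.0, fy / z, -fy * y / z**2,
         -fy - fy * y**2 / z**2, fy * x * y / z**2, fy * x / z],
    ])

def jacobian_Q(Q_prime):
    """J_Q (3x6) at the transformed point Q': -[ I | -Q'^ ]."""
    x, y, z = Q_prime
    Q_hat = np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])
    return -np.hstack([np.eye(3), -Q_hat])
```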
Step 4: from the total estimate ξ_0, the minimum reprojection-error model, and the Jacobian matrix J, use a nonlinear optimization method to compute the optimized total estimate ξ_k, completing the estimation of the camera pose change between the two adjacent frames. The nonlinear optimization method is preferably the Gauss-Newton method, as follows:
1) give the total estimate ξ_0;
2) for the k-th iteration, compute the current Jacobian matrix J and error e;
3) solve for the increment Δξ_k from H Δξ_k = g, where H = J^T J and g = −J^T e(ξ);
4) if Δξ_k is smaller than a threshold, stop and output the optimized total estimate ξ_k; otherwise set ξ_{k+1} = ξ_k + Δξ_k and return to step 2). The threshold may be set according to the actual situation.
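Putting the pieces together, a minimal Gauss-Newton loop matching steps 1)-4) might look as follows (the build_system callback, iteration cap, and threshold are assumptions of this sketch; it is expected to stack the per-point residuals and Jacobian blocks of the fused model from Step 3):

```python
import numpy as np

def gauss_newton(xi0, build_system, thresh=1e-6, max_iters=50):
    """Minimize the fused model e(ξ); build_system(xi) must return the stacked
    residual vector e (length 2N + 3M) and Jacobian J ((2N + 3M) x 6)."""
    xi = np.asarray(xi0, dtype=float)
    for _ in range(max_iters):
        e, J = build_system(xi)
        H = J.T @ J                   # Gauss-Newton approximation of the Hessian
        g = -J.T @ e
        dxi = np.linalg.solve(H, g)   # solve H Δξ = g
        xi = xi + dxi                 # ξ_{k+1} = ξ_k + Δξ_k, as in step 4)
        if np.linalg.norm(dxi) < thresh:
            break
    return xi
```

On the manifold one would usually compose the update as exp(Δξ_k^) exp(ξ_k^) rather than add; the additive form shown mirrors the patent's own step 4).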
To verify the feasibility of the method of the present invention, a depth camera was mounted on a Turtlebot mobile-robot platform, with the translational velocity set to 0.3 m/s and the travel distance to 3 m. The motion trajectory of the mobile robot was estimated with the traditional ICP algorithm, the RANSAC+ICP algorithm, and the fusion algorithm of the present invention, and the pose was optimized with the g2o solver. g2o is an optimization library widely used in the SLAM field that combines nonlinear optimization with graph theory, greatly improving the real-time performance and accuracy of a system. The resulting trajectories are shown in Fig. 2. The results show a mean error of 1.98% for the method of the present invention, smaller than the 8.64% of traditional ICP and the 5.28% of the improved RANSAC+ICP algorithm.
Although specific embodiments of the present invention have been described above, those skilled in the art will appreciate that these are merely illustrative; various changes or modifications may be made to these embodiments without departing from the principle and essence of the present invention. The protection scope of the present invention is therefore defined by the appended claims.

Claims (6)

1. A pose estimation method for a depth camera, characterized by comprising the following steps:
Step 1: a depth camera arranged on a mobile mechanism captures images, yielding a depth map and an RGB color image; Oriented FAST and Rotated BRIEF (ORB) feature-point pairs are extracted from every two adjacent frames;
Step 2: using the N ORB feature-point pairs with missing depth information, compute the estimate ξ_P of the camera pose change; using the M ORB feature-point pairs with complete depth information, compute the estimate ξ_Q; then obtain the total estimate ξ_0;
Step 3: construct a minimum reprojection-error model fusing the constraint information of the depth-missing and depth-complete ORB feature-point pairs, and derive the corresponding Jacobian matrix J;
Step 4: from the total estimate ξ_0, the minimum reprojection-error model, and the Jacobian matrix J, use a nonlinear optimization method to compute the optimized total estimate ξ_k, completing the estimation of the camera pose change between the two adjacent frames.
2. The pose estimation method for a depth camera according to claim 1, characterized in that: for the N depth-missing ORB feature-point pairs, the reprojection method is used in conjunction with the PnP algorithm to construct the corresponding error function e_P; for the M depth-complete ORB feature-point pairs, the reprojection method is used in conjunction with the ICP algorithm to construct the corresponding error function e_Q; a minimum reprojection-error model e fusing e_P and e_Q is then constructed, and the corresponding Jacobian matrix J is derived.
3. The pose estimation method for a depth camera according to claim 2, characterized in that the computation of the Jacobian matrix J comprises the following steps:
Step I: for the N ORB feature-point pairs with missing depth information, denote the homogeneous spatial coordinates corresponding to a matched pair as P_i = (x_i, y_i, z_i, 1)^T on the initial frame, and denote by p_i* = (u_i*, v_i*, 1)^T the homogeneous pixel coordinates of the matched ORB feature point on the adjacent frame, obtained through the transformation between the world coordinate system and the pixel coordinate system;
using the reprojection method, compute the homogeneous pixel coordinates p_i = (u_i, v_i, 1)^T of the initial-frame ORB feature point P_i = (x_i, y_i, z_i, 1)^T projected onto the adjacent frame, the Lie-algebra relationship being
s_i p_i = K exp(ξ^) P_i,
where s_i denotes the depth of the initial-frame ORB feature point as projected onto the adjacent frame, K denotes the camera intrinsic matrix, and ^ denotes the skew-symmetric operator;
in conjunction with the PnP algorithm, the error function e_P of the depth-missing ORB feature-point pairs is constructed;
Step II: for the M ORB feature-point pairs with complete depth information, denote the spatial coordinates corresponding to a matched pair as Q_i = (X_i, Y_i, Z_i)^T on the initial frame and Q_i* = (X_i*, Y_i*, Z_i*)^T on the adjacent frame; using the reprojection method, compute the spatial coordinates of the initial-frame ORB feature point Q_i = (X_i, Y_i, Z_i)^T projected onto the adjacent frame, the Lie-algebra relationship being
Q_i^proj = exp(ξ^) Q_i,
where ^ denotes the skew-symmetric operator;
in conjunction with the ICP algorithm, the error function e_Q of the depth-complete ORB feature-point pairs is constructed;
Step III: construct the minimum reprojection-error model e as
e(ξ) = Σ_{i=1}^{N} || p_i* − (1/s_i) K exp(ξ^) P_i ||² + Σ_{i=1}^{M} || Q_i* − exp(ξ^) Q_i ||²,
where δξ denotes the perturbation, f_x and f_y denote the x- and y-axis components of the camera focal length in the pixel coordinate system, point P_i' = (x_i', y_i', z_i', 1)^T denotes the initial-frame depth-missing ORB feature point P_i = (x_i, y_i, z_i, 1)^T transformed by the total estimate ξ_0 of the camera pose change, and point Q_i' = (X_i', Y_i', Z_i')^T denotes the initial-frame depth-complete ORB feature point Q_i = (X_i, Y_i, Z_i)^T transformed by ξ_0.
4. The pose estimation method for a depth camera according to claim 1, characterized in that the computation of the optimized total estimate ξ_k with the Gauss-Newton method comprises the following steps:
1) give the total estimate ξ_0;
2) for the k-th iteration, compute the current Jacobian matrix J and error e;
3) solve for the increment Δξ_k from H Δξ_k = g, where H = J^T J and g = −J^T e(ξ);
4) if Δξ_k is smaller than a threshold, stop and output the optimized total estimate ξ_k; otherwise set ξ_{k+1} = ξ_k + Δξ_k and return to step 2).
5. The pose estimation method for a depth camera according to claim 1, characterized in that: for the N depth-missing ORB feature-point pairs, the estimate ξ_P is computed with the RANSAC algorithm; for the M depth-complete ORB feature-point pairs, the estimate ξ_Q is computed with the direct linear transformation method; the total estimate ξ_0 is then computed by fusing the two.
6. The pose estimation method for a depth camera according to claim 1, characterized in that: every two adjacent frames captured by the depth camera share an overlapping region, and the ORB feature points are extracted from the overlapping region.
CN201910769449.7A 2019-08-20 2019-08-20 Pose estimation method for depth camera Active CN110503688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910769449.7A CN110503688B (en) 2019-08-20 2019-08-20 Pose estimation method for depth camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910769449.7A CN110503688B (en) 2019-08-20 2019-08-20 Pose estimation method for depth camera

Publications (2)

Publication Number Publication Date
CN110503688A true CN110503688A (en) 2019-11-26
CN110503688B CN110503688B (en) 2022-07-22

Family

ID=68588838

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910769449.7A Active CN110503688B (en) 2019-08-20 2019-08-20 Pose estimation method for depth camera

Country Status (1)

Country Link
CN (1) CN110503688B (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145255A (en) * 2019-12-27 2020-05-12 浙江省北大信息技术高等研究院 Pose calculation method and system combining deep learning and geometric optimization
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN111360820A (en) * 2020-02-18 2020-07-03 哈尔滨工业大学 Distance space and image feature space fused hybrid visual servo method
CN111461998A (en) * 2020-03-11 2020-07-28 中国科学院深圳先进技术研究院 Environment reconstruction method and device
CN111540016A (en) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN111681279A (en) * 2020-04-17 2020-09-18 东南大学 Driving suspension arm space pose measurement method based on improved lie group nonlinear optimization
CN112102411A (en) * 2020-11-02 2020-12-18 中国人民解放军国防科技大学 Visual positioning method and device based on semantic error image
CN112419403A (en) * 2020-11-30 2021-02-26 海南大学 Indoor unmanned aerial vehicle positioning method based on two-dimensional code array
CN112435206A (en) * 2020-11-24 2021-03-02 北京交通大学 Method for reconstructing three-dimensional information of object by using depth camera
CN113012230A (en) * 2021-03-30 2021-06-22 华南理工大学 Method for placing surgical guide plate under auxiliary guidance of AR in operation
CN113298879A (en) * 2021-05-26 2021-08-24 北京京东乾石科技有限公司 Visual positioning method and device, storage medium and electronic equipment
CN113658264A (en) * 2021-07-12 2021-11-16 华南理工大学 Single image camera focal length estimation method based on distance information
CN113688816A (en) * 2021-07-21 2021-11-23 上海工程技术大学 Calculation method of visual odometer for improving ORB feature point extraction
CN114459507A (en) * 2022-03-03 2022-05-10 湖南大学无锡智能控制研究院 DVL installation error calibration method, device and system

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
CN108303099A (en) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 Autonomous navigation method in unmanned plane room based on 3D vision SLAM
JP2019087229A (en) * 2017-11-02 2019-06-06 キヤノン株式会社 Information processing device, control method of information processing device and program

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106875437A (en) * 2016-12-27 2017-06-20 北京航空航天大学 A kind of extraction method of key frame towards RGBD three-dimensional reconstructions
JP2019087229A (en) * 2017-11-02 2019-06-06 キヤノン株式会社 Information processing device, control method of information processing device and program
CN108303099A (en) * 2018-06-14 2018-07-20 江苏中科院智能科学技术应用研究院 Autonomous navigation method in unmanned plane room based on 3D vision SLAM

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
SITAO YAN ET AL.: "Pose calibration of two cameras with non-overlapped field of view", 《PROCEEDINGS OF SPIE》 *
代维: "Research on Autonomous Localization Technology Based on Vision/Inertial/Odometry in Indoor Environments", China Master's Theses Full-text Database, Information Science and Technology *
司增秀: "Research on Kinect-Based Indoor 3D Scene Construction Algorithms", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111145255B (en) * 2019-12-27 2022-08-09 浙江省北大信息技术高等研究院 Pose calculation method and system combining deep learning and geometric optimization
CN111145255A (en) * 2019-12-27 2020-05-12 浙江省北大信息技术高等研究院 Pose calculation method and system combining deep learning and geometric optimization
CN111260713A (en) * 2020-02-13 2020-06-09 青岛联合创智科技有限公司 Depth calculation method based on image
CN111360820A (en) * 2020-02-18 2020-07-03 哈尔滨工业大学 Distance space and image feature space fused hybrid visual servo method
CN111461998A (en) * 2020-03-11 2020-07-28 中国科学院深圳先进技术研究院 Environment reconstruction method and device
CN111681279A (en) * 2020-04-17 2020-09-18 东南大学 Driving suspension arm space pose measurement method based on improved lie group nonlinear optimization
CN111681279B (en) * 2020-04-17 2023-10-31 东南大学 Driving suspension arm space pose measurement method based on improved Liqun nonlinear optimization
CN111540016B (en) * 2020-04-27 2023-11-10 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN111540016A (en) * 2020-04-27 2020-08-14 深圳南方德尔汽车电子有限公司 Pose calculation method and device based on image feature matching, computer equipment and storage medium
CN112102411A (en) * 2020-11-02 2020-12-18 中国人民解放军国防科技大学 Visual positioning method and device based on semantic error image
CN112435206B (en) * 2020-11-24 2023-11-21 北京交通大学 Method for reconstructing three-dimensional information of object by using depth camera
CN112435206A (en) * 2020-11-24 2021-03-02 北京交通大学 Method for reconstructing three-dimensional information of object by using depth camera
CN112419403A (en) * 2020-11-30 2021-02-26 海南大学 Indoor unmanned aerial vehicle positioning method based on two-dimensional code array
CN113012230B (en) * 2021-03-30 2022-09-23 华南理工大学 Method for placing surgical guide plate under auxiliary guidance of AR in operation
CN113012230A (en) * 2021-03-30 2021-06-22 华南理工大学 Method for placing surgical guide plate under auxiliary guidance of AR in operation
CN113298879A (en) * 2021-05-26 2021-08-24 北京京东乾石科技有限公司 Visual positioning method and device, storage medium and electronic equipment
CN113298879B (en) * 2021-05-26 2024-04-16 北京京东乾石科技有限公司 Visual positioning method and device, storage medium and electronic equipment
CN113658264B (en) * 2021-07-12 2023-08-18 华南理工大学 Single image camera focal length estimation method based on distance information
CN113658264A (en) * 2021-07-12 2021-11-16 华南理工大学 Single image camera focal length estimation method based on distance information
CN113688816A (en) * 2021-07-21 2021-11-23 上海工程技术大学 Calculation method of visual odometer for improving ORB feature point extraction
CN113688816B (en) * 2021-07-21 2023-06-23 上海工程技术大学 Calculation method of visual odometer for improving ORB feature point extraction
CN114459507A (en) * 2022-03-03 2022-05-10 湖南大学无锡智能控制研究院 DVL installation error calibration method, device and system
CN114459507B (en) * 2022-03-03 2024-02-09 湖南大学无锡智能控制研究院 DVL installation error calibration method, device and system

Also Published As

Publication number Publication date
CN110503688B (en) 2022-07-22

Similar Documents

Publication Publication Date Title
CN110503688A (en) A kind of position and orientation estimation method for depth camera
CN107292965B (en) Virtual and real shielding processing method based on depth image data stream
CN110223348B (en) Robot scene self-adaptive pose estimation method based on RGB-D camera
CN109166149B (en) Positioning and three-dimensional line frame structure reconstruction method and system integrating binocular camera and IMU
Wang et al. Region ensemble network: Towards good practices for deep 3D hand pose estimation
CN106553195B (en) Object 6DOF localization method and system during industrial robot crawl
CN103106688B (en) Based on the indoor method for reconstructing three-dimensional scene of double-deck method for registering
Tang et al. ESTHER: Joint camera self-calibration and automatic radial distortion correction from tracking of walking humans
CN112053447B (en) Augmented reality three-dimensional registration method and device
CN106780592A (en) Kinect depth reconstruction algorithms based on camera motion and image light and shade
CN109523589A (en) A kind of design method of more robust visual odometry
CN107679537A (en) A kind of texture-free spatial target posture algorithm for estimating based on profile point ORB characteristic matchings
CN109325995B (en) Low-resolution multi-view hand reconstruction method based on hand parameter model
CN111062966B (en) Method for optimizing camera tracking based on L-M algorithm and polynomial interpolation
CN110163902B (en) Inverse depth estimation method based on factor graph
CN110310331A (en) A kind of position and orientation estimation method based on linear feature in conjunction with point cloud feature
CN111882602A (en) Visual odometer implementation method based on ORB feature points and GMS matching filter
CN114494150A (en) Design method of monocular vision odometer based on semi-direct method
CN110176041B (en) Novel train auxiliary assembly method based on binocular vision algorithm
Ito et al. Accurate and robust planar tracking based on a model of image sampling and reconstruction process
CN112365589B (en) Virtual three-dimensional scene display method, device and system
CN104156933A (en) Image registering method based on optical flow field
CN110009683B (en) Real-time on-plane object detection method based on MaskRCNN
CN108491752A (en) A kind of hand gestures method of estimation based on hand Segmentation convolutional network
Kurz et al. Bundle adjustment for stereoscopic 3d

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant