WO2018214179A1 - Low-dimensional bundle adjustment calculation method and system - Google Patents

Low-dimensional bundle adjustment calculation method and system Download PDF

Info

Publication number
WO2018214179A1
Authority
WO
WIPO (PCT)
Prior art keywords
view
jth
relative
views
dimensional
Prior art date
Application number
PCT/CN2017/087500
Other languages
French (fr)
Chinese (zh)
Inventor
武元新
蔡奇
郁文贤
Original Assignee
上海交通大学
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 上海交通大学 filed Critical 上海交通大学
Publication of WO2018214179A1 publication Critical patent/WO2018214179A1/en

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G06T7/579 Depth or shape recovery from multiple images from motion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/285 Analysis of motion using a sequence of stereo image pairs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/30 Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337 Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Definitions

  • the present invention relates to the field of computer vision and photogrammetry, and in particular to a low-dimensional bundle adjustment calculation method and system.
  • Bundle Adjustment which restores 3D scene point coordinates, motion parameters and camera parameters from multiple views, is one of the core technologies in the fields of computer vision and photogrammetry.
  • the goal of the bundle adjustment technique is to minimize the reprojection error of the image points, which can be represented as a nonlinear function of the three-dimensional scene point coordinates, motion parameters, and camera parameters.
  • the parameter space is 3*m+6*n dimensions. Since the number of three-dimensional scene points is usually large, the dimension of the parameter space to be optimized is huge.
  • at present, the mainstream bundle adjustment methods are implemented with nonlinear optimization algorithms that exploit the sparsity of the Jacobian matrix to improve calculation speed; however, their parameter space still has many dimensions and needs further improvement to meet real-time computing needs.
  • an object of the present invention is to provide a low-dimensional bundle adjustment calculation method and system.
  • the invention expresses the depth of field of multiple views as a function of the pairwise relative motion parameters, recovers the motion parameters directly from the multiple views, and then obtains the three-dimensional scene point coordinates from the motion parameters.
  • a low-dimensional bundle adjustment calculation method includes the following steps:
  • Step 1 Determine the initial value of the motion parameter
  • Step 2 Minimize the objective function of the motion parameter to obtain the optimized motion parameter
  • Step 3 Calculate the coordinates of the three-dimensional scene points according to the optimized motion parameters.
  • the step 1 comprises the following steps:
  • n is the number of views participating in the bundle adjustment
  • R j,j+1 is the relative attitude of the j+1th view relative to the jth view
  • t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view, i.e. ||t j,j+1|| = 1;
  • m (j, j+1) represents the number of matching image point pairs in the dual view composed of the jth and j+1th views
  • Step 1.2 Fix ||T 1,2|| = 1 and, from the common matching feature points of each three-view set, compute the relative displacement scale to obtain the scale-consistent relative displacement vector T j,j+1
  • T 1,2 is the relative displacement vector of the second view relative to the first view
  • T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view
  • T j-1,j is the relative displacement vector of the jth view relative to the j-1th view
  • m (j-1, j, j+1) represents the number of common matching image point pairs in the three views composed of the j-1th, jth, and j+1th views;
  • t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view
  • Step 1.3 Calculate the absolute pose (R j+1, T j+1) of the j+1th view according to the absolute pose (R j, T j) of the jth view:
  • R j+1 = R j,j+1 R j
  • T j+1 = T j,j+1 + R j,j+1 T j
  • R j represents the absolute pose of the jth view
  • R j+1 represents the absolute pose of the j+1th view
  • R j,j+1 is the relative attitude of the j+1th view relative to the jth view
  • T j represents the absolute displacement vector of the jth view
  • T j+1 represents the absolute displacement vector of the j+1th view
  • T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view
  • R 1 represents the absolute attitude of the first view
  • T 1 represents the absolute displacement vector of the first view
  • I 3 represents the 3×3 identity matrix
  • 0 3×1 represents the 3×1 zero vector.
  • the objective function of the motion parameter is specifically as follows:
  • θ represents the absolute pose parameter set of all views
  • δ(·) denotes the objective function to be minimized
  • m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views
  • R j,k is the relative attitude of the kth view relative to the jth view
  • T j,k is the relative displacement vector of the kth view relative to the jth view.
  • the step 3 includes the following steps:
  • the coordinates of the three-dimensional scene points are calculated by weighting as follows:
  • T j,k = T k - R j,k T j
  • X i represents the three-dimensional coordinates of the i-th three-dimensional scene point, and the three-dimensional scene point X i corresponds to the sth image feature point in the dual view formed by the j-th and k-th views;
  • R j represents the absolute pose of the jth view
  • T j,k is the relative displacement vector of the kth view relative to the jth view
  • R k represents the absolute pose of the kth view
  • T j represents the absolute displacement vector of the jth view
  • T k represents the absolute displacement vector of the kth view
  • R j,k represents the relative pose of the kth view relative to the jth view
  • T j,k represents the relative displacement vector of the kth view relative to the jth view.
  • the low-dimensional bundle adjustment calculation method considers the case where the camera has been calibrated, and assumes that the matching image point pairs between the views have been determined.
  • a low-dimensional bundle adjustment computing system includes a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the low-dimensional bundle adjustment calculation method described above are implemented.
  • the present invention has the following beneficial effects:
  • the invention is a low-dimensional bundle adjustment method with simple initialization, good robustness, faster calculation speed and higher calculation precision.
  • the invention can be used as a core calculation engine for applications such as unmanned vehicle/unmanned aerial vehicle visual navigation, visual three-dimensional reconstruction and augmented reality.
  • FIG. 1 is a flow chart showing the steps of the low-dimensional bundle adjustment method provided in accordance with the present invention.
  • the present invention expresses the depth of field as a function of the motion parameters, thereby eliminating the coordinates of the three-dimensional scene points from the parameter optimization process of the bundle adjustment.
  • the parameter space is 6*n dimensions.
  • the bundle adjustment method proposed by the present invention greatly reduces the dimension of the parameter space.
  • the present invention contemplates situations where the camera has been calibrated and assumes that matching image point pairs between views have been determined.
  • n is the number of views in the bundle adjustment, sequentially numbered as view 1, view 2, ... view n;
  • R i represents the absolute pose of the i-th view
  • T i = ||T i|| t i represents the absolute displacement vector of the i-th view
  • t i represents the unit absolute displacement vector of the i-th view, i.e. ||t i|| = 1;
  • θ represents the absolute pose parameter set of all views
  • T j,k ⁇ T k -R jk T j represents a relative displacement vector of the kth view with respect to the jth view
  • T j,k
  • t j,k ,t j,k is the unit relative displacement vector of the kth view relative to the jth view, ie
  • 1 ;
  • ⁇ j ⁇ represents all feature point sets on the jth view
  • ⁇ j,k ⁇ denotes a set of common matching feature points on the jth and kth views, ⁇ j,k,... ⁇ and so on, representing a set of common matching feature points on three or more views;
  • (j, k) represents a dual view of the jth and kth views
  • m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views
  • the normalized image point coordinates of the i-th matching image point pair in the dual view composed of the jth and kth views, on the jth view and the kth view respectively, are such that the first two components are the calibrated image point coordinates and the third component is 1.
  • a low-dimensional bundle adjustment method includes the following steps:
  • Step 1 Determine the initial value of the motion parameter
  • Step 2 Minimize the objective function of the motion parameter to obtain the optimized motion parameter
  • Step 3 Calculate the coordinates of the three-dimensional scene points according to the optimized motion parameters.
  • the step 1 includes the following steps:
  • Step 1.1 solves the relative pose of each consecutive view pair using the direct linear transformation (DLT) algorithm
  • n is the number of views participating in the bundle adjustment
  • R j,j+1 is the relative attitude of the j+1th view relative to the jth view
  • t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view, i.e. ||t j,j+1|| = 1;
  • m (j, j+1) represents the number of matching image point pairs in the dual view composed of the jth and j+1th views
  • Step 1.2 Without loss of generality, fix ||T 1,2|| = 1 and, from the common matching feature points of each three-view set, compute the relative displacement scale to obtain the scale-consistent relative displacement vector T j,j+1
  • T 1,2 is the relative displacement vector of the second view relative to the first view
  • T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view
  • T j-1,j is the relative displacement vector of the jth view relative to the j-1th view
  • m (j-1, j, j+1) represents the number of common matching image point pairs in the three views composed of the j-1th, jth, and j+1th views;
  • t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view
  • Step 1.3 Calculate the absolute pose (R j+1, T j+1) of the j+1th view according to the absolute pose (R j, T j) of the jth view:
  • R j+1 = R j,j+1 R j
  • T j+1 = T j,j+1 + R j,j+1 T j
  • R j represents the absolute pose of the jth view
  • R j+1 represents the absolute pose of the j+1th view
  • R j,j+1 is the relative attitude of the j+1th view relative to the jth view
  • T j represents the absolute displacement vector of the jth view
  • T j+1 represents the absolute displacement vector of the j+1th view
  • T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view
  • R 1 represents the absolute attitude of the first view
  • T 1 represents the absolute displacement vector of the first view
  • I 3 represents the 3×3 identity matrix
  • 0 3×1 represents the 3×1 zero vector
  • the objective function of the motion parameter is specifically as follows:
  • θ represents the absolute pose parameter set of all views
  • δ(·) denotes the objective function to be minimized
  • m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views
  • R j,k is the relative attitude of the kth view relative to the jth view
  • T j,k is the relative displacement vector of the kth view relative to the jth view
  • step 3 is computed from the optimized values of the motion parameters. Specifically, step 3 includes the following steps:
  • the coordinates of the three-dimensional scene points are calculated by weighting as follows:
  • T j,k = T k - R j,k T j
  • X i represents the three-dimensional coordinates of the i-th three-dimensional scene point, and the three-dimensional scene point X i corresponds to the sth image feature point in the dual view formed by the j-th and k-th views;
  • R j represents the absolute pose of the jth view
  • T j,k is the relative displacement vector of the kth view relative to the jth view
  • R k represents the absolute pose of the kth view
  • T j represents the absolute displacement vector of the jth view
  • T k represents the absolute displacement vector of the kth view
  • R j,k represents the relative pose of the kth view relative to the jth view
  • T j,k represents the relative displacement vector of the kth view relative to the jth view.
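The dimension claim in the bullets above (3*m+6*n parameters for classic bundle adjustment versus 6*n for the low-dimensional method) is easy to check numerically; the point and view counts below are illustrative, not taken from the patent.

```python
def bundle_adjustment_dims(m_points: int, n_views: int) -> tuple[int, int]:
    """Parameter-space dimensions: classic bundle adjustment optimizes 3
    coordinates per scene point plus 6 pose parameters per view; the
    low-dimensional method keeps only the 6-per-view motion parameters."""
    classic = 3 * m_points + 6 * n_views
    low_dim = 6 * n_views
    return classic, low_dim

# Illustrative sizes: 10,000 scene points observed in 100 views.
classic, low_dim = bundle_adjustment_dims(10_000, 100)
print(classic, low_dim)  # 30600 600
```

Because the scene-point count m usually dwarfs the view count n, dropping the 3*m point coordinates is where almost all of the reduction comes from.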

Abstract

A low-dimensional bundle adjustment calculation method and system. The method comprises: determining an initial value of a motion parameter; performing optimization calculation on an objective function of the motion parameter to acquire an optimized motion parameter; and calculating coordinates of a three-dimensional scene point according to the optimized motion parameter. By representing depth of field of multiple views as a function of relative motion parameters of every two views, directly restoring motion parameters from multiple views is achieved. The motion parameters are then analyzed to acquire coordinates of a three-dimensional scene point, thereby removing the coordinates of the three-dimensional scene point from a bundle adjustment parameter optimization process, significantly reducing spatial dimensions of parameters. The method is a low-dimensional bundle adjustment method having simple initialization, good robustness, a fast calculation speed and higher calculation precision. The present invention can be used as a core calculation engine for applications such as unmanned vehicle/unmanned aircraft visual navigation, visual three-dimensional reconstruction and augmented reality.

Description

Low-dimensional bundle adjustment calculation method and system

Technical field
The present invention relates to the field of computer vision and photogrammetry, and in particular to a low-dimensional bundle adjustment calculation method and system.
Background
Bundle adjustment, i.e. recovering three-dimensional scene point coordinates, motion parameters and camera parameters from multiple views, is one of the core technologies of computer vision and photogrammetry. Its goal is to minimize the reprojection error of the image points, which can be expressed as a nonlinear function of the three-dimensional scene point coordinates, the motion parameters and the camera parameters. For m three-dimensional scene points and n views, the parameter space has 3*m+6*n dimensions; since the number of scene points is usually large, the dimension of the parameter space to be optimized is huge. At present, mainstream bundle adjustment methods are implemented with nonlinear optimization algorithms that exploit the sparsity of the Jacobian matrix to speed up computation, but their parameter space still has many dimensions and needs further improvement to meet real-time computing needs.
Summary of the invention
In view of the deficiencies in the prior art, an object of the present invention is to provide a low-dimensional bundle adjustment calculation method and system. The invention expresses the depth of field of multiple views as a function of the pairwise relative motion parameters, recovers the motion parameters directly from the multiple views, and then analytically obtains the three-dimensional scene point coordinates from the motion parameters.
A low-dimensional bundle adjustment calculation method according to the present invention includes the following steps:
Step 1: determine the initial values of the motion parameters;

Step 2: minimize the objective function of the motion parameters to obtain the optimized motion parameters;

Step 3: calculate the three-dimensional scene point coordinates from the optimized motion parameters.
Preferably, step 1 includes the following steps:
Step 1.1: For the dual view formed by the jth and (j+1)th views, j = 1, 2, ..., n-1, apply the direct linear transformation (DLT) algorithm to the image feature points corresponding to the common matching feature point set {j, j+1} on the dual view, and solve for the relative pose (R j,j+1, t j,j+1) of the (j+1)th view with respect to the jth view;

where:

n is the number of views participating in the bundle adjustment;

R j,j+1 is the relative attitude of the (j+1)th view with respect to the jth view;

t j,j+1 is the unit relative displacement vector of the (j+1)th view with respect to the jth view, i.e. ||t j,j+1|| = 1;

Then compute the three-dimensional coordinates, in the jth view coordinate system, of the ith matching image point pair corresponding to {j, j+1}, and likewise its three-dimensional coordinates in the (j+1)th view coordinate system (the triangulation formulas appear only as equation images in the source and are not reproduced here);

where:

i = 1, 2, ..., m (j,j+1);

m (j,j+1) is the number of matching image point pairs in the dual view formed by the jth and (j+1)th views;

the remaining symbols in these formulas are the normalized image point coordinates of the ith matching image point pair on the jth view and on the (j+1)th view, and the resulting three-dimensional coordinates of that pair in the jth and (j+1)th view coordinate systems.
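Step 1.1 relies on a direct linear transformation to obtain the relative pose of consecutive views. The patent does not spell the DLT out, so the sketch below shows one common realization as an assumption: the eight-point linear estimate of the essential matrix E from matched normalized image points (the subsequent decomposition of E into (R j,j+1, t j,j+1) is omitted).

```python
import numpy as np

def essential_dlt(x_j: np.ndarray, x_k: np.ndarray) -> np.ndarray:
    """Estimate the essential matrix E (up to scale) with x_k^T E x_j = 0 for
    every matched pair, from at least 8 matches of normalized image points.
    x_j, x_k: (m, 3) arrays of homogeneous normalized coordinates."""
    # One linear constraint per match: rows of A multiply vec(E), row-major.
    A = np.einsum("ma,mb->mab", x_k, x_j).reshape(-1, 9)
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Project onto the essential manifold: singular values (1, 1, 0).
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# Synthetic sanity check with a known relative pose (illustrative values).
rng = np.random.default_rng(0)
c, s = np.cos(0.1), np.sin(0.1)
R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])  # small z-rotation
t = np.array([0.5, 0.1, 0.0])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(20, 3))  # points in front of both views
x_j = X / X[:, 2:3]
X_k = X @ R.T + t
x_k = X_k / X_k[:, 2:3]
E = essential_dlt(x_j, x_k)
max_residual = np.abs(np.einsum("mi,ij,mj->m", x_k, E, x_j)).max()
```

With noise-free matches the epipolar residuals x_k^T E x_j vanish up to floating-point error; in practice the estimate would be followed by the SVD-based decomposition of E into the four (R, t) candidates and a cheirality test.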
Step 1.2: Fix ||T 1,2|| = 1. For the three views formed by the (j-1)th, jth and (j+1)th views, j = 2, 3, ..., n-1, use the common matching feature point set {j-1, j, j+1} on the three views to compute the relative displacement scale ||T j,j+1|| / ||T j-1,j|| (the scale formula appears only as an equation image in the source), and obtain the scale-consistent relative displacement vector

T j,j+1 = ||T j,j+1|| t j,j+1;

where:

T 1,2 is the relative displacement vector of the 2nd view with respect to the 1st view;

T j,j+1 is the relative displacement vector of the (j+1)th view with respect to the jth view;

T j-1,j is the relative displacement vector of the jth view with respect to the (j-1)th view;

m (j-1,j,j+1) is the number of common matching image point pairs in the three views formed by the (j-1)th, jth and (j+1)th views;

the scale formula also involves the three-dimensional coordinates, in the jth view coordinate system, of the ith matching image point pair of {j-1, j} and of {j, j+1};

t j,j+1 is the unit relative displacement vector of the (j+1)th view with respect to the jth view;
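Once the per-triplet scale ratios ||T j,j+1|| / ||T j-1,j|| are available, the bookkeeping of step 1.2 reduces to fixing ||T 1,2|| = 1 and multiplying the ratios along the view sequence. The sketch below takes the ratios as given inputs, since their formula appears only as an equation image in the source.

```python
import numpy as np

def chain_relative_scales(unit_t, ratios):
    """Make the unit relative displacements t_{j,j+1} scale-consistent
    (step 1.2): fix ||T_{1,2}|| = 1 (the global scale gauge), then propagate
    the per-triplet scale ratios r_j = ||T_{j,j+1}|| / ||T_{j-1,j}|| along the
    sequence. unit_t: list of n-1 unit vectors; ratios: list of n-2 ratios."""
    norms = [1.0]  # ||T_{1,2}|| fixed to 1
    for r in ratios:
        norms.append(norms[-1] * r)
    return [s * t for s, t in zip(norms, unit_t)]

# Toy run: three consecutive unit displacements and two triplet ratios.
unit_t = [np.array([1.0, 0.0, 0.0]),
          np.array([0.0, 1.0, 0.0]),
          np.array([0.0, 0.0, 1.0])]
T = chain_relative_scales(unit_t, ratios=[2.0, 0.5])
```

Fixing ||T 1,2|| rather than estimating it reflects the gauge freedom of structure from motion: the global scale is unobservable from images alone, so any one displacement norm can be chosen as the unit.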
Step 1.3: From the absolute pose (R j, T j) of the jth view, compute the absolute pose (R j+1, T j+1) of the (j+1)th view:

R j+1 = R j,j+1 R j

T j+1 = T j,j+1 + R j,j+1 T j

where:

R j denotes the absolute attitude of the jth view;

R j+1 denotes the absolute attitude of the (j+1)th view;

R j,j+1 is the relative attitude of the (j+1)th view with respect to the jth view;

T j denotes the absolute displacement vector of the jth view;

T j+1 denotes the absolute displacement vector of the (j+1)th view;

T j,j+1 is the relative displacement vector of the (j+1)th view with respect to the jth view;

When the first view is taken as the reference:

(R 1, T 1) ≡ (I 3, 0 3×1)

where:

R 1 denotes the absolute attitude of the first view;

T 1 denotes the absolute displacement vector of the first view;

I 3 is the 3×3 identity matrix;

0 3×1 is the 3×1 zero vector.
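The recursion of step 1.3 can be written down directly from the two equations above; a minimal sketch:

```python
import numpy as np

def chain_absolute_poses(rel_R, rel_T):
    """Step 1.3: accumulate absolute poses from scale-consistent relative
    ones, R_{j+1} = R_{j,j+1} R_j and T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j,
    taking the first view as reference: (R_1, T_1) = (I_3, 0_{3x1})."""
    R, T = [np.eye(3)], [np.zeros(3)]
    for R_rel, T_rel in zip(rel_R, rel_T):
        R.append(R_rel @ R[-1])
        T.append(T_rel + R_rel @ T[-1])
    return R, T

def rot_z(a):
    """Rotation about the z axis by angle a (helper for the toy run)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

# Toy run: two consecutive 90-degree turns compose into a 180-degree attitude.
R, T = chain_absolute_poses(
    [rot_z(np.pi / 2), rot_z(np.pi / 2)],
    [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])],
)
```

Note that the displacement update rotates the accumulated T j by R j,j+1 before adding the relative displacement, matching the view-frame convention x_view = R X + T used throughout the text.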
Preferably, in step 2, the objective function of the motion parameters is given as follows: the objective function δ(θ) to be minimized over the motion parameters θ = (R j, T j), j = 1, 2, ..., n, appears only as an equation image in the source, together with

e 3 = [0 0 1]^T

where:

θ denotes the set of absolute pose parameters of all views;

δ(·) denotes the objective function to be minimized;

m (j,k) is the number of matching image point pairs in the dual view formed by the jth and kth views;

the objective function also involves the normalized image point coordinates, on the jth and on the kth view, of the ith matching image point pair corresponding to the common matching feature point set {j, k};

R j,k is the relative attitude of the kth view with respect to the jth view;

T j,k is the relative displacement vector of the kth view with respect to the jth view.
Preferably, the minimized objective function δ(θ) of the motion parameters θ = (R j, T j), j = 1, 2, ..., n, given in step 2 rests on the premise that the distances from the same three-dimensional scene point to the same view are equal.
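Since the objective function itself survives only as an equation image, the sketch below shows one consistent reading of the described idea, offered purely as an assumption: the depth of each match is eliminated in closed form from the relative motion, and a reprojection-style residual is formed with e 3 = [0 0 1]^T as in the text.

```python
import numpy as np

def pair_residuals(x_j, x_k, R_jk, T_jk):
    """Reprojection-style residuals for one view pair with depth eliminated.
    The patent's objective is an equation image in the source; this form is an
    assumption consistent with the described idea: the depth d of a match in
    view j follows in closed form from x_k x (d R_jk x_j + T_jk) = 0, and the
    point d R_jk x_j + T_jk is reprojected into view k (e_3 = [0 0 1]^T)."""
    e3 = np.array([0.0, 0.0, 1.0])
    res = []
    for a, b in zip(x_j, x_k):
        v = R_jk @ a
        cv, cT = np.cross(b, v), np.cross(b, T_jk)
        d = -(cv @ cT) / (cv @ cv)   # least-squares depth of the match in view j
        p = d * v + T_jk             # the scene point in view-k coordinates
        res.append(b[:2] - p[:2] / (e3 @ p))
    return np.concatenate(res)

# Noise-free synthetic data must give (numerically) zero residuals.
rng = np.random.default_rng(1)
c, s = np.cos(0.2), np.sin(0.2)
R = np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])  # rotation about y
t = np.array([0.3, -0.2, 0.1])
X = rng.uniform([-1, -1, 4], [1, 1, 8], size=(10, 3))
x_j = X / X[:, 2:3]
X_k = X @ R.T + t
x_k = X_k / X_k[:, 2:3]
residuals = pair_residuals(x_j, x_k, R, t)
```

A residual vector like this, stacked over all view pairs, depends only on the 6n pose parameters, which is exactly what lets a generic nonlinear least-squares solver optimize the motion without the 3m point coordinates.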
Preferably, step 3 includes the following steps:

According to the optimized motion parameters θ = (R j, T j), j = 1, 2, ..., n, for the dual view formed by the jth and kth views, the coordinates of the three-dimensional scene points are computed by a weighted combination (the weighting formulas appear only as equation images in the source), with

T j,k = T k - R j,k T j

where:

X i denotes the three-dimensional coordinates of the i-th three-dimensional scene point; the scene point X i corresponds to the sth image feature point in the dual view formed by the jth and kth views;

an indicator function marks whether the i-th scene point X i is visible in the dual view formed by the jth and kth views, i.e. it equals 1 when X i is visible in that dual view and 0 otherwise;

R j denotes the absolute attitude of the jth view;

T j,k is the relative displacement vector of the kth view with respect to the jth view;

the weighting formulas also involve the normalized image point coordinates, on the jth and on the kth view, of the sth matching image point pair corresponding to the common matching feature point set {j, k};

R k denotes the absolute attitude of the kth view;

T j denotes the absolute displacement vector of the jth view;

T k denotes the absolute displacement vector of the kth view;

R j,k denotes the relative attitude of the kth view with respect to the jth view.
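A sketch of step 3 under stated assumptions: each dual view in which the point is matched contributes a closed-form depth estimate, and the contributions are combined using the visibility indicator. The patent's exact weights appear only as equation images, so a plain visibility-weighted mean is assumed here.

```python
import numpy as np

def scene_point(observations):
    """Step 3 sketch: recover one scene point from several dual views.
    observations: list of (x_j, x_k, R_j, T_j, R_jk, T_jk, visible) tuples,
    one per dual view in which the point is matched; `visible` plays the role
    of the patent's indicator function. The per-pair depth uses the closed
    form d (x_k x R_jk x_j) = -(x_k x T_jk); the patent's exact weighting is
    an equation image in the source, so a visibility-weighted mean is assumed."""
    acc, weight = np.zeros(3), 0.0
    for a, b, R_j, T_j, R_jk, T_jk, vis in observations:
        if not vis:
            continue
        v = R_jk @ a
        cv, cT = np.cross(b, v), np.cross(b, T_jk)
        d = -(cv @ cT) / (cv @ cv)            # depth of the point in view j
        acc += vis * (R_j.T @ (d * a - T_j))  # back to world coordinates
        weight += vis
    return acc / weight

# Toy run: one point seen in the dual view (1, 2); view 1 is the reference.
X_true = np.array([0.2, -0.1, 5.0])
c, s = np.cos(0.3), np.sin(0.3)
R2 = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
T2 = np.array([0.4, 0.0, -0.1])
x1 = X_true / X_true[2]
p2 = R2 @ X_true + T2
x2 = p2 / p2[2]
# For views (j, k) = (1, 2): R_j = I, T_j = 0, R_jk = R2, T_jk = T2.
X_est = scene_point([(x1, x2, np.eye(3), np.zeros(3), R2, T2, 1.0)])
```

Because the motion parameters are already optimized when step 3 runs, this per-point recovery is independent across scene points and trivially parallelizable.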
Preferably, the low-dimensional bundle adjustment calculation method considers the case where the camera has been calibrated, and assumes that the matching image point pairs between the views have been determined.

A low-dimensional bundle adjustment computing system according to the present invention includes a computer-readable storage medium storing a computer program; when the computer program is executed by a processor, the steps of the low-dimensional bundle adjustment calculation method described above are implemented.
Compared with the prior art, the present invention has the following beneficial effects:

The invention is a low-dimensional bundle adjustment method with simple initialization, good robustness, faster calculation speed and higher calculation precision. It can be used as a core calculation engine for applications such as unmanned vehicle/unmanned aerial vehicle visual navigation, visual three-dimensional reconstruction and augmented reality.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent from the following detailed description of non-limiting embodiments, read with reference to the accompanying drawings:

Figure 1 is a flow chart of the steps of the low-dimensional bundle adjustment method provided by the present invention.
Detailed description
The invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the invention, but do not limit the invention in any form. It should be noted that those of ordinary skill in the art can make several changes and improvements without departing from the inventive concept; these all fall within the scope of protection of the present invention.
The present invention expresses the depth of field as a function of the motion parameters, thereby eliminating the three-dimensional scene point coordinates from the parameter optimization of bundle adjustment. For m three-dimensional scene points and n views, the parameter space then has 6*n dimensions. Compared with current mainstream methods, the proposed bundle adjustment method greatly reduces the dimension of the parameter space.
The present invention considers the case where the camera has been calibrated, and assumes that the matching image point pairs between the views have been determined.
The notation used below is defined as follows:
Let n be the number of views in the bundle adjustment, numbered sequentially as view 1, view 2, ... view n;
(Ri,Ti)表示第i幅视图的绝对位姿;(R i , T i ) represents the absolute pose of the i-th view;
Ri表示第i幅视图的绝对姿态;R i represents the absolute pose of the i-th view;
Ti=||Ti||ti表示第i幅视图的绝对位移向量;T i =||T i ||t i represents the absolute displacement vector of the i-th view;
ti表示第i幅视图的单位绝对位移向量,即||ti||=1;t i represents the unit absolute displacement vector of the i-th view, ie ||t i ||=1;
θ表示所有视图的绝对位姿参数集合;θ represents the absolute pose parameter set of all views;
R j,k ≡R k R j T 表示第k幅视图相对于第j幅视图的相对姿态；R j,k ≡R k R j T represents the relative attitude of the kth view relative to the jth view, where R j T denotes the transpose of R j ;
Tj,k≡Tk-Rj,kTj表示第k幅视图相对于第j幅视图的相对位移向量；T j,k ≡T k -R j,k T j represents the relative displacement vector of the kth view relative to the jth view;
Tj,k=||Tj,k||tj,k,tj,k为第k幅视图相对于第j幅视图的单位相对位移向量,即||tj,k||=1;T j,k =||T j,k ||t j,k ,t j,k is the unit relative displacement vector of the kth view relative to the jth view, ie ||t j,k ||=1 ;
(Rj,k,tj,k)表示第k幅视图相对于第j幅视图的相对位姿;(R j,k , t j,k ) represents the relative pose of the kth view relative to the jth view;
{j}表示第j幅视图上所有的特征点集;{j} represents all feature point sets on the jth view;
{j,k}表示第j幅和第k幅视图上的公共匹配特征点集,{j,k,...}以此类推,表示三幅以上视图上的公共匹配特征点集;{j,k} denotes a set of common matching feature points on the jth and kth views, {j,k,...} and so on, representing a set of common matching feature points on three or more views;
(j,k)表示第j幅和第k幅视图组成的双视图; (j, k) represents a dual view of the jth and kth views;
m(j,k)表示第j幅和第k幅视图组成的双视图中的匹配图像点对数目;m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views;
Figure PCTCN2017087500-appb-000026
分别为第j幅和第k幅视图组成的双视图中的第i个匹配图像点对在第j幅视图、第k幅视图上的归一化图像点坐标，即前两个分量为标定后的图像点坐标，第三个分量为1。
Figure PCTCN2017087500-appb-000026
are the normalized image point coordinates, on the jth view and the kth view respectively, of the i-th matching image point pair in the dual view composed of the jth and kth views; that is, the first two components are the calibrated image point coordinates and the third component is 1.
根据本发明提供的一种低维度集束调整方法,包括如下步骤:A low-dimensional bundle adjustment method according to the present invention includes the following steps:
步骤1:确定运动参数的初值;Step 1: Determine the initial value of the motion parameter;
步骤2:对运动参数的目标函数进行最小化计算,得到优化后的运动参数;Step 2: Minimize the objective function of the motion parameter to obtain the optimized motion parameter;
步骤3:根据优化后的运动参数,计算三维场景点坐标。Step 3: Calculate the coordinates of the three-dimensional scene points according to the optimized motion parameters.
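The three steps above can be sketched as a pipeline skeleton; this is an illustrative outline, and all function names are hypothetical rather than taken from the patent:

```python
# Hypothetical skeleton of the three-step method; bodies are placeholders.
def initialize_motion_parameters(matches):
    # Step 1: relative poses via DLT, scale unification, pose chaining.
    return {"poses": []}

def minimize_objective(theta, matches):
    # Step 2: nonlinear minimization over the 6*n motion parameters only.
    return theta

def triangulate_scene_points(theta, matches):
    # Step 3: weighted triangulation of scene points from optimized poses.
    return []

def low_dimensional_bundle_adjustment(matches):
    theta = initialize_motion_parameters(matches)      # Step 1
    theta = minimize_objective(theta, matches)         # Step 2
    points = triangulate_scene_points(theta, matches)  # Step 3
    return theta, points
```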
下面对各个步骤进行详细说明。The individual steps are described in detail below.
所述步骤1包括如下步骤:The step 1 includes the following steps:
步骤1.1:对于第j幅和第j+1幅视图构成的双视图,j=1,2,…,n-1,对该双视图上的公共匹配特征点集{j,j+1}所对应的图像特征点,采用直接线性变换(DLT,Direct Linear Transformation)算法,求解第j+1幅视图相对于第j幅视图的相对位姿(Rj,j+1,tj,j+1);Step 1.1: For the double view of the jth and j+1th views, j=1, 2,..., n-1, the common matching feature point set {j, j+1} on the double view Corresponding image feature points, using Direct Linear Transformation (DLT) algorithm, solve the relative pose of the j+1th view relative to the jth view (R j, j+1 , t j, j+1 );
其中：where:
n为参与集束调整的视图数目;n is the number of views participating in the bundle adjustment;
Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative attitude of the j+1th view relative to the jth view;
tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量,即||tj,j+1||=1;t j, j+1 is the unit relative displacement vector of the j+1th view relative to the jth view, ie ||t j, j+1 ||=1;
计算公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标
Figure PCTCN2017087500-appb-000027
以及公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标
Figure PCTCN2017087500-appb-000028
Calculate the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j-th view coordinate system
Figure PCTCN2017087500-appb-000027
And the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j+1th view coordinate system
Figure PCTCN2017087500-appb-000028
Figure PCTCN2017087500-appb-000029
Figure PCTCN2017087500-appb-000029
Figure PCTCN2017087500-appb-000030
Figure PCTCN2017087500-appb-000030
其中：where:
i=1,2,...,m(j,j+1)i=1,2,...,m (j,j+1) ;
m(j,j+1)表示第j幅和第j+1幅视图组成的双视图中的匹配图像点对数目;m (j, j+1) represents the number of matching image point pairs in the dual view composed of the jth and j+1th views;
Figure PCTCN2017087500-appb-000031
为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000031
Normalized image point coordinates on the jth view for the i-th matching image point pair corresponding to the common matching feature point set {j, j+1};
Figure PCTCN2017087500-appb-000032
为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000032
Normalized image point coordinates on the j+1th view for the i-th matching image point pair corresponding to the common matching feature point set {j, j+1};
Figure PCTCN2017087500-appb-000033
表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure PCTCN2017087500-appb-000033
Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j-th view coordinate system;
Figure PCTCN2017087500-appb-000034
表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标;
Figure PCTCN2017087500-appb-000034
Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j+1th view coordinate system;
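The patent does not reproduce the DLT equations of step 1.1 here (they appear only as figure references). As a sketch, under the common reading that the relative pose is obtained from a linearly estimated essential matrix over the normalized matched points, the linear estimation can look as follows; this is a standard construction assumed for illustration, not the patent's exact formulation:

```python
import numpy as np

# Sketch: linear (eight-point / DLT-style) estimation of the essential
# matrix E from normalized matched points xj <-> xk (third component 1),
# using the epipolar constraint xk^T E xj = 0 for each pair.
def essential_from_matches(xj: np.ndarray, xk: np.ndarray) -> np.ndarray:
    # xj, xk: (m, 3) normalized homogeneous image points, m >= 8.
    # Each pair contributes one row: the coefficients of E.ravel().
    A = np.stack([np.outer(b, a).ravel() for a, b in zip(xj, xk)])
    _, _, Vt = np.linalg.svd(A)
    E = Vt[-1].reshape(3, 3)
    # Enforce the essential-matrix singular-value structure (s, s, 0).
    U, S, Vt = np.linalg.svd(E)
    s = (S[0] + S[1]) / 2.0
    return U @ np.diag([s, s, 0.0]) @ Vt
```

The relative attitude R j,j+1 and the unit relative displacement t j,j+1 would then be recovered from the decomposition of E, with the cheirality (positive-depth) check selecting the valid solution.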
步骤1.2：不失一般性，固定||T1,2||=1；对于第j-1幅、第j幅以及第j+1幅视图构成的三视图，j=2,3,…,n-1，根据该三视图上的公共匹配特征点集{j-1,j,j+1}，计算相对位移的尺度||Tj,j+1||/||Tj-1,j||，得到尺度统一的相对位移向量Tj,j+1：Step 1.2: Without loss of generality, fix ||T 1,2 ||=1. For the three-view set consisting of the (j-1)th, jth, and (j+1)th views, j=2,3,…,n-1, compute the relative displacement scale ||T j,j+1 ||/||T j-1,j || from the common matching feature point set {j-1,j,j+1} on the three views, obtaining the scale-unified relative displacement vector T j,j+1 :
Figure PCTCN2017087500-appb-000035
Figure PCTCN2017087500-appb-000035
Tj,j+1=||Tj,j+1||tj,j+1T j,j+1 =||T j,j+1 ||t j,j+1 ;
其中：where:
T1,2为第2幅视图相对于第1幅视图的相对位移向量;T 1,2 is the relative displacement vector of the second view relative to the first view;
Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view;
Tj-1,j为第j幅视图相对于第j-1幅视图的相对位移向量;T j-1,j is the relative displacement vector of the jth view relative to the j-1th view;
m(j-1,j,j+1)表示第j-1幅、第j幅以及第j+1幅视图构成的三视图中的公共匹配图像点对数目；m (j-1,j,j+1) represents the number of common matching image point pairs in the three views consisting of the j-1th, jth, and j+1th views;
Figure PCTCN2017087500-appb-000036
表示第j-1幅和第j幅视图上的公共匹配特征点集{j-1,j}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure PCTCN2017087500-appb-000036
Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j-1, j} on the j-1th and jth views in the j-th view coordinate system;
Figure PCTCN2017087500-appb-000037
表示第j幅和第j+1幅视图上的公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
Figure PCTCN2017087500-appb-000037
Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} on the j-th and j+1th views in the j-th view coordinate system;
tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量; t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view;
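The averaging formula of step 1.2 appears only as an image above. A plausible reading, sketched here under that assumption: the same physical point expressed in view-j coordinates is reconstructed once from the pair (j-1, j) and once from the pair (j, j+1), each with a unit baseline, so the ratio of the two magnitudes recovers the scale ||T j,j+1 ||/||T j-1,j ||:

```python
import numpy as np

# Sketch (assumed reading of the patent's image-only formula):
# average, over the common points, the ratio of the two reconstructions
# of the same point in view-j coordinates.
def relative_scale(X_prev_pair: np.ndarray, X_next_pair: np.ndarray) -> float:
    # X_prev_pair: (m, 3) points from pair (j-1, j), in view-j coordinates.
    # X_next_pair: (m, 3) the same points from pair (j, j+1), view-j frame.
    ratios = np.linalg.norm(X_prev_pair, axis=1) / np.linalg.norm(X_next_pair, axis=1)
    return float(np.mean(ratios))
```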
步骤1.3:根据第j幅视图的绝对位姿(Rj,Tj),计算得到第j+1幅视图的绝对位姿(Rj+1,Tj+1):Step 1.3: Calculate the absolute pose (R j+1 , T j+1 ) of the j+ 1th view according to the absolute pose (R j , T j ) of the jth view:
Rj+1=Rj,j+1Rj R j+1 =R j,j+1 R j
Tj+1=Tj,j+1+Rj,j+1Tj T j+1 =T j,j+1 +R j,j+1 T j
其中：where:
Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view;
Rj+1表示第j+1幅视图的绝对姿态;R j+1 represents the absolute pose of the j+1th view;
Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative attitude of the j+1th view relative to the jth view;
Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view;
Tj+1表示第j+1幅视图的绝对位移向量;T j+1 represents the absolute displacement vector of the j+1th view;
Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view;
当以第一幅视图为参考时:When referring to the first view:
(R1,T1)≡(I3,03×1)(R 1 , T 1 )≡(I 3 ,0 3×1 )
其中：where:
R1表示第一幅视图的绝对姿态;R 1 represents the absolute posture of the first view;
T1表示第一幅视图的绝对位移向量;T 1 represents the absolute displacement vector of the first view;
I3表示3维的单位矩阵;I 3 represents a 3-dimensional unit matrix;
03×1表示3行1列的零矩阵;0 3 × 1 represents a zero matrix of 3 rows and 1 column;
需要说明的是:It should be noted:
--在步骤1.1中,j的取值为j=1,2,…,n-1;- In step 1.1, the value of j is j = 1, 2, ..., n-1;
--在步骤1.2中,j的取值为j=2,3,…,n-1;- In step 1.2, the value of j is j = 2, 3, ..., n-1;
--在步骤1.3中,j的取值为j=1,2,…,n-1。- In step 1.3, the value of j is j = 1, 2, ..., n-1.
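Step 1.3 above can be sketched directly from the two chaining equations R j+1 =R j,j+1 R j and T j+1 =T j,j+1 +R j,j+1 T j ; the following is an illustrative implementation, not the patent's code:

```python
import numpy as np

# Sketch of step 1.3: chaining the scale-unified relative poses into
# absolute poses, with the first view as reference (R_1 = I, T_1 = 0).
def chain_poses(rel_poses):
    # rel_poses: list of (R_{j,j+1}, T_{j,j+1}) for j = 1 .. n-1.
    R, T = np.eye(3), np.zeros(3)
    absolute = [(R, T)]
    for R_rel, T_rel in rel_poses:
        T = T_rel + R_rel @ T  # T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j
        R = R_rel @ R          # R_{j+1} = R_{j,j+1} R_j
        absolute.append((R, T))
    return absolute
```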
在所述步骤2中,所述运动参数的目标函数具体如下:In the step 2, the objective function of the motion parameter is specifically as follows:
在相同三维场景点到相同视图的距离相等的前提下，运动参数θ=(Rj,Tj)j=1,2,…n的最小化目标函数δ(θ)如下给出：Under the premise that the distances from the same three-dimensional scene point to the same view are equal, the objective function δ(θ) minimized over the motion parameters θ=(R j , T j ) j=1,2,…n is given as follows:
Figure PCTCN2017087500-appb-000038
Figure PCTCN2017087500-appb-000038
e3=[0 0 1]T e 3 =[0 0 1] T
Figure PCTCN2017087500-appb-000039
Figure PCTCN2017087500-appb-000039
Figure PCTCN2017087500-appb-000040
Figure PCTCN2017087500-appb-000040
其中：where:
θ表示所有视图的绝对位姿参数集合;θ represents the absolute pose parameter set of all views;
δ(·)表示最小化目标函数;δ(·) means to minimize the objective function;
m(j,k)表示第j幅和第k幅视图组成的双视图中的匹配图像点对数目;m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views;
Figure PCTCN2017087500-appb-000041
为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第k幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000041
The normalized image point coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, k} on the jth and kth views on the kth view;
Figure PCTCN2017087500-appb-000042
为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000042
Normalized image point coordinates on the jth view for the i-th matching image point pair corresponding to the common matching feature point set {j, k} on the jth and kth views;
Rj,k为第k幅视图相对于第j幅视图的相对姿态;R j,k is the relative attitude of the kth view relative to the jth view;
Tj,k为第k幅视图相对于第j幅视图的相对位移向量;T j,k is the relative displacement vector of the kth view relative to the jth view;
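Although the objective function itself appears only as images above, the text makes clear that it is evaluated over the relative poses derived from the absolute pose parameters θ. A sketch of that derivation, using the identities R j,k = R k R j T (from the chaining rule) and T j,k = T k - R j,k T j (given in the definitions):

```python
import numpy as np

# Sketch: relative pose of view k with respect to view j, derived from
# the absolute poses (R_j, T_j) and (R_k, T_k) that the objective
# function is parameterized by.
def relative_pose(Rj, Tj, Rk, Tk):
    R_jk = Rk @ Rj.T          # R_{j,k} = R_k R_j^T
    T_jk = Tk - R_jk @ Tj     # T_{j,k} = T_k - R_{j,k} T_j
    return R_jk, T_jk
```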
由于通过步骤2已对步骤1得到的运动参数的初值进行了优化，得到了运动参数的优化值，因此，步骤3是根据运动参数的优化值进行计算。具体地，所述步骤3包括如下步骤：Since step 2 optimizes the initial motion parameter values obtained in step 1, step 3 proceeds from the optimized motion parameter values. Specifically, step 3 includes the following steps:
根据优化得到的运动参数θ=(Rj,Tj)j=1,2,…n,对于第j幅和第k幅视图构成的双视图,加权计算三维场景点的坐标如下:According to the optimized motion parameters θ=(R j ,T j ) j=1,2,...n , for the double view composed of the jth and kth views, the coordinates of the three-dimensional scene points are calculated by weighting as follows:
Figure PCTCN2017087500-appb-000043
Figure PCTCN2017087500-appb-000043
Figure PCTCN2017087500-appb-000044
Figure PCTCN2017087500-appb-000044
Tj,k=Tk-Rj,kTj T j,k =T k -R j,k T j
Figure PCTCN2017087500-appb-000045
Figure PCTCN2017087500-appb-000045
其中：where:
Xi表示第i个三维场景点的三维坐标,该三维场景点Xi对应第j幅和第k幅视图构成的双视图中的第s个图像特征点; X i represents the three-dimensional coordinates of the i-th three-dimensional scene point, and the three-dimensional scene point X i corresponds to the sth image feature point in the dual view formed by the j-th and k-th views;
Figure PCTCN2017087500-appb-000046
表示第i个三维场景点Xi在第j幅和第k幅视图构成的双视图中是否可见的标识函数,即当Xi在该双视图中可见时,
Figure PCTCN2017087500-appb-000047
否则,则
Figure PCTCN2017087500-appb-000048
Figure PCTCN2017087500-appb-000046
is the indicator function of whether the i-th three-dimensional scene point X i is visible in the dual view formed by the jth and kth views; that is, when X i is visible in that dual view,
Figure PCTCN2017087500-appb-000047
otherwise,
Figure PCTCN2017087500-appb-000048
Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view;
Tj,k为第k幅视图相对于第j幅视图的相对位移向量;T j,k is the relative displacement vector of the kth view relative to the jth view;
Figure PCTCN2017087500-appb-000049
表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第j幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000049
Representing the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j, k} on the jth view;
Figure PCTCN2017087500-appb-000050
表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第k幅视图上的归一化图像点坐标;
Figure PCTCN2017087500-appb-000050
Representing the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j, k} on the kth view;
Rk表示第k幅视图的绝对姿态;R k represents the absolute pose of the kth view;
Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view;
Tk表示第k幅视图的绝对位移向量;T k represents the absolute displacement vector of the kth view;
Rj,k表示第k幅视图相对于第j幅视图的相对姿态;R j,k represents the relative pose of the kth view relative to the jth view;
Tj,k表示第k幅视图相对于第j幅视图的相对位移向量。T j,k represents the relative displacement vector of the kth view relative to the jth view.
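The patent's weighted triangulation formula for step 3 appears only as images above. As a stand-in that the optimized absolute poses make possible, here is a standard linear two-view triangulation; it illustrates the role of step 3 but is not the patent's exact weighting. The projection convention assumed is the one consistent with the chaining equations, x ~ R X + T:

```python
import numpy as np

# Sketch: linear (DLT) triangulation of a world point from its normalized
# observations in two views with known absolute poses.
def triangulate(xj, xk, Rj, Tj, Rk, Tk):
    # xj, xk: normalized homogeneous image points (3,) in views j and k.
    rows = []
    for x, R, T in ((xj, Rj, Tj), (xk, Rk, Tk)):
        P = np.hstack([R, T.reshape(3, 1)])  # 3x4 normalized projection
        rows.append(x[0] * P[2] - P[0])      # x0 * (P3 . X) - (P1 . X) = 0
        rows.append(x[1] * P[2] - P[1])      # x1 * (P3 . X) - (P2 . X) = 0
    A = np.array(rows)
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                               # homogeneous solution
    return X[:3] / X[3]
```

In the patent's method this step runs over all visible dual views, with the indicator function selecting which views contribute to each point.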
以上对本发明的具体实施例进行了描述。需要理解的是，本发明并不局限于上述特定实施方式，本领域技术人员可以在权利要求的范围内做出各种变化或修改，这并不影响本发明的实质内容。在不冲突的情况下，本申请的实施例和实施例中的特征可以任意相互组合。Specific embodiments of the present invention have been described above. It should be understood that the invention is not limited to the specific embodiments described above; those skilled in the art may make various changes or modifications within the scope of the claims without affecting the substance of the invention. Where no conflict arises, the embodiments of the present application and the features in the embodiments may be combined with one another arbitrarily.

Claims (7)

  1. 一种低维度的集束调整计算方法，其特征在于，包括如下步骤：A low-dimensional bundle adjustment calculation method, characterized by comprising the following steps:
    步骤1:确定运动参数的初值;Step 1: Determine the initial value of the motion parameter;
    步骤2:对运动参数的目标函数进行最小化计算,得到优化后的运动参数;Step 2: Minimize the objective function of the motion parameter to obtain the optimized motion parameter;
    步骤3:根据优化后的运动参数,计算三维场景点坐标。Step 3: Calculate the coordinates of the three-dimensional scene points according to the optimized motion parameters.
  2. 根据权利要求1所述的低维度的集束调整计算方法，其特征在于，所述步骤1包括如下步骤：The low-dimensional bundle adjustment calculation method according to claim 1, wherein the step 1 comprises the following steps:
    步骤1.1:对于第j幅和第j+1幅视图构成的双视图,j=1,2,...,n-1,对该双视图上的公共匹配特征点集{j,j+1}所对应的图像特征点,采用直接线性变换算法,求解第j+1幅视图相对于第j幅视图的相对位姿(Rj,j+1,tj,j+1);Step 1.1: For the double view of the jth and j+1th views, j=1, 2,..., n-1, the common matching feature point set on the dual view {j, j+1 } corresponding image feature points, using the direct linear transformation algorithm to solve the relative pose of the j+1th view relative to the jth view (R j, j+1 , t j, j+1 );
    其中：where:
    n为参与集束调整的视图数目;n is the number of views participating in the bundle adjustment;
    Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative attitude of the j+1th view relative to the jth view;
    tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量,即||tj,j+1||=1;t j, j+1 is the unit relative displacement vector of the j+1th view relative to the jth view, ie ||t j, j+1 ||=1;
    计算公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标
    Figure PCTCN2017087500-appb-100001
    以及公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标
    Figure PCTCN2017087500-appb-100002
    Calculate the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j-th view coordinate system
    Figure PCTCN2017087500-appb-100001
    And the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j+1th view coordinate system
    Figure PCTCN2017087500-appb-100002
    Figure PCTCN2017087500-appb-100003
    Figure PCTCN2017087500-appb-100003
    Figure PCTCN2017087500-appb-100004
    Figure PCTCN2017087500-appb-100004
    其中：where:
    i=1,2,...,m(j,j+1)i=1,2,...,m (j,j+1) ;
    m(j,j+1)表示第j幅和第j+1幅视图组成的双视图中的匹配图像点对数目;m (j, j+1) represents the number of matching image point pairs in the dual view composed of the jth and j+1th views;
    Figure PCTCN2017087500-appb-100005
    为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100005
    Normalized image point coordinates on the jth view for the i-th matching image point pair corresponding to the common matching feature point set {j, j+1};
    Figure PCTCN2017087500-appb-100006
    为公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100006
    Normalized image point coordinates on the j+1th view for the i-th matching image point pair corresponding to the common matching feature point set {j, j+1};
    Figure PCTCN2017087500-appb-100007
    表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
    Figure PCTCN2017087500-appb-100007
    Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j-th view coordinate system;
    Figure PCTCN2017087500-appb-100008
    表示公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j+1幅视图坐标系下的三维坐标;
    Figure PCTCN2017087500-appb-100008
    Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} in the j+1th view coordinate system;
    步骤1.2：固定||T1,2||=1；对于第j-1幅、第j幅以及第j+1幅视图构成的三视图，j=2,3,...,n-1，根据该三视图上的公共匹配特征点集{j-1,j,j+1}，计算相对位移的尺度||Tj,j+1||/||Tj-1,j||，得到尺度统一的相对位移向量Tj,j+1：Step 1.2: Fix ||T 1,2 ||=1; for the three-view set consisting of the (j-1)th, jth, and (j+1)th views, j=2,3,...,n-1, compute the relative displacement scale ||T j,j+1 ||/||T j-1,j || from the common matching feature point set {j-1,j,j+1} on the three views, obtaining the scale-unified relative displacement vector T j,j+1 :
    Figure PCTCN2017087500-appb-100009
    Figure PCTCN2017087500-appb-100009
    Tj,j+1=||Tj,j+1||tj,j+1T j,j+1 =||T j,j+1 ||t j,j+1 ;
    其中：where:
    T1,2为第2幅视图相对于第1幅视图的相对位移向量;T 1,2 is the relative displacement vector of the second view relative to the first view;
    Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view;
    Tj-1,j为第j幅视图相对于第j-1幅视图的相对位移向量;T j-1,j is the relative displacement vector of the jth view relative to the j-1th view;
    m(j-1,j,j+1)表示第j-1幅、第j幅以及第j+1幅视图构成的三视图中的公共匹配图像点对数目；m (j-1,j,j+1) represents the number of common matching image point pairs in the three views consisting of the j-1th, jth, and j+1th views;
    Figure PCTCN2017087500-appb-100010
    表示第j-1幅和第j幅视图上的公共匹配特征点集{j-1,j}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
    Figure PCTCN2017087500-appb-100010
    Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j-1, j} on the j-1th and jth views in the j-th view coordinate system;
    Figure PCTCN2017087500-appb-100011
    表示第j幅和第j+1幅视图上的公共匹配特征点集{j,j+1}所对应的第i个匹配图像点对在第j幅视图坐标系下的三维坐标;
    Figure PCTCN2017087500-appb-100011
    Representing the three-dimensional coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, j+1} on the j-th and j+1th views in the j-th view coordinate system;
    tj,j+1为第j+1幅视图相对于第j幅视图的单位相对位移向量;t j,j+1 is the unit relative displacement vector of the j+1th view relative to the jth view;
    步骤1.3:根据第j幅视图的绝对位姿(Rj,Tj),计算得到第j+1幅视图的绝对位姿(Rj+1,Tj+1):Step 1.3: Calculate the absolute pose (R j+1 , T j+1 ) of the j+ 1th view according to the absolute pose (R j , T j ) of the jth view:
    Rj+1=Rj,j+1Rj R j+1 =R j,j+1 R j
    Tj+1=Tj,j+1+Rj,j+1Tj T j+1 =T j,j+1 +R j,j+1 T j
    其中：where:
    Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view;
    Rj+1表示第j+1幅视图的绝对姿态; R j+1 represents the absolute pose of the j+1th view;
    Rj,j+1为第j+1幅视图相对于第j幅视图的相对姿态;R j,j+1 is the relative attitude of the j+1th view relative to the jth view;
    Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view;
    Tj+1表示第j+1幅视图的绝对位移向量;T j+1 represents the absolute displacement vector of the j+1th view;
    Tj,j+1为第j+1幅视图相对于第j幅视图的相对位移向量;T j,j+1 is the relative displacement vector of the j+1th view relative to the jth view;
    当以第一幅视图为参考时:When referring to the first view:
    (R1,T1)≡(I3,03×1)(R 1 , T 1 )≡(I 3 ,0 3×1 )
    其中：where:
    R1表示第一幅视图的绝对姿态;R 1 represents the absolute posture of the first view;
    T1表示第一幅视图的绝对位移向量;T 1 represents the absolute displacement vector of the first view;
    I3表示3维的单位矩阵;I 3 represents a 3-dimensional unit matrix;
    03×1表示3行1列的零矩阵。0 3 × 1 represents a zero matrix of 3 rows and 1 column.
  3. 根据权利要求1所述的低维度的集束调整计算方法，其特征在于，在所述步骤2中，所述运动参数的目标函数具体如下：The low-dimensional bundle adjustment calculation method according to claim 1, wherein in the step 2, the objective function of the motion parameters is specifically as follows:
    运动参数θ=(Rj,Tj)j=1,2,...n的最小化目标函数δ(θ)如下给出:The minimum objective function δ(θ) of the motion parameter θ = (R j , T j ) j = 1, 2, ... n is given as follows:
    Figure PCTCN2017087500-appb-100012
    Figure PCTCN2017087500-appb-100012
    e3=[0 0 1]T e 3 =[0 0 1] T
    Figure PCTCN2017087500-appb-100013
    Figure PCTCN2017087500-appb-100013
    Figure PCTCN2017087500-appb-100014
    Figure PCTCN2017087500-appb-100014
    其中：where:
    θ表示所有视图的绝对位姿参数集合;θ represents the absolute pose parameter set of all views;
    δ(·)表示最小化目标函数;δ(·) means to minimize the objective function;
    m(j,k)表示第j幅和第k幅视图组成的双视图中的匹配图像点对数目;m (j, k) represents the number of matching image point pairs in the dual view composed of the jth and kth views;
    Figure PCTCN2017087500-appb-100015
    为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第k幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100015
    The normalized image point coordinates of the i-th matching image point pair corresponding to the common matching feature point set {j, k} on the jth and kth views on the kth view;
    Figure PCTCN2017087500-appb-100016
    为第j幅和第k幅视图上的公共匹配特征点集{j,k}所对应的第i个匹配图像点对在第j幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100016
    Normalized image point coordinates on the jth view for the i-th matching image point pair corresponding to the common matching feature point set {j, k} on the jth and kth views;
    Rj,k为第k幅视图相对于第j幅视图的相对姿态;R j,k is the relative attitude of the kth view relative to the jth view;
    Tj,k为第k幅视图相对于第j幅视图的相对位移向量。T j,k is the relative displacement vector of the kth view relative to the jth view.
  4. 根据权利要求3所述的低维度的集束调整计算方法，其特征在于，所述步骤2中给出的运动参数θ=(Rj,Tj)j=1,2,...n的最小化目标函数δ(θ)的前提是：相同三维场景点到相同视图的距离相等。The low-dimensional bundle adjustment calculation method according to claim 3, wherein the minimization objective function δ(θ) of the motion parameters θ=(R j ,T j ) j=1,2,...n given in step 2 is premised on the distances from the same three-dimensional scene point to the same view being equal.
  5. 根据权利要求1所述的低维度的集束调整计算方法，其特征在于，所述步骤3包括如下步骤：The low-dimensional bundle adjustment calculation method according to claim 1, wherein the step 3 comprises the following steps:
    根据优化得到的运动参数θ=(Rj,Tj)j=1,2,...n,对于第j幅和第k幅视图构成的双视图,加权计算三维场景点的坐标如下:According to the optimized motion parameters θ=(R j ,T j ) j=1,2,...n , for the double view composed of the jth and kth views, the coordinates of the three-dimensional scene points are calculated by weighting as follows:
    Figure PCTCN2017087500-appb-100017
    Figure PCTCN2017087500-appb-100017
    Figure PCTCN2017087500-appb-100018
    Figure PCTCN2017087500-appb-100018
    Tj,k=Tk-Rj,kTj T j,k =T k -R j,k T j
    Figure PCTCN2017087500-appb-100019
    Figure PCTCN2017087500-appb-100019
    其中：where:
    Xi表示第i个三维场景点的三维坐标,该三维场景点Xi对应第j幅和第k幅视图构成的双视图中的第s个图像特征点;X i represents the three-dimensional coordinates of the i-th three-dimensional scene point, and the three-dimensional scene point X i corresponds to the sth image feature point in the dual view formed by the j-th and k-th views;
    Figure PCTCN2017087500-appb-100020
    表示第i个三维场景点Xi在第j幅和第k幅视图构成的双视图中是否可见的标识函数,即当Xi在该双视图中可见时,
    Figure PCTCN2017087500-appb-100021
    否则,则
    Figure PCTCN2017087500-appb-100022
    Figure PCTCN2017087500-appb-100020
    is the indicator function of whether the i-th three-dimensional scene point X i is visible in the dual view formed by the jth and kth views; that is, when X i is visible in that dual view,
    Figure PCTCN2017087500-appb-100021
    otherwise,
    Figure PCTCN2017087500-appb-100022
    Rj表示第j幅视图的绝对姿态;R j represents the absolute pose of the jth view;
    Tj,k为第k幅视图相对于第j幅视图的相对位移向量;T j,k is the relative displacement vector of the kth view relative to the jth view;
    Figure PCTCN2017087500-appb-100023
    表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第j幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100023
    Representing the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j, k} on the jth view;
    Figure PCTCN2017087500-appb-100024
    表示公共匹配特征点集{j,k}所对应的第s个匹配图像点对在第k幅视图上的归一化图像点坐标;
    Figure PCTCN2017087500-appb-100024
    Representing the normalized image point coordinates of the sth matching image point pair corresponding to the common matching feature point set {j, k} on the kth view;
    Rk表示第k幅视图的绝对姿态;R k represents the absolute pose of the kth view;
    Tj表示第j幅视图的绝对位移向量;T j represents the absolute displacement vector of the jth view;
    Tk表示第k幅视图的绝对位移向量; T k represents the absolute displacement vector of the kth view;
    Rj,k表示第k幅视图相对于第j幅视图的相对姿态;R j,k represents the relative pose of the kth view relative to the jth view;
    Tj,k表示第k幅视图相对于第j幅视图的相对位移向量。T j,k represents the relative displacement vector of the kth view relative to the jth view.
  6. 根据权利要求1所述的低维度的集束调整计算方法，其特征在于，所述低维度的集束调整计算方法，考虑相机已标定的情形，并假设已经确定了各视图间的匹配图像点对。The low-dimensional bundle adjustment calculation method according to claim 1, wherein the method considers the case where the camera has been calibrated, and assumes that the matching image point pairs between the views have been determined.
  7. 一种低维度的集束调整计算系统，包括存储有计算机程序的计算机可读存储介质，其特征在于，所述计算机程序被处理器执行时实现权利要求1至6中任一项所述的低维度的集束调整计算方法的步骤。A low-dimensional bundle adjustment computing system, comprising a computer readable storage medium storing a computer program, wherein when the computer program is executed by a processor, the steps of the low-dimensional bundle adjustment calculation method according to any one of claims 1 to 6 are implemented.
PCT/CN2017/087500 2017-05-23 2017-06-07 Low-dimensional bundle adjustment calculation method and system WO2018214179A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201710370360.4A CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system
CN201710370360.4 2017-05-23

Publications (1)

Publication Number Publication Date
WO2018214179A1 true WO2018214179A1 (en) 2018-11-29

Family

ID=60192859

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2017/087500 WO2018214179A1 (en) 2017-05-23 2017-06-07 Low-dimensional bundle adjustment calculation method and system

Country Status (2)

Country Link
CN (1) CN107330934B (en)
WO (1) WO2018214179A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584299B (en) * 2018-11-13 2021-01-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN109799698B (en) * 2019-01-30 2020-07-14 上海交通大学 Optimal PI parameter optimization method and system for time-lag visual servo system
CN111161355B (en) * 2019-12-11 2023-05-09 上海交通大学 Multi-view camera pose and scene pure pose resolving method and system

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment
CN106408653A (en) * 2016-09-06 2017-02-15 合肥工业大学 Real-time robust cluster adjustment method for large-scale three-dimensional reconstruction

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6996254B2 (en) * 2001-06-18 2006-02-07 Microsoft Corporation Incremental motion estimation through local bundle adjustment
US8837811B2 (en) * 2010-06-17 2014-09-16 Microsoft Corporation Multi-stage linear structure from motion

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106097436A (en) * 2016-06-12 2016-11-09 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN106408653A (en) * 2016-09-06 2017-02-15 合肥工业大学 Real-time robust cluster adjustment method for large-scale three-dimensional reconstruction

Also Published As

Publication number Publication date
CN107330934B (en) 2021-12-07
CN107330934A (en) 2017-11-07

Similar Documents

Publication Publication Date Title
US10762645B2 (en) Stereo visual odometry method based on image gradient joint optimization
CN108665491B (en) Rapid point cloud registration method based on local reference points
CN110853075B (en) Visual tracking positioning method based on dense point cloud and synthetic view
CN103247075B (en) Based on the indoor environment three-dimensional rebuilding method of variation mechanism
CN109115184B (en) Collaborative measurement method and system based on non-cooperative target
CN107657645B (en) Method for calibrating parabolic catadioptric camera by using properties of conjugate diameters of straight line and circle
CN114399554A (en) Calibration method and system of multi-camera system
CN103411589B (en) A kind of 3-D view matching navigation method based on four-dimensional real number matrix
CN111415375B (en) SLAM method based on multi-fisheye camera and double-pinhole projection model
WO2018214179A1 (en) Low-dimensional bundle adjustment calculation method and system
CN108280858A (en) A kind of linear global camera motion method for parameter estimation in multiple view reconstruction
CN104318551A (en) Convex hull feature retrieval based Gaussian mixture model point cloud registration method
CN113160335A (en) Model point cloud and three-dimensional surface reconstruction method based on binocular vision
CN111998862A (en) Dense binocular SLAM method based on BNN
Komatsu et al. 360 depth estimation from multiple fisheye images with origami crown representation of icosahedron
CN106204717B (en) A kind of stereo-picture quick three-dimensional reconstructing method and device
CN109978957B (en) Binocular system calibration method based on quantum behavior particle swarm
KR102372298B1 (en) Method for acquiring distance to at least one object located in omni-direction of vehicle and vision device using the same
CN109584347B (en) Augmented reality virtual and real occlusion processing method based on active appearance model
CN109215118B (en) Incremental motion structure recovery optimization method based on image sequence
CN108921904B (en) Method for calibrating pinhole camera by using properties of single ball and asymptote
TWI731604B (en) Three-dimensional point cloud data processing method
CN111429571B (en) Rapid stereo matching method based on spatio-temporal image information joint correlation
CN109035342B (en) Method for calibrating parabolic catadioptric camera by using one straight line and circular ring point polar line
CN110570473A (en) weight self-adaptive posture estimation method based on point-line fusion

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 17910910

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 30/06/2020)

122 Ep: pct application non-entry in european phase

Ref document number: 17910910

Country of ref document: EP

Kind code of ref document: A1