CN107330934A - Low-dimensional bundle adjustment computation method and system - Google Patents

Low-dimensional bundle adjustment computation method and system

Info

Publication number
CN107330934A
CN107330934A CN201710370360.4A
Authority
CN
China
Prior art keywords
jth
view
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710370360.4A
Other languages
Chinese (zh)
Other versions
CN107330934B (en)
Inventor
武元新
蔡奇
郁文贤
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201710370360.4A priority Critical patent/CN107330934B/en
Priority to PCT/CN2017/087500 priority patent/WO2018214179A1/en
Publication of CN107330934A publication Critical patent/CN107330934A/en
Application granted granted Critical
Publication of CN107330934B publication Critical patent/CN107330934B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/579Depth or shape recovery from multiple images from motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/285Analysis of motion using a sequence of stereo image pairs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • G06T7/337Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods involving reference images or patches

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Medicines Containing Antibodies Or Antigens For Use As Internal Diagnostic Agents (AREA)

Abstract

The invention provides a low-dimensional bundle adjustment computation method and system, comprising: determining the initial values of the motion parameters; optimizing the objective function of the motion parameters to obtain the optimized motion parameters; and computing the three-dimensional scene point coordinates from the optimized motion parameters. The present invention expresses the depth of field of several views as a function of the pairwise relative motion parameters between views, recovers the motion parameters directly from the views, and then obtains the three-dimensional scene point coordinates analytically from the motion parameters, so that the three-dimensional scene point coordinates are eliminated from the parameter optimization of bundle adjustment and the dimension of the parameter space is significantly reduced. The result is a low-dimensional bundle adjustment method that is easy to initialize, robust, faster, and more accurate. The invention can serve as the core computation engine of applications such as unmanned-vehicle/UAV visual navigation, three-dimensional visual reconstruction, and augmented reality.

Description

Low-dimensional bundle adjustment computation method and system
Technical field
The present invention relates to the fields of computer vision and photogrammetry, and in particular to a low-dimensional bundle adjustment computation method and system.
Background technology
Bundle adjustment, i.e. the recovery of three-dimensional scene point coordinates, motion parameters, and camera parameters from several views, is one of the core technologies of computer vision and photogrammetry. The goal of bundle adjustment is to minimize the reprojection error of the image points, where the reprojection error is a nonlinear function of the three-dimensional scene point coordinates, the motion parameters, and the camera parameters. For m three-dimensional scene points and n views, the parameter space has dimension 3m + 6n. Because the number of scene points is usually very large, the dimension of the parameter space to be optimized is huge. At present, mainstream bundle adjustment methods are implemented with nonlinear optimization algorithms that exploit the sparsity of the parameter Jacobian matrix to improve speed, but the dimension of their parameter space remains high and needs further reduction to meet real-time computation demands.
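The dimension gap can be made concrete with a short sketch (the problem sizes below are illustrative, not taken from the source):

```python
def ba_dimension(m: int, n: int) -> int:
    """Parameter-space dimension of classical bundle adjustment:
    3 coordinates per scene point plus 6 pose parameters per view."""
    return 3 * m + 6 * n

def low_dim_ba_dimension(n: int) -> int:
    """Parameter-space dimension once scene points are eliminated:
    only the 6 pose parameters per view remain."""
    return 6 * n

m, n = 10_000, 50  # illustrative: 10,000 scene points seen by 50 views
print(ba_dimension(m, n))       # 30300
print(low_dim_ba_dimension(n))  # 300
```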
Summary of the invention
In view of the defects in the prior art, the object of the present invention is to provide a low-dimensional bundle adjustment computation method and system. The present invention expresses the depth of field of several views as a function of the pairwise relative motion parameters between views, recovers the motion parameters directly from the views, and then obtains the three-dimensional scene point coordinates analytically from the motion parameters.
A low-dimensional bundle adjustment computation method provided according to the present invention comprises the following steps:
Step 1: determine the initial values of the motion parameters;
Step 2: minimize the objective function of the motion parameters to obtain the optimized motion parameters;
Step 3: compute the three-dimensional scene point coordinates from the optimized motion parameters.
Preferably, the step 1 comprises the following steps:
Step 1.1: For the two-view pair formed by the j-th and (j+1)-th views, j = 1, 2, ..., n-1, apply the direct linear transformation algorithm to the image feature points corresponding to the common matched feature point set {j, j+1} on the pair, and solve for the relative pose (R_{j,j+1}, t_{j,j+1}) of the (j+1)-th view with respect to the j-th view;
wherein:
n is the number of views participating in the bundle adjustment;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view, i.e. ||t_{j,j+1}|| = 1.
Compute the three-dimensional coordinates X^{(j,j+1)}_{i,j} of the i-th matched image point pair of {j, j+1} in the j-th view's coordinate system, and the three-dimensional coordinates X^{(j,j+1)}_{i,j+1} in the (j+1)-th view's coordinate system;
wherein:
i = 1, 2, ..., m^{(j,j+1)};
m^{(j,j+1)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and (j+1)-th views;
x^{(j,j+1)}_{i,j} is the normalized image point coordinate on the j-th view of the i-th matched pair of the common matched feature point set {j, j+1};
x^{(j,j+1)}_{i,j+1} is the normalized image point coordinate on the (j+1)-th view of the i-th matched pair of {j, j+1};
X^{(j,j+1)}_{i,j} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the j-th view's coordinate system;
X^{(j,j+1)}_{i,j+1} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the (j+1)-th view's coordinate system.
Step 1.2: Fix ||T_{1,2}|| = 1. For the three-view triple formed by the (j-1)-th, j-th and (j+1)-th views, j = 2, 3, ..., n-1, compute the scale of the relative displacement ||T_{j,j+1}|| / ||T_{j-1,j}|| from the common matched feature point set {j-1, j, j+1} on the triple, and obtain the scale-consistent relative displacement vector T_{j,j+1}:
T_{j,j+1} = ||T_{j,j+1}|| t_{j,j+1}
wherein:
T_{1,2} is the relative displacement vector of the 2nd view with respect to the 1st view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view;
T_{j-1,j} is the relative displacement vector of the j-th view with respect to the (j-1)-th view;
m^{(j-1,j,j+1)} denotes the number of common matched image point pairs in the three-view triple formed by the (j-1)-th, j-th and (j+1)-th views;
X^{(j-1,j)}_{i,j} denotes the three-dimensional coordinates, in the j-th view's coordinate system, of the i-th matched image point pair of the common matched feature point set {j-1, j} on the (j-1)-th and j-th views;
X^{(j,j+1)}_{i,j} denotes the three-dimensional coordinates, in the j-th view's coordinate system, of the i-th matched image point pair of the common matched feature point set {j, j+1} on the j-th and (j+1)-th views;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view.
Step 1.3: From the absolute pose (R_j, T_j) of the j-th view, compute the absolute pose (R_{j+1}, T_{j+1}) of the (j+1)-th view:
R_{j+1} = R_{j,j+1} R_j
T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j
wherein:
R_j denotes the absolute attitude of the j-th view;
R_{j+1} denotes the absolute attitude of the (j+1)-th view;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
T_j denotes the absolute displacement vector of the j-th view;
T_{j+1} denotes the absolute displacement vector of the (j+1)-th view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view.
When the first view is taken as the reference:
(R_1, T_1) ≡ (I_3, 0_{3×1})
wherein:
R_1 denotes the absolute attitude of the first view;
T_1 denotes the absolute displacement vector of the first view;
I_3 denotes the 3-dimensional identity matrix;
0_{3×1} denotes the 3-row, 1-column zero matrix.
Preferably, in the step 2, the objective function of the motion parameters is as follows:
The minimization objective δ(θ) of the motion parameters θ = (R_j, T_j), j = 1, 2, ..., n, is given below, with
e_3 = [0 0 1]^T
wherein:
θ denotes the set of absolute pose parameters of all views;
δ(·) denotes the minimization objective function;
m^{(j,k)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and k-th views;
x^{(j,k)}_{i,k} is the normalized image point coordinate on the k-th view of the i-th matched image point pair of the common matched feature point set {j, k} on the j-th and k-th views;
x^{(j,k)}_{i,j} is the normalized image point coordinate on the j-th view of the i-th matched image point pair of {j, k};
R_{j,k} is the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} is the relative displacement vector of the k-th view with respect to the j-th view.
Preferably, the premise of the minimization objective δ(θ) of the motion parameters θ = (R_j, T_j), j = 1, 2, ..., n, given in the step 2 is that an identical three-dimensional scene point has an identical distance to an identical view.
Preferably, the step 3 comprises the following steps:
Using the motion parameters θ = (R_j, T_j), j = 1, 2, ..., n, obtained by the optimization, for the two-view pair formed by the j-th and k-th views, the coordinates of the three-dimensional scene points are computed by weighting as follows, with
T_{j,k} = T_k - R_{j,k} T_j
wherein:
X_i denotes the three-dimensional coordinates of the i-th three-dimensional scene point; scene point X_i corresponds to the s-th image feature point in the two-view pair formed by the j-th and k-th views;
an indicator function denotes whether the i-th scene point X_i is visible in the two-view pair formed by the j-th and k-th views, i.e. it equals 1 when X_i is visible in the pair and 0 otherwise;
R_j denotes the absolute attitude of the j-th view;
R_k denotes the absolute attitude of the k-th view;
T_j denotes the absolute displacement vector of the j-th view;
T_k denotes the absolute displacement vector of the k-th view;
x^{(j,k)}_{s,j} denotes the normalized image point coordinate on the j-th view of the s-th matched image point pair of the common matched feature point set {j, k};
x^{(j,k)}_{s,k} denotes the normalized image point coordinate on the k-th view of the s-th matched image point pair of {j, k};
R_{j,k} denotes the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} denotes the relative displacement vector of the k-th view with respect to the j-th view.
Preferably, the low-dimensional bundle adjustment computation method considers the case where the camera has been calibrated, and assumes that the matched image point pairs between the views have been determined.
A low-dimensional bundle adjustment computation system provided according to the present invention comprises a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above low-dimensional bundle adjustment computation method.
Compared with the prior art, the present invention has the following beneficial effects:
The present invention is a low-dimensional bundle adjustment method that is easy to initialize, robust, faster, and more accurate. It can serve as the core computation engine of applications such as unmanned-vehicle/UAV visual navigation, three-dimensional visual reconstruction, and augmented reality.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the detailed description of the non-limiting embodiments made with reference to the following drawings:
Fig. 1 is a flow chart of the steps of the low-dimensional bundle adjustment method provided according to the present invention.
Detailed description of the embodiments
The present invention is described in detail below with reference to specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit the invention in any way. It should be noted that, for those of ordinary skill in the art, several changes and improvements can also be made without departing from the inventive concept; these all belong to the protection scope of the present invention.
The present invention expresses the depth of field as a function of the motion parameters, so that the three-dimensional scene point coordinates are eliminated from the parameter optimization of bundle adjustment. For m three-dimensional scene points and n views, the parameter space has dimension 6n. Compared with current mainstream methods, the bundle adjustment method proposed by the present invention significantly reduces the dimension of the parameter space.
The present invention considers the case where the camera has been calibrated, and assumes that the matched image point pairs between the views have been determined.
The general notation used in the formulas below is defined first:
n is the number of views in the bundle adjustment, numbered consecutively view 1, view 2, ..., view n;
(R_i, T_i) denotes the absolute pose of the i-th view;
R_i denotes the absolute attitude of the i-th view;
T_i = ||T_i|| t_i denotes the absolute displacement vector of the i-th view;
t_i denotes the unit absolute displacement vector of the i-th view, i.e. ||t_i|| = 1;
θ denotes the set of absolute pose parameters of all views;
R_{j,k} ≡ R_k R_j^T denotes the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} ≡ T_k - R_{j,k} T_j denotes the relative displacement vector of the k-th view with respect to the j-th view;
T_{j,k} = ||T_{j,k}|| t_{j,k}, where t_{j,k} is the unit relative displacement vector of the k-th view with respect to the j-th view, i.e. ||t_{j,k}|| = 1;
(R_{j,k}, t_{j,k}) denotes the relative pose of the k-th view with respect to the j-th view;
{j} denotes the set of all feature points on the j-th view;
{j, k} denotes the set of common matched feature points on the j-th and k-th views; {j, k, ...}, by analogy, denotes the set of common matched feature points on three or more views;
(j, k) denotes the two-view pair formed by the j-th and k-th views;
m^{(j,k)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and k-th views;
x^{(j,k)}_{i,j} and x^{(j,k)}_{i,k} denote the normalized image point coordinates of the i-th matched image point pair of (j, k) on the j-th and k-th views respectively; i.e. the first two components are calibrated image point coordinates and the third component is 1.
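The relative-pose definitions above translate directly into code; the sketch below (the function names are mine) uses NumPy and recovers the relative attitude from the chaining relation R_{j,k} = R_k R_j^T:

```python
import numpy as np

def relative_pose(R_j, T_j, R_k, T_k):
    """Relative attitude and displacement of view k w.r.t. view j:
    R_jk = R_k R_j^T and T_jk = T_k - R_jk T_j, per the notation above."""
    R_jk = R_k @ R_j.T
    T_jk = T_k - R_jk @ T_j
    return R_jk, T_jk

def normalize_displacement(T_jk):
    """Split T_jk into its scale ||T_jk|| and unit direction t_jk."""
    scale = np.linalg.norm(T_jk)
    return scale, T_jk / scale
```

With the first view as reference, (R_1, T_1) = (I_3, 0), so the relative pose of view k with respect to view 1 is just its absolute pose.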
A low-dimensional bundle adjustment method provided according to the present invention comprises the following steps:
Step 1: determine the initial values of the motion parameters;
Step 2: minimize the objective function of the motion parameters to obtain the optimized motion parameters;
Step 3: compute the three-dimensional scene point coordinates from the optimized motion parameters.
Each step is described in detail below.
The step 1 comprises the following steps:
Step 1.1: For the two-view pair formed by the j-th and (j+1)-th views, j = 1, 2, ..., n-1, apply the direct linear transformation (DLT, Direct Linear Transformation) algorithm to the image feature points corresponding to the common matched feature point set {j, j+1} on the pair, and solve for the relative pose (R_{j,j+1}, t_{j,j+1}) of the (j+1)-th view with respect to the j-th view;
wherein:
n is the number of views participating in the bundle adjustment;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view, i.e. ||t_{j,j+1}|| = 1.
Compute the three-dimensional coordinates X^{(j,j+1)}_{i,j} of the i-th matched image point pair of {j, j+1} in the j-th view's coordinate system, and the three-dimensional coordinates X^{(j,j+1)}_{i,j+1} in the (j+1)-th view's coordinate system:
X^{(j,j+1)}_{i,j} = x^{(j,j+1)}_{i,j} · ||t_{j,j+1} × x^{(j,j+1)}_{i,j+1}|| / ||x^{(j,j+1)}_{i,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}||
X^{(j,j+1)}_{i,j+1} = x^{(j,j+1)}_{i,j+1} · ||t_{j,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}|| / ||x^{(j,j+1)}_{i,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}||
wherein:
i = 1, 2, ..., m^{(j,j+1)};
m^{(j,j+1)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and (j+1)-th views;
x^{(j,j+1)}_{i,j} is the normalized image point coordinate on the j-th view of the i-th matched pair of {j, j+1};
x^{(j,j+1)}_{i,j+1} is the normalized image point coordinate on the (j+1)-th view of the i-th matched pair of {j, j+1};
X^{(j,j+1)}_{i,j} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the j-th view's coordinate system;
X^{(j,j+1)}_{i,j+1} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the (j+1)-th view's coordinate system.
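The closed-form two-view coordinates of step 1.1 (the two quotient formulas spelled out in claim 2) can be sketched as follows, assuming normalized image points are 3-vectors with third component 1 and (R, t) is the relative pose of view j+1 with respect to view j:

```python
import numpy as np

def triangulate_pair(R, t, x_j, x_j1):
    """Closed-form two-view point coordinates (claim 2):
      X_j  = x_j  * ||t x x_j1||  / ||x_j1 x R x_j||
      X_j1 = x_j1 * ||t x R x_j|| / ||x_j1 x R x_j||
    x_j, x_j1: normalized image points (3-vectors, third component 1)."""
    Rx = R @ x_j
    denom = np.linalg.norm(np.cross(x_j1, Rx))
    X_j = x_j * np.linalg.norm(np.cross(t, x_j1)) / denom
    X_j1 = x_j1 * np.linalg.norm(np.cross(t, Rx)) / denom
    return X_j, X_j1
```

For noise-free data satisfying X_{j+1} = R X_j + t with ||t|| = 1, the two quotients are exactly the depths in each view, so the formulas recover both coordinate vectors.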
Step 1.2: Without loss of generality, fix ||T_{1,2}|| = 1. For the three-view triple formed by the (j-1)-th, j-th and (j+1)-th views, j = 2, 3, ..., n-1, compute the scale of the relative displacement ||T_{j,j+1}|| / ||T_{j-1,j}|| from the common matched feature point set {j-1, j, j+1} on the triple, and obtain the scale-consistent relative displacement vector T_{j,j+1}:
T_{j,j+1} = ||T_{j,j+1}|| t_{j,j+1}
wherein:
T_{1,2} is the relative displacement vector of the 2nd view with respect to the 1st view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view;
T_{j-1,j} is the relative displacement vector of the j-th view with respect to the (j-1)-th view;
m^{(j-1,j,j+1)} denotes the number of common matched image point pairs in the three-view triple formed by the (j-1)-th, j-th and (j+1)-th views;
X^{(j-1,j)}_{i,j} denotes the three-dimensional coordinates, in the j-th view's coordinate system, of the i-th matched image point pair of the common matched feature point set {j-1, j} on the (j-1)-th and j-th views;
X^{(j,j+1)}_{i,j} denotes the three-dimensional coordinates, in the j-th view's coordinate system, of the i-th matched image point pair of the common matched feature point set {j, j+1} on the j-th and (j+1)-th views;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view.
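The scale ratio of step 1.2 can be sketched as below. Claim 2 writes the ratio as a quotient of sums of the pairwise point coordinates; taking the quotient over the norms of those coordinates is my reading of that formula, so treat this as an assumption:

```python
import numpy as np

def displacement_scale_ratio(X_prev, X_next):
    """||T_{j,j+1}|| / ||T_{j-1,j}|| from the common points of the
    triple {j-1, j, j+1}. X_prev holds the coordinates X^{(j-1,j)}_{i,j}
    and X_next the coordinates X^{(j,j+1)}_{i,j}, both expressed in view
    j's frame and each triangulated with a unit baseline. Each pair's
    reconstruction of the same physical point is scaled by 1/||T|| of
    its own baseline, so the magnitude ratio recovers the scale."""
    num = sum(np.linalg.norm(X) for X in X_prev)
    den = sum(np.linalg.norm(X) for X in X_next)
    return num / den
```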
Step 1.3: From the absolute pose (R_j, T_j) of the j-th view, compute the absolute pose (R_{j+1}, T_{j+1}) of the (j+1)-th view:
R_{j+1} = R_{j,j+1} R_j
T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j
wherein:
R_j denotes the absolute attitude of the j-th view;
R_{j+1} denotes the absolute attitude of the (j+1)-th view;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
T_j denotes the absolute displacement vector of the j-th view;
T_{j+1} denotes the absolute displacement vector of the (j+1)-th view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view.
When the first view is taken as the reference:
(R_1, T_1) ≡ (I_3, 0_{3×1})
wherein:
R_1 denotes the absolute attitude of the first view;
T_1 denotes the absolute displacement vector of the first view;
I_3 denotes the 3-dimensional identity matrix;
0_{3×1} denotes the 3-row, 1-column zero matrix.
It should be noted that:
-- in step 1.1, j takes the values j = 1, 2, ..., n-1;
-- in step 1.2, j takes the values j = 2, 3, ..., n-1;
-- in step 1.3, j takes the values j = 1, 2, ..., n-1.
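The pose accumulation of step 1.3 can be sketched as follows (the function name is mine):

```python
import numpy as np

def chain_poses(rel_poses):
    """Accumulate absolute poses from scale-consistent relative poses.
    rel_poses: list of (R_{j,j+1}, T_{j,j+1}) for j = 1, ..., n-1.
    Starts from the reference (R_1, T_1) = (I_3, 0) and applies
    R_{j+1} = R_{j,j+1} R_j and T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j."""
    R, T = np.eye(3), np.zeros(3)
    poses = [(R, T)]
    for R_rel, T_rel in rel_poses:
        T = T_rel + R_rel @ T   # uses the previous absolute displacement
        R = R_rel @ R
        poses.append((R, T))
    return poses
```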
In the step 2, the objective function of the motion parameters is as follows:
Under the premise that an identical three-dimensional scene point has an identical distance to an identical view, the minimization objective δ(θ) of the motion parameters θ = (R_j, T_j), j = 1, 2, ..., n, is given below, with
e_3 = [0 0 1]^T
wherein:
θ denotes the set of absolute pose parameters of all views;
δ(·) denotes the minimization objective function;
m^{(j,k)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and k-th views;
x^{(j,k)}_{i,k} is the normalized image point coordinate on the k-th view of the i-th matched image point pair of the common matched feature point set {j, k} on the j-th and k-th views;
x^{(j,k)}_{i,j} is the normalized image point coordinate on the j-th view of the i-th matched image point pair of {j, k};
R_{j,k} is the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} is the relative displacement vector of the k-th view with respect to the j-th view.
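The full expression of δ(θ) is rendered as an image in the source and is not reproduced above. The sketch below is therefore only one plausible reprojection-style residual consistent with the surrounding definitions: the point is triangulated in closed form, so the depth is a function of the motion parameters alone, then transferred to the k-th view, where e_3 = [0 0 1]^T extracts the depth for reprojection. It is an assumption, not the patent's verbatim objective:

```python
import numpy as np

e3 = np.array([0.0, 0.0, 1.0])

def pair_residuals(R_jk, T_jk, x_j, x_k):
    """Image-space residuals for one two-view pair (j, k).
    x_j, x_k: (m, 3) arrays of matched normalized image points.
    Each point is triangulated in view j in closed form (the claim-2
    quotient with the scaled displacement T_jk), transferred to view k,
    reprojected via e3, and compared with the observed point."""
    res = []
    for a, b in zip(x_j, x_k):
        Ra = R_jk @ a
        depth = (np.linalg.norm(np.cross(T_jk, b))
                 / np.linalg.norm(np.cross(b, Ra)))
        X_k = R_jk @ (a * depth) + T_jk      # point in view k's frame
        res.append(b - X_k / (e3 @ X_k))     # reprojection residual
    return np.array(res)
```

Summing the squared residuals over all pairs (j, k) gives a scalar cost in the 6n pose parameters only, which is the dimensional reduction the method is built around.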
Because step 2 optimizes the initial values of the motion parameters obtained in step 1, the optimal values of the motion parameters are available; step 3 therefore computes with the optimal values of the motion parameters. Specifically, the step 3 comprises the following steps:
Using the motion parameters θ = (R_j, T_j), j = 1, 2, ..., n, obtained by the optimization, for the two-view pair formed by the j-th and k-th views, the coordinates of the three-dimensional scene points are computed by weighting as follows, with
T_{j,k} = T_k - R_{j,k} T_j
wherein:
X_i denotes the three-dimensional coordinates of the i-th three-dimensional scene point; scene point X_i corresponds to the s-th image feature point in the two-view pair formed by the j-th and k-th views;
an indicator function denotes whether the i-th scene point X_i is visible in the two-view pair formed by the j-th and k-th views, i.e. it equals 1 when X_i is visible in the pair and 0 otherwise;
R_j denotes the absolute attitude of the j-th view;
R_k denotes the absolute attitude of the k-th view;
T_j denotes the absolute displacement vector of the j-th view;
T_k denotes the absolute displacement vector of the k-th view;
x^{(j,k)}_{s,j} denotes the normalized image point coordinate on the j-th view of the s-th matched image point pair of the common matched feature point set {j, k};
x^{(j,k)}_{s,k} denotes the normalized image point coordinate on the k-th view of the s-th matched image point pair of {j, k};
R_{j,k} denotes the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} denotes the relative displacement vector of the k-th view with respect to the j-th view.
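Step 3 can be sketched as below. The precise weighting formula is rendered as an image in the source, so this sketch simply averages the world-frame reconstructions over the visible pairs using the 0/1 indicator as weight; the names and the averaging rule are my assumptions:

```python
import numpy as np

def scene_point(pairs):
    """Recover a world-frame scene point from its pairwise observations.
    pairs: list of (visible, R_j, T_j, R_k, T_k, x_j, x_k), where
    visible is the 0/1 indicator, (R, T) are absolute poses, and
    x_j, x_k are the normalized image points of the match."""
    acc, w = np.zeros(3), 0.0
    for visible, R_j, T_j, R_k, T_k, x_j, x_k in pairs:
        if not visible:
            continue
        R_jk = R_k @ R_j.T                   # relative attitude
        T_jk = T_k - R_jk @ T_j              # relative displacement
        depth = (np.linalg.norm(np.cross(T_jk, x_k))
                 / np.linalg.norm(np.cross(x_k, R_jk @ x_j)))
        X_cam_j = x_j * depth                # point in view j's frame
        acc += R_j.T @ (X_cam_j - T_j)       # map into the world frame
        w += 1.0
    return acc / w
```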
Specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the above particular embodiments; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substantive content of the present invention. The features in the embodiments of the present application may be combined with one another arbitrarily in the absence of conflict.

Claims (7)

1. A low-dimensional bundle adjustment computation method, characterized by comprising the following steps:
Step 1: determining the initial values of the motion parameters;
Step 2: minimizing the objective function of the motion parameters to obtain the optimized motion parameters;
Step 3: computing the three-dimensional scene point coordinates from the optimized motion parameters.
2. The low-dimensional bundle adjustment computation method according to claim 1, characterized in that the step 1 comprises the following steps:
Step 1.1: For the two-view pair formed by the j-th and (j+1)-th views, j = 1, 2, ..., n-1, applying the direct linear transformation algorithm to the image feature points corresponding to the common matched feature point set {j, j+1} on the pair, and solving for the relative pose (R_{j,j+1}, t_{j,j+1}) of the (j+1)-th view with respect to the j-th view;
wherein:
n is the number of views participating in the bundle adjustment;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view, i.e. ||t_{j,j+1}|| = 1.
Computing the three-dimensional coordinates X^{(j,j+1)}_{i,j} of the i-th matched image point pair of {j, j+1} in the j-th view's coordinate system, and the three-dimensional coordinates X^{(j,j+1)}_{i,j+1} in the (j+1)-th view's coordinate system:
X^{(j,j+1)}_{i,j} = x^{(j,j+1)}_{i,j} · ||t_{j,j+1} × x^{(j,j+1)}_{i,j+1}|| / ||x^{(j,j+1)}_{i,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}||
X^{(j,j+1)}_{i,j+1} = x^{(j,j+1)}_{i,j+1} · ||t_{j,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}|| / ||x^{(j,j+1)}_{i,j+1} × R_{j,j+1} x^{(j,j+1)}_{i,j}||
wherein:
i = 1, 2, ..., m^{(j,j+1)};
m^{(j,j+1)} denotes the number of matched image point pairs in the two-view pair formed by the j-th and (j+1)-th views;
x^{(j,j+1)}_{i,j} is the normalized image point coordinate on the j-th view of the i-th matched pair of {j, j+1};
x^{(j,j+1)}_{i,j+1} is the normalized image point coordinate on the (j+1)-th view of the i-th matched pair of {j, j+1};
X^{(j,j+1)}_{i,j} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the j-th view's coordinate system;
X^{(j,j+1)}_{i,j+1} denotes the three-dimensional coordinates of the i-th matched pair of {j, j+1} in the (j+1)-th view's coordinate system.
Step 1.2:It is fixed | | T1,2| |=1;The three-view diagram constituted for the width of jth -1, jth width and the width of jth+1 view, j=2, 3 ..., n-1, according to the public matching characteristic point set { j-1, j, j+1 } on the three-view diagram, calculates the yardstick of relative displacement | | Tj,j+1||/||Tj-1,j| |, obtain the unified relative displacement vector T of yardstickj,j+1
\|T_{j,j+1}\| / \|T_{j-1,j}\| = \sum_{i=1}^{m^{(j-1,j,j+1)}} \left\| X_{i,j}^{(j-1,j)} \right\| \Big/ \sum_{i=1}^{m^{(j-1,j,j+1)}} \left\| X_{i,j}^{(j,j+1)} \right\|
T_{j,j+1} = \|T_{j,j+1}\| \, t_{j,j+1}
Wherein:
T_{1,2} is the relative displacement vector of the 2nd view with respect to the 1st view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view;
T_{j-1,j} is the relative displacement vector of the j-th view with respect to the (j-1)-th view;
m^{(j-1,j,j+1)} denotes the number of common matched image point pairs in the three views formed by the (j-1)-th, j-th and (j+1)-th views;
X_{i,j}^{(j-1,j)} denotes the 3D coordinate, in the j-th view frame, of the i-th matched image point pair in the common matched feature point set {j-1, j} on the (j-1)-th and j-th views;
X_{i,j}^{(j,j+1)} denotes the 3D coordinate, in the j-th view frame, of the i-th matched image point pair in the common matched feature point set {j, j+1} on the j-th and (j+1)-th views;
t_{j,j+1} is the unit relative displacement vector of the (j+1)-th view with respect to the j-th view.
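Step 1.2 propagates the metric scale along the view chain: points triangulated from the unit-baseline pair (j, j+1) are rescaled against the same common points triangulated from the already-scaled pair (j-1, j), both expressed in the j-th view frame. A sketch under the reading that the ratio is taken over the norms of the 3D coordinates (names are illustrative):

```python
import numpy as np

def relative_scale(X_prev_pair, X_next_pair):
    """||T_{j,j+1}|| / ||T_{j-1,j}||: ratio of summed point norms, where both
    (m, 3) arrays hold the same common feature points in the j-th view frame,
    X_prev_pair from pair (j-1, j) and X_next_pair from pair (j, j+1)."""
    return (np.linalg.norm(X_prev_pair, axis=1).sum()
            / np.linalg.norm(X_next_pair, axis=1).sum())

# If the unit-baseline reconstruction is 4x too small, the ratio recovers 4
X_unit = np.array([[0.25, 0.5, 1.25], [0.5, 0.25, 1.0]])
scale = relative_scale(4.0 * X_unit, X_unit)
T_next = scale * np.array([1.0, 0.0, 0.0])   # scale-unified T_{j,j+1}
```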
Step 1.3: From the absolute pose (R_j, T_j) of the j-th view, compute the absolute pose (R_{j+1}, T_{j+1}) of the (j+1)-th view:
R_{j+1} = R_{j,j+1} R_j
T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j
Wherein:
R_j denotes the absolute attitude of the j-th view;
R_{j+1} denotes the absolute attitude of the (j+1)-th view;
R_{j,j+1} is the relative attitude of the (j+1)-th view with respect to the j-th view;
T_j denotes the absolute displacement vector of the j-th view;
T_{j+1} denotes the absolute displacement vector of the (j+1)-th view;
T_{j,j+1} is the relative displacement vector of the (j+1)-th view with respect to the j-th view.
When the first view is taken as the reference:
(R_1, T_1) ≡ (I_3, 0_{3×1})
Wherein:
R_1 denotes the absolute attitude of the first view;
T_1 denotes the absolute displacement vector of the first view;
I_3 denotes the 3×3 identity matrix;
0_{3×1} denotes the 3×1 zero vector.
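Step 1.3 chains relative motions into absolute poses. Under the convention X_{j+1} = R_{j,j+1} X_j + T_{j,j+1}, composing two such transforms gives exactly the claimed update; a small sketch (illustrative names):

```python
import numpy as np

def chain_pose(R_j, T_j, R_rel, T_rel):
    """Absolute pose of view j+1 from that of view j (Step 1.3):
    R_{j+1} = R_{j,j+1} R_j,  T_{j+1} = T_{j,j+1} + R_{j,j+1} T_j."""
    return R_rel @ R_j, T_rel + R_rel @ T_j

# First view is the reference: (R_1, T_1) = (I_3, 0)
R1, T1 = np.eye(3), np.zeros(3)
Rz = np.array([[0.0, -1.0, 0.0],
               [1.0,  0.0, 0.0],
               [0.0,  0.0, 1.0]])          # 90-degree rotation about z
T_rel = np.array([1.0, 0.0, 0.0])
R2, T2 = chain_pose(R1, T1, Rz, T_rel)

# Mapping a world point through the chained pose equals sequential mapping
Xw = np.array([0.3, -0.2, 2.0])
X2_direct = R2 @ Xw + T2
X2_seq = Rz @ (R1 @ Xw + T1) + T_rel
```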
3. The low-dimensional bundle adjustment method according to claim 1, characterized in that in step 2 the objective function of the motion parameters is as follows:
The objective function δ(θ) minimized over the motion parameters θ = (R_j, T_j)_{j=1,2,...,n} is given below:
\delta(\theta) = \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} \sum_{i=1}^{m^{(j,k)}} \left( \left\| \frac{F_L\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right)}{e_3^T \cdot F_L\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right)} - x_{i,k}^{(j,k)} \right\|^2 + \left\| \frac{F_R\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right)}{e_3^T \cdot F_R\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right)} - x_{i,j}^{(j,k)} \right\|^2 \right)
e_3 = [0\ 0\ 1]^T
F_L\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right) = \frac{\left\| T_{j,k} \times x_{i,k}^{(j,k)} \right\|}{\left\| x_{i,k}^{(j,k)} \times R_{j,k} x_{i,j}^{(j,k)} \right\|} R_{j,k} x_{i,j}^{(j,k)} + T_{j,k}
F_R\!\left(R_{j,k}, T_{j,k}, x_{i,j}^{(j,k)}, x_{i,k}^{(j,k)}\right) = \frac{\left\| T_{j,k} \times R_{j,k} x_{i,j}^{(j,k)} \right\|}{\left\| x_{i,k}^{(j,k)} \times R_{j,k} x_{i,j}^{(j,k)} \right\|} R_{j,k}^T x_{i,k}^{(j,k)} - R_{j,k}^T T_{j,k}
Wherein:
θ denotes the set of absolute pose parameters of all views;
δ(·) denotes the objective function to be minimized;
m^{(j,k)} denotes the number of matched image point pairs in the view pair formed by the j-th and k-th views;
x_{i,k}^{(j,k)} is the normalized image coordinate, on the k-th view, of the i-th matched image point pair in the common matched feature point set {j, k} on the j-th and k-th views;
x_{i,j}^{(j,k)} is the normalized image coordinate, on the j-th view, of the i-th matched image point pair in the common matched feature point set {j, k} on the j-th and k-th views;
R_{j,k} is the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} is the relative displacement vector of the k-th view with respect to the j-th view.
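The objective sums, over every view pair and every match, the normalized-image-plane discrepancy of the point transferred from view j into view k (the F_L term) and back from view k into view j (the F_R term); only the pose parameters appear, and no 3D points are optimized, which is what keeps the adjustment low-dimensional. A sketch of one summand (illustrative names; the uppercase X printed in the F_L/F_R denominators is read here as the normalized point x, consistent with the claim-5 formulas):

```python
import numpy as np

def pair_residual(R, T, x_j, x_k):
    """One (j, k, i) summand of the claim-3 objective delta(theta).
    x_j, x_k are normalized homogeneous image points; e3 extracts depth."""
    e3 = np.array([0.0, 0.0, 1.0])
    den = np.linalg.norm(np.cross(x_k, R @ x_j))
    F_L = np.linalg.norm(np.cross(T, x_k)) / den * (R @ x_j) + T
    F_R = (np.linalg.norm(np.cross(T, R @ x_j)) / den * (R.T @ x_k)
           - R.T @ T)
    r_L = F_L / (e3 @ F_L) - x_k    # transfer j -> k, compare in view k
    r_R = F_R / (e3 @ F_R) - x_j    # transfer k -> j, compare in view j
    return r_L @ r_L + r_R @ r_R

# Noise-free correspondences give a (numerically) zero residual
res = pair_residual(np.eye(3), np.array([1.0, 0.0, 0.0]),
                    np.array([0.2, 0.4, 1.0]), np.array([0.4, 0.4, 1.0]))
```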
4. The low-dimensional bundle adjustment method according to claim 3, characterized in that the premise of the objective function δ(θ) minimized over the motion parameters θ = (R_j, T_j)_{j=1,2,...,n} given in step 2 is that the distances from an identical 3D scene point to an identical view are equal.
5. The low-dimensional bundle adjustment method according to claim 1, characterized in that step 3 comprises the following step:
According to the motion parameters θ = (R_j, T_j)_{j=1,2,...,n} obtained by the optimization, for the view pair formed by the j-th and k-th views, compute the coordinates of the 3D scene points by weighting as follows:
X_i = \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} \lambda_s^{(j,k)} w_s^{(j,k)} R_j^T \left( \frac{\left\| T_{j,k} \times x_{s,k}^{(j,k)} \right\|}{\left\| x_{s,k}^{(j,k)} \times R_{j,k} x_{s,j}^{(j,k)} \right\|} x_{s,j}^{(j,k)} - T_j \right)
R_{j,k} = R_k R_j^T
T_{j,k} = T_k - R_{j,k} T_j
w_s^{(j,k)} = \left\| x_{s,k}^{(j,k)} \times R_{j,k} x_{s,j}^{(j,k)} \right\|^2 \Big/ \sum_{j=1}^{n-1} \sum_{k=j+1}^{n} \lambda_s^{(j,k)} \left\| x_{s,k}^{(j,k)} \times R_{j,k} x_{s,j}^{(j,k)} \right\|^2
Wherein:
X_i denotes the 3D coordinate of the i-th 3D scene point; scene point X_i corresponds to the s-th matched image feature point in the view pair formed by the j-th and k-th views;
λ_s^{(j,k)} is the indicator function of whether the i-th 3D scene point X_i is visible in the view pair formed by the j-th and k-th views: λ_s^{(j,k)} = 1 when X_i is visible in that view pair, and λ_s^{(j,k)} = 0 otherwise;
R_j denotes the absolute attitude of the j-th view;
R_k denotes the absolute attitude of the k-th view;
x_{s,j}^{(j,k)} denotes the normalized image coordinate, on the j-th view, of the s-th matched image point pair in the common matched feature point set {j, k};
x_{s,k}^{(j,k)} denotes the normalized image coordinate, on the k-th view, of the s-th matched image point pair in the common matched feature point set {j, k};
T_j denotes the absolute displacement vector of the j-th view;
T_k denotes the absolute displacement vector of the k-th view;
R_{j,k} denotes the relative attitude of the k-th view with respect to the j-th view;
T_{j,k} denotes the relative displacement vector of the k-th view with respect to the j-th view.
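After the pose-only optimization, each scene point is recovered in closed form as a visibility-masked, weighted average of its per-pair triangulations mapped back to the reference (world) frame. A sketch of the claim-5 computation (illustrative container and field names):

```python
import numpy as np

def scene_point(observations):
    """Claim-5 weighted scene point. Each observation is a dict for one view
    pair (j, k) in which the point is visible (lambda = 1), holding the
    absolute pose of view j (Rj, Tj), the relative motion (Rjk, Tjk) and
    the normalized image points (xj, xk)."""
    weights, points = [], []
    for o in observations:
        c = np.cross(o["xk"], o["Rjk"] @ o["xj"])
        depth = np.linalg.norm(np.cross(o["Tjk"], o["xk"])) / np.linalg.norm(c)
        points.append(o["Rj"].T @ (depth * o["xj"] - o["Tj"]))  # world frame
        weights.append(c @ c)            # un-normalized w_s^{(j,k)}
    w = np.array(weights) / sum(weights)  # normalized weights sum to 1
    return (w[:, None] * np.array(points)).sum(axis=0)

# Single visible pair; view j is the reference, so the world point is [1, 2, 5]
obs = [{"Rj": np.eye(3), "Tj": np.zeros(3),
        "Rjk": np.eye(3), "Tjk": np.array([1.0, 0.0, 0.0]),
        "xj": np.array([0.2, 0.4, 1.0]), "xk": np.array([0.4, 0.4, 1.0])}]
X = scene_point(obs)
```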
6. The low-dimensional bundle adjustment method according to claim 1, characterized in that the method considers the case where the camera has been calibrated, and assumes that the matched image point pairs between the views have been determined.
7. A low-dimensional bundle adjustment system, comprising a computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the low-dimensional bundle adjustment method according to any one of claims 1 to 6.
CN201710370360.4A 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system Active CN107330934B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710370360.4A CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system
PCT/CN2017/087500 WO2018214179A1 (en) 2017-05-23 2017-06-07 Low-dimensional bundle adjustment calculation method and system


Publications (2)

Publication Number Publication Date
CN107330934A true CN107330934A (en) 2017-11-07
CN107330934B CN107330934B (en) 2021-12-07

Family

ID=60192859

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710370360.4A Active CN107330934B (en) 2017-05-23 2017-05-23 Low-dimensional cluster adjustment calculation method and system

Country Status (2)

Country Link
CN (1) CN107330934B (en)
WO (1) WO2018214179A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060008121A1 (en) * 2001-06-18 2006-01-12 Microsoft Corporation Incremental motion estimation through local bundle adjustment
US20110311104A1 (en) * 2010-06-17 2011-12-22 Microsoft Corporation Multi-Stage Linear Structure from Motion
CN106157367A (en) * 2015-03-23 2016-11-23 联想(北京)有限公司 Method for reconstructing three-dimensional scene and equipment

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103985154A (en) * 2014-04-25 2014-08-13 北京大学 Three-dimensional model reestablishment method based on global linear method
CN104881869A (en) * 2015-05-15 2015-09-02 浙江大学 Real time panorama tracing and splicing method for mobile platform
CN106097436B (en) * 2016-06-12 2019-06-25 广西大学 A kind of three-dimensional rebuilding method of large scene object
CN106408653B (en) * 2016-09-06 2021-02-02 合肥工业大学 Real-time robust cluster adjustment method for large-scale three-dimensional reconstruction


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIU Haomin et al.: "A Survey of Monocular Simultaneous Localization and Mapping", Journal of Computer-Aided Design & Computer Graphics *
QI Nan: "Research on 3D Reconstruction of Targets Based on Image Sequences", China Master's Theses Full-text Database, Information Science and Technology *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109584299A (en) * 2018-11-13 2019-04-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN109584299B (en) * 2018-11-13 2021-01-05 深圳前海达闼云端智能科技有限公司 Positioning method, positioning device, terminal and storage medium
CN109799698A (en) * 2019-01-30 2019-05-24 上海交通大学 The optimal PI parameter optimization method of time lag vision servo system and system
CN111161355A (en) * 2019-12-11 2020-05-15 上海交通大学 Pure pose resolving method and system for multi-view camera pose and scene
CN111161355B (en) * 2019-12-11 2023-05-09 上海交通大学 Multi-view camera pose and scene pure pose resolving method and system

Also Published As

Publication number Publication date
CN107330934B (en) 2021-12-07
WO2018214179A1 (en) 2018-11-29


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant