CN107452036A - A globally optimal pose computation method for optical trackers - Google Patents

A globally optimal pose computation method for optical trackers

Info

Publication number
CN107452036A
CN107452036A (application CN201710545644.2A)
Authority
CN
China
Prior art keywords
pose
tracker
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201710545644.2A
Other languages
Chinese (zh)
Other versions
CN107452036B (en)
Inventor
Weng Dongdong
Li Dong
Hu Xiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Virtual Reality Testing Technology Co Ltd
Beijing Institute of Technology BIT
Original Assignee
Nanchang Virtual Reality Testing Technology Co Ltd
Beijing Institute of Technology BIT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Virtual Reality Testing Technology Co Ltd, Beijing Institute of Technology BIT filed Critical Nanchang Virtual Reality Testing Technology Co Ltd
Priority to CN201710545644.2A priority Critical patent/CN107452036B/en
Publication of CN107452036A publication Critical patent/CN107452036A/en
Application granted granted Critical
Publication of CN107452036B publication Critical patent/CN107452036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models

Abstract

The invention discloses a globally optimal pose computation method for optical trackers that improves on the traditional tracker pose computation. Based on a global-optimization formulation, it builds a system of linear equations from the correspondences between spatial points and image points, so the tracker's pose relative to each individual base station need not be computed and no pose data fusion is required; the globally optimal tracker pose is solved for directly. The method places no limit on the number of base stations and makes full use of the correspondence information from all of them (even when a single base station sees too few correspondences to compute a pose on its own), greatly relaxing the minimum condition for tracker pose computation: the correspondence threshold drops from at least 5 groups of correspondences at some single base station to 4 groups in total across all base stations. Moreover, when multiple receivers are associated with a tracker, a globally optimal fused pose is obtained, and the result is more accurate and more robust.

Description

A globally optimal pose computation method for optical trackers
Technical field
The invention belongs to the field of tracking and localization, and in particular relates to a globally optimal pose computation method for optical trackers, applicable to fields requiring optical tracking and localization such as motion capture, surgical navigation, and virtual reality.
Background technology
The HTC VIVE system consists of transmitter base stations and photoreceivers. A transmitter emits periodic optical signals that sweep the tracking region; when a receiver detects a transmitter's scanning signal, it converts the optical signal into a digital signal, yielding the image coordinate of the receiver relative to the transmitter. Once a sufficient number of receivers have been scanned, computer vision algorithms recover the spatial pose of the rigid body formed by the receivers.
HTC VIVE has two scanning transmitter base stations (equivalent to two cameras). To compute a pose, at least five sensor points on a tracker must be scanned by the same base station before the pose between the tracker and that base station can be computed. Because the tracker's position and orientation change during use, some sensors may become occluded, so a fairly large number of sensor points must be laid out on the tracker to ensure that at least 5 sensor points can still receive a base station's scanning signal when others are occluded. The more sensor points, the larger the tracker, which hinders miniaturization. In addition, when fusing poses, the HTC VIVE system uses a weighted interpolation fusion method whose requirements are rather strict: the relative pose between each base station and the tracker must be known, and it applies only to fusing the pose data of two base stations; when there are more than two base stations (cameras), it has no ability to globally optimize the pose data.
The content of the invention
In view of this, the object of the present invention is to provide a globally optimal pose computation method for optical trackers. By improving the traditional tracker pose computation, the conditions for pose computation are relaxed: even when the pose between the tracker and any single base station cannot be computed on its own, the tracker pose can still be computed from the limited information between the tracker and multiple receivers.
The globally optimal optical tracker pose computation method of the present invention comprises:
Step 1: for each sensor, determine the transmitters from which it can receive a signal. A sensor together with one transmitter whose signal it can receive forms one transmit-receive pair. Traverse all sensors, count all transmit-receive pairs, and denote the count N.
Step 2: for any transmit-receive pair, let the sensor index be j and the transmitter index be i. Determine the three-dimensional coordinate X_rj of the j-th sensor in its own rigid-body coordinate system, and the two-dimensional image coordinate x_ij of the j-th sensor in the i-th transmitter from which it receives a signal. Then establish the effective pair of equations relating the corresponding three-dimensional point and two-dimensional image point in this transmit-receive pair:

a_ij^T p_i1 X~_rj^T m_1 + a_ij^T p_i2 X~_rj^T m_2 + a_ij^T p_i3 X~_rj^T m_3 = -a_ij^T p_i4
b_ij^T p_i1 X~_rj^T m_1 + b_ij^T p_i2 X~_rj^T m_2 + b_ij^T p_i3 X~_rj^T m_3 = -b_ij^T p_i4    (1)

where p_i1, p_i2, p_i3 and p_i4 are the columns of the projection matrix P_i between the sensor rigid-body coordinate system and the image coordinate system of the i-th transmitter; a_ij = [0, -1, v_ij]^T and b_ij = [1, 0, -u_ij]^T, where u_ij and v_ij are the coordinates of the two-dimensional image point x_ij along its two axes; X~_rj is the homogeneous form of X_rj; and m_k = [r_k1, r_k2, r_k3, t_k]^T is built from the k-th row of the rotation matrix transforming the sensor rigid-body coordinate system into the transmitter coordinate system and the k-th component of the corresponding translation vector.
Step 3: establish one pair of equations of the form (1) for each transmit-receive pair; the N transmit-receive pairs yield N equation pairs, which together form a linear system of 2N equations.
Step 4: rewrite the linear system formed in step 3 as

AX = B    (2)

where A is a 2N × 12 matrix whose rows are built from the coefficients of formula (1), X is the 12 × 1 column vector X = [r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]^T, and B is the 2N × 1 column vector collecting the right-hand sides of formula (1).
Step 5: when 4 ≤ N ≤ 5, formula (2) is solved as follows.
Extract the 9 rotation elements of X to obtain the rotation matrix R, written

R = f_R(X)

Since the rotation matrix R is orthonormal, it satisfies RR^{-1} = I and R^{-1} = R^T, where I is the 3 × 3 identity matrix. The linear-system problem of formula (2) is then converted into the optimization problem

min_X f(X) = ||AX - B||_2^2
s.t. f_R(X) f_R(X)^T - I = 0

that is, under the constraint f_R(X)f_R(X)^T - I = 0, the X minimizing ||AX - B||_2^2 is taken as the optimal solution, realizing the pose solution.
When N ≥ 6, formula (2) is solved analytically to obtain X, realizing the pose solution.
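As an illustration of the analytic branch (N ≥ 6), the linear system of formula (2) can be solved with the Moore-Penrose pseudoinverse. The following Python/NumPy sketch is illustrative only and assumes A and B have already been assembled per steps 1-4; the function name and the synthetic sanity check are ours:

```python
import numpy as np

def solve_pose_analytic(A, B):
    """Analytic branch of step 5 (N >= 6): X = pinv(A) @ B for the
    2N x 12 system A X = B, then unpack X = [r11, r12, r13, t1, ..., t3]."""
    X = np.linalg.pinv(A) @ B
    RT = X.reshape(3, 4)            # rows are [r_k1, r_k2, r_k3, t_k]
    return RT[:, :3], RT[:, 3]      # rotation matrix R, translation vector T

# Sanity check on a synthetic full-rank system with N = 6 (12 equations).
rng = np.random.default_rng(0)
X_true = rng.standard_normal(12)
A = rng.standard_normal((12, 12))
B = A @ X_true
R, T = solve_pose_analytic(A, B)
X_rec = np.hstack([R, T[:, None]]).ravel()
```

Because the layout of X interleaves each rotation row with its translation component, a simple reshape to 3 × 4 separates R from T.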
In step 5, the optimization problem is solved with the Levenberg-Marquardt algorithm.
The present invention has the following beneficial effects:
The present invention improves the traditional tracker pose computation by adopting a mathematical model based on global optimization: a system of linear equations is built from the correspondences between spatial points and image points, so the tracker's pose relative to each individual base station need not be computed and no pose data fusion is performed; the globally optimal tracker pose is solved for directly. The method places no limit on the number of base stations and makes full use of the correspondence information of all base stations (even when a single base station has too few correspondences to compute a pose on its own), greatly relaxing the minimum condition for tracker pose computation (the correspondence threshold drops from at least 5 groups of correspondences at some single base station to 4 groups in total across all base stations). In addition, when multiple receivers are associated with the tracker, a globally optimal fused pose is obtained, and the result is more accurate and more robust.
Brief description of the drawings
Fig. 1 shows the composition of the existing HTC VIVE system;
Fig. 2 shows the differences between the method of the invention and the HTC VIVE method in their processing pipelines.
Embodiment
The present invention will now be described in detail with reference to the accompanying drawings and examples.
As shown in Fig. 1, the HTC VIVE system includes one head-mounted display and two handles. Dozens of photoreceivers are mounted on the head-mounted display and the handles; once the infrared scanning signal of a base station is received by a sufficient number of receivers, the spatial positions of the head-mounted display and the handles can be computed, realizing posture tracking of the user.
Let the three-dimensional coordinate of the j-th photosensor on the tracker in the world coordinate system be X_wj = [x_j, y_j, z_j]^T, and its corresponding image coordinate in the i-th transmitter base station be x_ij = [u_ij, v_ij]^T. By the projection imaging principle, X_wj and x_ij satisfy (up to a nonzero scale factor)

x~_ij ∝ P_i X~_wj    (1)

where j = 1, 2, …, J and J is the number of sensors; X~_wj and x~_ij are the homogeneous forms of X_wj and x_ij (hereafter, unless otherwise stated, a tilde denotes homogeneous coordinates); P_i = K_i [R_ci | T_ci] is the projection matrix of the i-th transmitter, with intrinsic matrix K_i, rotation matrix R_ci and translation matrix T_ci, all obtainable by initial calibration. R_ci and T_ci describe the transformation of a three-dimensional point from the world coordinate system to the coordinate system of the i-th transmitter base station: if the coordinate of a sensor point in the i-th base-station coordinate system is X_cij, then X_cij and X_wj are related by formula (2):

X_cij = R_ci X_wj + T_ci    (2)
Let the three-dimensional coordinate of a sensor point in the tracked rigid body's local coordinate system be X_rj. By the projection imaging principle, an imaging model analogous to formula (1) is obtained, as in formula (3):

x~_ij ∝ K_i [R_ri | T_ri] X~_rj    (3)

where R_ri and T_ri describe the transformation of a three-dimensional point from the tracked rigid body's local coordinate system to the coordinate system of the i-th transmitter base station, as in formula (4):

X_cij = R_ri X_rj + T_ri    (4)
Combining formulas (2) and (4) gives the transformation between X_wj and X_rj, as in formula (5):

X_wj = R X_rj + T,  with R = R_ci^T R_ri and T = R_ci^T (T_ri - T_ci)    (5)

where R and T are the pose of the tracker in the world coordinate system. Since R_ci and T_ci are fixed and were obtained in the initial calibration stage, only R_ri and T_ri need to be computed in real time to obtain the tracker's three-dimensional pose via formula (5). Returning to formula (3): since K_i is known calibration data, only some groups of corresponding X_rj and x_ij are needed to solve for R_ri and T_ri. With the intrinsic matrix known, estimating camera pose (rotation and translation matrices) from n spatial points and their corresponding image points is the PnP (perspective-n-point) problem, which falls into two classes: one with 3 ≤ n ≤ 5 and one with n ≥ 6. For the first class, the research focus is how many solutions the problem can have: P3P has up to 4 solutions; P4P has a unique solution when the 4 control points are coplanar and up to 4 solutions when they are not; P5P can have up to two solutions. The second class can be solved linearly by DLT (Direct Linear Transform). For a detailed discussion of PnP problems see reference [1] ([1] Wu Y, Hu Z. PnP Problem Revisited [J]. Journal of Mathematical Imaging and Vision, 2006, 24(1): 131-141), not repeated here.
The HTC VIVE system has two base stations. For a given tracker, suppose the first base station has imaged p1 of the tracker's sensors and the second has imaged p2. The HTC VIVE system requires p1 ≥ 5 or p2 ≥ 5 before the tracker's pose can be computed. When p1 ≥ 5 and p2 ≥ 5, each base station can obtain the tracker's spatial pose from formula (5), denoted R1, T1 and R2, T2 respectively. The pose data of the two base stations must then be fused to obtain a tracker pose of higher precision and stronger robustness. The pose fusion algorithm used by HTC VIVE is given in formula (6), where Slerp(·) is the spherical linear interpolation function (see reference [2] https://en.wikipedia.org/wiki/Slerp) and α is a coefficient computed as in formula (7):

α = p1/(p1 + p2)    (7)

Because the pose fusion of formula (6) applies only to two pose data, pose fusion via formula (6) becomes impossible once the number of base stations exceeds 2.
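For reference, the Slerp function and the weighting of formula (7) can be sketched as follows. Since formula (6) itself is not reproduced above, the fusion rule shown (interpolating from the first orientation toward the second by 1 - α) is our reading of the scheme, and the quaternion representation and function names are assumptions:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions q0 and q1."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                      # take the shorter great-circle arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                   # nearly parallel: linear fallback
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def fuse_two_orientations(q1, q2, p1, p2):
    """Weight two per-base-station orientations by visible-point counts:
    alpha = p1 / (p1 + p2) per formula (7); alpha -> 1 favors q1."""
    alpha = p1 / (p1 + p2)
    return slerp(q1, q2, 1.0 - alpha)
```

With p1 = p2 the result lies halfway between the two orientations, and with p2 = 0 it reduces to the first base station's orientation.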
The pose computation method proposed by the present invention is not limited to two base stations; it applies to any number of base stations (or cameras). Combining formulas (1) and (5), the projection from the three-dimensional coordinate X_rj of the tracker's j-th sensor point in the rigid body's local coordinate system to its image coordinate on the imaging plane of the i-th base station is given by formula (8):

x~_ij ∝ P_i M X~_rj    (8)

Formula (8) is equivalent to the form of formula (9):

[x~_ij]_× P_i M X~_rj = 0    (9)

where [x~_ij]_× is the antisymmetric (skew-symmetric) matrix of x~_ij: if x~_ij = [u_ij, v_ij, 1]^T, then

[x~_ij]_× = [[0, -1, v_ij], [1, 0, -u_ij], [-v_ij, u_ij, 0]]    (10)

Let M = [R T; 0 1] be the 4 × 4 homogeneous transform built from the tracker pose R, T, so that M X~_rj = [(R X_rj + T)^T, 1]^T. Substituting M into formula (9) gives formula (11):

[x~_ij]_× P_i [(R X_rj + T)^T, 1]^T = 0    (11)
Let P_i = [p_i1, p_i2, p_i3, p_i4] (its columns) and C_ij = [x~_ij]_× P_i, and define

m_1 = [r11, r12, r13, t1]^T, m_2 = [r21, r22, r23, t2]^T, m_3 = [r31, r32, r33, t3]^T    (12)

so that [(R X_rj + T)^T, 1]^T = [X~_rj^T m_1, X~_rj^T m_2, X~_rj^T m_3, 1]^T. Substituting C_ij into formula (11) yields three equations in the unknowns R and T:

C_ij [X~_rj^T m_1, X~_rj^T m_2, X~_rj^T m_3, 1]^T = 0    (13)

Because formula (13) describes a degenerate homogeneous coordinate transformation, only 2 of its three equations are independent, so only the first two equations of formula (13) are used to solve for R and T. Since x~_ij = [u_ij, v_ij, 1]^T, the first two rows of [x~_ij]_× are [0, -1, v_ij] and [1, 0, -u_ij]; substituting them into the first two equations of formula (13) gives formula (14). Letting a_ij = [0, -1, v_ij]^T and b_ij = [1, 0, -u_ij]^T and substituting the m_1, m_2, m_3 of formula (12) into formula (14) gives formula (15). Taking the transpose of both sides of formula (15) yields:

a_ij^T p_i1 X~_rj^T m_1 + a_ij^T p_i2 X~_rj^T m_2 + a_ij^T p_i3 X~_rj^T m_3 = -a_ij^T p_i4
b_ij^T p_i1 X~_rj^T m_1 + b_ij^T p_i2 X~_rj^T m_2 + b_ij^T p_i3 X~_rj^T m_3 = -b_ij^T p_i4    (16)

Formula (16) is the pair of effective equations generated by one group of corresponding three-dimensional spatial and two-dimensional image points. When N such groups of corresponding points exist, formula (16) can be rewritten as a standard linear system, as shown in formula (17), where A is a 2N × 12 matrix, X is a 12 × 1 column vector, and B is a 2N × 1 column vector.
AX=B
X=[r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]T (17)
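The construction of formulas (8)-(17) can be checked numerically. The Python/NumPy sketch below, with made-up intrinsic and base-station calibration values and helper names of our own (random_rotation, rows_for_point), projects synthetic sensor points through a known tracker pose, builds the two rows of A and entries of B per formula (16) for each point, and confirms that the true X satisfies AX = B; six points from each of two base stations are used:

```python
import numpy as np

rng = np.random.default_rng(1)

def random_rotation(rng):
    Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    return Q * np.sign(np.linalg.det(Q))        # force det(R) = +1

# Ground-truth tracker pose (R, T) and homogeneous matrix M = [R T; 0 1].
R, T = random_rotation(rng), rng.standard_normal(3)
M = np.vstack([np.hstack([R, T[:, None]]), [0.0, 0.0, 0.0, 1.0]])

# Two base stations P_i = K [R_ci | T_ci] (illustrative calibration values).
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
stations = []
for z in (5.0, 7.0):
    Rc, Tc = random_rotation(rng), np.array([0.1, -0.2, z])
    stations.append(K @ np.hstack([Rc, Tc[:, None]]))

def rows_for_point(P, M, Xr):
    """Two rows of A and two entries of B (formula (16)) for one sensor."""
    Xr_h = np.append(Xr, 1.0)
    x = P @ M @ Xr_h                             # projected homogeneous point
    u, v = x[0] / x[2], x[1] / x[2]
    a, b = np.array([0.0, -1.0, v]), np.array([1.0, 0.0, -u])
    A2 = np.array([np.concatenate([(c @ P[:, k]) * Xr_h for k in range(3)])
                   for c in (a, b)])
    B2 = np.array([-(a @ P[:, 3]), -(b @ P[:, 3])])
    return A2, B2

X_true = np.hstack([R, T[:, None]]).ravel()      # [r11, r12, r13, t1, ...]
blocks = [rows_for_point(P, M, rng.standard_normal(3))
          for P in stations for _ in range(6)]   # N = 12 transmit-receive pairs
A = np.vstack([blk[0] for blk in blocks])
B = np.concatenate([blk[1] for blk in blocks])
X_hat = np.linalg.lstsq(A, B, rcond=None)[0]     # analytic branch, N >= 6
```

Recovering X_hat equal to X_true illustrates that, per formula (17), the correspondences fully determine the 12 unknowns without any per-base-station pose or fusion step.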
The present invention thus converts the computation of the tracker pose R, T into solving the linear system of formula (17). Note that the dimension of the unknown X is 12, so when N ≥ 6 the system can be solved analytically as X = A⁺B, where A⁺ is the generalized (Moore-Penrose) inverse of A. When 4 ≤ N ≤ 5, the system AX = B is under-determined and has multiple solutions, but it can be solved by an iterative method with an added constraint. Since X contains all elements of the tracker's rotation matrix R, the rotation matrix R can be obtained by extracting 9 elements of X, a process written as the function of formula (18):

R = f_R(X)    (18)

Since the rotation matrix R is orthonormal (satisfying RR^{-1} = I and R^{-1} = R^T, with I the 3 × 3 identity matrix), the constraint RR^T = I, i.e. RR^T - I = 0, is obtained. The linear-system problem of formula (17) is thereby converted into the following optimization problem:

min_X f(X) = ||AX - B||_2^2
s.t. f_R(X) f_R(X)^T - I = 0    (19)
The optimization problem of formula (19) can be solved iteratively; a common method is the Levenberg-Marquardt algorithm, detailed in reference [3] (Moré J J. The Levenberg-Marquardt algorithm: implementation and theory [J]. Lecture Notes in Mathematics, 1978, 630: 105-116).
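The constrained problem of formula (19) can be sketched with SciPy's Levenberg-Marquardt least-squares solver. Since method='lm' in scipy.optimize.least_squares handles only unconstrained residuals, the orthonormality constraint is folded in here as a weighted penalty term, which is a common practical substitute rather than the patent's exact formulation; the synthetic data and the availability of a coarse initial guess are our assumptions:

```python
import numpy as np
from scipy.optimize import least_squares

def pose_residuals(X, A, B, w=10.0):
    """Residuals for a penalized version of formula (19): the data term
    A X - B stacked with w * vec(f_R(X) f_R(X)^T - I)."""
    R = np.delete(X, [3, 7, 11]).reshape(3, 3)   # f_R(X): drop t1, t2, t3
    return np.concatenate([A @ X - B, w * (R @ R.T - np.eye(3)).ravel()])

# Under-determined synthetic case: N = 4 -> 8 equations, 12 unknowns.
rng = np.random.default_rng(2)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true = Q * np.sign(np.linalg.det(Q))           # proper rotation matrix
T_true = rng.standard_normal(3)
X_true = np.hstack([R_true, T_true[:, None]]).ravel()
A = rng.standard_normal((8, 12))
B = A @ X_true

x0 = X_true + 0.01 * rng.standard_normal(12)     # assumed coarse initial guess
sol = least_squares(pose_residuals, x0, args=(A, B), method="lm")
```

With noiseless data the true pose zeroes both the data residuals and the orthogonality residuals, so the penalized problem and the exactly constrained formula (19) share the same minimizer.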
The steps by which the present invention computes the tracker pose are summarized as follows:
Step 1. Using the three-dimensional coordinate X_rj of a sensor point on the tracker in the rigid body's local coordinate system and its corresponding two-dimensional image coordinate x_ij at some base station, obtain 2 effective equations by the process of formulas (8)-(16).
Step 2. Apply the method of Step 1 to every group of corresponding sensor three-dimensional points and image coordinate points, then assemble all the equations into a linear system of the form AX = B according to formula (17).
Step 3. Choose the solution method according to the number of correspondence groups N: when N ≥ 6, solve analytically as X = A⁺B; when 4 ≤ N ≤ 5, solve by the optimization method of formula (19).
The present invention computes the tracker's three-dimensional pose from the standpoint of global optimization. Compared with the HTC VIVE system, the representative of current typical methods, the method of the invention relaxes the conditions for tracker pose computation while supporting pose data fusion when the number of base stations exceeds 2, and its results are more accurate and robust. Fig. 2 compares the processing pipelines of the inventive method and the HTC VIVE method.
As can be seen, HTC VIVE uses a computation method based on a distributed approach: the tracker's pose relative to each base station must be computed separately, and the results then fused. The present invention is a computation method based on global optimization: it does not consider the tracker's pose relative to each individual base station, but uses all the correspondence information only to build a linear system, solving it to obtain the tracker's globally optimal pose without any data fusion.
As an example, denote by p_i the number of sensor points on a tracker imaged by the i-th base station, where i = 1, 2, …, M and M is the number of base stations. For the HTC VIVE system M = 2, and at least one p_i ≥ 5 must hold before the tracker pose can be computed; when p1 ≥ 5 and p2 ≥ 5, it computes the tracker's pose relative to both base stations and must fuse the pose data via formula (6) to obtain the final result. For the present invention, the number of base stations M is unrestricted; the tracker pose can be computed whenever the total Σ p_i ≥ 4, which greatly relaxes the condition for pose computation. For example, when p1 = 2 and p2 = 2, the HTC VIVE system cannot compute a pose, while the inventive method can. Likewise, when p1 = 5 and p2 = 3, the HTC VIVE system can only compute the tracker's pose relative to base station 1, the pose relative to base station 2 being uncomputable for lack of correspondences, which wastes the 3 groups of correspondence information from base station 2; the inventive method, by formulas (16) and (17), uses all correspondence information, so its results are more accurate and robust. Table 1 compares the performance of the inventive method and the HTC VIVE method.
Table 1. Performance comparison of the inventive method and the HTC VIVE method
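The two minimum-condition rules compared above can be stated as simple predicates; the function names are ours, and only the thresholds (some p_i ≥ 5 versus a total of at least 4 correspondences) come from the text:

```python
def vive_can_compute(point_counts):
    """HTC VIVE-style rule: at least one base station sees >= 5 sensor points."""
    return any(p >= 5 for p in point_counts)

def global_can_compute(point_counts):
    """Rule of the present method: >= 4 correspondences in total, any split."""
    return sum(point_counts) >= 4
```

For the examples in the text: [2, 2] fails the VIVE rule but passes the global rule, while [5, 3] passes both (though the VIVE scheme then discards base station 2's three correspondences).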
In summary, the above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection. Any modification, equivalent substitution, improvement, etc. made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.

Claims (2)

1. An optical tracker pose computation method, characterized in that it comprises:
Step 1: for each sensor, determine the transmitters from which it can receive a signal; a sensor together with one transmitter whose signal it can receive forms one transmit-receive pair; traverse all sensors, count all transmit-receive pairs, and denote the count N;
Step 2: for any transmit-receive pair, let the sensor index be j and the transmitter index be i; determine the three-dimensional coordinate X_rj of the j-th sensor in its own rigid-body coordinate system, and the two-dimensional image coordinate x_ij of the j-th sensor in the i-th transmitter from which it receives a signal; then establish the effective pair of equations relating the corresponding three-dimensional spatial point and two-dimensional image point in the transmit-receive pair:

a_ij^T p_i1 X~_rj^T m_1 + a_ij^T p_i2 X~_rj^T m_2 + a_ij^T p_i3 X~_rj^T m_3 = -a_ij^T p_i4
b_ij^T p_i1 X~_rj^T m_1 + b_ij^T p_i2 X~_rj^T m_2 + b_ij^T p_i3 X~_rj^T m_3 = -b_ij^T p_i4    (1)

where p_i1, p_i2, p_i3 and p_i4 are the columns of the projection matrix P_i between the sensor rigid-body coordinate system and the image coordinate system of the i-th transmitter; a_ij = [0, -1, v_ij]^T and b_ij = [1, 0, -u_ij]^T, where u_ij and v_ij are the coordinates of the two-dimensional image point x_ij along its two axes; X~_rj is the homogeneous form of X_rj; and m_k = [r_k1, r_k2, r_k3, t_k]^T is built from the k-th row of the rotation matrix transforming the sensor rigid-body coordinate system into the transmitter coordinate system and the k-th component of the corresponding translation vector;
Step 3: establish one pair of equations of the form (1) for each transmit-receive pair; the N transmit-receive pairs yield N equation pairs, forming a linear system of 2N equations;
Step 4: rewrite the linear system formed in step 3 as

AX = B    (2)

where A is a 2N × 12 matrix, X is the 12 × 1 column vector X = [r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]^T, and B is a 2N × 1 column vector;
Step 5: when 4 ≤ N ≤ 5, formula (2) is solved as follows:
extract the 9 rotation elements of X to obtain the rotation matrix R, written

R = f_R(X)

and require the rotation matrix R to be orthonormal, satisfying RR^{-1} = I and R^{-1} = R^T, with I the 3 × 3 identity matrix;
then convert the linear-system problem of formula (2) into the optimization problem

min_X f(X) = ||AX - B||_2^2
s.t. f_R(X) f_R(X)^T - I = 0

that is, under the constraint f_R(X)f_R(X)^T - I = 0, the X minimizing ||AX - B||_2^2 is taken as the optimal solution, realizing the pose solution;
when N ≥ 6, formula (2) is solved analytically to obtain X, realizing the pose solution.
2. The optical tracker pose computation method according to claim 1, characterized in that, in step 5, the optimization problem is solved with the Levenberg-Marquardt algorithm.
CN201710545644.2A 2017-07-06 2017-07-06 A kind of optical tracker pose calculation method of global optimum Active CN107452036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710545644.2A CN107452036B (en) 2017-07-06 2017-07-06 A kind of optical tracker pose calculation method of global optimum

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710545644.2A CN107452036B (en) 2017-07-06 2017-07-06 A kind of optical tracker pose calculation method of global optimum

Publications (2)

Publication Number Publication Date
CN107452036A true CN107452036A (en) 2017-12-08
CN107452036B CN107452036B (en) 2019-11-29

Family

ID=60488337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710545644.2A Active CN107452036B (en) 2017-07-06 2017-07-06 A kind of optical tracker pose calculation method of global optimum

Country Status (1)

Country Link
CN (1) CN107452036B (en)


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101750012A (en) * 2008-12-19 2010-06-23 中国科学院沈阳自动化研究所 Device for measuring six-dimensional position poses of object
US20140300775A1 (en) * 2013-04-05 2014-10-09 Nokia Corporation Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system
CN104484523A (en) * 2014-12-12 2015-04-01 西安交通大学 Equipment and method for realizing augmented reality induced maintenance system
CN104777700A (en) * 2015-04-01 2015-07-15 北京理工大学 Multi-projector optimized deployment method realizing high-immersion projection
CN106908764A (en) * 2017-01-13 2017-06-30 北京理工大学 A kind of multiple target optical tracking method


Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108333579A (en) * 2018-02-08 2018-07-27 高强 A kind of system and method for the light sensation equipment dense deployment based on Vive Lighthouse
CN108765498A (en) * 2018-05-30 2018-11-06 百度在线网络技术(北京)有限公司 Monocular vision tracking, device and storage medium
US10984554B2 (en) 2018-05-30 2021-04-20 Baidu Online Network Technology (Beijing) Co., Ltd. Monocular vision tracking method, apparatus and non-volatile computer-readable storage medium
US11704833B2 (en) 2018-05-30 2023-07-18 Baidu Online Network Technology (Beijing) Co., Ltd. Monocular vision tracking method, apparatus and non-transitory computer-readable storage medium
CN109032329A (en) * 2018-05-31 2018-12-18 中国人民解放军军事科学院国防科技创新研究院 Space Consistency keeping method towards the interaction of more people's augmented realities
CN109032329B (en) * 2018-05-31 2021-06-29 中国人民解放军军事科学院国防科技创新研究院 Space consistency keeping method for multi-person augmented reality interaction
CN113359987A (en) * 2021-06-03 2021-09-07 煤炭科学技术研究院有限公司 VR virtual reality-based semi-physical fully-mechanized mining actual operation platform
CN113359987B (en) * 2021-06-03 2023-12-26 煤炭科学技术研究院有限公司 Semi-physical fully-mechanized mining and real-time operating platform based on VR virtual reality

Also Published As

Publication number Publication date
CN107452036B (en) 2019-11-29

Similar Documents

Publication Publication Date Title
Ke et al. Quasiconvex optimization for robust geometric reconstruction
CN107452036A (en) A kind of optical tracker pose computational methods of global optimum
Dorfmüller Robust tracking for augmented reality using retroreflective markers
Saurer et al. Homography based visual odometry with known vertical direction and weak manhattan world assumption
WO2013086678A1 (en) Point matching and pose synchronization determining method for planar models and computer program product
Barreto et al. Wide area multiple camera calibration and estimation of radial distortion
Mariottini et al. Planar mirrors for image-based robot localization and 3-D reconstruction
US20200294269A1 (en) Calibrating cameras and computing point projections using non-central camera model involving axial viewpoint shift
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
US7613323B2 (en) Method and apparatus for determining camera pose
CN109785373A (en) A kind of six-freedom degree pose estimating system and method based on speckle
CN107509245A (en) A kind of extension tracking based on HTC VIVE
Seo et al. A branch-and-bound algorithm for globally optimal calibration of a camera-and-rotation-sensor system
CN110796699B (en) Optimal view angle selection method and three-dimensional human skeleton detection method for multi-view camera system
Wang et al. Perspective 3-D Euclidean reconstruction with varying camera parameters
Ohnishi et al. Featureless robot navigation using optical flow
CN113436264B (en) Pose calculation method and system based on monocular and monocular hybrid positioning
Pizarro Large scale structure from motion for autonomous underwater vehicle surveys
Hoang et al. Automatic calibration of camera and LRF based on morphological pattern and optimal angular back-projection error
Li et al. Depth-camera calibration optimization method based on homography matrix
Sato et al. Camera position and posture estimation from still image using feature landmark database
Liu et al. Algorithm for camera parameter adjustment in multicamera systems
Ruiz et al. Practical Planar Metric Rectification.
Chen et al. Visual odometry with improved adaptive feature tracking
Chang Significance of omnidirectional fisheye cameras for feature-based visual SLAM

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information
CB03 Change of inventor or designer information

Inventor after: Bao Yihua

Inventor after: Weng Dongdong

Inventor after: Li Dong

Inventor after: Hu Xiang

Inventor before: Weng Dongdong

Inventor before: Li Dong

Inventor before: Hu Xiang