CN107452036B - Globally optimal pose calculation method for an optical tracker - Google Patents

Globally optimal pose calculation method for an optical tracker Download PDF

Info

Publication number
CN107452036B
CN107452036B (application CN201710545644.2A)
Authority
CN
China
Prior art keywords
pose
tracker
base station
sensor
transmitter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710545644.2A
Other languages
Chinese (zh)
Other versions
CN107452036A (en)
Inventor
Weng Dongdong
Li Dong
Hu Xiang
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanchang Virtual Reality Testing Technology Co Ltd
Beijing University of Technology
Original Assignee
Nanchang Virtual Reality Testing Technology Co Ltd
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanchang Virtual Reality Testing Technology Co Ltd, Beijing University of Technology filed Critical Nanchang Virtual Reality Testing Technology Co Ltd
Priority to CN201710545644.2A priority Critical patent/CN107452036B/en
Publication of CN107452036A publication Critical patent/CN107452036A/en
Application granted granted Critical
Publication of CN107452036B publication Critical patent/CN107452036B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • G06T7/73Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/75Determining position or orientation of objects or cameras using feature-based methods involving models

Abstract

The invention discloses a globally optimal pose calculation method for an optical tracker that improves on the pose calculation of conventional trackers. Using a mathematical model based on the idea of global optimization, a system of linear equations is constructed from the correspondences between spatial points and image points; the tracker's pose relative to any single base station need not be computed, no pose data fusion is required, and the globally optimal tracker pose is solved for directly. The method places no limit on the number of base stations and makes full use of the corresponding-point information of all base stations (even when a given base station observes too few corresponding points to compute a pose on its own), greatly relaxing the minimum condition for computing the tracker pose: the threshold drops from at least 5 groups of corresponding points at any one base station to 4 groups of corresponding points in total across all base stations. In addition, when multiple receivers are in contact with the tracker, a globally optimal pose fusion result is obtained, with higher accuracy and stronger robustness.

Description

Globally optimal pose calculation method for an optical tracker
Technical field
The invention belongs to the technical field of tracking and positioning, and in particular relates to a globally optimal pose calculation method for an optical tracker, applicable to fields that require optical tracking and positioning, such as motion capture, surgical navigation, and virtual reality.
Background technique
The HTC VIVE system consists of transmitter base stations and photosensitive receivers. A transmitter emits periodic light signals that sweep across the tracking region; when a receiver picks up a transmitter's scanning signal, it converts the optical signal into a digital signal, yielding the receiver's image coordinates relative to that transmitter. Once a sufficient number of receivers have been scanned, computer-vision algorithms recover the spatial pose of the rigid body formed by the receivers.
HTC VIVE has two transmitter scanning base stations (equivalent to two cameras). To compute the pose between a tracker and a base station, at least five sensor points on the tracker must be scanned by that base station. Because some sensors may be occluded as the tracker's position and orientation change during use, a fairly large number of sensor points must be laid out on the tracker to guarantee that at least 5 sensor points can still receive a base station's scanning signal when others are occluded. The more sensor points, the larger the tracker, which works against miniaturization. In addition, the HTC VIVE system fuses poses by a weighted ellipse-fitting method, which imposes strict requirements: the relative pose between each base station and the tracker must be known, and the method applies only to fusing the pose data of two base stations; when there are more base stations (cameras), it has no capability for global optimization of the pose data.
Summary of the invention
In view of this, the object of the present invention is to provide a globally optimal pose calculation method for an optical tracker. By improving the pose calculation of conventional trackers, the method relaxes the conditions for computing the tracker pose: even when the pose between the tracker and any individual receiver cannot be computed, the tracker pose can still be computed from the limited information available between the tracker and multiple receivers.
The globally optimal pose calculation method for an optical tracker of the invention comprises:
Step 1: for each sensor, determine the transmitters from which it can receive a signal; treat each pairing of a sensor with one transmitter whose signal it can receive as one transmit-receive combination; traverse all sensors, count all transmit-receive combinations, and denote the count by N;
Step 2: for any transmit-receive combination, let the sensor index be j and the transmitter index be i; determine the three-dimensional coordinates of the j-th sensor in its own rigid-body coordinate system and the two-dimensional image coordinates of the j-th sensor in the i-th transmitter from which it can receive a signal; then establish the effective equation pair relating the corresponding three-dimensional space point and two-dimensional image point of that transmit-receive combination:
where pi1, pi2, pi3 and pi4 are the elements of the projection matrix Pi between the sensor rigid-body coordinate system and the image coordinate system of the i-th transmitter; aij = [0, -1, vij]^T and bij = [1, 0, -uij]^T, where uij and vij are the coordinates of the two-dimensional image point along its two image axes; R denotes the rotation matrix that transforms the sensor rigid-body coordinate system into the transmitter coordinate system, and T denotes the corresponding translation matrix;
Step 3: establish one equation pair of the form of formula (1) for each transmit-receive combination; the N transmit-receive combinations yield N equation pairs, forming a system of linear equations of dimension 2N;
Step 4: rewrite the system of linear equations formed in Step 3 in the following form:
AX=B (2)
where A is a 2N × 12 matrix,
X is a 12 × 1 column vector, X = [r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]^T,
and B is a 2N × 1 column vector;
Step 5: when 4 ≤ N ≤ 5, formula (2) is solved as follows:
extract 9 elements of X to obtain the rotation matrix R, written as:
R=fR(X)
and constrain the rotation matrix R to be a unitary matrix, satisfying RR^-1 = I and R^-1 = R^T, where I is the 3 × 3 identity matrix;
then the linear-system problem of formula (2) is converted into the following optimization problem:
min_X ||AX - B||^2   s.t.   fR(X)fR(X)^T - I = 0
that is: among all X satisfying the constraint fR(X)fR(X)^T - I = 0, the X that minimizes ||AX - B||^2 is the optimal solution, which completes the pose calculation;
when N ≥ 6, formula (2) is solved analytically to obtain X, completing the pose calculation.
In said Step 5, the optimization problem is solved using the Levenberg-Marquardt algorithm.
The invention has the following beneficial effects:
The present invention improves on the pose calculation of conventional trackers: using a mathematical model based on the idea of global optimization, it constructs a system of linear equations from the correspondences between spatial points and image points; the tracker's pose relative to any single base station need not be computed, no pose data fusion is required, and the globally optimal tracker pose is solved for directly. The method places no limit on the number of base stations and makes full use of the corresponding-point information of all base stations (even when a given base station observes too few corresponding points to compute a pose on its own), greatly relaxing the minimum condition for computing the tracker pose (the threshold drops from at least 5 groups of corresponding points at any one base station to 4 groups of corresponding points in total across all base stations). In addition, when multiple receivers are in contact with the tracker, a globally optimal pose fusion result is obtained, with higher accuracy and stronger robustness.
Detailed description of the invention
Fig. 1 is a diagram of the composition of the existing HTC VIVE system;
Fig. 2 shows the differences between the method of the present invention and the method of HTC VIVE during processing.
Specific embodiment
The present invention will now be described in detail with reference to the accompanying drawings and examples.
As shown in Figure 1, the HTC VIVE system comprises 1 head-mounted display and 2 handles. Dozens of photosensitive receivers are mounted on the head-mounted display and the handles; when the infrared scanning signals of the base stations are received by a sufficient number of receivers, the spatial positions of the head-mounted display and the handles can be computed, realizing posture tracking of the user.
Let the three-dimensional coordinates of the j-th photosensitive sensor on the tracker in the world coordinate system be Xwj = [xj, yj, zj]^T, and its corresponding image coordinates in the i-th transmitter base station be xij = [uij, vij]^T. By the projection imaging principle, Xwj and xij satisfy the following formula:
where j = 1, 2, ..., J, with J the number of sensors; the homogeneous forms of Xwj and xij are used (throughout this text, unless stated otherwise, Ã denotes the homogeneous coordinates of A); Pi = Ki[Rci | Tci] is the projection matrix of the i-th transmitter, with Ki the intrinsic matrix, Rci the rotation matrix, and Tci the translation matrix, all obtainable by initial calibration. Rci and Tci describe the transformation of three-dimensional point coordinates from the world coordinate system to the coordinate system of the i-th transmitter base station: if the three-dimensional coordinates of a sensor point in the i-th transmitter base station's coordinate system are Xcij, then Xcij and Xwj are related as shown in formula (2):
Xcij=RciXwj+Tci (2)
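To make the pinhole model of formulas (1)-(2) concrete, a minimal sketch follows. The function name and the numeric values used in the example are illustrative, not taken from the patent.

```python
import numpy as np

def project(K, R_c, T_c, X_w):
    """Pinhole projection of formulas (1)-(2): x ~ K (R_c X_w + T_c).
    Returns the inhomogeneous image coordinates [u, v]."""
    X_c = R_c @ np.asarray(X_w) + np.asarray(T_c)   # formula (2): world -> camera
    x = K @ X_c                                     # homogeneous image point
    return x[:2] / x[2]                             # dehomogenize
```

For example, with an illustrative intrinsic matrix K = [[500, 0, 320], [0, 500, 240], [0, 0, 1]], identity rotation, and Tc = [0, 0, 2], the world origin projects to the principal point (320, 240).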
Let the three-dimensional coordinates of a sensor point in the tracked rigid body's local coordinate system be Xrj. By the projection imaging principle, an imaging model analogous to formula (1) is obtained, as shown in formula (3):
where Rri and Tri describe the transformation of a three-dimensional point from the tracked rigid body's local coordinate system to the coordinate system of the i-th transmitter base station, as shown in formula (4):
Xcij=RriXrj+Tri (4)
Combining formulas (2) and (4) gives the transformation between Xwj and Xrj, as shown in formula (5):
where R and T are the pose of the tracker in the world coordinate system. Since Rci and Tci are fixed and were obtained in the initial calibration stage, only Rri and Tri need be computed in real time during use, and the three-dimensional pose of the tracker then follows from formula (5). Returning to formula (3): since Ki is known calibration data, only several groups of corresponding Xrj and xij are needed to solve for Rri and Tri. With the intrinsic matrix known, the problem of estimating the camera pose (i.e., the rotation matrix and translation matrix) from n spatial points and their corresponding image points is the PnP (perspective-n-point) problem, which divides into two classes: the case 3 ≤ n ≤ 5 and the case n ≥ 6. Research on the first class of PnP problems focuses on how many real solutions the problem can have at most; the conclusions include: the P3P problem has at most 4 solutions; when the 4 control points are coplanar, the P4P problem has a unique solution, and when they are non-coplanar, the P4P problem has at most 4 solutions; the P5P problem can have at most two solutions. The second class of PnP problems can be solved linearly by DLT (Direct Linear Transform). For a detailed discussion of the PnP problem see reference [1] ([1] Wu Y, Hu Z. PnP Problem Revisited [J]. Journal of Mathematical Imaging and Vision, 2006, 24(1): 131-141), which is not repeated here.
The HTC VIVE system has two base stations. For a given tracker, suppose the first base station captures the image coordinates of p1 sensors on the tracker and the second base station captures p2. The HTC VIVE system requires p1 ≥ 5 or p2 ≥ 5 before the pose of that tracker can be computed. When p1 ≥ 5 and p2 ≥ 5, each of the two base stations can obtain the tracker's spatial pose from formula (5), denoted R1, T1 and R2, T2 respectively. The pose data of the two base stations must then be fused to obtain a tracker pose of higher precision and stronger robustness. The pose fusion algorithm used by HTC VIVE is shown in formula (6):
where Slerp(·) is the spherical linear interpolation function (see reference [2] https://en.wikipedia.org/wiki/Slerp) and α is a coefficient, computed as shown in formula (7):
α=p1/(p1+p2) (7)
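Formula (6) itself is not reproduced in this text, so the following sketch only illustrates the general shape of such a fusion: a hand-rolled quaternion slerp weighted by α = p1/(p1+p2). The function `fuse_poses` and its weighting direction (more observed points gives more weight) are assumptions, not the exact HTC VIVE algorithm.

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between unit quaternions (w, x, y, z)."""
    q0 = q0 / np.linalg.norm(q0)
    q1 = q1 / np.linalg.norm(q1)
    d = float(np.dot(q0, q1))
    if d < 0.0:                  # interpolate along the shorter arc
        q1, d = -q1, -d
    d = min(d, 1.0)
    theta = np.arccos(d)         # angle between the two quaternions
    if theta < 1e-9:             # nearly identical: avoid division by zero
        return q0
    return (np.sin((1.0 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def fuse_poses(q1, t1, q2, t2, p1, p2):
    """Hypothetical two-base-station fusion in the spirit of formulas (6)-(7):
    alpha weights the pose from the base station that saw more sensor points."""
    alpha = p1 / (p1 + p2)                        # formula (7)
    q = slerp(q2, q1, alpha)                      # assumed weighting direction
    t = alpha * np.asarray(t1) + (1 - alpha) * np.asarray(t2)
    return q, t
```

As the text notes next, this kind of pairwise interpolation does not generalize to more than two base stations.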
The pose fusion method of formula (6) applies only to fusing two pose data; when the number of base stations exceeds 2, formula (6) can no longer be used for pose fusion.
The pose calculation method proposed by the present invention is not limited to the case of only two base stations; it applies to any number of base stations (or cameras). Combining formulas (1) and (5) yields the projection relation between the three-dimensional coordinates of the j-th sensor point of the tracker in the rigid body's local coordinate system and its image coordinates on the imaging plane of the i-th base station, as shown below:
Formula (8) is equivalent to the form of formula (9):
where the left factor is the antisymmetric matrix of the homogeneous image point; writing the homogeneous image point as [uij, vij, 1]^T, we have:
Writing the rigid-body transformation of formula (5) in homogeneous form as the matrix M and substituting M into formula (9) gives formula (10):
Let Pi = [pi1, pi2, pi3, pi4] and define Cij as in formula (12), where:
Substituting Cij into formula (11) yields three equations in the unknowns R and T:
Since formula (13) describes a degenerate homogeneous coordinate transformation, only 2 of its equations are independent; therefore only the first two equations of formula (13) are selected for solving R and T. Substituting into the first two equations of formula (13) yields formula (14):
Let aij = [0, -1, vij]^T and bij = [1, 0, -uij]^T, and substitute the quantities of formula (12) into formula (14) to obtain formula (15):
Transposing both sides of formula (15) yields formula (16):
Formula (16) is the pair of effective equations generated by one group of corresponding three-dimensional space and two-dimensional image points. When N such groups of corresponding points exist, formula (16) can be rewritten as the standard system of linear equations of formula (17), where A is a 2N × 12 matrix, X a 12 × 1 column vector, and B a 2N × 1 column vector.
AX=B
X=[r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]T (17)
The present invention thus converts the computation of the tracker pose R, T into solving the system of linear equations of formula (17). Note that the unknown X has dimension 12; therefore, when N ≥ 6, the equation can be solved analytically as X = A⁺B, where A⁺ is the generalized inverse of A. When 4 ≤ N ≤ 5, the equation AX = B is underdetermined and has multiple solutions, but it can be solved by an iterative method with an added constraint. Since X contains all elements of the tracker's rotation matrix R, the rotation matrix R can be obtained by extracting 9 elements of X; this process is expressed by the function of formula (18):
R=fR(X) (18)
Because the rotation matrix R is a unitary matrix (i.e., a unit orthogonal matrix) satisfying RR^-1 = I and R^-1 = R^T, with I the 3 × 3 identity matrix, the constraint RR^T = I, i.e. RR^T - I = 0, is available. The linear-system problem of formula (17) can thereby be converted into the following optimization problem:
The optimization problem of formula (19) can be solved iteratively; a common method is the Levenberg-Marquardt algorithm, whose details can be found in reference [3] (Moré J J. The Levenberg-Marquardt algorithm: implementation and theory [J]. Lecture Notes in Mathematics, 1978, 630: 105-116).
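A rough illustration of the constrained branch follows. Since formula (19) is not reproduced in this text, this sketch treats the orthogonality constraint RR^T = I as extra soft residuals appended to AX - B and hands the stacked residual to SciPy's Levenberg-Marquardt solver. The residual layout and the soft-constraint treatment are assumptions; the patent's exact formulation may differ.

```python
import numpy as np
from scipy.optimize import least_squares

ROT_IDX = [0, 1, 2, 4, 5, 6, 8, 9, 10]   # positions of r11..r33 inside X

def residuals(X, A, B):
    """Stack the linear residual AX - B with soft orthogonality residuals
    from RR^T - I = 0 (upper triangle only, 6 values)."""
    R = X[ROT_IDX].reshape(3, 3)
    ortho = (R @ R.T - np.eye(3))[np.triu_indices(3)]
    return np.concatenate([A @ X - B, ortho])

def solve_pose_lm(A, B, X0):
    """Levenberg-Marquardt refinement of X = [r11, r12, r13, t1, ..., t3]."""
    return least_squares(residuals, X0, args=(A, B), method="lm").x
```

Note that `method="lm"` requires at least as many residuals as unknowns; with 4 ≤ N ≤ 5 the 8 to 10 linear equations plus the 6 orthogonality residuals satisfy this for the 12 unknowns.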
The steps by which the present invention computes the tracker pose are summarized below:
Step 1. Using the three-dimensional coordinates of a sensor point on the tracker in the rigid body's local coordinate system and its corresponding two-dimensional image coordinates at some base station, obtain 2 effective equations via the process of formulas (8)-(16).
Step 2. Following the method of Step 1, obtain the effective equations for each group consisting of a sensor's three-dimensional point and its corresponding image coordinate point, then assemble all equations into a system of linear equations of the form AX = B according to formula (17).
Step 3. Select the calculation method according to the number of corresponding-point groups N: when N ≥ 6, solve analytically as X = A⁺B; when 4 ≤ N ≤ 5, solve by the optimization method of formula (19).
The present invention computes the three-dimensional pose of the tracker from the perspective of global optimization. Compared with the HTC VIVE system, the current representative of typical methods, the method of the present invention relaxes the conditions for computing the tracker pose while supporting pose data fusion when the number of base stations exceeds 2, and its results are more accurate and robust. Fig. 2 compares the differences between the method of the present invention and the HTC VIVE method during processing.
As can be seen, HTC VIVE uses a calculation method based on a distributed approach: the tracker's pose relative to each base station must be computed separately, and the results are then fused. The calculation method of the present invention, based on the idea of global optimization, does not consider the tracker's pose relative to each individual base station; it uses the corresponding-point information only to construct a system of linear equations, and obtains the tracker's globally optimal pose by solving that system, with no need for data fusion.
As an example, let the number of sensor points captured by the i-th base station on a tracker be pi, i = 1, 2, ..., M, with M the number of base stations (for the HTC VIVE system, M = 2). HTC VIVE can compute the tracker pose only if at least one pi ≥ 5; when p1 ≥ 5 and p2 ≥ 5, it computes the tracker's pose relative to both base stations and must fuse the pose data via formula (6) to obtain the final result. For the present invention, the number of base stations M is unrestricted, and the tracker pose can be computed whenever the sum of all pi is at least 4, which greatly relaxes the conditions of pose calculation. For example, when p1 = 2 and p2 = 2, the HTC VIVE system cannot compute a pose, while the method of the present invention can. Likewise, when p1 = 5 and p2 = 3, the HTC VIVE system can compute only the pose of the tracker relative to base station 1; the pose relative to base station 2 cannot be computed for lack of corresponding points, which wastes the 3 groups of corresponding-point information from base station 2. The method of the present invention, per formulas (16) and (17), uses all corresponding-point information, so its results are more accurate and robust. Table 1 compares the performance of the method of the present invention and the HTC VIVE method.
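The counting argument above reduces to two one-line predicates; the function names `can_solve_htc` and `can_solve_global` are illustrative.

```python
def can_solve_htc(p):
    """HTC VIVE condition: some single base station sees at least 5 points."""
    return any(pi >= 5 for pi in p)

def can_solve_global(p):
    """Patent condition: all base stations together see at least 4 points."""
    return sum(p) >= 4
```

With p = [2, 2] only the global condition holds, matching the example in the text.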
Table 1. Performance comparison of the method of the present invention and the HTC VIVE method
In conclusion, the above are merely preferred embodiments of the present invention and are not intended to limit its scope of protection. Any modification, equivalent replacement, or improvement made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (2)

1. An optical tracker pose calculation method, characterized by comprising:
Step 1: for each sensor, determine the transmitters from which it can receive a signal; treat each pairing of a sensor with one transmitter whose signal it can receive as one transmit-receive combination; traverse all sensors, count all transmit-receive combinations, and denote the count by N;
Step 2: for any transmit-receive combination, let the sensor index be j and the transmitter index be i; determine the three-dimensional coordinates of the j-th sensor in its own rigid-body coordinate system and the two-dimensional image coordinates of the j-th sensor in the i-th transmitter from which it can receive a signal; then establish the effective equation pair relating the corresponding three-dimensional space point and two-dimensional image point of that transmit-receive combination:
where pi1, pi2, pi3 and pi4 are the elements of the projection matrix Pi between the sensor rigid-body coordinate system and the image coordinate system of the i-th transmitter; aij = [0, -1, vij]^T and bij = [1, 0, -uij]^T, where uij and vij are the coordinates of the two-dimensional image point along its two image axes; R denotes the rotation matrix that transforms the sensor rigid-body coordinate system into the transmitter coordinate system, and T denotes the corresponding translation matrix;
Step 3: establish one equation pair of the form of formula (1) for each transmit-receive combination; the N transmit-receive combinations yield N equation pairs, forming a system of linear equations of dimension 2N;
Step 4: rewrite the system of linear equations formed in Step 3 in the following form:
AX=B (2)
where A is a 2N × 12 matrix,
X is a 12 × 1 column vector, X = [r11, r12, r13, t1, r21, r22, r23, t2, r31, r32, r33, t3]^T,
and B is a 2N × 1 column vector;
Step 5: when 4 ≤ N ≤ 5, formula (2) is solved as follows:
extract 9 elements of X to obtain the rotation matrix R, written as:
R=fR(X)
and constrain the rotation matrix R to be a unitary matrix, satisfying RR^-1 = I and R^-1 = R^T, where I is the 3 × 3 identity matrix;
then the linear-system problem of formula (2) is converted into the following optimization problem:
min_X ||AX - B||^2   s.t.   fR(X)fR(X)^T - I = 0
that is: among all X satisfying the constraint fR(X)fR(X)^T - I = 0, the X that minimizes ||AX - B||^2 is the optimal solution, which completes the pose calculation;
when N ≥ 6, formula (2) is solved analytically to obtain X, completing the pose calculation.
2. The optical tracker pose calculation method according to claim 1, characterized in that in said Step 5 the optimization problem is solved using the Levenberg-Marquardt algorithm.
CN201710545644.2A 2017-07-06 2017-07-06 Globally optimal pose calculation method for an optical tracker Active CN107452036B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710545644.2A CN107452036B (en) 2017-07-06 2017-07-06 Globally optimal pose calculation method for an optical tracker


Publications (2)

Publication Number Publication Date
CN107452036A CN107452036A (en) 2017-12-08
CN107452036B true CN107452036B (en) 2019-11-29

Family

ID=60488337

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710545644.2A Active CN107452036B (en) Globally optimal pose calculation method for an optical tracker

Country Status (1)

Country Link
CN (1) CN107452036B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108333579A (en) * 2018-02-08 2018-07-27 Gao Qiang System and method for dense deployment of light-sensing devices based on Vive Lighthouse
CN108765498B (en) 2018-05-30 2019-08-23 百度在线网络技术(北京)有限公司 Monocular vision tracking, device and storage medium
CN109032329B (en) * 2018-05-31 2021-06-29 中国人民解放军军事科学院国防科技创新研究院 Space consistency keeping method for multi-person augmented reality interaction
CN113359987B (en) * 2021-06-03 2023-12-26 煤炭科学技术研究院有限公司 Semi-physical fully-mechanized mining and real-time operating platform based on VR virtual reality

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101750012A (en) * 2008-12-19 2010-06-23 Shenyang Institute of Automation, Chinese Academy of Sciences Device for measuring six-dimensional poses of an object
CN104484523A (en) * 2014-12-12 2015-04-01 Xi'an Jiaotong University Equipment and method for realizing an augmented-reality-guided maintenance system
CN104777700A (en) * 2015-04-01 2015-07-15 Beijing Institute of Technology Multi-projector optimized deployment method realizing high-immersion projection
CN106908764A (en) * 2017-01-13 2017-06-30 Beijing Institute of Technology Multiple-target optical tracking method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9699375B2 (en) * 2013-04-05 2017-07-04 Nokia Technology Oy Method and apparatus for determining camera location information and/or camera pose information according to a global coordinate system


Also Published As

Publication number Publication date
CN107452036A (en) 2017-12-08

Similar Documents

Publication Publication Date Title
RU2698402C1 (en) Method of training a convolutional neural network for image reconstruction and a system for forming an image depth map (versions)
CN107452036B (en) Globally optimal pose calculation method for an optical tracker
CN104376552B (en) A kind of virtual combat method of 3D models and two dimensional image
CN101216304B (en) Systems and methods for object dimension estimation
CN102572505A (en) In-home depth camera calibration
CN107077743A (en) System and method for the dynamic calibration of array camera
EP3818741A1 (en) Method, apparatus and computer program for performing three dimensional radio model construction
Mariottini et al. Planar mirrors for image-based robot localization and 3-D reconstruction
KR20080029080A (en) System for estimating self-position of the mobile robot using monocular zoom-camara and method therefor
Jaegle et al. Fast, robust, continuous monocular egomotion computation
Cvišić et al. Recalibrating the KITTI dataset camera setup for improved odometry accuracy
CN114119739A (en) Binocular vision-based hand key point space coordinate acquisition method
CN108362205A (en) Space ranging method based on fringe projection
Huang et al. 360vo: Visual odometry using a single 360 camera
CN110070578A (en) A kind of winding detection method
Fremont et al. Circular targets for 3d alignment of video and lidar sensors
JP2009186287A (en) Plane parameter estimating device, plane parameter estimating method, and plane parameter estimating program
CN110796699B (en) Optimal view angle selection method and three-dimensional human skeleton detection method for multi-view camera system
Streckel et al. Lens model selection for visual tracking
Wang et al. Perspective 3-D Euclidean reconstruction with varying camera parameters
CN113436264B (en) Pose calculation method and system based on monocular and monocular hybrid positioning
Boittiaux et al. Homography-based loss function for camera pose regression
WO2020124091A1 (en) Automatic fine-grained radio map construction and adaptation
Fleck et al. Graph cut based panoramic 3D modeling and ground truth comparison with a mobile platform–The Wägele
Gao et al. An improved iterative solution to the pnp problem

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Bao Yihua

Inventor after: Weng Dongdong

Inventor after: Li Dong

Inventor after: Hu Xiang

Inventor before: Weng Dongdong

Inventor before: Li Dong

Inventor before: Hu Xiang