CN113240717A - Error modeling position correction method based on three-dimensional target tracking - Google Patents

Error modeling position correction method based on three-dimensional target tracking

Info

Publication number: CN113240717A
Authority: CN (China)
Prior art keywords: dimensional, tracking, camera, error
Legal status: Granted (the legal status is an assumption and is not a legal conclusion)
Application number: CN202110609753.2A
Other languages: Chinese (zh)
Other versions: CN113240717B (en)
Inventors: 李特, 金立, 陈文轩, 王彬, 秦学英, 顾建军
Current Assignee: Zhejiang Lab
Original Assignee: Zhejiang Lab
Application filed by Zhejiang Lab
Priority to CN202110609753.2A
Publication of CN113240717A
Application granted
Publication of CN113240717B
Legal status: Active

Classifications

    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/70 Determining position or orientation of objects or cameras
    • G06T2207/10028 Range image; Depth image; 3D point clouds
    • G06T2207/20081 Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Analysis (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The invention discloses an error modeling position correction method based on three-dimensional target tracking. Based on a three-dimensional tracking result, the error distribution of three-dimensional target tracking is modeled according to the error distribution curve of a specific object under a specific algorithm, and the position information from three-dimensional tracking is then corrected, yielding more accurate position information of the three-dimensional object in space. The method is convenient to implement, efficient, and computationally simple, and the precision of the three-dimensional position information is reliably ensured.

Description

Error modeling position correction method based on three-dimensional target tracking
Technical Field
The invention belongs to the field of computer vision, and particularly relates to an error modeling position correction method based on three-dimensional target tracking.
Background
Three-dimensional object tracking is one of the core techniques of Augmented Reality (AR): it solves the pose of an object in real time by estimating the relative positional relationship between the camera and the three-dimensional object. Three-dimensional object tracking has wide application; for example, it can be used in AR games, in AR navigation of environments such as shopping malls with mobile devices, and in electronic instruction manuals for instrument maintenance, where the steps or devices to be operated are rendered on screen in real time by tracking the instrument.
When a three-dimensional virtual object (e.g., a three-dimensional marker, or 3D Marker) needs to be rendered on the screen in real time during tracking to interact with the user, errors in the tracking algorithm make the recovered motion trajectory of the object unsmooth, so that the 3D Marker jitters on the screen and degrades the user experience.
Fig. 1 shows one frame of a video sequence in which a three-dimensional object is tracked with the RBOT algorithm, containing projection views of the same pose under two camera views. The pose of the three-dimensional object in the camera coordinate system is obtained by tracking; when the three-dimensional object is projected into the corresponding video frame it overlaps well with the object in the video, but a large deviation exists under the other viewing angle, and this deviation lies along the line from the camera center to the center of the three-dimensional object.
Disclosure of Invention
The invention aims to provide an error modeling position correction method based on three-dimensional target tracking aiming at the defects of the prior art.
The purpose of the invention is realized by the following technical scheme: an error modeling position correction method based on three-dimensional target tracking comprises the following steps:
the method comprises the following steps: and counting the distribution condition of the tracking errors on the training subset, averaging the intervals with the most accumulated errors, and substituting the intervals into an error model for tracking the three-dimensional target.
Step two: and carrying out three-dimensional tracking on a video sequence of a three-dimensional object to obtain preliminary three-dimensional position information.
Step three: and correcting the position information obtained in the step two by using an error model.
Further, the error model is: the error of the Euclidean distance from the three-dimensional object to the camera obtained by three-dimensional tracking is proportional to the square of the tracked distance from the three-dimensional object to the camera:

D_gt − D_tr = σ_X · D_tr²

where D_tr is the Euclidean distance from the tracked three-dimensional object to the camera center; D_gt is the true value of that Euclidean distance; σ_X is a scale factor.
Further, in step three, correcting the position information specifically comprises: solving the predicted Euclidean distance D_pr from the three-dimensional object to the camera by the following formula:

D_pr = D_tr + σ_X · D_tr²

Then the position vector obtained by the original three-dimensional target tracking is normalized to a unit vector representing the orientation of the three-dimensional object relative to the camera and multiplied by D_pr, which gives the translation required by the position correction, i.e., the corrected position.
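The patent gives no reference implementation; the following Python sketch illustrates the correction step under the reconstructed error model D_gt − D_tr = σ_X·D_tr². The function name, the example translation vector, and the σ_X value are all hypothetical:

```python
import numpy as np

# Hypothetical scale factor; per the disclosure it would be obtained
# beforehand from simulation experiments for a given object/algorithm.
SIGMA_X = 1e-4

def correct_position(t_tr, sigma_x=SIGMA_X):
    """Correct a tracked translation vector t_tr (object position in the
    camera frame), assuming the error model D_gt - D_tr = sigma_x * D_tr^2.

    1. D_tr = ||t_tr||  (tracked Euclidean distance to the camera center)
    2. D_pr = D_tr + sigma_x * D_tr**2  (predicted true distance)
    3. normalize t_tr to the unit orientation of the object w.r.t. the camera
    4. scale the unit vector by D_pr -> corrected position
    """
    t_tr = np.asarray(t_tr, dtype=float)
    d_tr = np.linalg.norm(t_tr)
    d_pr = d_tr + sigma_x * d_tr ** 2
    return (t_tr / d_tr) * d_pr

# Example: correct a (hypothetical) tracked position, in millimetres.
corrected = correct_position([100.0, 50.0, 800.0])
```

Because only the distance is rescaled, the orientation of the object relative to the camera is preserved, matching the observation that the tracking error lies along the camera-to-object line.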
Further, the scale factor σ_X is obtained by simulation experiments.
Further, in step two, spatial smoothing is performed on the preliminary three-dimensional position information obtained by tracking.
Further, in step three, the corrected position information is converted into its representation in the camera coordinate system.
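Step two only states that spatial smoothing is applied; the scheme is not specified, so the sketch below uses an illustrative moving-average filter over the per-frame positions (the window size and edge handling are assumptions, not part of the disclosure):

```python
import numpy as np

def smooth_trajectory(positions, window=5):
    """Moving-average smoothing of an (N, 3) array of per-frame object
    positions; `window` must be odd so the output keeps length N."""
    assert window % 2 == 1, "use an odd window"
    positions = np.asarray(positions, dtype=float)
    kernel = np.ones(window) / window
    pad = window // 2
    # Repeat the first/last sample at the edges, then filter each coordinate.
    padded = np.pad(positions, ((pad, pad), (0, 0)), mode="edge")
    return np.stack(
        [np.convolve(padded[:, i], kernel, mode="valid") for i in range(3)],
        axis=1,
    )
```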
The invention has the beneficial effects that: based on the three-dimensional tracking result, the method corrects the three-dimensional tracking position information and obtains more accurate position information of the three-dimensional object in space; the implementation is convenient, efficient, and computationally simple, and the precision of the three-dimensional position information is reliably ensured.
Drawings
FIG. 1 is a diagram illustrating the effect of three-dimensional object tracking using RBOT algorithm; wherein, (a) the projections of the actual object and the tracking result in the video frame coincide, and (b) the projections of the actual object and the tracking result under the sight line of the other camera do not coincide;
fig. 2 is a comparison graph of the distance from the three-dimensional object to the center of the camera and the corrected distance obtained by the target tracking algorithm.
Detailed Description
By performing error analysis on the results of three-dimensional tracking, the following is observed: the closer the three-dimensional object is to the camera, the more accurate the three-dimensional target tracking result; and the tracking result always lies on the camera side of the true spatial position of the three-dimensional object, along the line from the true position to the camera center, as shown in fig. 1. Although the motion of the camera in space is continuous, the tracked sequence curve is not smooth. From this, the following conclusions can be drawn: the error of three-dimensional tracking is related to the distance from the three-dimensional object to the camera; the three-dimensional tracking result contains a systematic error; and a fitting process can bring the three-dimensional tracking results closer to the true motion of the three-dimensional object in space. From a large number of experimental observations, the error of the Euclidean distance from the three-dimensional object to the camera obtained by three-dimensional tracking is proportional to the square of the tracked distance from the three-dimensional object to the camera:

D_gt − D_tr = σ_X · D_tr²

where D_tr is the Euclidean distance from the tracked three-dimensional object to the camera center; D_gt is the true value of that Euclidean distance; σ_X is a scale factor.
Therefore, according to the error distribution curve of a specific object under a specific algorithm, the error of three-dimensional tracking can be modeled and compensated, reducing the tracking error. For a given object and algorithm, the value of σ_X is obtained in advance by simulation-experiment statistics; the predicted Euclidean distance D_pr from the three-dimensional object to the camera is then solved by:

D_pr = D_tr + σ_X · D_tr²

The position vector obtained by the original three-dimensional target tracking is then normalized to a unit vector representing the orientation of the three-dimensional object relative to the camera and multiplied by D_pr, giving the translation required by the position correction and thus the corrected position information of the invention. Correcting the translation in the pose estimated by three-dimensional tracking in this way yields a more accurate result.
The error modeling position correction method based on three-dimensional target tracking of the invention specifically comprises the following steps:
Step one: count the distribution of tracking errors on a training subset, average over the intervals where the most errors accumulate, and substitute the result into the error model for three-dimensional target tracking.
Step two: perform three-dimensional tracking on a video sequence of the three-dimensional object to obtain preliminary three-dimensional position information, and apply spatial smoothing.
Step three: correct the position information obtained in step two using the error model, and convert it into its representation in the camera coordinate system.
FIG. 2 shows the result of a compensation experiment using the error model of the invention after three-dimensional target tracking with the RBOT algorithm on a video sequence; the average error of three-dimensional tracking is 4.1093 mm before compensation and 1.0550 mm after compensation.
In an embodiment of the invention, the EDF algorithm and the RBOT algorithm are used for three-dimensional target tracking of a rabbit model, and the tracking results are position-corrected with the method; the accuracy is improved by 15.44% and 21.14%, respectively.

Claims (6)

1. An error modeling position correction method based on three-dimensional target tracking is characterized by comprising the following steps:
the method comprises the following steps: and counting the distribution condition of the tracking errors on the training subset, averaging the intervals with the most accumulated errors, and substituting the intervals into an error model for tracking the three-dimensional target.
Step two: and carrying out three-dimensional tracking on a video sequence of a three-dimensional object to obtain preliminary three-dimensional position information.
Step three: and correcting the position information obtained in the step two by using an error model.
2. The error modeling position correction method based on three-dimensional target tracking according to claim 1, characterized in that the error model is: the error of the Euclidean distance from the three-dimensional object to the camera obtained by three-dimensional tracking is proportional to the square of the tracked distance from the three-dimensional object to the camera:

D_gt − D_tr = σ_X · D_tr²

where D_tr is the Euclidean distance from the tracked three-dimensional object to the camera center; D_gt is the true value of that Euclidean distance; σ_X is a scale factor.
3. The error modeling position correction method based on three-dimensional target tracking according to claim 2, characterized in that in step three, correcting the position information specifically comprises: solving the predicted Euclidean distance D_pr from the three-dimensional object to the camera by the following formula:

D_pr = D_tr + σ_X · D_tr²

then normalizing the position vector obtained by the original three-dimensional target tracking to a unit vector representing the orientation of the three-dimensional object relative to the camera, and multiplying it by D_pr to obtain the translation required by the position correction, i.e., the corrected position.
4. The error modeling position correction method based on three-dimensional target tracking according to claim 2, characterized in that the scale factor σ_X is obtained by simulation experiments.
5. The error modeling position correction method based on three-dimensional target tracking according to claim 1, characterized in that in the second step, a spatial smoothing process is performed on the preliminary three-dimensional position information obtained by tracking.
6. The error modeling position correction method based on three-dimensional target tracking according to claim 1, characterized in that in step three, the corrected position information is converted into a representation under a camera coordinate system.
CN202110609753.2A 2021-06-01 2021-06-01 Error modeling position correction method based on three-dimensional target tracking Active CN113240717B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110609753.2A CN113240717B (en) 2021-06-01 2021-06-01 Error modeling position correction method based on three-dimensional target tracking


Publications (2)

Publication Number Publication Date
CN113240717A 2021-08-10
CN113240717B CN113240717B (en) 2022-12-23

Family

ID=77136271


Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105205824A (en) * 2015-09-25 2015-12-30 北京航空航天大学 Multi-camera global calibration method based on high-precision auxiliary cameras and ball targets
CN109325444A (en) * 2018-09-19 2019-02-12 山东大学 A kind of texture-free three-dimension object Attitude Tracking method of monocular based on 3-D geometric model
WO2019225547A1 (en) * 2018-05-23 2019-11-28 日本電信電話株式会社 Object tracking device, object tracking method, and object tracking program
US20200234452A1 (en) * 2017-08-30 2020-07-23 Mitsubishi Electric Corporation Imaging object tracking system and imaging object tracking method
CN112183355A (en) * 2020-09-28 2021-01-05 北京理工大学 Effluent height detection system and method based on binocular vision and deep learning


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Juan Luo et al.: "Indoor Multi-Floor 3D Target Tracking Based on the Multi-Sensor Fusion", IEEE Access *
尚洋 et al.: "三维目标位姿跟踪与模型修正" [Three-dimensional target pose tracking and model correction], 《测绘学报》 (Acta Geodaetica et Cartographica Sinica) *


Similar Documents

Publication Publication Date Title
CN106558080B (en) Monocular camera external parameter online calibration method
US10438412B2 (en) Techniques to facilitate accurate real and virtual object positioning in displayed scenes
Song et al. Survey on camera calibration technique
CN111160298B (en) Robot and pose estimation method and device thereof
CN111311632B (en) Object pose tracking method, device and equipment
CN108629810B (en) Calibration method and device of binocular camera and terminal
CN111586384B (en) Projection image geometric correction method based on Bessel curved surface
CN110763306B (en) Monocular vision-based liquid level measurement system and method
CN113393524A (en) Target pose estimation method combining deep learning and contour point cloud reconstruction
CN116091625A (en) Binocular vision-based reference mark pose estimation method
CN112381887A (en) Multi-depth camera calibration method, device, equipment and medium
CN112015269A (en) Display correction method and device for head display device and storage medium
CN113240717B (en) Error modeling position correction method based on three-dimensional target tracking
JP6924455B1 (en) Trajectory calculation device, trajectory calculation method, trajectory calculation program
CN108053491A (en) The method that the three-dimensional tracking of planar target and augmented reality are realized under the conditions of dynamic visual angle
CN111915739A (en) Real-time three-dimensional panoramic information interactive information system
CN113643363B (en) Pedestrian positioning and track tracking method based on video image
CN114926542A (en) Mixed reality fixed reference system calibration method based on optical positioning system
Yan et al. A decoupled calibration method for camera intrinsic parameters and distortion coefficients
CN111932628A (en) Pose determination method and device, electronic equipment and storage medium
Ren et al. Self-calibration method of gyroscope and camera in video stabilization
CN113538490B (en) Video stream processing method and device
CN114092526B (en) Augmented reality method and device based on object 3D pose visual tracking
CN117274558B (en) AR navigation method, device and equipment for visual positioning and storage medium
CN109143213B (en) Double-camera long-distance detection method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant