CN110470298B - Robot vision servo pose estimation method based on rolling time domain - Google Patents

Robot vision servo pose estimation method based on rolling time domain Download PDF

Info

Publication number
CN110470298B
CN110470298B CN201910597156.5A CN201910597156A CN110470298B CN 110470298 B CN110470298 B CN 110470298B CN 201910597156 A CN201910597156 A CN 201910597156A CN 110470298 B CN110470298 B CN 110470298B
Authority
CN
China
Prior art keywords
vector
time domain
time
equation
coordinate system
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910597156.5A
Other languages
Chinese (zh)
Other versions
CN110470298A (en
Inventor
俞立
卢威威
刘安东
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201910597156.5A priority Critical patent/CN110470298B/en
Publication of CN110470298A publication Critical patent/CN110470298A/en
Application granted granted Critical
Publication of CN110470298B publication Critical patent/CN110470298B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01CMEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00Navigation; Navigational instruments not provided for in groups G01C1/00 - G01C19/00
    • G01C21/20Instruments for performing navigational calculations

Abstract

A robot vision servo pose estimation method based on a rolling time domain comprises the following steps: 1) performing feature point transformation by using a camera projection model; 2) establishing a discrete time model; 3) defining a cost function according to a discrete time model and a rolling time domain strategy; 4) and ensuring that the cost function reaches the minimum, thereby designing the optimal rolling time domain estimator. The invention provides a robot vision servo pose estimation method based on a rolling time domain, which minimizes a cost function by introducing a rolling time domain objective function and determines a design scheme of an optimal prediction equation.

Description

Robot vision servo pose estimation method based on rolling time domain
Technical Field
The invention relates to a robot vision servo system, in particular to a pose estimation method based on a rolling time domain.
Background
With the development of scientific technology and control technology, computer vision has been widely applied in various fields, wherein the pose estimation problem of a Robot Vision Servo (RVS) system has been receiving wide attention. Pose estimation refers to the use of image information to determine the position and pose of a camera relative to an object coordinate system, which the robotic system can use to perform real-time control of the robot's motion. Aiming at the research of the pose estimation of the robot vision servo system, the theoretical result of the pose estimation of the robot can be enriched, the higher and higher requirements of multiple fields on the pose estimation technology can be met, and the method has practical theoretical and engineering significance.
However, in a practical environment, pose estimation of RVS systems has two main difficulties, respectively the efficiency of pose estimation and its robustness. Meanwhile, noise interference always exists in the robot in the motion process, and the pose estimation problem of the robot is actually a state estimation problem with the noise interference. Currently, kalman filtering methods are mainly applied to solve these difficulties. The existing methods for solving the nonlinear problem are also extensions of the linear system Kalman filtering methods, such as the most commonly used Extended Kalman Filter (EKF), Unscented Kalman Filter (UKF), and the like. Wang et al, in a paper (3D relative position and orientation estimation using Kalman filtering for robot control), propose an Extended Kalman Filtering (EKF) method for the problem of robot pose estimation. Shademan et al in the paper (sensory analysis of EKF and iterative EKF for position-based visual serving) mainly used iterative Kalman filtering (I-EKF) algorithm and compared with Extended Kalman Filtering (EKF) algorithm. Ficocelli et al in the paper (Adaptive filtering for position estimation in visual serving) use the Adaptive Kalman filtering (A-EKF) algorithm to realize the pose estimation of the robot, however, none of the above methods completely solves the problems of efficiency and robustness of RVS pose estimation. Therefore, research on a robot vision servo system pose estimation method based on a rolling time domain is necessary.
Disclosure of Invention
In order to overcome the defect that the prior art cannot solve the problem of estimation of the robot visual servo pose, the invention provides a robot visual servo pose estimation method based on a rolling time domain.
The technical scheme adopted by the invention for solving the technical problems is as follows:
a robot vision servo pose estimation method based on a rolling time domain comprises the following steps:
1) feature point transformation;
defining the relative pose of the object with respect to the camera as W ═ X, Y, Z, φ, α, ψ]TThe coordinate vector of the jth feature point in the camera coordinate system is
Figure BDA0002117954700000021
The coordinate vector of the jth characteristic point in the object coordinate system is
Figure BDA0002117954700000022
The projection coordinate of the jth characteristic point on the image plane is
Figure BDA0002117954700000023
Wherein j is belonged to {1,2, …, 5}, X, Y, Z represent the relative position of the object coordinate system relative to the camera coordinate system, phi, alpha, psi represent the relative postures of the rolling, pitching and yawing parameters, and the relation between the object coordinate system and the camera coordinate system of the j-th characteristic point is
Figure BDA0002117954700000024
Wherein the content of the first and second substances,
Figure BDA0002117954700000025
according to the projection law, the projection coordinates of the characteristic points on the image plane are compared with
Figure BDA0002117954700000026
Has the transformation relation of
Figure BDA0002117954700000028
Wherein, PXAnd PYAre respectively an image plane XiAnd YiPixel spacing on axis, F is focal length;
2) establishing a discrete time model;
for pose estimation, the state vector at time k is defined as a form containing pose and velocity parameters as follows
Figure BDA0002117954700000027
Definition of ykIs the measurement vector at time k, initial state x0Is an unknown constant, ukControl vector, ξ, at time kkSystem noise vector at time k, ηkThe vector of the measurement noise at the time k is obtained, and the discrete time state equation is obtained by the following steps:
xk+1=Axk+Bukk (4)
yk=Cxkk (5)
wherein the content of the first and second substances,
Figure BDA0002117954700000031
in the form of a matrix of states,
Figure BDA0002117954700000032
b is a control input matrix, and B is a control input matrix,
Figure BDA0002117954700000033
is a measurement matrix associated with the feature points,
Figure BDA0002117954700000034
3) defining a cost function;
equation (4) is converted to the following equation based on the rolling time domain estimation:
Figure BDA0002117954700000035
wherein the content of the first and second substances,
Figure BDA0002117954700000036
is a state vector xk-M-1Based on the estimate of the time instant k-1,
Figure BDA0002117954700000037
is composed of
Figure BDA0002117954700000038
M is the rolling time domain window length; the cost function defining equation (6) is as follows
Figure BDA0002117954700000039
Wherein the content of the first and second substances,
Figure BDA00021179547000000310
and
Figure BDA00021179547000000311
is the euclidean norm, μ is a non-negative constant;
4) designing a rolling time domain estimator;
define the following vector
Figure BDA00021179547000000312
Figure BDA00021179547000000313
For a given
Figure BDA0002117954700000041
Finding an optimal estimate
Figure BDA0002117954700000042
Ensuring that the cost function (7) is minimized
min Λk (8)
And satisfy the constraint
Figure BDA0002117954700000043
According to a first-order KKT condition, the formula (7) is derived
Figure BDA0002117954700000044
Further, the optimal estimator obtained by the equation (10) is
Figure BDA0002117954700000045
Incorporating a given prior prediction
Figure BDA0002117954700000046
And an optimal estimator (11) for obtaining a final optimal prediction update equation as follows:
Figure BDA0002117954700000047
the technical conception of the invention is as follows: firstly, a camera projection model is used for carrying out feature point transformation, and system process noise and measurement noise are considered, so that a discrete time model is established; then, introducing and minimizing a cost function to obtain optimal prediction; and finally, combining the given prior prediction to obtain a final optimal prediction updating equation.
The invention has the following beneficial effects: a cost function is introduced and minimized to obtain optimal prediction, so that the state of a discrete time model can be better estimated; by choosing the appropriate free parameter u it is ensured that the rolling horizon estimator performs the estimation even under high noise influence.
Drawings
Fig. 1 is a schematic projection diagram of object feature points on an image plane.
Detailed Description
The invention is further described below with reference to the accompanying drawings.
Referring to fig. 1, a robot vision servo pose estimation method based on a rolling time domain includes the following steps:
1) feature point transformation;
defining the relative pose of the object with respect to the camera as W ═ X, Y, Z, φ, α, ψ]TThe coordinate vector of the jth feature point in the camera coordinate system is
Figure BDA0002117954700000048
The coordinate vector of the jth characteristic point in the object coordinate system is
Figure BDA0002117954700000051
The projection coordinate of the jth characteristic point on the image plane is
Figure BDA0002117954700000052
Wherein j is belonged to {1,2, …, 5}, X, Y, Z represent the relative position of the object coordinate system relative to the camera coordinate system, phi, alpha, psi represent the relative postures of the rolling, pitching and yawing parameters, and the relation between the object coordinate system and the camera coordinate system of the j-th characteristic point is
Figure BDA0002117954700000053
Wherein the content of the first and second substances,
Figure BDA0002117954700000054
according to the projection law, the projection coordinates of the characteristic points on the image plane are compared with
Figure BDA0002117954700000055
Has the transformation relation of
Figure BDA0002117954700000056
Wherein, PXAnd PYAre respectively an image plane XiAnd YiPixel spacing on axis, F is focal length;
2) establishing a discrete time model;
for pose estimation, the state vector at time k is defined as a form containing pose and velocity parameters as follows
Figure BDA0002117954700000057
Definition of ykIs the measurement vector at time k, initial state x0Is an unknown constant, ukControl vector, ξ, at time kkSystem noise vector at time k, ηkThe vector of the measurement noise at the time k is obtained, and the discrete time state equation is obtained by the following steps:
xk+1=Axk+Bukk (4)
yk=Cxkk (5)
wherein the content of the first and second substances,
Figure BDA0002117954700000058
in the form of a matrix of states,
Figure BDA0002117954700000059
b is a control input matrix, and B is a control input matrix,
Figure BDA00021179547000000510
is a measurement matrix associated with the feature points,
Figure BDA00021179547000000511
3) defining a cost function;
equation (4) is converted to the following equation based on the rolling time domain estimation:
Figure BDA0002117954700000061
wherein the content of the first and second substances,
Figure BDA0002117954700000062
is a state vector xk-M-1Based on the estimate of the time instant k-1,
Figure BDA0002117954700000063
is composed of
Figure BDA0002117954700000064
M is the rolling time domain window length; the cost function defining equation (6) is as follows
Figure BDA0002117954700000065
Wherein the content of the first and second substances,
Figure BDA0002117954700000066
and
Figure BDA0002117954700000067
is the euclidean norm, μ is a non-negative constant;
4) designing a rolling time domain estimator;
define the following vector
Figure BDA0002117954700000068
Figure BDA0002117954700000069
For a given
Figure BDA00021179547000000610
Finding an optimal estimate
Figure BDA00021179547000000611
Ensuring that the cost function (7) is minimized
min Λk (8)
And satisfy the constraint
Figure BDA00021179547000000612
According to a first-order KKT condition, the formula (7) is derived
Figure BDA00021179547000000613
Further, the optimal estimator obtained by the equation (10) is
Figure BDA00021179547000000614
Incorporating a given prior prediction
Figure BDA00021179547000000615
And an optimal estimator (11) for obtaining a final optimal prediction update equation as follows:
Figure BDA0002117954700000071

Claims (1)

1. a robot vision servo pose estimation method based on a rolling time domain comprises the following steps:
1) feature point transformation;
defining the relative pose of the object with respect to the camera as W ═ X, Y, Z, φ, α, ψ]TThe coordinate vector of the jth feature point in the camera coordinate system is
Figure FDA0002117954690000011
The coordinate vector of the jth characteristic point in the object coordinate system is
Figure FDA0002117954690000012
The projection coordinate of the jth characteristic point on the image plane is
Figure FDA0002117954690000013
Wherein j is belonged to {1,2, …, 5}, X, Y, Z represent the relative position of the object coordinate system relative to the camera coordinate system, phi, alpha, psi represent the relative postures of the rolling, pitching and yawing parameters, and the relation between the object coordinate system and the camera coordinate system of the j-th characteristic point is
Figure FDA0002117954690000014
Wherein the content of the first and second substances,
Figure FDA0002117954690000015
according to the projection law, the projection coordinates of the characteristic points on the image plane are compared with
Figure FDA0002117954690000016
Has the transformation relation of
Figure FDA0002117954690000017
Wherein, PXAnd PYAre respectively an image plane XiAnd YiPixel spacing on axis, F is focal length;
2) establishing a discrete time model;
for pose estimation, the state vector at time k is defined as a form containing pose and velocity parameters as follows
Figure FDA0002117954690000018
Definition of ykIs the measurement vector at time k, initial state x0Is an unknown constant, ukControl vector, ξ, at time kkSystem noise vector at time k, ηkThe vector of the measurement noise at the time k is obtained, and the discrete time state equation is obtained by the following steps:
xk+1=Axk+Bukk (4)
yk=Cxkk (5)
wherein the content of the first and second substances,
Figure FDA0002117954690000021
in the form of a matrix of states,
Figure FDA0002117954690000022
b is a control input matrix, and B is a control input matrix,
Figure FDA0002117954690000023
is a measurement matrix associated with the feature points,
Figure FDA0002117954690000024
3) defining a cost function;
equation (4) is converted to the following equation based on the rolling time domain estimation:
Figure FDA0002117954690000025
wherein the content of the first and second substances,
Figure FDA0002117954690000026
is a state vector xk-M-1Based on the estimate of the time instant k-1,
Figure FDA0002117954690000027
is composed of
Figure FDA0002117954690000028
M is the rolling time domain window length; the cost function defining equation (6) is as follows
Figure FDA0002117954690000029
Wherein the content of the first and second substances,
Figure FDA00021179546900000210
and
Figure FDA00021179546900000211
is the euclidean norm, μ is a non-negative constant;
4) designing a rolling time domain estimator;
define the following vector
Figure FDA00021179546900000212
Figure FDA00021179546900000213
For a given
Figure FDA00021179546900000214
Finding an optimal estimate
Figure FDA00021179546900000215
Ensuring that the cost function (7) is minimized
minΛk (8)
And satisfy the constraint
Figure FDA00021179546900000216
According to a first-order KKT condition, the formula (7) is derived
Figure FDA00021179546900000217
Further, the optimal estimator obtained by the equation (10) is
Figure FDA0002117954690000031
Incorporating a given prior prediction
Figure FDA0002117954690000032
And an optimal estimator (11) for obtaining a final optimal prediction update equation as follows:
Figure FDA0002117954690000033
CN201910597156.5A 2019-07-04 2019-07-04 Robot vision servo pose estimation method based on rolling time domain Active CN110470298B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910597156.5A CN110470298B (en) 2019-07-04 2019-07-04 Robot vision servo pose estimation method based on rolling time domain

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910597156.5A CN110470298B (en) 2019-07-04 2019-07-04 Robot vision servo pose estimation method based on rolling time domain

Publications (2)

Publication Number Publication Date
CN110470298A CN110470298A (en) 2019-11-19
CN110470298B true CN110470298B (en) 2021-02-26

Family

ID=68506780

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910597156.5A Active CN110470298B (en) 2019-07-04 2019-07-04 Robot vision servo pose estimation method based on rolling time domain

Country Status (1)

Country Link
CN (1) CN110470298B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113822996B (en) * 2021-11-22 2022-02-22 之江实验室 Pose estimation method and device for robot, electronic device and storage medium
CN117506937B (en) * 2024-01-04 2024-03-12 中铁十四局集团大盾构工程有限公司 Weldment autonomous placement method based on multi-stage visual servo control

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106525049A (en) * 2016-11-08 2017-03-22 山东大学 Quadruped robot body posture tracking method based on computer vision
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN109213175A (en) * 2018-10-31 2019-01-15 浙江工业大学 A kind of mobile robot visual servo track tracking prediction control method based on primal-dual neural network
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A kind of SLAM method applied to more camera lens combined type panorama cameras

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9691163B2 (en) * 2013-01-07 2017-06-27 Wexenergy Innovations Llc System and method of measuring distances related to an object utilizing ancillary objects
CN109712172A (en) * 2018-12-28 2019-05-03 哈尔滨工业大学 A kind of pose measuring method of initial pose measurement combining target tracking
CN109495654B (en) * 2018-12-29 2021-08-03 武汉大学 Pedestrian safety sensing method based on smart phone

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106525049A (en) * 2016-11-08 2017-03-22 山东大学 Quadruped robot body posture tracking method based on computer vision
CN108711166A (en) * 2018-04-12 2018-10-26 浙江工业大学 A kind of monocular camera Scale Estimation Method based on quadrotor drone
CN109102525A (en) * 2018-07-19 2018-12-28 浙江工业大学 A kind of mobile robot follow-up control method based on the estimation of adaptive pose
CN109213175A (en) * 2018-10-31 2019-01-15 浙江工业大学 A kind of mobile robot visual servo track tracking prediction control method based on primal-dual neural network
CN109509230A (en) * 2018-11-13 2019-03-22 武汉大学 A kind of SLAM method applied to more camera lens combined type panorama cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
一种新的惯性_视觉组合系统初始对准的时域约束方法;杨东方 等;《仪 器 仪 表 学 报》;20140430;第35卷(第4期);第788-793页 *

Also Published As

Publication number Publication date
CN110470298A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
CN108416840B (en) Three-dimensional scene dense reconstruction method based on monocular camera
CN109102525B (en) Mobile robot following control method based on self-adaptive posture estimation
CN110470298B (en) Robot vision servo pose estimation method based on rolling time domain
CN106371442B (en) A kind of mobile robot control method based on the transformation of tensor product model
Silveira On intensity-based nonmetric visual servoing
CN113532420A (en) Visual inertial odometer method integrating point-line characteristics
CN109443355B (en) Visual-inertial tight coupling combined navigation method based on self-adaptive Gaussian PF
Qiu et al. Concurrent-learning-based visual servo tracking and scene identification of mobile robots
CN112734823A (en) Jacobian matrix depth estimation method based on visual servo of image
CN117340929A (en) Flexible clamping jaw grabbing and disposing device and method based on three-dimensional point cloud data
Kuleshov et al. Machine learning in appearance-based robot self-localization
CN111709095A (en) Method for constructing 6D virtual clamp for complex curved surface
Xu et al. OD-SLAM: Real-time localization and mapping in dynamic environment through multi-sensor fusion
CN108469729B (en) Human body target identification and following method based on RGB-D information
CN115219492B (en) Appearance image acquisition method and device for three-dimensional object
CN109542094B (en) Mobile robot vision stabilization control without desired images
Ji et al. Adaptive correction of landmark for visual homing in mobile vehicles
CN112330698B (en) Improved image segmentation method for geometric active contour
Wu et al. Monocular vision SLAM based on key feature points selection
CN114434441A (en) Mobile robot visual servo tracking control method based on self-adaptive dynamic programming
CN114519813A (en) Mechanical arm target grabbing method and system
CN111523345B (en) Real-time human face tracking system and method
Kiddee et al. A real-time and robust feature detection method using hierarchical strategy and modified Kalman filter for thick plate seam tracking
CN113359626A (en) Finite time hierarchical control method for multi-robot system
LIU et al. Robot intelligence for real world applications

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant