CA1285634C - Telerobotic tracker - Google Patents

Telerobotic tracker

Info

Publication number
CA1285634C
Authority
CA
Canada
Prior art keywords
camera
denote
robot
arm
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
CA000550521A
Other languages
French (fr)
Inventor
Joseph Sze-Chiang Yuan
Richard Anthony Macdonald
Felix How Ngo Keung
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Spar Aerospace Ltd
Original Assignee
Spar Aerospace Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Spar Aerospace Ltd
Application granted
Publication of CA1285634C
Anticipated expiration
Expired - Lifetime

Landscapes

  • Manipulator (AREA)

Abstract

ABSTRACT
A telerobotic system adapted for tracking and handling a moving object comprises a robot manipulator, a video monitor, an image processor, hand controls and a computer. The robot manipulator comprises a movable robotic arm having an effector for handling an object, a drive system for moving the arm in response to arm input signals, sensors for sensing the position of the arm and for generating arm output signals which characterize the dynamic motion behaviour of the arm, and a video camera carried by the arm. The camera responds to motion of the moving object within the field of view of the camera. The video monitor receives an input video signal from the video camera, for displaying an image of the object to a human operator. The image processor is responsive to the output signal of the camera, and is capable of acquiring and pre-processing an image of the object on a frame by frame basis. The hand control is capable of generating a hand control output signal in response to input from a human operator. The computer generates arm input signals and is disposed between the hand control means, the robot manipulator, and the image processor. The computer receives (i) output signals from the image processor, (ii) the arm output signals and (iii) the hand control output signal, and generates arm input signals in response to the received signals whereby the arm tracks the motion of the object.

Description

This invention relates to a tracking system for tracking the position and orientation of moving objects. In particular, this invention relates to a telerobotic tracking system for use in the tracking and handling of moving objects.
PRIOR ART
For many years now, teleoperation using manipulators has been a standard practice in the remote handling of objects in hostile environments. One of the most prominent examples is the space shuttle's remote manipulator system (SRMS) which allows an operator to retrieve free-flying payloads from a control station located in the aft flight deck of the shuttle.
A fundamental requirement for using the SRMS is that the payload must be sufficiently stabilized in its motion relative to the shuttle before it can be grappled by the end effector.
Indeed, this has been demonstrated rather dramatically during the missions involving Solarmax and TDRSS.
Up to now, most of the payloads retrieved by the SRMS have been more or less stationary relative to the base of the manipulator. Hence, the task has been accomplished without much difficulty in most cases. Unfortunately, this mode of teleoperation can present severe difficulties when the payload is in motion and when the operator must perform the usual manipulation tasks and track the motion of the target at the same time. Without a certain degree of automation, the workload imposed on the operator could render such a task completely intractable.
Our telerobotic tracker employs a manipulator equipped with a camera mounted on its end effector, and can autonomously track the motion of the target. The basic concept is that, in the absence of any operator command, the camera image of the target as seen by an operator remains stationary. Working from the video image of the apparently stationary target, the operator can then perform the usual remote manipulation tasks without having to compensate for the motion of the payload.
Apart from uses in space, the tracking robot also has commercial potential in terrestrial environments. These applications range from the retrieval of hovering aircraft from a ship deck to the online inspection of workpieces on a moving conveyor.
Prior to this, a system for detecting target range and orientation has been developed independently by the National Research Council of Canada (NRCC). This system has been disclosed as follows:
[1] H. F. L. Pinkney, C. I. Perratt, V. Kratky, and A. A. Ayad, "On the application of automatic, real-time single camera photogrammetry to derive the relative spatial orientation and position of objects in machine tasks: A conceptual outline and preliminary evaluation", NRCC/NAE Laboratory Technical Report, LTR-ST-1007, August 1981.
[2] H. F. L. Pinkney, "Theory and development of an on-line 30 Hz video photogrammetry system for real-time three-dimensional control", ISP Symposium on Photogrammetry for Industry, Stockholm, August 1978.
[3] V. Kratky, "Analytical study of a photogrammetric solution for real-time three-dimensional control", ISP Symposium on Photogrammetry for Industry, Stockholm, August 1978.
[4] R. C. Hughes, "Enhanced single camera photogrammetry algorithms for real-time control applications", International Society for Photogrammetry and Remote Sensing, Commission V Symposium, Ottawa, June 1986.

However, in contrast to our telerobot, the NRCC system, as described in [1]-[4] above, is designed only for determining target range and orientation and does not make explicit allowance for either autonomous target tracking or man-in-the-loop teleoperation.
The NRCC method is based on a photogrammetry solution of the classical collinearity equations as described in, e.g.,
[5] Wolf, P.R., Elements of Photogrammetry, McGraw-Hill Inc., 1974.

The algorithm is iterative from sample to sample, i.e., the data for each iteration in the algorithm are taken from successively sampled image frames. However, the algorithm does not necessarily produce the correct photogrammetric solution at each sampling interval, and to prove the convergence of the algorithm is by no means an easy task.
In contrast, the photogrammetric solution adopted in our telerobotic system is novel insofar as it is based entirely on linear algebraic techniques and makes no explicit use of the classical collinearity equations. The algorithm is also iterative, but the iteration cycle is carried to convergence within each sampling interval so that the correct photogrammetric solution is generated for each data sample.
In order to speed up the computation, the NRCC solution has been tailored to fit only certain specific target configurations, such as a square point array or a symmetric three-point pattern. Unfortunately, this also means that a predesigned point array pattern must first be installed onto each target.
The solution used in our telerobot is completely general in the sense that it is not constrained by the choice of the feature points on the target. As a result, there is no need to install specially-designed point patterns onto the target.
SUMMARY OF INVENTION
In our telerobot concept, the manipulator, equipped with an end-effector-mounted camera, automatically tracks the motion of the target so that, in the absence of any operator command, the camera image of the latter remains stationary. This then allows the operator to perform remote manipulation tasks on the target without having to compensate for payload motion.
This telerobot concept has been demonstrated in a laboratory environment with a prototype system consisting of commercially-available equipment and specially-designed software.


With the aid of specially-designed control software, our system can autonomously track the motion of a moving object while allowing man-in-the-loop teleoperation. The tracking system monitors the target's position and orientation using the image data collected from a single wrist-mounted camera. A fast-converging iterative photogrammetry algorithm together with a proportional feedback controller maintain real-time tracking of the object in all six degrees of freedom. The prototype uses an image processor, a 32-bit minicomputer, an industrial robot and a video camera, all of which are commercially available. The tracking/control system in the demonstration can operate at a sampling frequency of between 20 and 40 Hz.
According to one aspect of the present invention there is provided a telerobotic system adapted for handling a moving object, said object having at least three feature points of known position, comprising:
(a) a robot manipulator comprising (i) a movable robotic arm, (ii) means for moving said arm in response to arm input signals, (iii) means for sensing the position of said arm and for generating arm output signals, which arm output signals characterize the dynamic motion behaviour of said arm, and (iv) a video camera carried by said arm, said camera being adapted to respond to motion of said moving object within the field of view of said camera;

(b) a video monitor which receives an input video signal from said video camera, for displaying an image to a human operator;
(c) image processing means, responsive to the output signal of said camera, capable of acquiring and pre-processing an image on a frame by frame basis;
(d) hand control means capable of generating a hand control output signal in response to input from a human operator;
(e) computer means disposed between said hand control means, said robot manipulator, and said image processing means, said computer means receiving (i) the output signal from said image processing means, (ii) the output signal from said hand control means, and (iii) said arm output signals and generating arm input signals in response to said input signals to said computer means for tracking the target.
The invention will be more clearly understood after reference to the following detailed specification read in conjunction with the drawings wherein:
Figure 1 is a block diagram illustrating the control system configuration,
Figure 2 is a block diagram illustrating the resolved rate control of target position,
Figure 3 is a block diagram illustrating the resolved rate control of target orientation,
Figure 4 is a diagrammatic illustration of a target tracking system suitable for use in demonstrating the present invention,
Figure 5 is a block diagram illustrating the robot servo systems,
Figure 6 is a block diagram of the computer system,
Figure 7 is a block diagram of the image processor,
Figure 8 is a pictorial view of a target suitable for use in demonstrating the present invention,
Figure 9 is a software block diagram,
Figure 10 is a diagram which illustrates the relationship between the camera, target and image plane, and
Figure 11 is a diagrammatic representation of the robot illustrating the various position vectors.
With reference to Figures 1 and 4 of the drawings, the reference numeral 10 refers generally to the robot manipulator, the reference numeral 12 refers generally to the computer, the reference numeral 14 refers to the image processor, the reference numeral 16 refers generally to the video monitor and the numeral 18 refers generally to the hand controller. The movable target is generally identified by the reference numeral 20.
OVERVIEW OF THE SYSTEM
Figure 1 contains a schematic description of the tracking and handling system, or tracking system. Refer also to Figure 4, which portrays a perspective view of the equipment from which the system may be assembled.
As can be seen, the system includes the following subsystems:
A. Robot manipulator 10.
B. Image processor 14.
C. Computer 12.
D. Video Monitor 16.
E. Hand controller 18.

The target 20 is to be tracked, or tracked and handled, by the system.
A general description of the functions of the various elements in the system now follows.
The robot manipulator 10 includes robot arm 30, joint servos 32, joint sensors 34 and camera 36. The arm 30 moves in accordance with the input signals to the joint servos 32, while the output signals from the joint sensors 34 denote the positions of the individual joints in the arm 30 in relation to each other.
The overall joint actuation and sensing process for a typical robot is illustrated in Figure 5.
The outputs from the joint sensors 34 can be translated via kinematic transformation to relate the position and orientation of the end-effector 31 (i.e., the last link in the arm 30) to the base of the manipulator 10.
The camera 36 is mounted on the end-effector 31 of the arm 30. The image processor 14 acquires the video signal from the camera 36 and preprocesses the images on a frame by frame basis.
The outputs of the image processor 14 comprise the instantaneous locations and rates of motion of selected feature points on the target 20 as observed on the image plane of the camera 36.
The camera 36 also outputs signals directly to the video monitor 16, thus providing the operator with an instantaneous view of the target 20 as seen from the end-effector of the arm 30.
The hand controller 18 allows a human operator to command the motion of the target 20 relative to the end-effector of the arm 30 based on the visual information of the target as perceived on the video monitor 16.
The computer 12 accepts the outputs from the hand controller 18, the image processor 14 and the joint sensors 34 from the robot manipulator 10, and computes the command signals required to drive the joint servos 32. The computer contains software to perform the following control functions:
(a) Kinematic transformation, (b) Photogrammetry, (c) Target motion estimation, and (d) Tracking control.
The detailed algorithms for control functions (b), (c) and (d) will be presented hereinafter. A general description of these functions now follows.
(a) Kinematic Transformation
The mathematical relationships between the position, orientation and velocities of the end-effector of a robot manipulator and the positions and rates of its joints are described by a set of algebraic transformations. These kinematic transformations are derivable for most robot manipulators using standard techniques in robotics and are therefore considered to be well-known to those skilled in the art. Hence, these will not be described here.
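Purely as an illustration of what such a kinematic transformation looks like in code, the following sketch chains homogeneous transforms for a hypothetical planar arm (the geometry and function names are made up for illustration and do not describe any particular robot):

    import numpy as np

    def rot_z(theta):
        # Homogeneous rotation about the z-axis by theta (radians).
        c, s = np.cos(theta), np.sin(theta)
        return np.array([[c, -s, 0.0, 0.0],
                         [s,  c, 0.0, 0.0],
                         [0.0, 0.0, 1.0, 0.0],
                         [0.0, 0.0, 0.0, 1.0]])

    def trans_x(a):
        # Homogeneous translation by a along the x-axis.
        T = np.eye(4)
        T[0, 3] = a
        return T

    def forward_kinematics(joint_angles, link_lengths):
        # Compose joint rotations and link offsets from base to end-effector.
        T = np.eye(4)
        for theta, a in zip(joint_angles, link_lengths):
            T = T @ rot_z(theta) @ trans_x(a)
        return T  # upper-left 3x3: orientation; last column: position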
(b) Photogrammetry
The objective of the photogrammetry algorithm is to estimate the position and orientation (as well as the linear and angular velocities) of the target with respect to the camera. For this a general solution has been developed which yields the photogrammetry information based on a two-dimensional image of the target recorded by the robot-mounted camera.
In the photogrammetry solution, each target is configured by a number of feature (or, control) points whose locations on the target are known a priori. In general, for a numerically robust algorithm, no more than 5 control points are needed; 3 or 4 points will typically suffice. Furthermore, the solution is almost always unique in the case of 4 coplanar control points. Our solution, however, is completely general in the sense that it is applicable to any control-point configuration. Any identifiable feature on the target may be used as a control point provided its location on the target is known a priori. In principle any salient features of an object could be used.
(c) Target Motion Estimation
The photogrammetry solution yields the motion (i.e., position, orientation and rates) of the target relative to the camera. The motion of the camera relative to the base of the robot, on the other hand, is described by the kinematic transformation algorithm. Thus, it is now possible to estimate the motion of the target relative to the base of the robot. This information will be needed in the subsequent control algorithms.
(d) Tracking Control
The idea of the so-called robotically assisted teleoperation concept is to allow the operator to issue commands from a rate-type hand controller based on the camera image of the target (Figure 1). The hand controller inputs therefore consist of commands for target velocities relative to the camera reference frame. Based on the estimated motion of the target, the control algorithm then generates the appropriate commands for the joint servos in the robot.
The resolved rate control algorithms for position and orientation are depicted in Figures 2 and 3, respectively. Within the control algorithm, standard proportional feedback is used.
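As a rough sketch of the resolved-rate idea with proportional feedback (an illustration under simplifying assumptions, not the patent's exact control law; the Jacobian is assumed to be supplied by the kinematic model):

    import numpy as np

    def resolved_rate_step(J, twist_ff, pose_error, Kp):
        # J: 6 x n manipulator Jacobian at the current joint positions.
        # twist_ff: 6-vector feedforward [linear; angular] velocity command.
        # pose_error: 6-vector pose error; Kp: 6 x 6 positive-definite gain.
        v = twist_ff + Kp @ pose_error       # standard proportional feedback
        return np.linalg.pinv(J) @ v         # least-squares joint rate command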
Orientation control is accomplished here with quaternion feedback, which represents a novel approach in robotic control. This concept has been in use for some time in spacecraft attitude control problems, see, e.g.,
[6] Mortensen, R.E., "A Globally Stable Linear Attitude Regulator", Int. J. Control, Vol. 8, No. 3, 1968, pp. 297-302.
[7] Ickes, B.P., "A New Method for Performing Digital Control System Attitude Compensation Using Quaternions", AIAA Journal, Vol. 8, No. 1, January 1970, pp. 13-17.
[8] Wie, B. and Barba, P.M., "Quaternion Feedback for Spacecraft Large Angle Maneuvers", AIAA Journal of Guidance, Control and Dynamics, Vol. 8, No. 3, May-June 1985, pp. 360-365.
A discussion of the mathematics of (1) the photogrammetry algorithm, (2) the target motion estimator and (3) the tracking control algorithm is next presented.

DETAILED DESCRIPTION OF THE COMPUTER ALGORITHMS
The algorithms for (A) photogrammetry, (B) target motion estimation and (C) resolved rate control are presented below.

(A) PHOTOGRAMMETRY ALGORITHMS

Notation
For the purpose of photogrammetric analysis, the camera optics can be represented by a projection centre located at a distance of fe (the effective focal length) behind the image plane as shown in Fig. 10. The two reference frames of interest here are: FC, fixed at the projection centre of the camera, and FT, fixed in the target object, each reference frame being represented by a unit vector triad -- (xC, yC, zC) for FC and (xT, yT, zT) for FT. The xC-axis is taken to be the optical axis of the camera.
Let the position of each image point be denoted by the vector Ii whose coordinates in FC are expressed as

    Ii = [fe  Yi  Zi]T    (2.1)

Here Yi and Zi are the coordinates of the image point measured on the image plane of the camera. (Throughout this specification, a superscript "T" denotes the transpose of a vector or matrix.)
The target reference frame FT may be arbitrarily chosen provided the position of each control point (Pi) in this frame is known. Denote the position of the target by the vector PCT directed from the origin of FC to the base of FT.


In this specification we shall represent the target's orientation by the rotation matrix BCT which has the property that for any vector whose coordinates are expressed as x in FC and y in FT, we have

    x = BCT y
This representation is unique in the sense that there is a one-to-one correspondence between orientation and the numerical values of the elements in BCT. However, the elements in BCT are not all independent since its rows (and columns) must form an orthonormal set. Indeed, it can easily be seen that the columns in BCT are in fact the coordinates of the unit vectors (xT, yT, zT) measured in FC. Likewise, the rows of BCT represent the triad (xC, yC, zC) in FT coordinates.
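These properties are easy to check numerically; a minimal sketch (assuming numpy; the identity matrix stands in for an arbitrary BCT):

    import numpy as np

    B_CT = np.eye(3)                   # placeholder orientation matrix
    x_T, y_T, z_T = B_CT.T             # columns of B_CT: target axes in FC coordinates
    assert np.allclose(B_CT @ B_CT.T, np.eye(3))   # rows and columns orthonormal
    assert np.isclose(np.linalg.det(B_CT), 1.0)    # right-handed rotation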
The photogrammetry problem can now be posed formally as follows.
Given:
fe, the effective focal length of the camera,
Pi, i = 1..n, the coordinates of n control points in FT, and
Ii, i = 1..n, the coordinates of n image points in FC.

Find:
PCT, the position of FT in FC-coordinates, and
BCT, the orientation matrix of FT in FC.


Photogrammetry Algorithm for Position and Orientation
From the geometry depicted in Fig. 10, the position of each of the n control points can be expressed in FC-coordinates as

    βi Ii = PCT + BCT Pi    (i = 1 .. n)    (3.1)

where βi are positive scalars. The classical collinearity equations are obtained by taking the ratio of each of the last two equations of (3.1) to the first one (viz., Yi/fe and Zi/fe). This eliminates the unknowns βi from (3.1) but leaves the equations in a form from which little can be said about the structure of a general solution.
If, instead of eliminating βi, we remove PCT from (3.1) by subtracting, say, the nth equation from the rest, we get

    βi Ii - βn In = BCT (Pi - Pn)    (i = 1 .. n-1)    (3.2)

Assuming these equations can be solved for the unknowns βi and BCT, we can easily recover PCT from the following averaging process

    PCT = (1/n) Σi=1..n (βi Ii - BCT Pi)    (3.3)

Henceforth, we shall direct our attention only to the solution of (3.2).
Define the unknowns in Eq. (3.2) by the vectors

    β := [β1 ... βn]T ;   a := [a1T  a2T  a3T]T    (3.4)

where ai (i = 1..3) represent the rows in the orientation matrix:

           [ a1T ]
    BCT := [ a2T ]    (3.5)
           [ a3T ]

(Here and below, the symbol ":=" denotes a definition.) Equation (3.2) can thus be written in the form

    [ Rx  P  0  0 ]   [ β  ]
    [ Ry  0  P  0 ]   [ a1 ]   =  0    (3.6)
    [ Rz  0  0  P ]   [ a2 ]
                      [ a3 ]

where Rx, Ry and Rz each has a dimension of (n-1) X n, and P is an (n-1) X 3 matrix. These matrices are defined as follows:

    Rx := [ fe            -fe ]
          [    fe         -fe ]    (3.7a)
          [       ...     -fe ]
          [           fe  -fe ]

    Ry := [ Y1               -Yn ]
          [    Y2            -Yn ]    (3.7b)
          [       ...        -Yn ]
          [          Yn-1    -Yn ]

    Rz := [ Z1               -Zn ]
          [    Z2            -Zn ]    (3.7c)
          [       ...        -Zn ]
          [          Zn-1    -Zn ]

    P := [ -(P1 - Pn)T   ]
         [     ...       ]    (3.7d)
         [ -(Pn-1 - Pn)T ]

where Yi and Zi are taken directly from the coordinates of the image vectors Ii defined in (2.1), and Pi are the control point coordinates.
Through Gaussian elimination (see, e.g., Noble, B., Applied Linear Algebra, Prentice-Hall, 1969), we can find a nonsingular matrix E such that E P = Pl (3.8) LO_ i where Pl consists of r linearly independent rows, r being the i rank of P. Premultiplying each of Rx, Ry and Rz by E, we get ; ~RXll ~ Ry11 ~Rz~
~E Rx ~ L I ; E Rv = L I ; E Rz = L ~ 3-9) where the partitioning separates the first r rows of each matrix from the remaining (n-r-l) rows. Equation (3.6) can now be .
written as Rxl Pl Ryl Pl O _ Rz O O Pl a : ~ ~ _ .
~ ~ , ... .

~ . ~,, -.' , . ::

- 3485-~9 SPl -[ ] ~ ~ (3.10a) . _ ~ ~ ~ = [R2] ~ = (3.10b) The algebraic solution to the above equations is ~iven by [ ~ ~ Ker 1RI Pl]; ~ ~ Ker [R2] (3.11) a where Ker [.] denotes the kernel (or, null s~ace) of a matrix.
(The kernel of a matrix B is the set of all linearly independent vectors x such that B x = 0. The number of such vectors which are nonzero defines the dimension of the kernel. We say a kernel is nontrivial if it contains at least one nonzero element.) Assuming the kernels in (3.11) to be nontrivial, we can write the solution to Eq. (3.10) as

    β = [Ker R2] x1    (3.12a)

        [ a1 ]   [ -P1+ Rx1 ]       [ Ker P1   0        0      ]
    a = [ a2 ] = [ -P1+ Ry1 ] β  +  [ 0        Ker P1   0      ] x2    (3.12b)
        [ a3 ]   [ -P1+ Rz1 ]       [ 0        0        Ker P1 ]

where [.]+ denotes matrix pseudoinverse and xi (i = 1, 2) are arbitrary vectors of appropriate dimensions. Since P1 has full row rank, its pseudoinverse is given by

    P1+ = P1T [P1 P1T]-1

where the matrix product [P1 P1T] is nonsingular. Equivalently, we can combine the two solutions of (3.12) into one and write

        [ -P1+ Rx1 [Ker R2]   Ker P1   0        0      ]        [ V1 ]
    a = [ -P1+ Ry1 [Ker R2]   0        Ker P1   0      ] x  =:  [ V2 ] x    (3.13)
        [ -P1+ Rz1 [Ker R2]   0        0        Ker P1 ]        [ V3 ]

where x := [x1T x2T]T and the block rows define the matrices Vi, so that ai = Vi x.
The problem now is to solve Eq. (3.13) for x subject to the following additional constraints on the component vectors of a:
    aiT aj = δij ;   a1 = a2 × a3    (3.14)

where δij is equal to one when i = j and zero otherwise. The cross-product constraint is to enforce a "right-hand rule" rotation sequence for the unit vector triad. In principle, these constraints impose nine quadratic equations on x. In practice, however, it has been found that the orthonormality conditions in (3.14) alone are sufficient to resolve the target orientation in most cases.


The six orthonormality constraints of (3.14) can be formulated as

           [ xT V1T V1 x - 1 ]
           [ xT V2T V2 x - 1 ]
    f(x) = [ xT V3T V3 x - 1 ]  =  0    (3.15)
           [ xT V1T V2 x     ]
           [ xT V2T V3 x     ]
           [ xT V3T V1 x     ]

Since all the functions in (3.15) are twice-differentiable in x, we may use, in principle, any appropriate gradient technique for its solution. Here we shall choose Newton's method (see, e.g., Luenberger, D.G., Optimization by Vector Space Methods, John Wiley and Sons, 1969) because it yields a very fast (in fact, quadratic) rate of convergence as the iteration approaches a solution.
To apply Newton's method, we first linearize Eq. (3.15) about the current iterative point xk as follows

    f(xk + Δxk) ≈ f(xk) + ∇f(xk) Δxk    (3.16)

where Δxk denotes an increment to xk and ∇f is the gradient matrix, or Jacobian, of f. The latter, in this case, is simply given by

             [ 2 xT V1T V1          ]
             [ 2 xT V2T V2          ]
    ∇f(x) =  [ 2 xT V3T V3          ]    (3.17)
             [ xT (V1T V2 + V2T V1) ]
             [ xT (V2T V3 + V3T V2) ]
             [ xT (V3T V1 + V1T V3) ]

The iterative point is then updated according to the following recursive expression

    xk+1 = xk - [∇f(xk)]+ f(xk)    (3.18)

The pseudoinverse in the solution is defined as follows:

    [∇f]+ = [∇fT ∇f]-1 ∇fT ,   dim (x) < 6    (3.19a)
    [∇f]+ = [∇f]-1 ,           dim (x) = 6    (3.19b)
    [∇f]+ = ∇fT [∇f ∇fT]-1 ,   dim (x) > 6    (3.19c)

provided the required matrix inversion exists. Here dim (.) denotes the dimension of a vector. Note that (3.19a) and (3.19c), when used with (3.18), in fact yield the least squares and the minimum-norm solutions, respectively, to the equation f(xk) + ∇f(xk) Δxk = 0.
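A compact sketch of this update in numpy (the callables f and grad_f are assumed to evaluate (3.15) and (3.17) for the Vi matrices at hand; numpy's pinv covers all three cases of (3.19)):

    import numpy as np

    def newton_solve(f, grad_f, x0, tol=1e-10, max_iter=50):
        # Iterate x_{k+1} = x_k - [grad f(x_k)]+ f(x_k), Eq. (3.18), to convergence.
        x = np.array(x0, dtype=float)
        for _ in range(max_iter):
            fx = f(x)
            if np.linalg.norm(fx) < tol:       # convergence test of Step 6
                break
            x = x - np.linalg.pinv(grad_f(x)) @ fx
        return x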
The photogrammetry solution is now complete and can be summarized by the following algorithm:

STEP 1  Construct the matrix P in (3.7d) from the control point coordinates Pi, i = 1..n. Then use Gaussian elimination to find the matrices E and P1 defined in (3.8). Also, compute the pseudoinverse and kernel of P1.

STEP 2  Construct the matrices Rx, Ry and Rz defined in (3.7a-c) from the image vector coordinates Ii, i = 1..n. Premultiply each of these matrices by E to yield the matrices Rxi, Ryi and Rzi (i = 1, 2) defined in (3.9).

STEP 3 Find the kernel of R2 which is defined in (3.10b).

STEP 4 Construct the matrices Vi, i=1..3, defined in (3.13).

STEP 5 Select an initial estimate of the vector xk (k=0).

STEP 6  Evaluate the function f defined in (3.15) at the current iterative point xk. If the norm is within a predefined threshold of zero, then go to Step 8.

STEP 7  Compute the Jacobian ∇f from (3.17). If the required matrix inversion in (3.19) fails due to singularity, then return to Step 6 with an arbitrarily perturbed iterative point. Otherwise, update xk according to (3.18) and return to Step 6.
STEP 8  Compute the vectors a and β from (3.13) and (3.12a), respectively. Assemble BCT as in (3.5) and recover PCT from (3.3).


If at any iterative point the Jacobian is close to being singular, then perturbing the point by an arbitrary amount will almost certainly render the matrix nonsingular again. This explains the rationale behind the precaution taken in Step 7 of the algorithm.
Note that Step 1 needs to be processed only once for any given target and can therefore be done offline. Also, in the case of sequentially-sampled image data (e.g., when tracking a moving target), the above procedure must be carried to convergence over each image frame. In this case, the solution of each iteration cycle may be used as the starting point for the following cycle. This ensures the iterative points will stay sufficiently close to the true solution to yield quadratic convergence.
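In code, this frame-to-frame warm start might be sketched as follows (hypothetical names; solve_frame stands for Steps 1-8 applied to a single image frame and is assumed to be supplied):

    def track_sequence(frames, x0, solve_frame):
        # solve_frame(frame, x) iterates the photogrammetry solution to
        # convergence from the starting point x and returns (x, B_CT, P_CT).
        x, poses = x0, []
        for frame in frames:
            x, B_CT, P_CT = solve_frame(frame, x)   # converge within the sample
            poses.append((B_CT, P_CT))              # x now seeds the next frame
        return poses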

Photogrammetry Algorithm for Velocities
Once the position and orientation of the target frame FT with respect to the camera frame FC are known, their relative velocities can be obtained as follows.
We start by differentiating (3.1) to yield:

    β̇i Ii + βi İi = ṖCT + wCT × BCT Pi    (i = 1 ... n)    (4-1)
where ṖCT and wCT are the linear and angular velocities, respectively, of FT relative to FC. We assume the image velocity vectors are also available as:

    İi = [0  Ẏi  Żi]T ,   i = 1 ... n    (4-2)

If we denote

    qi := BCT Pi ,   i = 1 ... n    (4-3)

and write the cross product of two vectors u = [u1 u2 u3]T and v in matrix notation as

                      [ 0    -u3   u2 ]
    u × v = u× v  :=  [ u3    0   -u1 ] v    (4-4)
                      [ -u2   u1   0  ]

then (4-1) becomes:

    β̇i Ii + βi İi = ṖCT - qi× wCT ,   i = 1 ... n    (4-5)

As before, we delete the term ṖCT by subtracting the nth equation of (4-5) from the rest and get

    β̇i Ii + βi İi - β̇n In - βn İn = -(qi - qn)× wCT ,   i = 1 ... (n-1)    (4-6)

Re-arranging terms, we can write (4-6) as:

    [ Rx  Px ]            [ 0  ]
    [ Ry  Py ] [ β̇  ]  =  [ h1 ]    (4-7)
    [ Rz  Pz ] [ wCT ]    [ h2 ]

where

    β̇ := [β̇1 ... β̇n]T    (4-8)

and the matrices Rx, Ry and Rz are defined as in (3.7). Px, Py and Pz are (n-1) X 3 matrices whose rows are given by

    Px := [ 0   -(qjz - qnz)   (qjy - qny) ]    (j = 1...n-1)    (4-9a)

    Py := [ (qjz - qnz)   0   -(qjx - qnx) ]    (j = 1...n-1)    (4-9b)

    Pz := [ -(qjy - qny)   (qjx - qnx)   0 ]    (j = 1...n-1)    (4-9c)

with qix, qiy and qiz being the components of qi defined in (4-3). h1 and h2 are (n-1)-vectors given by

    h1 = [βn Ẏn - βj Ẏj] ;   h2 = [βn Żn - βj Żj]    (j = 1...n-1)    (4-10)

A general solution to (4-7) is then given by

    [ β̇  ]    [ Rx  Px ]+ [ 0  ]
    [ wCT ] = [ Ry  Py ]  [ h1 ]    (4-11)
              [ Rz  Pz ]  [ h2 ]

providing the pseudoinverse exists.
Once β̇i and wCT have been computed from (4-11), the linear velocity ṖCT follows directly from averaging the equations in (4-5):

    ṖCT = (1/n) Σi=1..n (β̇i Ii + βi İi - wCT × BCT Pi)    (4-15)

and the photogrammetry solution is complete.

TARGET MOTION ESTIMATION AND TRACKING CONTROL ALGORITHM

The following is a description of the algorithms for (B) target motion estimation and (C) tracking control.
Notation
The letter p is used generally to denote a position vector. The letter B is the orientation matrix of one frame of reference expressed in terms of another frame of reference. The letters w and ṗ represent the angular and linear velocity, respectively, of one frame of reference in terms of another. The various reference frames and position vectors are described in Fig. 11.
We define the following for both the target motion estimation and the tracking control algorithms.
Ph, BOh: position & orientation of robot's hand w.r.t. its base.
ṗh, wh: linear & angular velocities of robot's hand w.r.t. base.
PT, BOT: estimated position & orientation of target w.r.t. robot's base.
ṖT, wT: estimated linear & angular velocities of target w.r.t. base.
PC, BOC: position & orientation of camera w.r.t. robot's base.
ṖC, wC: linear & angular velocities of camera w.r.t. base.
PCh, BCh: position and orientation of camera on robot's hand (assumed fixed).
PCT, BCT: estimated position & orientation of target w.r.t. camera.
ṖCT, wCT: estimated linear & angular velocities of target w.r.t. camera.
ṗ*CT, w*CT: commanded (from hand controller) velocities of target relative to camera.
ṗ*h, w*h: commanded outputs to the resolved-rate controller of the robot.
For any vector v with components [x, y, z], define the matrix

    v× = [ 0   -z   y ]
         [ z    0  -x ]
         [ -y   x   0 ]


(B) Target Motion Estimation Algorithm
The photogrammetry algorithm yields the parameters (BCT, PCT, ṖCT, wCT), and the instantaneous values of (Ph, BOh, ṗh, wh) are obtained from the robot's kinematic transformation model. From the latter, together with (PCh, BCh) which describe the camera's position and orientation on the robot, we can calculate (PC, BOC). With this information, we can compute the following estimates of the target's motion:

    PT = PC + BOC PCT
    BOT = BOh BhC BCT
    ṖT = ṗh + wh × (PT - Ph) + BOh BhC ṖCT
    wT = wh + BOh BhC wCT
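These four estimates translate directly into a short routine (a sketch assuming numpy; the argument names follow the notation list above):

    import numpy as np

    def estimate_target_motion(P_c, B_oc, B_oh, B_hc, B_CT,
                               P_h, p_h_dot, w_h, P_CT, p_CT_dot, w_CT):
        # Target position, orientation and velocities w.r.t. the robot base.
        P_T = P_c + B_oc @ P_CT
        B_OT = B_oh @ B_hc @ B_CT
        P_T_dot = p_h_dot + np.cross(w_h, P_T - P_h) + B_oh @ B_hc @ p_CT_dot
        w_T = w_h + B_oh @ B_hc @ w_CT
        return P_T, B_OT, P_T_dot, w_T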

(C) Tracking Control Algorithm
The commands for the resolved-rate controller of the robot are generated by the following equations:

    [ w*h ]        [ wT ]        [ vw ]
    [ ṗ*h ]  =  E1 [ ṖT ]  -  E2 [ vv ]

where

    E1 = [ I             0 ]
         [ (PT - Ph)×    I ]

    E2 = [ BOC               0   ]
         [ (PT - Ph)× BOC    BOC ]

    vv = ṗ*CT - F (PCT - P*CT)

and where I is a unit matrix and F is a positive-definite matrix. P*CT, the desired target position relative to the camera, is obtained by integrating the commanded rate ṗ*CT.
The signal vw is computed as follows:

    vw = w*CT - K (nCT q*CT - n*CT qCT + q×CT q*CT)

where K is a positive-definite matrix and (nCT, qCT, n*CT, q*CT) are Euler parameters generated by integrating the following equations:

    2 ṅCT = -qTCT wCT ;   2 q̇CT = (nCT I + q×CT) wCT
    2 ṅ*CT = -q*TCT w*CT ;   2 q̇*CT = (n*CT I + q*×CT) w*CT

where a superscript "T" denotes matrix or vector transpose and I is a unit matrix.
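An illustrative sketch of the two correction signals (assuming numpy; F and K are the gain matrices, starred arguments are the hand-controller commands, and the Euler-parameter pairs are assumed to be maintained by integrating the rate equations above):

    import numpy as np

    def corrections(P_CT, P_CT_des, p_CT_rate_cmd, w_CT_cmd,
                    n, q, n_des, q_des, F, K):
        # Translational correction: commanded rate plus proportional feedback.
        v_v = p_CT_rate_cmd - F @ (P_CT - P_CT_des)
        # Rotational correction: quaternion (Euler-parameter) error feedback.
        q_err = n * q_des - n_des * q + np.cross(q, q_des)
        v_w = w_CT_cmd - K @ q_err
        return v_v, v_w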
IMPLEMENTATION ASPECTS
Figure 4 illustrates the equipment that comprises an operating laboratory demonstration of the system. Included are an industrial robot, a 32-bit minicomputer, an image processor, a moveable target and a video camera. The features of each piece of equipment will be discussed below, followed by a description of the software used to implement the tracking.
Industrial Robot
The robot is an industrial electric robot (Cincinnati Milacron T3-776) with a payload capacity of 70 kg (suitable for carrying a camera, sensors and a reasonably sized tool or other object). For use in the laboratory as a test bed for developing and evaluating control algorithms, the robot's microprocessor based control computer was replaced with an interface to the I/O subsystem of a 32-bit minicomputer. The use of the minicomputer allows rapid prototyping and debugging of control software and provides enough computing power to execute sophisticated control algorithms programmed in high level languages. The original analog rate servos for joint control and the sensors (resolvers) for joint position measurement were used without modification in the minicomputer based control system. Figure 5 shows a block diagram of the robot after the modifications to the robot electronics were made.
Minicomputer
The computer used is a high performance super-mini (Perkin-Elmer 3250XP) running a real-time operating system (OS/32) suitable for both development and run-time operation. The system features a high bandwidth (64 MB/sec) memory bus capable of supporting the CPU and I/O subsystem simultaneously without performance degradation. This is important when I/O from several devices (vision system and robot interface) must be accommodated while executing software with cycle times of less than 50 ms. The features of the computer system that are important to the tracking application are shown in Figure 6.

Vision System
The vision system is based on an image processor (RCI Trapix) capable of acquiring and preprocessing an image on a frame by frame basis. The system has a dual memory bus architecture which is important for its use in the control system for the robot (discussed in the software section). The image processor has two computing elements: a pipeline processor capable of operating on image data as it is being acquired, and a pixel processor capable of preprocessing image data prior to its transfer to a host computer. In the tracking application, the pipeline processor is used only to control synchronization of the image acquisition with the processes running in the minicomputer, while the pixel processor is used to calculate information about the targets in the image prior to transfer to the host. The details of the image processor that apply to the tracking demonstration are shown in Figure 7.
Moveable Target
The target shown in Figure 8 was fabricated to allow ease of configuration of the control points required for the photogrammetry. The control points are taken to be the tips of illuminated plastic rods which are adjustable in length. Illumination of the rods is selected via switches on the operator console.
Video Camera
The video camera contains a Charge Coupled Device (CCD) sensor. Output from the camera is standard video used by the image processor without preprocessing.


Software
The software in the minicomputer that implements the target tracking is broken into three main tasks (Ref: Figure 9).
An IOTSK provides data acquisition and low level control of the joint rate servos in the robot. A VISION task maintains synchronization with the activities in the image processor and looks after capture and tracking of the individual control points (up to four). A CONTROL task is responsible for executing the control algorithms and the photogrammetry as well as supervising overall operation of the robot control system.
The photogrammetry requires an accurate determination of the position of the control points in the image. This is accomplished by establishing "windows" around the control points and calculating the centroids of the part of the image within the windows. The position of the windows is adjusted each control cycle in order to maintain tracking. For reasonable window sizes (e.g., 33 x 33 pixels) each centroid calculation would take about 10 ms in the super-mini. For multiple control points the overhead required to perform the centroid calculations prevents operation of the control system with the desired 50 ms cycle time. For this reason, software was developed to allow carrying out the required computations in the image processor's pixel processor. This frees the minicomputer for more demanding calculations such as those that comprise the photogrammetry and kinematics.
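A sketch of the windowed centroid computation (assuming numpy; image is a 2-D intensity array and the 33 x 33 window is centred on the predicted control-point location):

    import numpy as np

    def window_centroid(image, cy, cx, half=16):
        # Intensity centroid of a (2*half+1)-square window centred at (cy, cx).
        win = image[cy - half:cy + half + 1,
                    cx - half:cx + half + 1].astype(float)
        ys, xs = np.mgrid[cy - half:cy + half + 1,
                          cx - half:cx + half + 1]
        total = win.sum()
        return (ys * win).sum() / total, (xs * win).sum() / total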
The dual memory bus architecture of the image processor is essential for operation of the real-time tracking system. Since acquisition of a frame of video data takes a relatively long time (33 ms), it is important that the data obtained in a previous cycle can be processed without suspending the image digitization operation. This is accomplished by making use of the independent memory channels in the Trapix, which allows concurrent image acquisition by the pipeline processor and processing by the pixel processor.
Approximate timing for the tracking demonstration is summarized as follows (expressed as % of the 50 ms cycle):

    Function                          Minicomputer    Image Processor
    Servo control                     20%             --
    Control laws                      10%             --
    Photogrammetry                    20%             --
    Centroid calc. (4 control pts)    --              80%

It should be noted that functions in the minicomputer and the image processor are carried out concurrently.
Experimental Results
Since the moveable target is not instrumented, a quantitative evaluation of the tracking was not possible. In order to get an estimate of tracking performance, however, the target was independently moved through all six degrees of freedom while an observer watched the video monitor displaying the camera image. The motion of the target that was most demanding on the tracking system performance was found to be pure yaw (rotation about its vertical axis). In a typical situation of this type (T3 about 2 meters from target, target rotating at 5°/sec) the deviation in yaw of the target with respect to the robot end point was a maximum of 5°. This is with the feedback gains adjusted to provide well-damped transient response. In all other translational and rotational combinations tracking was much closer.
With this level of performance the system is certainly usable for operations like close range grappling of a moving target.
The concept of robotically-assisted teleoperation is a significant one insofar as the future role of teleoperation is concerned. Using only commercially available equipment, we have demonstrated here a method of vision-aided remote handling of objects in motion. This autonomous capability will have immediate applications in the repair and servicing of satellites in space as well as in terrestrial applications such as the manipulation of work pieces on moving conveyors.


Claims (10)

1. A telerobotic system adapted for handling a moving object, said object having at least three points of known position, comprising:
(a) a robot manipulator comprising (i) a movable robotic arm having an effector for handling an object, (ii) means for moving said arm in response to arm input signals, (iii) means for sensing the position of said arm and for generating arm output signals, which arm output signals characterize the dynamic motion behaviour of said arm, and (iv) a video camera carried by said arm, said camera being adapted to respond in real time to real time motion of said moving object within the field of view of said camera;
(b) a video monitor for receiving an input video signal from said video camera and for displaying an image of the object to a human operator;
(c) image processing means, responsive to the output signal of said camera, for acquiring and pre-processing an image of the object on a frame by frame basis;
(d) hand control means for generating a hand control output signal in response to input from a human operator;
(e) computer means for generating arm input signals disposed between said hand control means, said robot manipulator, and said image processing means, said computer means receiving (i) output signals from said image processing means and (ii) said arm output signals and (iii) said hand control output signal and generating arm input signals in response to said received signals whereby said arm tracks the real time motion of the object.
2. The telerobotic system of claim 1 wherein said computer means comprises:
(a) computer kinematic transformation means for generating an output signal indicative of the position and orientation of said camera relative to said robot manipulator in response to said arm output signals;
(b) computer photogrammetry means responsive to the output signal from said image processing means for generating an output signal indicative of the motion characteristics of said moving object relative to said camera, the moving object being within the field of view of said camera;
(c) target motion estimation means for receiving the output signal from said kinematic transformation means and said computer photogrammetry means and for producing an estimate of the motion characteristics of said moving object relative to said robot manipulator;
(d) control algorithm means which receive the output signals from said computer photogrammetry means, said target motion estimator means and said hand control means and for producing arm input signals for said means for moving said arm in response to such signals.
3. A telerobotic system as claimed in claim 1 wherein said computer means comprises means for solving the following tracking control algorithm where:

    E1 = [ I             0 ]
         [ (PT - Ph)×    I ] ;

    E2 = [ BOC               0   ]
         [ (PT - Ph)× BOC    BOC ] ;

    vv = ṗ*CT - F (PCT - P*CT) ;

and where I is a unit matrix, F is a positive-definite matrix; and P*CT, the desired target position relative to the camera, is obtained by integrating the commanded rate ṗ*CT;
and wherein the signal vw is computed as follows:

    vw = w*CT - K (nCT q*CT - n*CT qCT + q×CT q*CT)

where K is a positive-definite matrix and (nCT, qCT, n*CT, q*CT) are Euler parameters generated by integrating the following equations:

    2 ṅCT = -qTCT wCT ;   2 q̇CT = (nCT I + q×CT) wCT

    2 ṅ*CT = -q*TCT w*CT ;   2 q̇*CT = (n*CT I + q*×CT) w*CT

where a superscript "T" denotes matrix or vector transpose and I is a unit matrix; and wherein:
the letter p is used generally to denote a position vector; and the letter B denotes the orientation matrix of one frame of reference expressed in terms of another frame of reference; and the letters w and ṗ represent the angular and linear velocity, respectively, of one frame of reference in terms of another frame of reference, wherein:
ph, BOh denote position and orientation of the robot's hand with respect to its base;
ṗh, wh denote linear and angular velocities of the robot's hand with respect to its base;
PT, BOT denote estimated position and orientation of a target with respect to the base of the robot;
ṖT, wT denote estimated linear and angular velocities of a target with respect to the base of the robot;
PC, BOC denote position and orientation of said camera with respect to the base of the robot;
ṖC, wC denote linear and angular velocities of said camera with respect to the base of the robot;
PCh, BCh denote position and orientation of said camera on the robot's hand (assumed to be fixed);
PCT, BCT denote estimated position and orientation of a target with respect to said camera;
ṖCT, wCT denote estimated linear and angular velocities of a target with respect to said camera;
ṗ*CT, w*CT denote commanded (from hand controller) velocities of target relative to said camera;
ṗ*h, w*h denote commanded outputs to the resolved-rate controller of said robot; and for any vector v with components [x, y, z], define the matrix

    v× = [ 0   -z   y ]
         [ z    0  -x ]
         [ -y   x   0 ]
4. A telerobotic system as claimed in claim 1 wherein said computer means comprises means for solving the following photogrammetry algorithm comprising the steps of:
(a) constructing matrix P from the control point coordinates Pi, determining matrices E and P1 using Gaussian elimination, and computing the pseudoinverse and kernel of P1;
(b) calculating matrices Rxi, Ryi, Rzi (i=1,2);

(c) finding the kernel of R2;
(d) constructing matrices Vi, (i=1,..3);
(e) selecting an initial estimate of vector xk (k=0);
(f) evaluating function f at the current iterative point xk and determining the norm of f;
(g) comparing the norm of f with a predefined threshold of zero;
(h) if the norm of f is within a predefined threshold of zero, computing vectors α and β, assembling BCT and recovering PCT;
(i) if the norm of f is outside a predefined threshold of zero, computing the Jacobian ∇f and updating xk, and returning to step (f) above;
(j) if ∇f cannot be computed because of a singularity, returning to step (f) with an arbitrarily perturbed iterative point xk+1.
5. A telerobotic system as claimed in claim 4 wherein said computer means comprises means for solving the following tracking control algorithm where:

    E1 = [ I             0 ]
         [ (PT - Ph)×    I ] ;

    E2 = [ BOC               0   ]
         [ (PT - Ph)× BOC    BOC ] ;

    vv = ṗ*CT - F (PCT - P*CT) ;

and where I is a unit matrix, F is a positive-definite matrix; and P*CT, the desired target position relative to the camera, is obtained by integrating the commanded rate ṗ*CT;
and wherein the signal vw is computed as follows:

    vw = w*CT - K (nCT q*CT - n*CT qCT + q×CT q*CT)

where K is a positive-definite matrix and (nCT, qCT, n*CT, q*CT) are Euler parameters generated by integrating the following equations:

    2 ṅCT = -qTCT wCT ;   2 q̇CT = (nCT I + q×CT) wCT

    2 ṅ*CT = -q*TCT w*CT ;   2 q̇*CT = (n*CT I + q*×CT) w*CT

where a superscript "T" denotes matrix or vector transpose and I is a unit matrix; and wherein:
the letter p is used generally to denote a position vector; and the letter B denotes the orientation matrix of one frame of reference expressed in terms of another frame of reference; and the letters w and ṗ represent the angular and linear velocity, respectively, of one frame of reference in terms of another frame of reference, wherein:
ph, BOh denote position and orientation of the robot's hand with respect to its base;
ṗh, wh denote linear and angular velocities of the robot's hand with respect to its base;
PT, BOT denote estimated position and orientation of a target with respect to the base of the robot;
ṖT, wT denote estimated linear and angular velocities of a target with respect to the base of the robot;
PC, BOC denote position and orientation of said camera with respect to the base of the robot;
ṖC, wC denote linear and angular velocities of said camera with respect to the base of the robot;
PCh, BCh denote position and orientation of said camera on the robot's hand (assumed to be fixed);
PCT, BCT denote estimated position and orientation of a target with respect to said camera;
ṖCT, wCT denote estimated linear and angular velocities of a target with respect to said camera;
ṗ*CT, w*CT denote commanded (from hand controller) velocities of target relative to said camera;
ṗ*h, w*h denote commanded outputs to the resolved-rate controller of said robot; and for any vector v with components [x, y, z], define the matrix

    v× = [ 0   -z   y ]
         [ z    0  -x ]
         [ -y   x   0 ]
6. A telerobotic tracking system for tracking the movement of a moving object, said object having at least three points of known position, comprising:
(a) a robot manipulator comprising (i) a movable robotic arm, (ii) means for moving said arm in response to arm input signals, (iii) means for sensing the position of said arm and for generating arm output signals, which arm output signals characterize the dynamic motion behaviour of said arm, and (iv) a video camera carried by said arm, said camera being adapted to respond in real time to real time motion of said moving object within the field of view of said camera;
(b) a video monitor which receives an input video signal from said video camera, for displaying an image to a human operator;
(c) image processing means, responsive to the output signal of said camera, capable of acquiring and pre-processing an image on a frame by frame basis;
(d) computer means for receiving (i) the output signal from said image processing means and (ii) said arm output signals and generating arm input signals in response to said input signals to said computer means.
7. The telerobotic system of claim 6 wherein said computer means comprises:
(a) computer kinematic transformation means, responsive to said arm output signals, for generating an output signal indicative of the position and orientation of said camera relative to said robot manipulator;
(b) computer photogrammetry means responsive to the output signal from said image processing means for generating an output signal indicative of the motion characteristics of said moving object relative to said camera, the moving object being within the field of view of said camera;
(c) target motion estimation means for receiving the output signal from said kinematic transformation means and said computer photogrammetry means and for producing an estimate of the motion characteristics of said moving object relative to said robot manipulator;
(d) control algorithm means which receive the output signals from said computer photogrammetry means and said target motion estimation means, and which produce arm input signals for said means for moving said arm in response to such signals.
8. A telerobotic system as claimed in claim 6 wherein said computer means comprises means for solving the following tracking control algorithm where:

    E1 = [ I             0 ]
         [ (PT - Ph)×    I ] ;

    E2 = [ BOC               0   ]
         [ (PT - Ph)× BOC    BOC ] ;

    vv = ṗ*CT - F (PCT - P*CT) ;

and where I is a unit matrix, F is a positive-definite matrix; and P*CT, the desired target position relative to the camera, is obtained by integrating the commanded rate ṗ*CT;
and wherein the signal vw is computed as follows:

    vw = w*CT - K (nCT q*CT - n*CT qCT + q×CT q*CT)

where K is a positive-definite matrix and (nCT, qCT, n*CT, q*CT) are Euler parameters generated by integrating the following equations:

    2 ṅCT = -qTCT wCT ;   2 q̇CT = (nCT I + q×CT) wCT

    2 ṅ*CT = -q*TCT w*CT ;   2 q̇*CT = (n*CT I + q*×CT) w*CT

where a superscript "T" denotes matrix or vector transpose and I is a unit matrix; and wherein:
the letter p is used generally to denote a position vector; and the letter B denotes the orientation matrix of one frame of reference expressed in terms of another frame of reference; and the letters w and ṗ represent the angular and linear velocity, respectively, of one frame of reference in terms of another frame of reference, wherein:
ph, BOh denote position and orientation of the robot's hand with respect to its base;
ṗh, wh denote linear and angular velocities of the robot's hand with respect to its base;
PT, BOT denote estimated position and orientation of a target with respect to the base of the robot;
ṖT, wT denote estimated linear and angular velocities of a target with respect to the base of the robot;
PC, BOC denote position and orientation of said camera with respect to the base of the robot;
ṖC, wC denote linear and angular velocities of said camera with respect to the base of the robot;
PCh, BCh denote position and orientation of said camera on the robot's hand (assumed to be fixed);
PCT, BCT denote estimated position and orientation of a target with respect to said camera;
ṖCT, wCT denote estimated linear and angular velocities of a target with respect to said camera;
ṗ*CT, w*CT denote commanded (from hand controller) velocities of target relative to said camera;
ṗ*h, w*h denote commanded outputs to the resolved-rate controller of said robot; and for any vector v with components [x, y, z], define the matrix

    v× = [ 0   -z   y ]
         [ z    0  -x ]
         [ -y   x   0 ]
9. A telerobotic system as claimed in claim 6 wherein said computer means comprises means for solving the following photogrammetry algorithm comprising the steps of:
(a) constructing matrix P from the control point coordinates Pi, determining matrices E and P1 using Gaussian elimination, and computing the pseudoinverse and kernel of P1;
(b) calculating matrices Rxi, Ryi, Rzi (i=1,2);

(c) finding the kernel of R2;
(d) constructing matrices Vi, (i=1,..3);
(e) selecting an initial estimate of vector xk (k=0);
(f) evaluating function f at the current iterative point xk and determining the norm of f;
(g) comparing the norm of f with a predefined threshold of zero;
(h) if the norm of f is within a predefined threshold of zero, computing vectors α and β, assembling BCT and recovering PCT;
(i) if the norm of f is outside a predefined threshold of zero, computing the Jacobian ∇f and updating xk, and returning to step (f) above;
(j) if ∇f cannot be computed because of a singularity, returning to step (f) with an arbitrarily perturbed iterative point xk+1.
10. A telerobotic system as claimed in claim 9 wherein said computer means comprises means for solving the following tracking control algorithm where:

    E1 = [ I             0 ]
         [ (PT - Ph)×    I ] ;

    E2 = [ BOC               0   ]
         [ (PT - Ph)× BOC    BOC ] ;

    vv = ṗ*CT - F (PCT - P*CT) ;

and where I is a unit matrix, F is a positive-definite matrix; and P*CT, the desired target position relative to the camera, is obtained by integrating the commanded rate ṗ*CT;
and wherein the signal vw is computed as follows:

    vw = w*CT - K (nCT q*CT - n*CT qCT + q×CT q*CT)

where K is a positive-definite matrix and (nCT, qCT, n*CT, q*CT) are Euler parameters generated by integrating the following equations:

    2 ṅCT = -qTCT wCT ;   2 q̇CT = (nCT I + q×CT) wCT

    2 ṅ*CT = -q*TCT w*CT ;   2 q̇*CT = (n*CT I + q*×CT) w*CT

where a superscript "T" denotes matrix or vector transpose and I is a unit matrix; and wherein:
the letter p is used generally to denote a position vector; and the letter B denotes the orientation matrix of one frame of reference expressed in terms of another frame of reference; and the letters w and ṗ represent the angular and linear velocity, respectively, of one frame of reference in terms of another frame of reference, wherein:
ph, BOh denote position and orientation of the robot's hand with respect to its base;
ṗh, wh denote linear and angular velocities of the robot's hand with respect to its base;
PT, BOT denote estimated position and orientation of a target with respect to the base of the robot;
ṖT, wT denote estimated linear and angular velocities of a target with respect to the base of the robot;
PC, BOC denote position and orientation of said camera with respect to the base of the robot;
ṖC, wC denote linear and angular velocities of said camera with respect to the base of the robot;
PCh, BCh denote position and orientation of said camera on the robot's hand (assumed to be fixed);
PCT, BCT denote estimated position and orientation of a target with respect to said camera;
ṖCT, wCT denote estimated linear and angular velocities of a target with respect to said camera;
ṗ*CT, w*CT denote commanded (from hand controller) velocities of target relative to said camera;
ṗ*h, w*h denote commanded outputs to the resolved-rate controller of said robot; and for any vector v with components [x, y, z], define the matrix

    v× = [ 0   -z   y ]
         [ z    0  -x ]
         [ -y   x   0 ]
CA000550521A 1986-11-07 1987-10-28 Telerobotic tracker Expired - Lifetime CA1285634C (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US92787586A 1986-11-07 1986-11-07
US06/927,875 1986-11-07

Publications (1)

Publication Number Publication Date
CA1285634C true CA1285634C (en) 1991-07-02

Family

ID=25455391

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000550521A Expired - Lifetime CA1285634C (en) 1986-11-07 1987-10-28 Telerobotic tracker

Country Status (1)

Country Link
CA (1) CA1285634C (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110587569A (en) * 2019-09-09 2019-12-20 西安天圆光电科技有限公司 Cooperative teleoperation control method for double-arm robot


Similar Documents

Publication Publication Date Title
US4942538A (en) Telerobotic tracker
Jain et al. An analysis of the kinematics and dynamics of underactuated manipulators
Wilson et al. Relative end-effector control using cartesian position based visual servoing
Ozawa et al. Dynamic visual servoing with image moments for a quadrotor using a virtual spring approach
Siradjuddin et al. A position based visual tracking system for a 7 DOF robot manipulator using a Kinect camera
Caccavale et al. Quaternion-based kinematic control of redundant spacecraft/manipulator systems
Martinet et al. Robot control using monocular pose estimation
EP0323681B1 (en) Telerobotic tracker
Rybus et al. Manipulator trajectories during orbital servicing mission: numerical simulations and experiments on microgravity simulator
Peng et al. Modeling and analysis of the multiple dynamic coupling effects of a dual-arm space robotic system
Conticelli et al. Discrete-time robot visual feedback in 3D positioning tasks with depth adaptation
Rybus et al. Application of bidirectional rapidly exploring random trees (BiRRT) algorithm for collision-free trajectory planning of free-floating space manipulator
Xia et al. Hybrid force/position control of industrial robotic manipulator based on Kalman filter
CA1285634C (en) Telerobotic tracker
Shin et al. Dynamic control with adaptive identification for free-flying space robots in joint space
Maruyama et al. Robust control for planar manipulators with image feature parameter potential
Jin et al. Reaction torque control of redundant free-floating space robot
Jenkins Telerobotic work system-space robotics application
Koningstein et al. Experiments with model-simplified computed-torque manipulator controllers for free-flying robots
Zhao et al. Robust image-based control for spacecraft uncooperative rendezvous and synchronization using a zooming camera
CN107967241B (en) Base disturbance calculation method of space free floating robot
Mukerjee Adaptation In Biological Sensory-Motor Systems: A Model For Robotic Control.
Korpela et al. A hardware-in-the-loop test rig for aerial manipulation
Yang et al. Application Technology of Space Manipulator
Dyba et al. Active 6 DoF force/torque control based on Dynamic Jacobian for free-floating space manipulator

Legal Events

Date Code Title Description
MKLA Lapsed