CA1073111A - Method and apparatus for calibrating mechanical-visual part manipulating system - Google Patents

Method and apparatus for calibrating mechanical-visual part manipulating system

Info

Publication number
CA1073111A
CA1073111A (application CA261,838A)
Authority
CA
Canada
Prior art keywords
coordinate
coordinate system
angle
vector
manipulator
Prior art date
Legal status
Expired
Application number
CA261,838A
Other languages
French (fr)
Inventor
Carl F. Ruoff (Jr.)
Current Assignee
Bendix Corp
Original Assignee
Bendix Corp
Priority date
Filing date
Publication date
Priority claimed from US05/636,069 (US3986007A)
Application filed by Bendix Corp
Application granted
Publication of CA1073111A

Classifications

    • BPERFORMING OPERATIONS; TRANSPORTING
    • B25HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25JMANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J19/00Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J19/02Sensing devices
    • B25J19/021Optical sensing devices
    • B25J19/023Optical sensing devices including video camera means
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • G01B11/022Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness by means of tv-camera scanning

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Numerical Control (AREA)
  • Manipulator (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)
  • Image Processing (AREA)
  • Bending Of Plates, Rods, And Pipes (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Abstract of the Disclosure An automaton includes a work part supporting and manipulating system having sensors which provide signals indicative of the position of the work holder in a first coordinate system and a television system which can observe a part and generate signals representative of the position of the part in the television system's coordinate system. A subsystem for generating conversion factors used to translate positions in the vision system coordinates into the manipulating system coordinates receives the output of both the manipulator sensors and the vision system while the manipulator moves a target through a series of points along a line. The subsystem determines the angle of rotation between the two coordinate systems and the distance from the origin of one system to the origin of the other system through calculations involving lines drawn through the points in the vision system and manipulating system coordinates. It also calculates a length scaling factor representing the ratio between an observed distance in the vision system and in the manipulating system.

Description


Background of the Invention

I. Field of the Invention

This invention relates to workpiece manipulating systems having the ability to observe a workpiece with a television subsystem, and more particularly to a method of and apparatus for generating transformation factors for translating measurements in the vision coordinate system to equivalent measurements in the manipulator coordinate system.

II. Description of the Prior Art

A variety of commercially available workpiece manipulators exist which are capable of grasping a part and moving it through 3-dimensional space by means of coordinated motions along separately controlled axes. It has recently been proposed that such systems be equipped with optical-electrical image converter systems and means for analyzing the output of the image converter to determine the outline and location of a part within a field of motion of the manipulator. Outputs of this vision system may be used to control the manipulator and to inspect and identify parts to be manipulated.

Since the measurements of position and distance made by the vision system will inherently differ from equivalent measurements made by the manipulator sensors because different measurement scales and locations are used in the two systems, it is desirable to provide means for converting signals representative of positions and distances as generated by the vision system into signals representative of the same measurements in the manipulating system coordinates. While the necessary conversion factors might be determined on an a priori basis from a knowledge of the parameters of the two measurement systems, in practice it is found that both systems exhibit some instability over a period of time, and this drift reduces the accuracy of such calculated transformation parameters. Moreover, it may be desirable to separately support the vision and manipulation systems so that the two do not share incidental motions caused by support vibrations and the like. Also, it may be desirable to reposition the vision system relative to the manipulation circuitry from time to time, which would require a recalculation of the conversion factors.

Summary of the Present Invention

The present invention is addressed to a method of determining the constants of transformation for converting vision system measurements into the coordinates of the manipulating system, so that these signals may be used in controlling a manipulating system, and to a system for practicing the method. The method does not require a knowledge of the characteristics of the two measurement systems or depend upon the accuracy of this knowledge. It is capable of generating conversion factors based on the instantaneous characteristics of the two systems and their positions relative to one another at the time of practice of the method. Moreover, the method is sufficiently simple that it can be performed on a regular basis so that inaccuracies due to drift of the systems are minimized.

The present invention is used in a workpiece manipulating system having a manipulator with jaw means for engaging a workpiece on a work area, the manipulator having sensor means for indicating the coordinate position of the jaw means in a first coordinate system, and an optical image converter positioned to view an image on the work area and generate a signal in response thereto, with analyzing means operatively coupled with the converter for computing the coordinate position of the centroid of the image in a second coordinate system. The invention relates to a method for translating coordinate positions from the second coordinate system to coordinate positions in the first coordinate system comprising the steps of: placing a target in the jaw means, moving the jaw means and target between a predetermined number of points across the work area, determining the coordinate position of the target in both of the coordinate systems at each of the points, computing transformation factors for translating any coordinate position on the work area in the second coordinate system to the same coordinate position on the work area in the first coordinate system, and storing the transformation factors.
That is, the method of the present invention involves supporting a target, of such nature that its position is readily determinable with a high degree of accuracy by the vision system, in the work holder of the manipulating system. The manipulating system is then controlled to move the target through a series of positions which lie in a straight line relative to the manipulating system coordinates. The outputs of the manipulating system sensors which define the position of the work holding fixture are recorded at each of the positions. Likewise, the vision system is controlled to generate the coordinates of the center of the target at each of the points in its own coordinate system. The two resulting series of data are then used to compute equations of lines passing through the points on a least squares basis. The angle between these two lines is effectively the angle between the two coordinate systems, the X-Y planes of which are assumed parallel. Next, a linear scale factor between the two systems is derived by calculating the distance between the beginning and end point of the position sequence in each coordinate system. The ratio of these distances will later be used to convert the length of a line in the vision system to the equivalent length of a line in the manipulator system coordinates. A translation vector extending from the origin of the manipulator coordinate system to the origin of the vision coordinate system is then calculated by rotating a vector extending from the vision system origin to the end point of the line in the vision system through the previously calculated angle between the systems; multiplying that vector by the scale ratio; and subtracting that vector from the vector extending from the manipulator system origin to the same point.

To convert a vector in vision system coordinates to an equivalent vector in the manipulator system coordinates, the vision vector is first multiplied by the length ratio, then rotated by the angle between the coordinate systems, and then summed with the vector between the origins of the two systems to derive the equivalent manipulation system vector.
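The scale-rotate-translate conversion just described can be sketched as follows. This is an illustrative reconstruction, not the patent's stored program; the names `f_l`, `theta_t`, and `t_vec` are assumed stand-ins for the length ratio, rotation angle, and origin-to-origin translation vector.

```python
import math

def vision_to_manipulator(p_vision, f_l, theta_t, t_vec):
    """Convert a point from vision coordinates to manipulator coordinates.

    p_vision : (x, y) point in the vision coordinate system
    f_l      : length scale factor (manipulator units per vision unit)
    theta_t  : rotation angle between the two coordinate systems (radians)
    t_vec    : translation between the two origins, in manipulator coordinates
    """
    # 1. Multiply the vision vector by the length ratio.
    x = p_vision[0] * f_l
    y = p_vision[1] * f_l
    # 2. Rotate it through the angle between the coordinate systems,
    #    using the rotation convention the description gives later
    #    (XR = X cos(a) + Y sin(a); YR = -X sin(a) + Y cos(a)).
    xr = x * math.cos(theta_t) + y * math.sin(theta_t)
    yr = -x * math.sin(theta_t) + y * math.cos(theta_t)
    # 3. Sum with the vector between the origins of the two systems.
    return (xr + t_vec[0], yr + t_vec[1])
```

With no rotation and no translation the function reduces to a pure scaling, which makes the three steps easy to check independently.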
In the preferred embodiment of the invention, which will subsequently be disclosed in detail, the signals representative of the positions of the target points in the two coordinate systems are provided to a general purpose computer programmed to process the data coordinates with the method of the preferred embodiment to derive the conversion factors. In alternative embodiments the computations might be performed by hard-wired digital circuits, suitably arranged analog computation elements, or combinations thereof, alone or with suitable general purpose computer subsystems. The choice between the hard-wired and programmed general purpose implementations is largely an economic decision.

Other objectives, advantages and applications of the present invention will be made apparent by the following detailed description of the preferred embodiment of the invention. The description makes reference to the accompanying drawings in which:
FIG. 1 is a perspective view of a workpiece manipulating system employing an optical-electrical image converter vision system and employing apparatus for practicing the method of the present invention to generate factors useful to convert signals representative of positions and distances in the vision system coordinates to equivalent signals in the manipulator system coordinates;


FIG. 2 is a plan view of the target used with the preferred embodiment of the invention;
FIG. 3 is a vector diagram illustrating the process of the present invention;
FIG. 4 is a flow chart of the overall control routine utilized by the general purpose computer, forming part of the system illustrated in FIG. 1, for practicing the present invention; and
FIGS. 5 and 6 are flow charts of subroutines called for by the routine of FIG. 4.
Referring to the drawings, the system of the present invention, which practices the method of the present invention, forms a subsystem of the system broadly illustrated in FIG. 1, which is adapted to engage and manipulate workpieces 10 which are provided to it in a random, unoriented manner. The parts are illustrated as being dropped one-by-one, by suitable mechanism (not shown), down a slide 12 so that they fall onto a support surface 14 within an area bounded by a pair of separable pusher arms 16. At an appropriate time in the work cycle the pusher arms are actuated by suitable mechanism (not shown) to horizontally move a single part 10 onto a horizontal translucent presenter stage 18. The arms 16 then separate and retract to their position over the support 14, leaving the single part 10 supported on the presenter stage 18. The part is in an unoriented position and must be moved into an oriented position before it may be provided to subsequent operating apparatus such as a metal working press or the like.

An appropriate source of illumination (not shown) is preferably disposed below the translucent presenter stage 18 so that the silhouette of the part 10 is provided in sharp contrast to a mirror 20 supported directly above the presenter stage. The mirror 20 reflects the image of the presenter stage and the silhouette of the part 10 to an optical-electric image converter 22, preferably taking the form of a vidicon tube. This tube is controlled so that the image of the presenter stage, as created on the tube by its lens system, is repeatedly scanned. The electrical signals generated during the scan are provided to a control subsystem 24 which preferably contains a suitably programmed general purpose computer.

The computer 24 analyzes the signals from the converter 22 to determine the primary outline of the part 10 as it rests on the presenter stage 18, and then the centroid of that outline. The outline of the part is further analyzed to determine the location of certain characteristic points on the outline, relative to the centroid of the part.
This information is used by the computer 24 to generate control signals for a manipulator, generally indicated at 26. The manipulator carries a set of jaws 28 which are adapted to engage the workpiece 10. The position of the jaws may be controlled in a number of respects by appropriate motions of the manipulator elements. That is, the jaws may be lifted and lowered, moved horizontally along a pair of perpendicular axes, rotated, inclined, etc. Based on the signals provided by the computer 24, the manipulator is caused to engage the workpiece 10, lift it from the presenter stage, and move it to a working station 30 and place it thereon in an oriented position. A variety of systems having the capabilities described above have heretofore been proposed and built.

The task described above is quite elementary for systems of this type, and the system may perform more complex tasks; for example, the vision system might be adapted to recognize another part to be assembled with the workpiece 10 and to control the motion of the workpiece to bring the parts into assembled relationship.

The computer is capable of deriving signals representative of positions and distances by analysis of the output signals of the image converter 22, and the manipulator 26 includes appropriate sensors that generate electrical signals representative of the positions of its various elements and provides these to the computer 24, which analyzes them to determine positions and distances of the operating elements. Since the ultimate function of the system is manipulation of the workpieces, it is necessary to convert the position and distance signals derived by the vision system into the manipulator system coordinates in order to provide appropriate control signals for the manipulator elements. The general purpose computer 24 must have available transformation coordinates which will allow position and distance signals derived from an analysis of signals from the image converter 22 to be translated into the machine coordinates. The program further includes sections which control the computer 24 to derive those transformation factors utilizing the method of the present invention.
The first step in generating the transformation factors is to load a target 32, illustrated in FIG. 2, into the workpiece engaging jaws 28 of the manipulator. The target consists of a circular section of transparent plastic having a short handle 34 projecting from one end and an opaque circular target area 36 formed in the center of the circular section.

The jaws 28 of the manipulator are controlled to engage the handle 34 of the target so that the central section 32 extends out beyond the jaws. The jaws are controlled so that the target 32 is disposed parallel to the presenter stage 18 and closely above it. The manipulator is next controlled to cause its arm to successively move through a series of points. The rotation of the manipulator arm about the vertical or Z axis is maintained constant through all these points, and the horizontal attitude of the target 32 and its elevation above the presenter stage 18 are maintained constant for all of these points.
The machine sensors define a position in global coordinates; that is, a 3-dimensional set of polar coordinates specifying the vector distance from the origin of the manipulator coordinate system to the point and the angle that vector assumes with respect to each of the principal rectangular axes. Thus, when the work engaging fixture 28 is extended along a radial line between the successive points, in theory the only coordinate difference between the successive points will be in the length of the radius, with the angles remaining constant. However, as a matter of practice, a certain drift may occur in the angle that the support arm makes relative to a line in the horizontal plane, and the system must record this angle at each point along the line. It is assumed that the inclination of the arm remains constant and that the manipulator system, vision system and respective coordinate systems have parallel X-Y planes.

Accordingly, as the manipulator system moves the target 32 through a series of points in the X-Y plane, bearing a substantially constant angle to the X-Z plane, the length of the radius vector and the angle of the radius vector relative to the X-Z plane in the X-Y plane are recorded for each point. This is all performed in accordance with an appropriate program provided for the computer 24.
At the same time the program controls the computer 24 to analyze the output of the image converter 22 so as to determine the coordinates of each point in the image plane of the camera system. These measurements will typically be in rectangular coordinates, representing the X and Y coordinates of the point relative to the origin of the vision system, in the 2-dimensional vision plane, which is parallel to the presenter stage.

Using these two sets of measurements as data, the computer 24 is controlled by the program to generate the transformation factors which may subsequently be used to translate vision system coordinates into manipulator system coordinates.
FIG. 3 illustrates the process of deriving the transformation factors by vector diagrams. Assume that the target is moved to N points which are viewed in the vision system as coordinates P1V, P2V, P3V, ... PNV relative to the vision system axes XV, YV. A line is fitted to these points using the method of least squares, and the angle that this line makes relative to the X axis of the vision system is calculated as θV. The same points are called P1M, P2M, P3M, ... PNM in the manipulator system. The measurements of the angle that the manipulator system arm makes with the X-Z plane in the X-Y plane at each of the points are averaged, resulting in an angle in the manipulator system coordinates of θM. The first transformation factor, θT, the angle in the X-Y plane between the vision system and manipulator system coordinate axes, is equal to the angle between θM and θV. Next, the ratio of a length in the manipulator coordinate system to the same length in the vision coordinate system is derived by dividing the measured distance from the first point to the last point in the manipulator system coordinates (P1M to PNM) by the distance between the first and last points as determined by the vision system (P1V to PNV). This length factor is termed FL.
Next, the vector in the vision system from the origin of the vision system V to the last point PNV is rotated through the angle θT to derive the vector PNV', which is then multiplied by the linearization factor FL to derive PNV''. This point PNV'' is the same point as PNM, the last point on the line series as determined by the manipulator system.
Finally, a vector extending from the origin of the manipulator system to the origin of the vision system is derived by subtracting the vector from the origin of the vision system to point PNV'' from the vector extending from the origin of the manipulator system to point PNM. This is the final conversion factor. A point in the vision system coordinate system may then be transformed to a point in the manipulator system coordinate system by rotating the vector from the origin of the vision system to the point in the vision system through the angle θT; multiplying the length of that vector by the factor FL; and then adding the vector extending between the origins of the two systems to the vector of the transformed point in the vision system.
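The FIG. 3 derivation can be sketched in batch form from two matched lists of target points, under the document's assumption of parallel X-Y planes. The function name and tuple representation are illustrative, and for brevity the line angles are taken from the first and last points rather than from the full least-squares fit the patent describes.

```python
import math

def derive_factors(pts_m, pts_v):
    """Derive the rotation angle, length factor, and translation vector
    from matched target points in manipulator (pts_m) and vision (pts_v)
    coordinates. Points are (x, y) tuples; at least two are required."""
    (x0m, y0m), (xnm, ynm) = pts_m[0], pts_m[-1]
    (x0v, y0v), (xnv, ynv) = pts_v[0], pts_v[-1]
    # Angle of the line of target motion in each coordinate system.
    theta_m = math.atan2(ynm - y0m, xnm - x0m)
    theta_v = math.atan2(ynv - y0v, xnv - x0v)
    theta_t = theta_v - theta_m  # rotation between the two systems
    # Length factor FL: manipulator distance over vision distance.
    f_l = math.hypot(xnm - x0m, ynm - y0m) / math.hypot(xnv - x0v, ynv - y0v)
    # Rotate and scale the vision vector to the last point, then subtract
    # it from the manipulator vector to the same point to obtain the
    # translation between the two origins.
    xr = f_l * (xnv * math.cos(theta_t) + ynv * math.sin(theta_t))
    yr = f_l * (-xnv * math.sin(theta_t) + ynv * math.cos(theta_t))
    t_vec = (xnm - xr, ynm - yr)
    return theta_t, f_l, t_vec
```

Generating a set of manipulator points from known factors and recovering those factors is a convenient sanity check of the sign conventions.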
The flow charts of FIGS. 4-6 outline the steps followed by the computer 24, under control of its stored program, to generate these transformation factors.
Referring to FIG. 4, the first step 50 upon entry of the program is to set up an index value indicating the number of points along the line to be read for this particular calibration procedure. Next, at 52, the system provides a signal to the operator indicating that the manual operation, consisting of attaching the target 32 to the workpiece jaws 28 and locating the fixture over the vision stage 18, is to be accomplished. The operator, in step 54, establishes an increment value equal to the distance between each of the points on the line. Next, at 56, the manipulator is controlled to move to the first point and the coordinates of that point in the vision and manipulator systems are recorded. The program then, at 58, calls for a subroutine entitled VSYNC1 which is disclosed in detail in FIG. 5. That subroutine will be described subsequently.

When return is made from the VSYNC1 program, at 60, the index value is decremented at 62 to indicate that one point has been processed. Next, at 64, the index value is tested to determine if it is zero. If it is zero, the system enters an exit routine 66. If the index value is still nonzero, the manipulator is controlled to move the target 32 by one increment, in box 68, and the subroutine entitled VSYNC2, disclosed in detail in FIG. 6, is entered in box 70.
Upon return from VSYNC2 at 72, box 62 is again entered, decrementing the index. This process continues until the index value is reduced to zero and the routine is completed.
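The FIG. 4 control flow just described can be sketched as a simple loop. The function and callable names here are hypothetical stand-ins; the patent's computer 24 runs a stored program, and this skeleton only mirrors the box sequence.

```python
def calibrate(n_points, increment, move_to_first, move_increment, vsync1, vsync2):
    """Skeleton of the FIG. 4 control routine. The callables stand in for
    the manipulator motions and the VSYNC1/VSYNC2 subroutines."""
    index = n_points               # box 50: points to read this calibration
    move_to_first()                # box 56: move to the first point
    vsync1()                       # box 58: initialization subroutine (FIG. 5)
    index -= 1                     # box 62: one point has been processed
    while index != 0:              # box 64: test the index value
        move_increment(increment)  # box 68: advance the target one increment
        vsync2()                   # box 70: per-point subroutine (FIG. 6)
        index -= 1                 # box 62 again, after return at 72
    # box 66: exit routine
```

For three points the skeleton performs one VSYNC1 call and two VSYNC2 calls, matching the decrement-and-test loop of the flow chart.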

The VSYNC1 program of FIG. 5 is an initialization routine particularly addressed to the problem of fitting a line to the series of points as detected by the manipulating system sensors and the vision system, through use of the least squares method.

This method defines and calculates the slope of a line in the X-Y plane (ΔY/ΔX) passing through a series of points X1, X2, ... XN; Y1, Y2, ... YN in accordance with the following equation:

slope = [(ΣXi)(ΣYi) − n(ΣXiYi)] / [(ΣXi)² − n(ΣXi²)]

WHERE:
n = number of points used
ΣXi = sum of all the points' X coordinates
ΣYi = sum of all the points' Y coordinates

Box 74 is the entry point to the VSYNC1 routine from box 58 of FIG. 4. The first step, defined in box 76, is to initialize the memory locations which are to be used as summing points in the least squares method, by zeroing them. Seven such storage locations are utilized, holding the count n, the sums ΣXi, ΣYi, ΣXiYi and ΣXi², and intermediate values derived from them, all set to zero:

n = 0
ΣXi = 0
ΣYi = 0
ΣXiYi = 0
ΣXi² = 0
Next, the program moves to box 78, where those memory locations are initialized which are used to determine the average value of the angle θM that the manipulator arm makes with the X-Z plane at each of the target points. These two locations store the sum of the θM values and that value divided by the number of θM values.

Next, at box 80, the program zeros or initializes the four vector memory locations used in averaging the following translation vectors: the vector from the manipulator system origin to a point; the vector from the vision system origin to the point; the vector from the manipulator system origin to the vision system origin; and the resultant average translation vector. Next, at box 82, the program initializes those memory locations used in calculating the scale factor between the two systems.
The system is now ready to begin its actual operating routine, and in the next series of program steps, illustrated in box 84, the system determines the value of the angle θM which the manipulator arm makes with the X-Z plane, and the value R, equal to the distance of the target 32 from the manipulator system Z axis, for the first target location. These readings are derived directly from the manipulator sensors and for the present ignore the target offsets. This is considered point number 1, and this value will be averaged in with all succeeding values; the average data is used in calculating both the translation vector and the angle of rotation between the two coordinate systems.

Next the system performs the tasks illustrated in box 86, wherein the coordinates of the same point are determined in the vision system's X and Y coordinates. This is done by causing the camera to take a picture of the target 32 and then analyzing the output of the image converter to determine the outline of the central opaque circle 36 and finally calculate the centroid of that area. The measurements of this centroid in the vision system are saved for later use. Next, at box 88, the system begins the calculation of the least squares best fit line to the coming series of vision centroids by adding the first centroid's vision coordinates and calculated values to the five appropriate summing locations as initialized in box 76. When this is completed, return is had from box 88 to the routine of FIG. 4.

Following that routine the index value is decremented, causing the manipulator system to move the target 32 by the increment value over the R axis. Entry is then had to the VSYNC2 routine beginning at box 90. The first operational step of VSYNC2, indicated at box 92, is to increment by one the storage location that contains the count of the number of points used. This storage location was previously set to zero in performance of the operation outlined in box 76.
Next, in block 94, the value of θM is added to the memory locations which store and sum the θM values, and an average value of θM is calculated. This is performed in box 94.

Next, increments are summed with the readings of the R and θM sensors, as stored in block 84 for the first point taken, to obtain the actual position of the centroid of the target mark 36, which differs from the physical position of the workpiece holder because of the physical configuration of the target. These offset values have previously been stored in appropriate memory locations. They are determined by actual measurements of the target relative to the workpiece holder. This operation is illustrated in box 96.
The program then proceeds to block 98, wherein the program determines the length and angle of the vector in the X-Y plane from the manipulator system origin to the first centroid location. The same two operations are then performed for the current centroid readings, as determined by adding the offsets to the current R and average θM encoder readings in block 100 and determining the vector in X and Y coordinates from the manipulator system origin to that centroid in block 102. Next, the X and Y coordinates of the current centroid location, as determined by the vision system, are generated by causing the vision system to take a picture of the target, to determine the outline of the circle 36 on the target, and to determine the X and Y coordinates of the centroid of that outline. This is accomplished by programming identified with block 104.

At this point the vision system has developed coordinates of two or more separate target points, and the system calculates the slope of the least squares best fit line through those centroids. This is done by adding the current vision centroid coordinates to the appropriate storage summation locations and calculating the slope in accordance with the previous formula. This is performed in block 106.

In the next block, 108, the system converts the slope of this best fit line through the points so far determined into the angle between that line and the vision system X axis. The angle that the same line makes with the manipulator system X axis is the averaged value of θM. The amount the vision coordinate axes need to be rotated to bring them parallel to the manipulator system coordinate axes is the difference between the two angles θM and θV.

As the first step of calculating this angle, a routine located in block 110 calculates a vector of arbitrary length having the same slope as the best fit least squares line through the points as seen in the vision system, and originating at the origin of the vision system. A routine illustrated in block 112 assures that this calculated vector points in the same general direction as the motion of the target between the first point and its current position. This is done by taking the scalar product of the arbitrary vector from the origin of the vision system, having the slope of the best fit line, and a vector constructed from the first recorded centroid to the current centroid. If the scalar product is greater than zero, these two vectors have the same direction; if they have opposite directions, the scalar product is less than zero and the arbitrary vector must be multiplied by -1 in order to rotate it through 180 degrees about the origin.
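The block 112 scalar-product test can be sketched as follows; the function and argument names are illustrative stand-ins for the stored quantities the routine works on.

```python
def orient_direction_vector(dir_vec, first_centroid, current_centroid):
    """Flip an arbitrary-length direction vector, if necessary, so that it
    points the same general way as the target's motion (block 112)."""
    # Vector constructed from the first recorded centroid to the current one.
    motion = (current_centroid[0] - first_centroid[0],
              current_centroid[1] - first_centroid[1])
    # Scalar (dot) product: positive means the same general direction.
    dot = dir_vec[0] * motion[0] + dir_vec[1] * motion[1]
    if dot < 0:
        # Opposite directions: multiply by -1, i.e. rotate 180 degrees
        # about the origin.
        return (-dir_vec[0], -dir_vec[1])
    return dir_vec
```

A vector pointing against the target's motion is negated; one already aligned is returned unchanged.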
Next, the system calculates the angle that this vector makes with the vision system X axis in block 114. Then the system determines the direction of a vector of arbitrary length constructed from the origin of the manipulator system coordinate axes at an angle of average θM to the manipulator system's X axis. This has the same direction as the vector constructed from centroid number 1 to the current centroid. The direction is determined by comparing the R coordinates of the current centroid and centroid number 1. If the R value of the latest centroid is greater than that of the first centroid, then θM is the correct angle for this arbitrary vector. Otherwise, the angle used must be 180 degrees minus the average θM. This is illustrated in block 116.

Finally, the system calculates the rotation angle between the vector in the vision and the manipulator systems by subtracting the angle of the vector in the manipulator system coordinates, as calculated in block 116, from the angle of the vector in the vision system, as calculated in block 114. That is, we are looking at the same vector (except for scale) in two coordinate frames.
The program next constructs a 2 x 2 matrix which may be used to rotate a point in the vision system coordinate axes through the angle determined in block 118. The two-dimensional matrix allows the coordinates of a point in the unrotated vision coordinate system to be converted to a point in the rotated vision coordinate system. The matrix implements the following equations: XR = X(cos θ) + Y(sin θ); YR = X(−sin θ) + Y(cos θ), wherein XR and YR are X and Y as rotated, respectively. The angle θ equals the rotation angle between the two coordinate systems. This matrix is set up in block 120. This rotation transforms a vector from vision coordinates to manipulator coordinates.
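The rotation of blocks 118-120 amounts to the standard 2 x 2 matrix below. This is a minimal sketch of the stated equations, not the actual program of the patent; the function names are assumptions:

```python
import math

def rotation_matrix(theta):
    """2x2 matrix implementing XR = X cos(theta) + Y sin(theta),
    YR = -X sin(theta) + Y cos(theta)."""
    return ((math.cos(theta), math.sin(theta)),
            (-math.sin(theta), math.cos(theta)))

def rotate(point, m):
    """Apply the 2x2 matrix m to a 2-D point."""
    x, y = point
    return (m[0][0] * x + m[0][1] * y,
            m[1][0] * x + m[1][1] * y)
```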
Next the system must generate a scale factor equal to the ratio between distances in the vision system and the manipulator system. This scale factor is calculated by dividing the difference in R coordinate values between the manipulator system's first centroid and the current centroid by the length of a vector extending from the first centroid as seen in the vision system to the current centroid as seen in the vision system. The scale factor may then be multiplied by a measured vision distance to arrive at a manipulator system distance. This routine is performed in block 122.
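The scale-factor computation of block 122 can be sketched as follows (an illustrative sketch under assumed names, not the patented program):

```python
import math

def scale_factor(manip_r_first, manip_r_current, vis_first, vis_current):
    """Ratio converting a vision-system distance to a manipulator-system distance."""
    # Difference in R coordinate values between the two manipulator positions.
    manip_dist = manip_r_current - manip_r_first
    # Length of the vector between the same two centroids in the vision frame.
    vis_dist = math.hypot(vis_current[0] - vis_first[0],
                          vis_current[1] - vis_first[1])
    return manip_dist / vis_dist
```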
Next a test is performed, as indicated in box 124, to determine if this is the first time through the VSYNC2 routine.
If it is the first time, the translation vector from the origin of the manipulator coordinates to the origin of the vision system coordinates is generated in block 126. This operation involves creating a vector from the origin of the vision system coordinates to the first centroid and then rotating that vector by the rotation angle


and scaling it to its equivalent manipulator system coordinates. That vector is then subtracted from a vector extending from the manipulating system origin to the first centroid. The resulting vector components are added to the average translation vector component accumulation locations. The translation vector from the origin of the manipulator coordinates to the origin of the vision system coordinates using the current centroid is now calculated. The resulting vector components are added to the average translation vector component accumulation locations. Finally the system calculates an average translation vector from the vision system origin to the manipulator system origin by dividing each average translation vector component accumulation location by the number of centroids that have been used. This is performed in block 130, and then in block 132 return is made to block 72 of FIG. 4.
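The accumulate-and-average scheme of blocks 126-130 can be sketched like this. It is a simplification under assumed names: `rotate_and_scale` stands in for the rotation and scaling step already described, and one translation estimate is formed per centroid pair:

```python
def average_translation(manip_points, vision_points, rotate_and_scale):
    """Average translation vector between the two coordinate-system origins,
    accumulated over one estimate per centroid pair (sketch of blocks 126-130)."""
    acc_x = acc_y = 0.0
    for (mx, my), v in zip(manip_points, vision_points):
        # Express the vision-frame centroid in manipulator units...
        vx, vy = rotate_and_scale(v)
        # ...and subtract it from the manipulator-frame centroid vector,
        # accumulating the components.
        acc_x += mx - vx
        acc_y += my - vy
    # Divide each accumulation by the number of centroids used.
    n = len(manip_points)
    return (acc_x / n, acc_y / n)
```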
The routine of FIG. 4 is completed when a number of points equal to the index value set up in block 50 have been processed. The results of the process are a rotation matrix, a scale factor, and the vector from the origin of the manipulator system coordinates to the origin of the vision system coordinates.
When the vision system later "sees" a point and defines its X and Y coordinates, that point may be expressed in manipulator system coordinates by drawing a vector in the vision system coordinates to that point, multiplying the vector components by the rotation matrix and scale factor, and adding the resulting vector to the vector between the origins of the two systems. The resulting vector extends from the origin of the manipulator system to the point as seen in the vision system.
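Putting the three transformation factors together, a later vision-frame point is mapped into manipulator coordinates roughly as below. This is an illustrative sketch under assumed names, not the actual program listing:

```python
import math

def vision_to_manipulator(point, theta, scale, translation):
    """Rotate a vision-frame point by theta, scale it, then add the
    translation vector between the two coordinate-system origins."""
    x, y = point
    # Rotation per the patent's stated equations.
    xr = x * math.cos(theta) + y * math.sin(theta)
    yr = -x * math.sin(theta) + y * math.cos(theta)
    # Scale into manipulator units and translate to the manipulator origin.
    return (scale * xr + translation[0],
            scale * yr + translation[1])
```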
Improvements to the present invention are, of course, possible. For example, the visual coordinate system may be generalized to a three dimensional coordinate system by employing a second electrical optical converter having its viewing lens operatively positioned in a horizontal position across the face of the presenter stage. Likewise no attempt is here made to

limit the type of coordinate system for either the vision or manipulator coordinate systems; rather, any type of coordinate system, conventional or otherwise, may be employed within the scope of the invention. It will be understood, however, that the actual mathematical computation of the transformation factors will vary with the particular coordinate system employed.
Having thus described my invention, I cl~im:


Claims (7)

1. In a workpiece manipulating system having a manipu-lator with jaw means for engaging a workpiece on a work area, said manipulator having sensor means for indicating the coordinate posi-tion of said jaw means in a first coordinate system, and an optical image converter positioned to view an image on the work area and generate a signal in response thereto, analyzing means operatively coupled with said converter for computing the coordinate position of the centroid of said image in a second coordinate system, a method for translating coordinate positions from said second coor-dinate system to coordinate positions in said first coordinate system comprising the steps of:
placing a target in said jaw means, moving said jaw means and target between a predetermined number of points across said work area, determining the coordinate position of said target in both of said coordinate systems at each of said points, computing transformation factors for translating any coordinate position on said work area in said second coordinate system to the same coordinate position on said work area in said first coordinate system, and storing said transformation factors.
2. The method as defined in claim 1 wherein said first coordinate system is a global coordinate system and said second coordinate system is a rectangular coordinate system having the X-Y plane as its horizontal plane and having the Z axis parallel to the Z axis in said first coordinate system and wherein said target is moved across said work area in a generally horizontal straight line.
3. The method as defined in claim 2 wherein said step of computing transformation factors further comprises the steps of:
computing a best fit straight line between said points in said second coordinate system, calculating a first angle between said best fit line and said X axis in said second coordinate system, and calculating an angle transformation factor by subtracting said first angle from a second angle, said second angle comprising the angle between said best fit line and the X-Z plane in said first coordinate system.
4. The method as defined in claim 3 and further comprising the steps of:
computing a first distance between the first and last of said predetermined points in said second coordinate system, computing a second distance between said first and last predetermined points in said first coordinate system, and computing a scale transformation factor by dividing said second distance by said first distance.
5. The method as defined in claim 4 and further comprising the step of calculating a vector transformation factor by computing a vector between the origin of said first coordinate system and the origin of said second coordinate system.
6. The method as defined in claim 3 wherein said best fit line is a least squares best fit line.
7. The method as defined in claim 3 and further compri-sing the step of constructing a rotation matrix for rotating said coordinate positions in said second coordinate system through said transformation angle.
CA261,838A 1975-11-28 1976-09-22 Method and apparatus for calibrating mechanical-visual part manipulating system Expired CA1073111A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US05/636,069 US3986007A (en) 1975-08-20 1975-11-28 Method and apparatus for calibrating mechanical-visual part manipulating system

Publications (1)

Publication Number Publication Date
CA1073111A true CA1073111A (en) 1980-03-04

Family

ID=24550294

Family Applications (1)

Application Number Title Priority Date Filing Date
CA261,838A Expired CA1073111A (en) 1975-11-28 1976-09-22 Method and apparatus for calibrating mechanical-visual part manipulating system

Country Status (6)

Country Link
JP (1) JPS5267355A (en)
AU (1) AU503000B2 (en)
CA (1) CA1073111A (en)
DE (1) DE2649608A1 (en)
FR (1) FR2332843A1 (en)
GB (1) GB1518244A (en)

Families Citing this family (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5414763A (en) * 1977-07-05 1979-02-03 Mitsutoyo Seisakusho Coordinate measuring display system
CH627959A5 (en) * 1977-10-04 1982-02-15 Bbc Brown Boveri & Cie METHOD AND DEVICE FOR DETERMINING THE ROTATION OF OBJECTS.
JPS5630609A (en) * 1979-08-21 1981-03-27 Kosaka Kenkyusho:Kk Polar coordinate recorder
US4468695A (en) * 1980-11-20 1984-08-28 Tokico Ltd. Robot
JPS57192807A (en) * 1981-05-25 1982-11-27 Hitachi Ltd Centering method
GB2123172B (en) * 1982-07-06 1986-10-22 Emi Ltd A robot control system
EP0108511A3 (en) * 1982-11-04 1985-12-18 EMI Limited Improvements in or relating to robot control systems
DE3246828A1 (en) * 1982-12-17 1984-06-20 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V., 8000 München Mobile transporting and handling device
DE3306888A1 (en) * 1983-02-26 1984-09-13 GdA Gesellschaft für digitale Automation mbH, 8000 München METHOD AND DEVICE FOR DETECTING AND COMPENSATING THE PATH DIFFERENCE OF AN INDUSTRIAL ROBOT
IN161120B (en) * 1983-03-30 1987-10-03 Wyler Ag
JPS60159905A (en) * 1984-01-30 1985-08-21 Hitachi Ltd Control device of robot provided with visual sense
JPS6138511A (en) * 1984-07-31 1986-02-24 Toyo Electric Mfg Co Ltd Coordinate measurement system for solid body
DE3445849A1 (en) * 1984-12-15 1986-06-19 Dürr Automation + Fördertechnik GmbH, 7889 Grenzach-Wyhlen Industrial robot
JPH07104153B2 (en) * 1985-09-24 1995-11-13 株式会社ニコン Drive
FI101689B (en) * 1993-06-17 1998-08-14 Robotic Technology Systems Fin Procedure for processing an object
FI955274A (en) * 1995-11-03 1997-05-04 Robotic Technology Systems Fin Machining cell and method for machining a part
RU2472612C1 (en) * 2011-06-01 2013-01-20 Российская Федерация, От Имени Которой Выступает Министерство Промышленности И Торговли Российской Федерации Bench to control accuracy of contour movements of industrial robot
RU2641604C1 (en) * 2016-12-28 2018-01-18 Федеральное государственное автономное образовательное учреждение высшего образования "Дальневосточный федеральный университет" (ДВФУ) Method of measuring absolute position of end link of ndustrial robot multilink mechanism
CN107553493A (en) * 2017-09-22 2018-01-09 东南大学 A kind of robot kinematics' parameter calibration method based on displacement sensor for pull rope
RU2721769C1 (en) * 2019-08-28 2020-05-22 Федеральное государственное автономное образовательное учреждение высшего образования "Национальный исследовательский Томский политехнический университет" Bench for monitoring contour movements of flexible manipulator

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4017721A (en) * 1974-05-16 1977-04-12 The Bendix Corporation Method and apparatus for determining the position of a body

Also Published As

Publication number Publication date
FR2332843A1 (en) 1977-06-24
JPS5267355A (en) 1977-06-03
DE2649608A1 (en) 1977-06-08
GB1518244A (en) 1978-07-19
FR2332843B1 (en) 1981-06-12
AU503000B2 (en) 1979-08-16
AU1949576A (en) 1978-05-18

Similar Documents

Publication Publication Date Title
CA1073111A (en) Method and apparatus for calibrating mechanical-visual part manipulating system
US3986007A (en) Method and apparatus for calibrating mechanical-visual part manipulating system
US4402053A (en) Estimating workpiece pose using the feature points method
CN110238845B (en) Automatic hand-eye calibration method and device for optimal calibration point selection and error self-measurement
US4305130A (en) Apparatus and method to enable a robot with vision to acquire, orient and transport workpieces
Joubair et al. Non-kinematic calibration of a six-axis serial robot using planar constraints
CA1330363C (en) Robot system
CN2869887Y (en) Visual servo apparatus for sealed radiation resource leak automatic detection platform
US20190061163A1 (en) Control device and robot system
WO1993015376A1 (en) System for recognizing and tracking target mark, and method therefor
EP1809446A2 (en) Method and system to provide imporved accuracies in multi-jointed robots through kinematic robot model parameters determination
JPS61281305A (en) Articulated robot control device
CN111220120B (en) Moving platform binocular ranging self-calibration method and device
CN108827264A (en) Mobile workbench and its mechanical arm optics target positioning device and localization method
CN1885064A (en) Vision servo system and method for automatic leakage detection platform for sealed radioactive source
JP2019195885A (en) Control device and robot system
JPH08272425A (en) Method to teach coordinate system to robot in non-contact
Staub et al. Dex-net mm: Deep grasping for surface decluttering with a low-precision mobile manipulator
Đurović et al. Low cost robot arm with visual guided positioning
Preising et al. Robot performance measurement and calibration using a 3D computer vision system
Secil et al. A robotic system for autonomous 3-D surface reconstruction of objects
JPH0626770B2 (en) Workpiece fitting method
Heikkilä et al. Calibration procedures for object locating sensors in flexible robotized machining
JPH0727408B2 (en) Robot handling device with fixed 3D vision
CN115409878A (en) AI algorithm for workpiece sorting and homing

Legal Events

Date Code Title Description
MKEX Expiry