CA2263530A1 - Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects - Google Patents

Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects

Info

Publication number
CA2263530A1
Authority
CA
Canada
Prior art keywords
camera
measurement
point
coordinate system
viewing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
CA002263530A
Other languages
French (fr)
Inventor
David F. Schaack
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US08/689,993 external-priority patent/US6009189A/en
Priority claimed from US08/871,289 external-priority patent/US6121999A/en
Application filed by Individual filed Critical Individual
Publication of CA2263530A1 publication Critical patent/CA2263530A1/en
Abandoned legal-status Critical Current

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B23/00Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00147Holding or positioning arrangements
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163Optical arrangements
    • A61B1/00174Optical arrangements characterised by the viewing angles
    • A61B1/00183Optical arrangements characterised by the viewing angles for variable viewing angles
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/06Devices, other than using radiation, for detecting or locating foreign bodies ; determining position of probes within or on the body of the patient
    • A61B5/065Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00Measuring for diagnostic purposes; Identification of persons
    • A61B5/103Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1076Measuring physical dimensions, e.g. size of the entire body or parts thereof for measuring dimensions inside body cavities, e.g. using catheters
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/02Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B23/00Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • G02B23/2407Optical details
    • G02B23/2423Optical details of the distal end
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/05Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion

Landscapes

  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Surgery (AREA)
  • Engineering & Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • Pathology (AREA)
  • Public Health (AREA)
  • Biophysics (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Physics & Mathematics (AREA)
  • Astronomy & Astrophysics (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

Spatial locations of individual points on an inaccessible object are determined by measuring two images acquired with one or more cameras which can be moved to a plurality of positions and orientations which are accurately determined relative to the instrument. Once points are located, distances are easily calculated. This new system offers accurate measurements with any convenient geometry, and with existing endoscopic apparatus. It also provides for the measurement of distances which cannot be contained within any single camera view. Systematic errors are minimized by use of a complete and robust set of calibration procedures. A standard measurement procedure automatically adjusts the measurement geometry to reduce random errors. A least squares calculation uses all of the image location and calibration data to derive the true three-dimensional positions of the selected object points. This calculation is taught explicitly for any camera geometry and motion.

Description

WO 98/07001    PCT/US97/15206

APPARATUS AND METHOD FOR MAKING ACCURATE THREE-DIMENSIONAL
SIZE MEASUREMENTS OF INACCESSIBLE OBJECTS

Technical Field

This invention relates to optical metrology, specifically to the problem of making non-contact dimensional measurements of inaccessible objects which are viewed through an endoscope.

Background Art

A. Introduction
In the past several decades, the use of optical endoscopes has become common for the visual inspection of inaccessible objects, such as the internal organs of the human body or the internal parts of machinery. These visual inspections are performed in order to assess the need for surgery or equipment tear down and repair; thus the results of the inspections are accorded a great deal of importance. Accordingly, there has been much effort to improve the art in the field of endoscopy.

Endoscopes are long and narrow optical systems, typically circular in cross-section, which can be inserted through a small opening in an enclosure to give a view of the interior. They almost always include a source of illumination which is conducted along the interior of the scope from the outside (proximal) end to the inside (distal) end, so that the interior of the chamber can be viewed even if it contains no illumination. Endoscopes come in two basic types; these are the flexible endoscopes (fiberscopes and videoscopes) and the rigid borescopes. Flexible scopes are more versatile, but borescopes can provide higher image quality, are less expensive, are easier to manipulate, and are thus generally preferred in those applications for which they are suited.
While endoscopes (both flexible and rigid) can give the user a relatively clear view of an inaccessible region, there is no inherent ability for the user to make a quantitative measurement of the size of the objects he or she is viewing. There are many applications for which the size of an object, such as a tumor in a human body, or a crack in a machine part, is a critically important piece of information. Thus, there have been a number of inventions directed toward obtaining size information along with the view of the object through the endoscope.

The problem is that the accuracy to which the size of defects can be determined is poor with the currently used techniques. Part of the reason is that the magnification at which the defect is being viewed through the borescope is unknown. The other part of the problem is that the defects occur on surfaces which are curved in three dimensions, and the view through the endoscope is strictly two-dimensional.
Many concepts have been proposed and patented for addressing the need to make quantitative measurements through endoscopes. Only some of these concepts address the need to make the measurement in three dimensions. Few of these concepts have been shown to provide a useful level of measurement accuracy at a practical cost.

Probably the simplest approach to obtaining quantitative object size information is to attach a physical scale to the distal end of the endoscope, and to place this scale in contact with the object to be measured. The problems with this are that it is often not possible to insert the scale through the available access hole, that the objects of interest are almost never flat and oriented in the correct plane so that the scale can lie against them, that it is often not possible to manipulate the end of the endoscope into the correct position to make the desired measurement, and that it is often not possible to touch the objects of interest.

These problems have driven work toward the invention of non-contact measurement techniques. There have been a number of systems patented which are based on the principle of optical perspective, or more fundamentally on the principle of triangulation.

B. Non-Contact Measurements Using Triangulation and Perspective
What I mean by "use of perspective" is the use of two or more views of an object, obtained from different viewing positions, for dimensional measurements of the object. By "measurement", I mean the determination of the true three-dimensional (height, width, and depth) distance between two or more selected points on the object.
To perform a perspective dimensional measurement, the apparent positions of each of the selected points on the object are determined in each of the views. This is the same principle used in stereoscopic viewing, but here I am concerned with making quantitative measurements of object dimensions, rather than obtaining a view of the object containing qualitative depth cues. As I will teach, given sufficient knowledge about the relative locations, orientations, and imaging properties of the viewing optical systems, one can accurately determine the locations of the selected points in a measurement coordinate system. Once these locations are known, one then simply calculates the desired distances between points by use of the well-known Pythagorean Theorem.
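The final step described here, computing a distance from two located points, is just the Pythagorean Theorem extended to three dimensions. A minimal sketch (the function name and the sample coordinates are illustrative, not from the patent):

```python
import math

def distance_3d(p, q):
    """Pythagorean (Euclidean) distance between two located 3-D points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two object points located in the measurement coordinate system (mm):
print(distance_3d((0.0, 0.0, 50.0), (3.0, 4.0, 50.0)))  # → 5.0
```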
Perspective is related to and based on triangulation, but triangulation is also the principle behind making any measurement of distance using the measurement of angles.
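The angle-based triangulation referred to here can be illustrated by the classic two-station range calculation: given a baseline and the angle measured at each end of it, the law of sines yields the distance to the target. This is a generic sketch of the principle, not a procedure from the patent; the names are mine:

```python
import math

def range_by_triangulation(baseline, alpha, beta):
    """Distance from station A to target P by the law of sines, given the
    baseline AB and the angles alpha (at A) and beta (at B) measured
    between the baseline and the sight lines to P."""
    return baseline * math.sin(beta) / math.sin(alpha + beta)

# Symmetric 45-degree sightings over a 1 m baseline place the target
# sqrt(0.5) m (about 0.707 m) from each station.
print(range_by_triangulation(1.0, math.radians(45), math.radians(45)))
```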
The earliest related art of which I am aware is described in US Patent 4,207,594, (1980) to Morris and Grant. The basic approach of this work is to measure the linear field of view of a borescope at the object, then scale the size of the object as measured with video cursors to the size of the field of view as measured with the same cursors. The linear field of view is measured by determining the difference in borescope insertion depth between alignment of the two opposite edges of the field of view with some selected point on the object.
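As described, the Morris and Grant technique reduces to a simple proportion: the object's cursor span, relative to the full-field cursor span, is scaled by the linear field of view obtained from the insertion-depth difference. A rough sketch of that proportion (the function and variable names are mine; as the patent discussion makes clear, this is valid only for a flat object perpendicular to the line of sight):

```python
def object_size_from_fov(insertion_depth_change_mm, object_span_px, field_span_px):
    """Scale a video-cursor measurement of the object to the linear field of view.

    Per the described technique, the linear field of view at the object is taken
    as the change in insertion depth needed to move a chosen object point from
    one edge of the field to the opposite edge (side-looking scope assumed).
    """
    linear_fov_mm = insertion_depth_change_mm
    return linear_fov_mm * (object_span_px / field_span_px)

# Object spans 120 of the 480 cursor units across a 20 mm linear field:
print(object_size_from_fov(20.0, 120, 480))  # → 5.0
```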
A major problem with this approach is that it cannot determine the depth of the object. In fact, the patent specifies that the user has to know the angle that the plane of the object makes with respect to the plane perpendicular to the borescope line of sight. This information is almost never available in any practical measurement situation. A second problem is that this technique gives valid results only if the optical axis of the borescope is oriented precisely perpendicular to the length of the borescope.

In US Patent 4,702,229, (1987) Zobel describes a rigid borescope free to slide back and forth between two fixed positions inside an outer mounting tube, to measure the dimensions of an object. As with Morris and Grant, Zobel does not teach the use of the principle of perspective, and thus discusses only the measurement of a flat object, oriented perpendicular to the borescope line of sight.

US Patent 4,820,043, (1989) to Diener describes a measurement scope after Zobel (4,702,229) with the addition of an electronic readout on the measurement scale, an instrumented steering prism at the distal end, and a calculator. The principle is that once the distance to the object is determined by the translation of the borescope proper according to Zobel, then the object size can be determined by making angular size measurements with the steerable prism. Again, there is no determination of the depth of the object.
US Patent 4,935,810, (1990) to Nonami and Sonobe shows explicitly a method to measure the true three-dimensional distance between two points on an object by using two views of the object from different perspectives. They use two cameras separated by a fixed distance mounted in the tip of an endoscope, where both cameras are aligned with their optical axes parallel to the length of the endoscope. The fixed distance between the cameras causes the measurement error to be rather large for most applications, and also places a limit on how close an object to be measured can be to the endoscope. In addition, the two cameras must be precisely matched in optical characteristics in order for their combination to give an accurate measurement, and such matching is difficult to do.
All four of these prior art systems assume that the measurement system is accurately built to a particular geometry. This is a significant flaw, since none of these patents teach one how to achieve this perfect geometry. That is, if one attempts to use the technique taught by Nonami and Sonobe, for instance, one must either independently develop a large body of techniques to enable one to build the system accurately to the required geometry, or one must accept measurements of poor accuracy. None of these inventors teach how to calibrate their system, and what is more, one cannot correct for errors in the geometry of these systems by a calibration process, because none of these systems include any provision for incorporating calibration data into the measurement results.
In US Patent 5,575,754, (1996), Konomura teaches a system of perspective dimensional measurement which is based on moving a rigid borescope along a straight line in a manner similar to Zobel, but now the borescope is moved by a variable distance to obtain the two views of the object. Konomura recognizes that using a variable distance between the viewing positions allows one to obtain lower measurement errors, in general. Konomura also recognizes the necessity of compensating for the effects of certain aspects of the actual measurement geometry being used, thus implicitly allowing for incorporation of some calibration data into the measurement result. The patent does not teach how to do the calibration, and unfortunately, the compensation technique that is taught is both incomplete and incorrect. In addition, the motion technique taught by Konomura is not inherently of high precision; that is, it is not suitable for making dimensional measurements of high accuracy. Konomura's apparatus for holding the borescope allows the scope to be rotated with respect to the apparatus in order to align the view with objects of interest, but there is no consideration given to the repeatability of borescope positioning, which is necessary in order to assure accuracy in the measurement.
All of these measurement techniques are limited to objects which are small enough to be completely encompassed within the field of view of the endoscope. In addition, there are other applications of interest which simply cannot be addressed by any of these techniques, for instance where the object has a shape and an orientation such that the two ends of a dimension of interest cannot both be viewed from any single position.


Disclosure of the Invention

While the prior art in this area is extensive, there remains a need for a measurement system which can provide truly accurate dimensional measurements on an inaccessible object. By "truly accurate", I mean that the level of accuracy should be limited only by the technology of mechanical metrology and by unavoidable random errors made by the most careful user. There also remains a need for a usefully accurate measurement at low cost. By "usefully accurate", I mean that the accuracy of the measurement should be adequate for the purposes of most common industrial applications. By "low cost", I mean that with some compromises, the user should be able to add this measurement capability to his or her existing remote visual inspection capability with a lower incremental expenditure than is required with the prior art. There also remains a need for a measurement system which can be applied to a wider range of applications than has been addressed in the prior art.

Meeting these goals inherently requires that the measurement should not depend on an apparatus being built precisely to a particular geometry, nor to particular optical characteristics. Instead, the measurement system must be sufficiently complete and self-consistent so that the apparatus is capable of being calibrated, and there must exist a sufficient set of methods to perform this calibration.

In one aspect, therefore, my invention provides a method of locating an object point of interest in three-dimensional space using one or more cameras which can be moved among any of a plurality of predetermined relative viewing positions and orientations. "Predetermined" in this case means that these quantities are determined before the measurement result is calculated, and that the measurement requires no auxiliary information from or about the object.
Because the camera(s) can be moved, the viewing positions that are used to perform a particular point location can be chosen during the measurement, according to the requirements of the particular measurement being performed. The apparent locations of the images of the point as viewed from two different positions are measured and, using the predetermined geometry of the system and the predetermined optical characteristics of the camera(s), a fully three-dimensional, least squares estimate of the location of the point is determined. The geometry of the measurement is completely general, and there is identified a complete set of parameters which can be calibrated in order to ensure an accurate measurement. A complete set of calibration methods is taught. This aspect of my invention enables one to accurately locate a point using whatever geometry is most advantageous for the application.

In another aspect, the invention provides a method in which the motion(s) of the camera(s) is (are) restricted to one of a variety of specific paths.
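The least squares point location described above can be illustrated, in simplified form, as finding the point closest to the two viewing rays defined by the camera nodal points and the measured image directions. This is only a sketch of the general idea, not the patent's full calculation (which incorporates calibration data and arbitrary camera motion); the names are mine:

```python
import numpy as np

def triangulate_point(o1, d1, o2, d2):
    """Least-squares location of a point from two viewing rays.

    o1, o2: ray origins (camera nodal points at the two viewing positions)
    d1, d2: unit direction vectors toward the point, derived from the
            measured image locations and the camera model
    Returns the midpoint of the shortest segment joining the two rays,
    i.e. the least-squares estimate of the point's position.
    """
    o1, d1, o2, d2 = (np.asarray(v, dtype=float) for v in (o1, d1, o2, d2))
    # Choose ray parameters t1, t2 minimizing |(o1 + t1*d1) - (o2 + t2*d2)|^2
    A = np.array([[d1 @ d1, -(d1 @ d2)],
                  [d1 @ d2, -(d2 @ d2)]])
    b = np.array([(o2 - o1) @ d1, (o2 - o1) @ d2])
    t1, t2 = np.linalg.solve(A, b)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))
```

With perfect data the two rays intersect and the midpoint is the intersection; with noisy image measurements the midpoint is the natural least-squares compromise.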
According to this aspect, different camera paths have advantages for different applications. Also, according to this aspect, it is possible to determine the orientation(s) of the camera(s) with respect to its (their) path(s) of motion in a calibration procedure, and to take this (these) orientation(s) into account in the determination of the location of the point of interest. In addition, according to this aspect, it is possible to determine errors in the actual motion(s) with respect to the ideal motion(s) and to also take these errors into account in locating the point. Thus, this aspect allows one, for instance, to accurately locate a point using existing measurement hardware which was not originally designed to make measurements, and which is not built according to the assumptions and requirements of the prior art.

In another aspect, the method of locating a point of interest is used to determine the three-dimensional distances between points of interest on a remote object, where all the points of interest can be contained within a single view of the camera(s) being used. This aspect allows one to, for instance, perform an improved perspective dimensional measurement under conditions similar to those addressed by the prior art.
In another aspect, the method of locating a point of interest is used to determine the three-dimensional distance between a pair of points on an object, where the two points of the pair cannot necessarily be contained within any single view of the camera being used. This aspect allows one to perform a new mode of perspective dimensional measurement which has the capability of accurately determining distances which are impossible to measure at all in the prior art. This aspect also offers the capability of performing the most precise dimensional measurements achievable with my system.
In another aspect, my invention provides a method of locating a point of interest in three-dimensional space using a single camera, subjected to a substantially pure translation between two viewing positions, in which the first and second viewing positions are selected so that the point of interest is viewed first near the edge of one side of the field of view and secondly near the edge of the opposite side of the field of view. This aspect allows one to easily obtain one of the key conditions required for achievement of the lowest random error (highest precision) in the perspective dimensional measurement.

In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a borescope supported by a linear motion means, a driving means which controls the position of the linear motion means, and a position measurement means which determines the position of the linear motion means. Here the improvement is that a linear motion means is used which provides a motion of very high accuracy. In an embodiment, one may select the driving means to be an actuator, for instance an air cylinder. Also, the position measurement means may be embodied as a linear position transducer.

In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a borescope supported by a linear motion means, a driving means which controls the position of the linear motion means, and a position measurement means which determines the position of the linear motion means, and wherein the improvement is the use of a lead screw and matching nut as the driving means. In an embodiment, one may embody both the driving means and the position measurement means as a micrometer.
In another aspect, the invention provides apparatus according to the previous two aspects, but wherein the borescope includes a video camera, and wherein the video camera is correctly rotationally oriented with respect to the borescope in order to satisfy a second key condition required for the achievement of measurements of the highest feasible precision.
In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a video camera mounted on a linear translation means, and wherein this assembly is mounted on the distal end of a rigid probe to form an electronic measurement borescope. This aspect thus provides a self-contained measurement system, one which can provide measurements of much higher accuracy than those provided by prior art systems.
In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a video camera mounted on a linear translation means, and wherein this assembly is mounted at the distal end of a flexible housing to form an electronic measurement endoscope. This aspect also provides a self-contained measurement system with high measurement accuracy, but in this case in a flexible package that can reach inaccessible objects under a wider range of conditions.

In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object which comprises a camera and a support means for moving the camera along a straight translation axis, wherein the camera can also be rotated about a rotation axis for convenient alignment with an object of interest, and wherein the improvement comprises a means for measuring an angle of rotation about the rotation axis and also a means for incorporating the measured angle into the result of the perspective measurement. The rotation in this case is made prior to (and not during) the measurement process. This aspect then allows a user to make accurate dimensional measurements while allowing rotation of the camera for convenient alignment, while also requiring only infrequent calibrations.
In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object which comprises a substantially side-looking borescope, where the borescope can be translated along a straight line and where the borescope can also be rotated about a rotational axis, wherein the improvement comprises the arrangement of the rotation axis to be accurately aligned with the translation axis. This aspect also enables a user to make accurate dimensional measurements while allowing rotation of the borescope, while also requiring only infrequent alignment calibrations.

Further objects, advantages, and features of my system will become apparent from a consideration of the following description and the accompanying schematic drawings.

Brief Description of the Drawings

Figure 1 shows the definitions of various quantities related to a rigid borescope.
Figure 2 depicts the change in pela~ ive when viewing a point in space from two different poci~i~mS
Figure 3 depicts the imaging of a point in space with a camera.
5 Figure 4 is a p~,a~ ive view of the 1~ F~ ni. ~I portion of a first clllo~ lrlltof the invention and its use in a typical ,--eaauu~ l situation.
Figure 5 is a detailed pCla~ vt; view of the merh~nir:ll portion of the first e---bod of the inventjon.
Figure 6 is a cross-sectional view of a portion of the structure shown in Figure 5.
Figure 7 is a block diagram of the electronics of the first I ~~ho~ of the invention.
10 Figure 8 is a view of the video monitor as seen by the user during the firsl stage of a first distance lu~,dSu ,C~lulci Figltre 9 is a view of the video monitor as seen by the user during the second stage of a first distance IllCdaul~lll,,lL
p-~c~lu,c.
Figltre 10 shows two views of the video monitor as seen by the user during the first stage of a second distance 15 I--eaau,.,..-.,l-l p.u~,~l~.
Figure 11 shows the two views of the video monitor as seen by the user during the second stage of a second l~l~aul~ p~ u~:.
Figure 12 shows a general rpl~ltionchir between the viewing cooldin..te systems at the two viewing pncitionc Figure 13 depicts a second mode of the fl;".~ o~ daul~ process taught by the present invention.
20 Figure 14 is a block diagrarn of the el~,LIOnil,a of a second c.,.bo~lil...,nl of the invention.
Figure 15 is a front view of the ~-,. .1-~ 1 portion of a second ~ 1~fj;111F~II of the h~cn~o,..
Figure 16 is a plan view of the ~ ' ,i ' portion of a second, ' - ' of the invention.
Figure 17 is a rear view of the ll~f~ ;t ~I portion of a second e .hfj h~ .1 of the invention.
Figure 18 is a left side elevation view of the ~e~ h~nirAI ponion of a second e~..bo~1~.. e~l of the invention.
25 Figure 19 is a right side elevation view of the mPnh:lnir~l ponion of a second c ~ho~l;. nl of the invention.
~igure 20 is a pe-a~,c~ e view of the 1 ~i)5ni~ Al portion of a third ~ ' ' of the invention.
Figure 21 is a plan view of the internal all ul-lultia at the distal end of the third C~llbu~ lllf Figure 22 is a left side elevation view of the internal aLI u- ~ at the distal end of the third ~ . ~ho 1; . . -~
Figure 23 is a right side elevation view of the internal sllu~lull, at the distal end of the third rmhodimpnt 30 Figure 24 is a plan view of the internal sLluc~u~es at the proximal end of the third ~ .. ho.l;.. 1 Figure 25 is a left side elevation view of the internal allu~,lules at the proximal end of the third l....ho.l;,.., ~.1 Figure 26 is a right side elevation view of the internal aLlu~,lul~,~ at the proximal end of the third embodiment.
Figure 27 is a proximal end elevation view of the internal a~,u~lu,es at the proximal end of the third emho.limPn-Figure 28 is a block diagram of the electronics of the third ~ hod;~
35 Figure 29 is a plan view of the internal a~l u-.;lu~s at the distal end of a founh Pmhodimrnt Figure 30 is a left side elevation view of the internal allu~,lul~5 at the distal end of the fourth e.llbo.lilllclll Figure 31 depicts the pela~ /e Illedsul~ nL mode 2 process when a camera moves in a straight line path, but - when the orientation of the camera is not fixed.

. . . _ . . .

W O 98/07001 PCTrUS97/15206 Figure 32 depiets the p~ ,e~ive In~a~u~ le.ll mode 1 process when a camera is conctr lin~ (l to a circular path whieh lies in the plane of the eamera optieal axis.
Figure 33 shows an I ~ -ios.upe which imrl~-nr~n~c a eireular eamera path where the eamera view is perpenfiirlllar to the plane of the path.
Figure 34 depiets the .I~ea~u~ ellt mode 2 process with a general motion of the eamera.
5 Figure 35 depiets the ~..easulL.I.~l.l of a distance with a co ,b;~ inn of circular camera motion and mode 2.
Figure 36 illustrates a group of calibration target points being viewed with a camera located at an unknown position and orientation.
Figure 37 illustrates the process of calibration of rotational errors of the translation stage used in the third and fourth embodiments.
Figure 38 shows an enlarged view of the components mounted to the translation stage during the calibration process depicted in Figure 37.
Figure 39 represents an example of the change in alignment between a perspective displacement vector, d, and a borescope's visual coordinate system that can occur if the borescope lens tube is not straight.
Figure 40 depicts the change in alignment between the perspective displacement and the visual coordinate system that can occur if the borescope is rotated about an axis that is not parallel to the perspective displacement.
Figure 41 is a perspective view of a first variant of borescope/BPA embodiments of the invention.
Figure 42 is a perspective view of an embodiment of a strain-relieving calibration sleeve.
Figure 43 is an end elevation view of a test rig for determining the alignment of a V groove with respect to the translation axis of a translation stage.
Figure 44 depicts the process of determining the alignment errors caused by inaccuracies in the geometry when a cylinder rotates in a V groove.
Figure 45 is a perspective view of the mechanical portion of a second variant of borescope/BPA embodiments of the invention.
Figure 46 depicts the relationships between the three Cartesian coordinate systems used in analyzing the effects of a misalignment of the borescope axis of rotation with respect to the perspective displacement.
Figure 47 shows the relationship of the borescope visual and mechanical coordinate systems.

CA 02263530 1999-02-16

Best Modes for Carrying out the Invention

A. Explanation of the Prior Art of Perspective Measurement and its Limitations

In order to clarify the discussion of the perspective measurement and the problems in the prior art, I will carefully define the terms and processes being used.
Figure 1 depicts the distal end of a rigid borescope 2 together with a representation of its conical optical field of view 4. Field of view 4 is defined by a nodal point 10 of the borescope optical system and a cone that has its apex there. The "nodal point" of an optical system is that point on the optical axis of the system for which an optical ray incident at the point is not deviated by the system.
The axis of conical field of view 4 is assumed to coincide with the optical axis 8. Figure 1 is drawn in the plane which both contains optical axis 8 and which is also parallel to the mechanical centerline of the borescope 6.
The apex angle, 11, of the field of view cone is denoted as FOV, half that angle, 12, is denoted as HFOV, and the "viewing angle" of the borescope with respect to the centerline of the scope, 14, is denoted as VA. Viewing angle 14 is defined to be positive for rotation away from the borescope centerline, to match standard industry practice.
The change in perspective when viewing a point in space from two different positions is depicted in Figure 2.
A right handed global Cartesian coordinate system is defined by the unit vectors x̂, ŷ, and ẑ. A particular point of interest, Pj, at r = x x̂ + y ŷ + z ẑ, is viewed first from position P1, then from position P2. The coordinate system has been defined so that these viewing positions are located on the x axis, equally spaced on either side of the coordinate origin. I call the distance d between the viewing positions the perspective baseline, and I call the vector d = d x̂ the perspective displacement.
According to the known perspective measurement techniques, viewing coordinate systems are set up at P1 and P2, and both of these coordinate systems are aligned parallel to the global coordinates defined in Figure 2.
As part of the perspective measurement, the object point of interest is imaged onto the flat focal plane of a camera, as depicted in Figure 3. In Figure 3, a point 16 is imaged with a lens that has a nodal point 10. An image plane 18 is set up behind nodal point 10, with the distance from the plane to the nodal point being denoted as i.
This distance is measured along a perpendicular to image plane 18, and is often referred to as the effective focal length of the camera. The nodal point is taken as the origin of a Cartesian coordinate system, where the z axis is defined as that perpendicular to the image plane that passes through the nodal point. The z axis is the optical axis of the camera.
In the model of Figure 3, the camera lens is considered as a paraxial thin lens. According to paraxial optics, rays that strike the nodal point of the lens pass through it undeviated. It is important to realize that any imaging optical system, including that of an endoscope, can be represented as a camera as shown in Figure 3.
For object point 16 at (x, y, z) one can write these coordinates in standard spherical polar coordinates about the nodal point as:
    x = o' sinθ cosφ        y = o' sinθ sinφ        z = o' cosθ        (1)

where o' is the distance from the object point to the nodal point, and the polar angle θ is shown in Figure 3.
By the properties of the nodal point, the angles of the imaged ray will remain the same and one can write the image point location 20 as:

    x_im = -i' sinθ cosφ        y_im = -i' sinθ sinφ        z_im = -i        (2)

But i = i' cosθ, so that:
    x_im = -(i sinθ cosφ)/cosθ = -(i/(o' cosθ)) x = -(i/z) x        (3)

    y_im = -(i sinθ sinφ)/cosθ = -(i/(o' cosθ)) y = -(i/z) y        (4)

That is, the transverse coordinates of the image point, (x_im, y_im), are directly proportional to the transverse coordinates of the object point.
When considering the performance of a real optical system, as opposed to a paraxial model, the image of an object point will be blurred by what are called point aberrations and it will be displaced by what are called field aberrations. I define the location of the image point to be the location of the centroid of the blur spot, and I refer to the extent to which Equations (3) and (4) do not hold for the image point centroid as the distortion of the optical system. Clearly, consideration of the distortion of the optical system is important for making accurate measurements, and this was recognized in some of the prior art. I will later show how to determine the distortion and how to take it into account in the measurement.
Considering now the view from position P1 in Figure 2, one may write:
    x_im1 = -(i/z_v1) x_v1 ;        y_im1 = -(i/z_v1) y_v1        (5)

where (x_v1, y_v1, z_v1) are the coordinates of the point of interest in the viewing coordinate system at P1. Similar expressions in terms of (x_v2, y_v2, z_v2) hold for the view at P2. Using the facts that x_v1 = x + d/2, x_v2 = x - d/2, y_v1 = y_v2 = y, and z_v1 = z_v2 = z, the solution of the four equations for the position of the point Pj in global coordinates is:
    z = -i d / (x_im1 - x_im2)
    x = (-z/(2i)) (x_im1 + x_im2)        (6)
    y = (-z/i) y_im1 = (-z/i) y_im2

To make a measurement of the true, three-dimensional distance between two points, A and B, in space, one has simply to measure the three-dimensional position (x, y, z) of each point according to (6) and then to calculate the distance between them by the well known formula:
    r = √[(x_A - x_B)² + (y_A - y_B)² + (z_A - z_B)²]        (7)

This is the perspective measurement process taught by both Nonami and Sonobe in US Patent 4,935,810 and by Konomura in US Patent 5,575,754. Nonami and Sonobe use two cameras, one located at each of points P1 and P2 in Figure 2, while Konomura uses a single camera, translated along a straight line from P1 to P2.
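The process of Equations (1) through (7) can be sketched numerically. This is a minimal illustration only, not the software of any of the patents discussed; the function names and the assumption of consistent units (image coordinates and focal length i in the same units, baseline d and the recovered positions in the same units) are mine.

```python
import math

def point_from_two_views(x_im1, x_im2, y_im1, d, i):
    # Equation (6): recover the global (x, y, z) of an object point from its
    # image coordinates in two views spaced d apart along the global x axis.
    # i is the effective focal length of the camera.
    z = -i * d / (x_im1 - x_im2)
    x = (-z / (2.0 * i)) * (x_im1 + x_im2)
    y = (-z / i) * y_im1          # Equation (6) also yields y from y_im2
    return (x, y, z)

def distance_3d(pa, pb):
    # Equation (7): true three-dimensional distance between points A and B.
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pa, pb)))
```

For example, a point at (1, 2, 10) viewed with i = 0.02 and d = 0.5 appears at x_im1 = -0.0025, x_im2 = -0.0015, y_im1 = -0.004, and the function recovers (1, 2, 10).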
Unfortunately, this known process has severe limitations. First, it will give accurate results only when the optical (z) axes of the cameras at both viewing positions are perfectly aligned along the global z axis. Secondly, it also requires that the x axes of the cameras at both viewing positions be aligned perfectly along the perspective displacement.
In the case of Nonami and Sonobe, they do not teach how to achieve these necessary conditions. In addition, for their system the two cameras must be identical in both distortion and effective focal length in order to give an accurate measurement. They do not teach how to achieve those conditions either. As an additional difficulty with their system, Nonami and Sonobe deal with the redundancy inherent in the final equation of (6) by specifically teaching that only three of the four available apparent point location measurements should be used for each point location determination. In fact, they go to a great deal of trouble to ensure that only one of the image point y
position measurements can be used. This amounts to throwing information away and in general, considering the effects of measurement errors, it is not a good idea.
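To illustrate the point (this is my own sketch, not a method taught by any of these patents): the simplest way to retain all four measurements is to average the two redundant estimates of y given by Equation (6), which is the least-squares choice when the two measurement errors are equally weighted.

```python
def y_from_both_views(y_im1, y_im2, z, i):
    # Equation (6) gives y = (-z/i) * y_im1 and also y = (-z/i) * y_im2.
    # Averaging the two estimates uses all of the measured data instead
    # of discarding one y measurement.
    return (-z / (2.0 * i)) * (y_im1 + y_im2)
```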
In the case of Konomura, where there is only one camera, which is a borescope, the first necessary condition
means that the viewing angle of the borescope must be accurately equal to 90° in order for the measurement to be accurate. Konomura realizes that this is a problem and teaches the use of the following equation for the case where the viewing angle is not 90°:
    d' = d cos(VA - 90°)        (8)

Unfortunately, use of Equation (8) does not correctly take into account the change in the perspective measurement which occurs when the camera viewing angle is not equal to 90°. In addition, Konomura makes no provision for ensuring that the measurement x axis of the camera is aligned with the perspective displacement.
Since the camera is a standard video borescope, in which the video sensor could be attached to the borescope proper at any rotational angle, there is little likelihood that the x axis of the video sensor will be aligned to the perspective displacement.
Konomura has nothing to say about how to handle the redundancy in (6).

B. Description of a First Embodiment

Figure 4 shows a view of the mechanical portion of a basic embodiment of my system and its use in a typical measurement situation. In Figure 4, an object 100 with a damaged area or feature of interest 102 is being viewed with a video borescope system 120. Object 100 is completely enclosed by an enclosure 110. In Figure 4 only a small portion of the wall of enclosure 110 is shown. The borescope has been inserted through an inspection port 112 in the wall of enclosure 110.
The borescope is supported by and its position is controlled by a mechanical assembly that I call the borescope positioning assembly (BPA), which is denoted by 138 in Figure 4.
Several features of video borescope system 120 are shown in Figure 4 to enable a better understanding of my system. The configuration shown is meant to be generic, and should not be construed as defining a specific video borescope to be used.
Conical field of view 122 represents the angular extent of the field visible through the borescope. The small diameter, elongated lens tube 124 comprises the largest portion of the length of the borescope. The remainder of the borescope is comprised successively of an illumination interface adapter 126, a focusing ring 130, a video adapter 132, and a video camera back or video sensor 134. Video camera back 134 represents every element of a closed circuit television camera, except for the lens. Video adapter 132 acts to optically couple the image formed by the borescope onto the image sensing element of video camera back 134 as well as serving as a mechanical
coupling.
Illumination adapter 126 provides for the connection of an illumination fiber optic cable (not shown) to the borescope through a fiber optic connector 128. The illumination (not shown) exits lens tube 124 near the apex of field of view cone 122 to illuminate objects located within cone 122.
A camera connector 136 connects video camera back 134 to its controller (not shown) through a cable which is also not shown.
The portion of BPA 138 which directly supports the borescope is a clamp assembly 140, which clamps lens tube 124 at any convenient position along its length, thereby supporting the weight of borescope 120 and determining its position and orientation. BPA 138 is itself supported by a structure which is attached to enclosure 110 or to some other structure which is fixed in position with respect to object 100. This support structure is not part of the present invention.
BPA 138 is shown in more detail in Figure 5. Lens tube 124 has been removed from clamp 140 in this view for clarity. Clamp 140 is composed of a lower V-block 142, an upper V-block 144, a hinge 148, and a clamping
screw 150. The upper V-block is lined with a layer of resilient material 146, in order that the clamping pressure on the lens tube 124 can be evenly distributed over a substantial length of the tube.
Lower V-block 142 is attached to moving table 184 of a translation stage or slide table 180. Translation stage 180 is a standard component commercially available from several vendors, and it provides for a smooth motion of moving table 184 which is precisely constrained to a straight line. Translation stage 180 consists of moving table 184 and a fixed base 182, connected by crossed roller bearing slides 186. Fixed base 182 is attached to a BPA baseplate 162.
The bearings in translation stage 180 could also be either ball bearings or a dovetail slide. Such stages are also commercially available, and are generally considered to be less precise than those using crossed roller bearings, though they do have advantages, including lower cost. Translation stage 180 could also be an air bearing stage, which may offer even more motion accuracy than does the crossed roller bearing version, although at a considerable increase in system cost and complexity.
Also attached to BPA baseplate 162 is a micrometer mounting block 166. Mounting block 166 supports a micrometer 168. Micrometer 168 has an extension shaft 170, a rotating drum 178, and a distance scale 172. As drum 178 is rotated, a precision screw inside the micrometer rotates inside a precision nut, thus changing the distance between the end of extension shaft 170 and mounting block 166. Of course, micrometer 168 could be a digital unit, rather than the traditional analog unit shown.
Micrometer extension shaft 170 is connected to an actuator arm 174 through a bushing 176. Actuator arm 174 is mounted to moving table 184. Bushing 176 allows for a slight amount of non-parallel motion between micrometer extension shaft 170 and moving table 184, at the cost of allowing some backlash in the relative motions of table 184 and shaft 170. Micrometer scale 172 can be read to determine the position of moving table 184 within its range of motion.
Figure 6 shows a detailed view of bushing 176 and the interface between micrometer extension shaft 170 and actuator arm 174. Shaft 170 is captured within bushing 176 so that arm 174 will follow position changes of shaft 170 in either direction, with the previously mentioned small amount of backlash.

Figure 7 shows a block diagram of the electronic portion of this first embodiment. Figure 7 represents the electronics of a standard, known borescope video system except for the addition of a cursor controller 230 and a computer 228. In Figure 7, an illumination controller 200 is connected to the borescope through a fiber optic cable 206 as has previously been described. Video camera back 134 is connected to camera controller 212 through camera control cable 135 as has also been described. For the known system, the video signal out of the camera controller is connected to a video monitor 214 and, optionally, to a video recorder 216, through a video cable 137 as shown by the broken line in Figure 7. In this embodiment, the video signal from camera controller 212 is instead sent to cursor controller 230. The video signal as modified by cursor controller 230 is then supplied to video monitor 214 and to video recorder 216. Use of video recorder 216 is optional, though its use makes it possible for the user to repeat measurements or to make additional measurements at some later time, without access to the original measurement situation.
Figure 8 shows a view of the video monitor as seen by the user. On video screen 310 there is seen a circular image of the borescope field of view, which I call the apparent field of view, 312. Inside apparent field of view 312 is shown an image of the object under inspection 314. Superimposed on video screen 310, and hence on image 314, are a pair of cross-hairs, fiducial marks, or cursors, 316 (Cursor A) and 318 (Cursor B). These cursors can be moved to any portion of the video screen, and can be adjusted in length, brightness, and line type as required for best alignment with points of interest on image 314. Note that these cursors do not need to be cross-hairs; other easily discernible shapes could also be produced and be used as well.
The generation of video cursors is well known by those familiar with the art, so is not part of this invention.
The functions of cursor controller 230 are controlled by computer 228 (Figure 7). Computer 228 has a user interface that allows manipulation of the cursor positions as desired. It also provides a means for the user to indicate when a given cursor is aligned appropriately, so that an internal record of the cursor position can be made.
It provides means for the user to input numerical data as read from micrometer scale 172. In addition, computer
228 contains software which implements algorithms, to be described, which combine these numerical data appropriately to derive the true three-dimensional distance between points selected by the user. Finally, computer 228 provides a display means, whereby the distance(s) determined is (are) displayed to the user. Clearly, this display could be provided directly on video screen 310, a technology which is now well known, or it could be provided on the front panel or on a separate display screen of computer 228.
The system described by Konomura in US patent 5,575,754 is similar to mine in that it also allows one to move a borescope to various positions along a straight line path and to view and select points on the image of the object on a video screen. However, Konomura uses a cylinder sliding within a cylinder, driven by a rack and pinion, to do the positioning of the borescope. There are two basic problems with Konomura's positioning mechanism which are overcome in my system.
The first problem with Konomura's mechanism is that it is difficult and expensive to achieve an adequate accuracy of straight line travel with a cylinder sliding inside a cylinder as compared to the translation stage of my preferred embodiment, which is widely available at reasonable cost and which provides exceedingly accurate motion. The accuracy of the straight line motion directly affects the accuracy of the perspective measurement. The second problem is that Konomura's use of a rack and pinion to drive the position of the borescope means that that position will tend to slip if the borescope is not oriented exactly horizontal, due to the weight of the moving assembly. Konomura makes no provision for holding the position of the borescope. With my micrometer drive, or more generally, with a lead screw drive, the high mechanical advantage means that there would be no tendency for the position to slip even if the lead screw were supporting the full weight of the moving assembly.
There are fundamental reasons why the translation stage or slide table of my preferred embodiment provides a more accurate straight line motion than does a cylinder sliding within a cylinder. First, the translation stage uses rolling friction rather than the sliding friction of Konomura's system. This means that there is much less tendency to alternating stick and slip motion ("stiction"). Secondly, the translation stage makes use of the principle of averaging of mechanical errors. The ways and rollers of slides 186 of stage 180 are produced to very tight tolerances to begin with. Then, the ways and rollers are heavily preloaded so that, for instance, any rollers that are slightly larger than the average undergo an elastic deformation as they roll along the ways. Thus, the motion of moving table 184 is determined by an average of the positions that would be determined by errors in the ways and the individual rollers. One cannot use a large preload to average out errors in a cylinder sliding within a cylinder, because then the friction would become too high. This is especially true because of the large surface contact area involved. I mentioned above that a dovetail slide could also be used in my system. Such a slide can be preloaded to average motion errors without the friction becoming too high simply because the surface contact area is suitably small.

C. Operation of the First Embodiment

The view of the object shown in Figure 8 has the problem that it is a two-dimensional projection of a three-dimensional situation. Clearly the combination of cursor controller 230 and computer 228 is capable of making relative measurements of the apparent size of features on object image 314, as is well known. But, because there is no information on distance, and because the distance may vary from point to point in the image, there is no way to determine the true dimensions of object feature 102 from image 314.
The solution offered by my perspective measurement system is to obtain a second view of the object as shown in Figure 9. This second view is obtained by translating video borescope 120 a known distance along an accurate straight line path using BPA 138 described above.
The following discussion of the operation of my system assumes that it is the distance between two points on the object which is to be determined. As will become clear, it is straightforward to extend the measurement
process to as many points as desired. In the case of more than two points, the distances between all other points and one particular reference point could be determined by the process described below. Additionally, the distances between all of the points taken as pairs could be determined from the same data gathered during this process.
I will now outline a first mode of distance measurement operation. As was shown in Figure 8, to begin the process the borescope is aligned with the object to produce a view where the points of interest are located substantially on one side of the field of view. In that view, cursor A (316) and cursor B (318) are aligned with the two points of interest, respectively, as shown in Figure 8. When the cursors are aligned correctly, the user indicates this fact through the user interface of computer 228, and computer 228 records the locations of cursors A
and B. The user also then enters the position of moving table 184 as indicated on micrometer distance scale 172.
Using micrometer 168, the user then repositions the borescope to obtain a second view of object 100. As shown in Figure 9, the user selects a second position of the borescope to bring the points of interest to substantially
With the data entered into computer 228, (two cursor position ..l~asu~c,ll~"~ts for each point of interest and two bol~i,.,o~,c position lll~,d~Ul~lU~ 5) the user then comm inAc the computer to calculate and display the true three .~ - - --l distance between the points which were selected by the cursors. The computer co~ s the l~ea~ul~d 10 data with ci~libr~tion data to d~ e this distance in a software process to be described further below. The calibration data can be obtained either before the luLa~ululll.,..l or after the ~,-.,a~u-~ l, at the option of the user.
In the laner case, CO~ uL~I 228 will store the acquired data for future con p~ ion of the ~u~asu.~d distance. Also, in the case of pOst~ dsult;...."ll calibration, the user has the option of directing computer 228 to use preliminary or previously obtained calibration data to provide an app-ù~i-llate !t~ ion of the distance imm~Ai~ly after the 15 o~ ",enl, with the final distance (lrtL~ ~..;-.,.~ion to depend on a future calibration.
The ~ UI~,lll~ll~ process just outlined is that expected to be the one most generally useful and convenient.
However, there is no l.,~Uil~ ,nl to use two separate cursors to d.,l~,.-l inc the apparent positions of two points on the object, because one cursor would work perfectly well as long as the cursor position data for each point of interest are kept or~cu~i~ed properly. In addition, it may be that the distances between more than a single pair of 20 points is desired. In this case, there are just more data to keep track of and nothing fi - ~l~, Ir~ l has changed.
I now outline a second mode of distance l"ea~u.~ ,nl operation. Consider lllea~ulel,l~,lll of the distance between two points which are so far apart that both points cannot lie on ~ lly the same side of apparent field of view 312. Figures 10 and 11 show an example of this situation, where the three-AimPncionql distance behween the hwO ends of an elliptical feature is to be determined. Figures lOA and lOB show the h~o steps 25 involved in the de~e.",i"dLion of the three Aim~ncion~l location of the first end of the elliptical feature. Figures 1 lA and 1 lB show the two steps involved in the dele--"inc lion of the three Aim~n~ion~l location of the second end of the elliptical feature. In this mode of distance Ill~:~UICIIIC;III, a point of interest on the object is first brought to a location on one side of apparent field of view 312 and a cursor is aligned with it. The cursor position and llli~,l~Jll.~,t~. position data are then stored. The view is then changed to bring the point of interest to the other side 30 of apparent field of view 312, and the cursor position and micrometer position data are once again stored. This same process is carried out scquenti:llly for each point of interest on the object. After all of the cursor and ,loll.e~r position data are gathered, the computer is in~llu~led to calculate the desired distances between points.
Note that in this second mode of distance Ill~;dSUl~,lll~,nl op~r~tio~ the two points of interest could be located so far apart that they could not both be viewed at the same time. In this case the ~,,ea~ul~lu~nl still could be made.
35 That is, there is no l~uil~ that the dist;mce to be ~u~d~ul~d be ~o~ l~t~ly ~o~ !c within apparent field of view 312. The only limit is that two suitable views of each point be obt~inable within the tr ~nC~ )n range of BPA
138. This is a capability of my system that was not conceived of in the prior an.

CA 02263~30 1999-02-16 WO 98/07001 PCT~US97/15206 In detail, the process of making a Ill~,d~Ul~ t of the distance bet-veen two points, both of which are cont~in~rl within a relatively small portion of apparent field of view 312 as shown in Figures 8 and 9, (I call this ,~el~sur~c"~ t mode 1) is made up of the following steps:
1. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating bures~o~.e 120 inside bGlt;~co~c clamp 140.
2. Borescope cla~np 140 is locked with rl~mpin~ screw 150 to secure the position and orientation of the ~,e~ope with respect to BPA 138.
3. Micrometer drum 178 is rotated to select a first view of the object, with both points of interest located lly on one side of apparent field of view 312, such as that shown in Figure 8. The alJ~Jlu~llld~
position of the rnicrometer as read from scale 172 is notcd.
4. Micrometer drum 178 is rotated to select a second view of the object, such as that shown in Figure 9. This step insures that a suitable view is, in fact, obl~ ,!e within the range of motion of micrometer 168, and that, for instance, the view is not blocked by intervening ûbjects.
5. Mi~,lu~ u~ drum 178 is then rotated back again to a~ o.~iludt~ the position selected for the first view.
At this point, the rotation of the micrometer is again reversed so that the llu.,lulll~lel is being rotated in the direction that is nc~,c;,~dly to move from lhe first view to the second view. After a sllffiri~nt reverse rotation to ensure that the backlash of bushing 176 has been taken up, the micrometer rotation is halted.
This is now the selected viewing position for the first view.
6. Cursors 316 and 318 are then aligned with the selected points on object image 314 using the user interface provided by cc. ~ 228.
7. When each cursor is aligned correctly, CO~ ut~,~ 228 is comm~t~ d tO store the cursor poci~i- The cursors can be aligned and the positions stored either sequentially, or ' 'y, at the option of the user.
8. The user reads micrometer scale 172 and enters the reading into the Cûlll,~ut~l with the user interface provided.
9. Micrometer drum 178 is now carefully rotated in the direction nc~c~dly to move from the position of the first view to the position of the second view. This rotation stops when the user judges the second view to be ~ r ~uy for the purposes of the ~ ..l desired, such as that shown in Figure 9.
10. The user repeats steps 6, 7, and 8.
30 11. The user r ~ ' the ~~ , to calculate and display the true three-dim~nc~sm~l distance between the points selected by the cursors in steps 6 and 10. If desired~ the CO~ u~l can be con~m~r(1~d to also display the absolute positions of each of the two points. These absolute positions are defined in a cool.li~ t~, sy~stem to be d- ~(,. il ed below.

In detail, the process of making a ~ a~u~ ,nt of the distance between two points, when they cannot both be c~ d within a relatively small portion of the apparent field of view 312 as shown in Figures 10 and 11, (I call this mea~u,~ ~t mode 2) is made up of the following steps:

CA 02263~30 1999-02-16 W O98/07001 PCTrUS97/lS206 1. Computer 228 is instructed to comm?n~ cursor controller 230 to produce a single cursor. While it is not ab~olul~ly ncce~sa.y to use a single cursor, I believe that the use of a single cursor helps avoid ~ n~c~ r co.~r. ~:o.~ on the part of the user.
2. The user adjusts micrometer 168 to ~",.o.~ilately the midpoint of its range by rotating drum 178.
3. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating bU-~UIJC 120 inside bo,.,~cupc clamp 140. The two points of interest are identifie~1, and the bo,~sco~c is po~ ition.~d so that the center of apparent field of view 312 is located apl,.uxllllut~lr cqui-lic~nt between the two points of interest.
4. Borescope clamp 140 is locked with clamping screw 150 to secure the position and o-ienl~tio., of the bo~o~,e with respect to ~PA 138.
5. ~ ulll~,t~,~ drum 178 is rotated to select a first view of the first point of interest. The first view is selected so that the point of interest is located ~ t~ ~lly on one side of apparent field of view 312, such as that shown in Figure lOA. The a~p.u~illldle position of the lld~lulllel~l as read from scale 172 is noted.
156. Mh,lu.. ~,h. drum 178 is rotated to select a second view of the first point. The second view is selected so that the point of interest is located 5~ lly on the other side of apparent field of view 312 from where it was in the first view, such as that shown in Figure IOB. This step insures that a suitable view is, in fact, ol,~; ~"e within the range of motion of micrometer 168, and that, for instance, the view is not blocked by i.lt~ ning objects.
7. Steps 5 and 6 are repeated for the second point of interest, as depicted in Figures 11A and 11B. This step ensures that suitable views are, in fact, obtainable for the second point of interest with the borescope alignment chosen in step 3.
8. Micrometer drum 178 is then rotated to approximately the position selected for the first view of the first point of interest (Step 5). At this point, the user makes sure that the micrometer is being rotated in the same direction that is necessary to move from the first view to the second view of the first point of interest. After a sufficient rotation to ensure that the backlash of bushing 176 has been taken up, the micrometer rotation is halted. This is now the selected position for the first view of the first point of interest.
9. The cursor is then aligned with the first point of interest on object image 314 using the user interface provided by computer 228.
10. When the cursor is aligned correctly, computer 228 is commanded to store the cursor position.
11. The user reads micrometer scale 172 and enters the reading into the computer with the user interface provided.
12. Micrometer drum 178 is now carefully rotated in the direction necessary to move from the position of the first view to the position of the second view. This rotation stops when the user judges the second view to be satisfactory for the purposes of the measurement desired.
13. The user repeats steps 9, 10, and 11.
14. Micrometer drum 178 is rotated to obtain the first view of the second point of interest, which was selected during step 7. The user repeats step 8 for this first view of the second point of interest.

15. The user repeats steps 9 to 13 for the second point of interest.
16. The user commands the computer to calculate and display the true three-dimensional distance between the points. If desired, the computer can be commanded to also display the absolute positions of each of the two points, in the coordinate system to be defined below.

I have specified that the user should position the points of interest first near one edge of apparent field of view 312, then near the other edge, during the measurement. The reason is that analysis of the errors in the measurement shows that there is, in general, an optimum perspective baseline to be used, and that this optimum baseline differs for each individual measurement. The primary requirement on the perspective baseline is that it be chosen to be proportional to the range of the object from the camera. A secondary, and less important, requirement is that the perspective baseline be chosen to correspond to an optimum measurement viewing angle of the borescope being used. While the exact optimum measurement viewing angle depends on the detailed characteristics of the borescope, for most borescopes the optimum angle will be somewhere near the edge of the field of view, although not exactly at the edge. Use of the procedure as specified above automatically ensures that both of these requirements on the perspective baseline are being achieved for the particular measurement being made.
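The baseline requirements above reduce to a simple geometric rule of thumb: if a point at range z is viewed symmetrically at angle theta on either side of the camera axis, the baseline producing those two views grows in proportion to z. The sketch below illustrates only that proportionality; the 20 degree angle is an illustrative assumption, not a value given in this disclosure.

```python
import math

# Minimal sketch: baseline that places a point at range z at viewing angle
# +/- theta in the two views.  theta_deg = 20 is purely illustrative.
def baseline_for_range(z, theta_deg=20.0):
    """Baseline d giving symmetric views at +/- theta for a point at range z."""
    return 2.0 * z * math.tan(math.radians(theta_deg))

# Doubling the range doubles the optimum baseline, as the text requires.
print(baseline_for_range(200.0) / baseline_for_range(100.0))
```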
It should be clear that the measurement process and associated hardware I have defined could be used with borescopes other than video borescope 120 as I have described it. For instance, the video borescope could be replaced with a tiny video camera and lens located at the distal end of a rod or tube without changing this system of measurement at all. In such an "electronic borescope" there would be no need of a lens train to conduct the image from the distal end to the proximal end. While flexible electronic endoscopes built this way are currently available, I am not aware of a rigid borescope like this. However, when one considers that optical components keep getting more expensive while solid state imagers keep getting less expensive, and that the resolution of solid state imagers keeps increasing, it seems likely that electronic borescopes will be used at some future time, especially in the longer lengths, where the optical performance of ordinary borescopes is degraded.
(I will later describe an electronic measurement borescope; here I am speaking of an electronic borescope that contains no inherent measurement capability.) This system could also be used with a visual borescope, that is, one with no video at all, requiring only that the borescope eyepiece contains an adjustable fiducial mark with a position readout (a device commonly called a "filar micrometer"). Such an embodiment of the system, while feasible, would have the strong disadvantage of requiring the manual transcription of fiducial position data, which would be a source of errors. It also would have the disadvantage of requiring the user to perform a delicate and precise task, namely accurately aligning the fiducial mark with a selected point on the image of the object, while under the physical stress of looking through the borescope. (In general the borescope would be located at a position awkward for the user.) And, of course, such a visual measurement borescope would not be a standard commercial item, unlike the video borescope I have described.
It is also clear that the video system could be a digital video system as well as the analog system I have described. That is, the video signal could be digitized into discrete "pixels" with a video "frame grabber" and all video processing could be done digitally. The prior art system discussed above is implemented with digital video. Such systems are more expensive than my simple analog system, and there is another disadvantage that is more subtle, but is important.
A detailed analysis shows that the perspective measurement process is much more sensitive to errors made in locating the image point in the direction of the perspective displacement than in the direction perpendicular to the perspective displacement. This means that for best measurement accuracy, one wants to arrange the system so that the smallest feasible errors are made along the direction of the perspective displacement as projected onto the image plane. For standard video systems there is a difference in resolution between the horizontal and vertical video directions, with the horizontal direction having the higher resolution. This higher resolution applies not only to the video sensor itself, but also to the cursor position resolution. In a digital system, the horizontal cursor resolution is limited by the number of horizontal pixels in each line of video, which is typically about 512, or certainly no more than 1024 for a standard system. In an analog system, the horizontal cursor resolution is not limited to any particular value, since it is a matter of timing. It is straightforward to build an analog cursor positioning system which provides a cursor resolution of nearly 4000 positions across the video field. This higher horizontal cursor resolution available to my analog system is valuable in minimizing the error in the measurement.
As I previously mentioned, the prior art perspective measurement assumes that the viewing camera optical axis is oriented perpendicular to the perspective displacement, that is, along the z axis in Figure 2. It also assumes that the horizontal and vertical axes of the camera are oriented along the x and y directions in that Figure.
Clearly, in view of Figures 1 and 4, these assumptions are not adequate if one wants to use any substantially side-looking borescope without any specific alignment between the optical axis and the centerline of the borescope, or without any specific rotational orientation of video camera back 134 with respect to field of view 122.
Because of the higher available resolution in the horizontal video direction, one wants to arrange things so that the perspective displacement is viewed along the horizontal video direction. Thus, in order to achieve the most precise measurements possible with my system, the user should prepare the borescope before making measurements by rotating video camera back 134 about the axis of borescope 120 so that the horizontal video direction of camera back 134 is approximately aligned to the plane in which the optical axis of field of view 122 lies. (This assumes that there is no additional rotation of the image about the optical axis inside the borescope. If there is such an additional rotation, then one rotates the camera back to align the horizontal video direction with the projected direction of the perspective displacement as seen at the position of the video sensor.) This alignment will ensure that measurements are made with the smallest feasible random error. But, in order to obtain the random error reducing properties of this alignment, it is not necessary that it be very accurate. Thus, this preparatory alignment is not a formal part of the measurement procedure, nor of the calibration of the system, which is discussed later. In any case, whether this preparatory alignment is performed or not, my calibration determines the actual alignment of the camera, and my data processing procedure takes that alignment correctly into account in the measurement.
In the measurement processes that were described above, the experimental data obtained are four image position coordinates (x_im1, y_im1, x_im2, y_im2) for each object point of interest and the reading of the micrometer at each viewing position. I now explain how to combine these measured quantities, together with calibration data, in an optimum way to determine the distance between the two points of interest.
Figure 12 shows a generalized perspective measurement situation. Here, two viewing coordinate systems are set up, each of which is determined by the x and y axes of the camera focal plane, and their mutual perpendicular z = x × y.
In Figure 12 a first coordinate system has its origin at the first observation point, P1, and a second coordinate system has its origin at the second observation point, P2. Because there may be a rotation of the camera in moving between P1 and P2, the coordinate axes at P1 and P2 are not parallel, in general. These coordinate systems are denoted by the subscripts 1 and 2. That is, the P1 coordinates of a point are expressed as (x1, y1, z1) while the coordinates of the same point, as expressed in the P2 system, are (x2, y2, z2). The P2 coordinate system has its origin at d in the P1 system.
To accomplish the perspective measurement, the arbitrary point Pi is viewed first in the P1 coordinate system, then in the P2 coordinate system.
Because in this first embodiment I use a translation stage which provides a high degree of precision in the motion of the camera, I assume for now that there is no rotation of the camera in the translation between P1 and P2. In this case, the coordinate axes of the two systems in Figure 12 are parallel. The partial generalization here is that the perspective displacement between P1 and P2, d, can be at any arbitrary orientation with respect to the viewing coordinate axes.
In the discussion of the prior art above, the following relationships between the camera image plane coordinates and the corresponding object point coordinates were determined:
x_im = i x / z ;  y_im = i y / z        (9)

where i is the distance from the nodal point of the optical system to the image plane. Similar equations hold for the observations at both camera positions. These image point data can be written in vector form as:
r_im = ( x_im, y_im, i )^T = (i / z) ( x, y, z )^T = (i / z) r        (10)

or

r = (z / i) r_im = z a_v        (11)

The vector a_v, which I call the visual location vector, contains the image point location data for the measurement of the apparent location of a point Pi from a given viewing position. These data, of course, are assumed to have been corrected for camera distortion as was previously mentioned, and will be discussed in detail later. The distance, z, is unknown. When one measures the apparent locations of Pi from two viewing positions separated by a vector d, one has two vector equations:
r1 = r2 + d = z2 a_v2 + d        (12)
r1 = z1 a_v1

where r1 is the location of a point as expressed in the coordinate system which has its origin at P1, and r2 is the location of the same point as expressed in the coordinate system tied to P2.
Expressions (12) represent 6 equations in 4 unknowns. The four unknowns are the three components of r1 (or r2) and z2 (or z1).
Subtracting the two Equations (12), one obtains:
z1 a_v1 - z2 a_v2 = d        (13)

which can be written as a matrix equation:
[ a_v1  -a_v2 ] ( z1, z2 )^T = d        (14)

Expression (14) represents three equations in two unknowns. When there are more equations than unknowns, the system of equations is called over-determined, and there is in general no exact solution. However, because the coefficients of the equations are experimentally determined quantities that contain noise, one wouldn't want an exact solution, even if one happened to be available. What one wants is a solution that "best fits" the data in some sense. The standard criterion for "best" is that the sum of the squares of the deviations of the solution from the measured data is minimized. This is the so-called least squares solution or least squares estimate.
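The over-determined system (14) can be sketched numerically. The visual location vectors and displacement below are synthetic, noise-free values chosen for illustration, not data from this disclosure; with noise-free inputs the least-squares solution recovers the two ranges exactly.

```python
import numpy as np

# Equation (14): the 3x2 data matrix [a_v1  -a_v2] maps the unknown
# ranges (z1, z2) to the perspective displacement d.
a_v1 = np.array([0.05, -0.03, 1.0])    # point as seen from P1 (synthetic)
a_v2 = np.array([-0.05, -0.03, 1.0])   # same point as seen from P2
d = np.array([10.0, 0.0, 0.0])         # displacement of P2 in the P1 frame

A = np.column_stack([a_v1, -a_v2])     # data matrix of Equation (14)
z, *_ = np.linalg.lstsq(A, d, rcond=None)   # least-squares ranges z1, z2
print(np.round(z, 6))
```

`np.linalg.lstsq` minimizes the sum of squared residuals, which is exactly the criterion stated above; Equation (15) expresses the same solution through the left pseudo-inverse.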
The least squares solution of the over-determined system of Equations (14) can be simply expressed by introducing the left pseudo-inverse of the data matrix:
( z1, z2 )^T = [ a_v1  -a_v2 ]^LI d        (15)

Adding the two Equations (12), one obtains:
2 r1 - d = [ a_v1  a_v2 ] ( z1, z2 )^T        (16)

Substituting (15) into (16):
r1 = (1/2) [ [ a_v1  a_v2 ] [ a_v1  -a_v2 ]^LI + I3 ] d        (17)

where I3 is the identity matrix of dimension 3. Equation (17) gives a three-dimensional least squares estimate for the location of the point of interest, Pi, as expressed in the coordinate system at viewing position P1, for the visual location vectors a_v1 and a_v2 measured at viewing positions P1 and P2 respectively.
To aid in the comparison of expression (17) to the prior art result (6), introduce an auxiliary coordinate system into expression (17). Recall that (6) refers to a coordinate system which is defined such that the origin lies exactly half way between the two observation positions. Therefore, define:
r_m = r1 - (1/2) d        (18)

Then:
r_m = (1/2) [ a_v1  a_v2 ] [ a_v1  -a_v2 ]^LI d        (19)

This is the simple, general expression for the location of a point of interest, given experimentally determined apparent positions, when the perspective displacement d is oriented in some arbitrary direction. Expression (19) is correct and complete as long as the motion of the camera between the two viewing positions is a pure translation.
An important conclusion from expression (19) is that the determination of the position of a point, r, from the measured data requires only the knowledge of the perspective displacement vector d, as expressed in the P1 coordinate system, and the image distance or effective focal length, i (from (11)). Of course, the image point position data incorporated in visual location vectors a_v1 and a_v2 must have been corrected for the distortion of the optical system before being used in (19), as was previously explained.
To compare (19) to the prior art result, note that the left pseudo-inverse of a matrix can be written as:
A^LI = (A^T A)^-1 A^T        (20)

and specify that d is directed along the x axis, as was assumed in the derivation of (6). When one also assumes that y_im1 = y_im2, one finds that (19) reduces to:
r_m = [ d / (x_im1 - x_im2) ] ( (x_im1 + x_im2) / 2 ,  y_im1 ,  i )^T        (21)

Clearly, (21) is identical to (6) for the case where y_im1 = y_im2. The optimum way to use the four measurements, according to the least squares criterion, is given by (19). It reduces to result (21) for the case where d is directed along the x axis and when the two images are located at the same y positions. If the measured y_im1 does not equal y_im2 but the difference is small, it can be shown that the least squares result is the same as using the average value of y_im1 and y_im2 in (21) in place of y_im1 (but only when d is directed along the x axis).
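The general result (19) and its reduction to the special case (21) can be checked numerically. `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse, which equals the left pseudo-inverse (20) for a full-column-rank matrix. All numerical values below are synthetic test values, not data from this disclosure.

```python
import numpy as np

def point_location_midframe(a_v1, a_v2, d):
    """Equation (19): r_m = (1/2) [a_v1 a_v2] [a_v1 -a_v2]^LI d,
    the least-squares point location in the frame halfway between
    the two viewing positions."""
    A = np.column_stack([a_v1, a_v2])
    B = np.column_stack([a_v1, -a_v2])
    return 0.5 * A @ np.linalg.pinv(B) @ d

# Special case of Equation (21): d along x, equal y image coordinates.
i = 20.0                              # effective focal length (illustrative)
x1, x2, y = 1.0, -1.0, -0.6           # image coordinates (illustrative)
d = 10.0                              # baseline along x
a_v1 = np.array([x1 / i, y / i, 1.0])
a_v2 = np.array([x2 / i, y / i, 1.0])

r_general = point_location_midframe(a_v1, a_v2, np.array([d, 0.0, 0.0]))
r_closed = (d / (x1 - x2)) * np.array([(x1 + x2) / 2, y, i])
print(np.allclose(r_general, r_closed))   # True
```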
Now consider measurement mode 1. Assume that two points of interest, A and B, are viewed from two camera positions, P1 and P2.
The distance between P1 and P2 is d, and is simply calculated from the experimental data as d = l2 - l1, where l1 and l2 are the micrometer readings at viewing positions P1 and P2 respectively. Considering now the determination of the location of either one of the points, one next corrects the measured image position data for distortion. As I discuss further in the calibration section, I use the term distortion to refer to any deviation of the image position from the position that it would have if the camera were perfect. This is a much more general definition than is often used, where the term refers only to a particular type of optical field aberration.
Of course, this distortion correcting step is performed only if the distortion is large enough to affect the accuracy of the measurement, but this will be the case when using any standard borescope or almost any other type of camera to perform the perspective measurement.
As is further described in the calibration section, one can write the image position coordinates as:
x_im = x̃_im - f_Dx( x̃_im, ỹ_im )        (22)
y_im = ỹ_im - f_Dy( x̃_im, ỹ_im )

where (x̃_im, ỹ_im) are the experimental measurements and (x_im, y_im) are the distortion corrected versions. The same equation applies to the data at both camera positions; that is, both x̃_im1 and x̃_im2 are subjected to the same correction function f_Dx, and both ỹ_im1 and ỹ_im2 are corrected with f_Dy. The distortion correction functions f_Dx and f_Dy are determined in a calibration process which is described in the calibration section. This calibration process is known in the art.
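The disclosure leaves the forms of f_Dx and f_Dy to the calibration section; as a sketch of how Equation (22) is applied, the block below substitutes a common radial-polynomial model. Both the model form and the coefficient k1 are hypothetical stand-ins, not values from this disclosure.

```python
# Sketch of Equation (22) with an assumed radial distortion model.
# f_D(x, y) = k1 * r^2 * (x or y); k1 is purely illustrative.
def correct_distortion(x_t, y_t, k1=1.0e-7):
    """Return distortion-corrected (x_im, y_im) from measured (x_t, y_t)."""
    r2 = x_t ** 2 + y_t ** 2          # squared radial distance from the axis
    f_dx = k1 * r2 * x_t              # assumed form of f_Dx
    f_dy = k1 * r2 * y_t              # assumed form of f_Dy
    return x_t - f_dx, y_t - f_dy     # Equation (22)

x_im, y_im = correct_distortion(100.0, 50.0)
print(x_im, y_im)
```

The same function is applied to the measurements from both camera positions, as the text requires.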
Next, the data are scaled by the inverse of the effective focal length of the combined optical-video system.
That is, the data (x_im1, y_im1, x_im2, y_im2) are multiplied by the factor necessary to generate the equivalent true values of the tangent of the viewing angles:

tan(α_x1) = x_im1 / i ;  tan(α_y1) = y_im1 / i        (23)

and likewise for the other two measurements for this point on the object from position P2. The equivalent focal length, i, is preferably determined in the same calibration process as is the distortion, as will be described later in the calibration section.
Next, two visual location vectors are formed from the scaled, distortion corrected image position measurements. These vectors are:
a_v1 = ( tan(α_x1), tan(α_y1), 1 )^T  and  a_v2 = ( tan(α_x2), tan(α_y2), 1 )^T        (24)

The perspective displacement is formed by placing the perspective baseline (the measured distance between viewing positions P1 and P2) as the first element of a vector:
d_b = ( d, 0, 0 )^T        (25)

The perspective displacement is then transformed to the viewing coordinate system defined by the camera at P1 by multiplication of d_b by a pair of 3 × 3 rotation matrices R_z and R_y:
d_v1 = R_z R_y d_b        (26)

The multiplications in Equation (26) are standard matrix multiplications of, for instance, a 3 × 3 matrix with a 3 × 1 vector. Rotation matrices R_y and R_z describe the effects of a rotation of the coordinate system about the y axis and about the z axis respectively. They are each defined in a standard way as a function of a single rotation angle.
The definitions of the rotation matrices, and the calibration process for determination of the rotation angles, are given later. The alignment calibration process that I define there to determine these rotation angles is new.
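Since the formal definitions are deferred, the sketch below uses the common right-handed single-angle forms of R_y and R_z for orientation; the sign conventions and the angle values are assumptions for illustration, not the definitions given later in this disclosure.

```python
import numpy as np

# Standard single-angle rotation matrices, as used in Equation (26).
def R_y(theta):
    """Rotation about the y axis (common right-handed convention)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, 0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s, 0.0, c]])

def R_z(phi):
    """Rotation about the z axis (common right-handed convention)."""
    c, s = np.cos(phi), np.sin(phi)
    return np.array([[c, -s, 0.0],
                     [s, c, 0.0],
                     [0.0, 0.0, 1.0]])

# Equation (26): transform the baseline vector d_b = (d, 0, 0)^T into the
# camera frame at P1.  The 5 and 10 degree angles are illustrative only.
d_b = np.array([10.0, 0.0, 0.0])
d_v1 = R_z(np.radians(5.0)) @ R_y(np.radians(10.0)) @ d_b
print(np.isclose(np.linalg.norm(d_v1), 10.0))   # rotations preserve length
```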
The location of the point being determined is then calculated according to Equation (19) as:

r_m = ( x, y, z )^T = (1/2) [ a_v1  a_v2 ] [ a_v1  -a_v2 ]^LI d_v1        (27)

The process ending with the calculation expressed in Equation (27) is performed for the data obtained on points A and B in turn, and then Equation (7) is used to calculate the desired distance between points A and B.

Measurement mode 2 is depicted in Figure 13. Here there are up to a total of four viewing positions used.
The fields of view of the camera at each position are indicated by the solid lines emanating from the viewing position, while the camera optical axes are denoted by the dot-dash lines. Dashed lines indicate schematically the angles at which points A and B are viewed from each position.
Because of the accurate motion provided by translation stage 180, the viewing positions all lie along a straight line and the viewing coordinate systems are all parallel. Figure 13 is drawn as the projection of a three-dimensional situation onto the x, z plane of the camera. Thus, the viewing positions and the line along which they are drawn do not necessarily lie in the plane of the Figure, nor do the points of interest A and B.
Point of interest A is viewed from positions P1A and P2A with perspective baseline dA, while point B is viewed from P1B and P2B with perspective baseline dB. The experimental data obtained during the mode 2 measurement process are the four image point coordinates for each of the points A and B, and the four viewpoint positions along the camera motion axis l1A, l2A, l1B, and l2B. Note that two of the viewing positions could be coincident, so that a total of three different viewing positions would be used, and this mode would still be distinct from mode 1.
Vectors rA and rB are determined using the perspective baselines dA = l2A - l1A and dB = l2B - l1B as has just been described for measurement mode 1. The distance between the coordinate origins for the measurements of A and B is then calculated as:
dAB = (1/2) ( l1A + l2A - l1B - l2B )        (28)

Next, vector dAB in the camera coordinate system is calculated as:

d_AB = R_z R_y ( dAB, 0, 0 )^T        (29)

Finally, the desired distance between points A and B, r, is calculated as:
r = |r| = | d_AB + rA - rB |        (30)

where the vertical lines indicate the magnitude (length) of a vector.
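The mode 2 computation of Equations (28) through (30) can be sketched end to end on synthetic data. The block is self-contained: it re-implements Equation (19) for the per-point locations, and it assumes a camera aligned with its motion axis so that the rotation matrices of Equation (29) reduce to the identity. All point coordinates and stage readings are illustrative values, not data from this disclosure.

```python
import numpy as np

def locate(a_v1, a_v2, d):
    """Equation (19): least-squares point location in the mid-baseline frame."""
    A = np.column_stack([a_v1, a_v2])
    B = np.column_stack([a_v1, -a_v2])
    return 0.5 * A @ np.linalg.pinv(B) @ np.array([d, 0.0, 0.0])

def view(p, l):
    """Visual location vector of point p seen from stage position l
    (camera assumed aligned with the motion axis)."""
    q = p - np.array([l, 0.0, 0.0])
    return q / q[2]

# Synthetic points A, B and the four stage readings l1A, l2A, l1B, l2B.
A_pt = np.array([20.0, 5.0, 200.0])
B_pt = np.array([-15.0, -4.0, 180.0])
l1A, l2A, l1B, l2B = 0.0, 10.0, 40.0, 50.0

rA = locate(view(A_pt, l1A), view(A_pt, l2A), l2A - l1A)   # baseline dA
rB = locate(view(B_pt, l1B), view(B_pt, l2B), l2B - l1B)   # baseline dB
dAB = np.array([0.5 * (l1A + l2A - l1B - l2B), 0.0, 0.0])  # Equation (28)
r = np.linalg.norm(dAB + rA - rB)                          # Equation (30)
print(np.isclose(r, np.linalg.norm(A_pt - B_pt)))   # True
```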
Measurement mode 2 can have a lower random error than does measurement mode 1 because the points of interest can be brought to the optimum apparent angular position in each of the views, whereas the apparent angular position chosen for the points in measurement mode 1 is necessarily a compromise.
In contrast to the prior art, my data processing procedure correctly takes into account the general geometry of the perspective measurement. Because of this, it is possible to define a complete set of parameters which can be calibrated in order to obtain an accurate measurement no matter what measurement geometry is used. Thus, it is only my measurement system which can make an accurate measurement with a standard, off the shelf, video borescope. In addition, I make use of all of the available measured data in an optimum way, to produce a measurement with lower error than otherwise would be provided. Finally, my system provides a new measurement mode (mode 2) which allows one to measure objects which are too large to be measured with prior art systems, and which provides the lowest feasible random measurement error.

D. Description of a Second Embodiment

Figure 14 shows a block diagram of the electronic portion of a second embodiment of my system. The new elements added as compared to the first embodiment are a position transducer 360, a motion actuator 410, a motion controller 452 and a position measurement block 470. The latter two blocks are combined with cursor controller 230 and computer 228 into a block called the system controller 450. Position transducer 360 is connected to position measurement block 470 by a position transducer cable 366. Motion actuator 410 is connected to motion controller 452 with an actuator cable assembly 428.
This second embodiment of the electronics could be built with the capability of completely automatic positioning of borescope 120. That is, borescope 120 could be positioned anywhere within the range of travel of translation stage 180 (Figure 5) under control of computer 228 upon operator command. In this case, the user would only have to command some initial position for translation stage 180, then align and clamp borescope 120 appropriately as described above for the operation of the first embodiment, and then never have to touch any of the mechanical hardware again during the measurement process. The two viewing positions P1 and P2, as described previously, would be selected by the user by moving stage 180 under computer control.
Such automatic positioning of borescope 120 could be closed-loop positioning. That is, the computer would position the borescope by moving the borescope until a particular desired position was indicated by the combination of position transducer 360 and position measurement block 470.
In fact, the same commercial vendors who supply translation stages often supply complete positioning systems which combine a translation stage with the motion control and position measurement blocks shown in Figure 14.
Most often these systems use an actuator comprising an electric motor, either a dc motor or a stepping motor, driving a precision lead screw. That is, the actuator is essentially a motorized micrometer. Clearly, there are a number of different actuators and position transducers that can be used in any such system.
What I consider the best mode for implementing this second embodiment of the invention is somewhat different than the system I have just described. I believe that a system can be built at lower cost and be at least as convenient to operate if it is built as I will now describe. Since the primary use of the mechanical subsystem is to move borescope 120 (Figure 4) back and forth between two positions, this embodiment is directed toward making that process simple and quick. Generally speaking, it takes a long time for a motor driven translation stage to move between positions spaced a significant distance apart.
The second embodiment of BPA 138 is shown in Figures 15 through 19. Figure 15 is a front view, Figure 16 is a top view, Figure 17 is a rear view, while Figures 18 and 19 are left and right side views respectively.
The same borescope clamp assembly 140 as was used in the first embodiment is also used in this second embodiment. As before, lens tube 124 has been removed from clamp 140 in these views for clarity. Clamp 140 is comprised of a lower V-block 142, an upper V-block 144, a hinge 148, and a clamping screw 150. The upper V-block is lined with a layer of resilient material 146, for the same reason given in the description of the first embodiment. Also, just as in the first embodiment, lower V-block 142 is attached to the moving table 184 of a translation stage or slide table 180. The translation stage consists of a moving table 184 and a fixed base 182, connected by crossed roller bearing slides 186. Fixed base 182 is attached to a BPA baseplate 162.
The differences between this second embodiment of BPA 138 and the first embodiment are contained in the methods by which moving table 184 is positioned and how its position is determined. In this second embodiment, an air cylinder 412 is mounted to an actuator mounting bracket 422 which is in turn mounted to baseplate 162. Air cylinder 412, which is shown best in Figure 18, has two air ports 420 and an extension rod 418. Air hoses (not shown) are connected to ports 420 and are contained within actuator cable assembly 428 which was shown on the block diagram, Figure 14. The air hoses convey air pressure from motion controller 452 (Figure 14). Extension rod 418 is connected to an actuator attachment bracket 424 through an actuator attachment bushing 426. Bracket 424 is fastened to moving table 184 as is best shown in Figures 16 and 17.
Attached to the upper side of actuator ~ rk,..~ bracket 424 is a dovetail slide 404. Mounted on dovetail slide 404, as best shown in Figures 16 and 18, is an adjusting nut bracket 394. Bracket 394 contains a fixed nut 396 which in turn contains an adjusting screw 398. Adjusting screw 398 has an adjusting screw knob 400 and an ~Ajllc~ine screw tip 402 disposed at opposite ends of its length. Bracket 394 also contains a bracket position 15 locking handle 406. Locking handle 406 is connected to a locking cam 407 mounted inside bracket 394. Locking cam 407 is shown only in Figure 17.
Dovetail slide 404 and adjusting nut bracket 394 and the items co~ d therein for n a s~ .ly known as the forward stop posi~ioncr 390. An exactly similar assembly called the reanvard stop poci~ioner 388 is mounted to the BPA baseplate behind l-~u~lation stage fixed base 182. Rearward stop pocit 388 is best shown in 20 Figures 16, 17 and 19.
D.~ e on the position of moving table 184, adjusting screw tip 402 of adjusting screw 398 of forward stop p~ tionrr 390 can contact end stop insert 393 of end stop 392 as best shown in Figures 16 and 18. Similarly, the rearward stop po~i~ionrr 388 is aligned so that the tip of its adjusting screw can contact the rear end of moving table 184, as can be best visualized from Figures 16 and 17.
In Figure 16 is shown a stop pin hole 440, the purpose of which will be explained below.
Although the overall length of BPA 138 could be made shorter if read head 364 were mounted to moving table 184 and scale body 362 were mounted to baseplate 162, I have chosen to mount the unit as shown because then cable 366 does not move with table 184. Either way will work, of course.

30 E. Operation of the Second F--.l.o.l;l~t As stated above, the di~ nces behveen this second cmho~limcn~ and the first embodiment relate to how the ~--,acu~e is moved and how the position of the bo.c~_upe is d, t~ inc~1.
The inrlllci~n of position LldnSdU~C;I 360 and position ~ a~ul~ -,l block 470 as shown in Figure 14 means - that the user of the instrument is no longer responsible for making position readings and tr~ncrribinE, them into 35 Cou",ulel 228. When the user indicates that the cursors are positioned as desired, as was des-,-il~ed in the operation of the first ~ o~ P~7 the ,~ , . will now ~n~om~ir;llly co--~ a camera position ~l~easu~,.uent from position mcasul~,l"~.,l block 470 and will ~n~onl~tir~lly store this datum.-CA 02263s30 1999-02-16 W O 98/07001 PCTrUS97/15206 Note that position t,~ du~e~ 360 need not be an absolute encoder of position. From Equation (28) (and the similar c"~lJf~iu.l for ~lleaau,-,-..~,ll mode 1, which is not a display equation) it is clear that the l~led~ul~....,nl depends only on the distance moved between viewing positions. A constant value can be added to the encoded position without ch,m~ing the ll~easulc..l~.~l in any way. Thus, position transducer 360 together with position IlI~,d:~Ul~ block 470 need only produce a position value that has an offset which is constant over the period of a S lll~,a~ul~ . This offset need not be the same from IneaSU~CilU.~nl to l~a~Ul~ This means that 1 360 can be what is called an incl~,.ll~.lLal distance encoder, and this is what will be A~crrihe~
As I will explain later with regard to other embodiments, if one wants to correct for errors in the camera motion, or if one wants to use a camera motion that is not constrained to a perfect straight line, then it is necessary to know the absolute position of the camera with respect to some fixed reference point. The distance encoder that I describe here has what is known as a "home" position capability. The home position allows one to use the incremental encoder as an absolute encoder when and if required.
Position transducer 360 contains a precision magnetic or optical pattern formed on a plate inside scale body 362. Read head 364 reads the pattern and thereby produces signals which change according to changes in relative position between read head 364 and scale body 362. The unit depicted here is sold by RSF Elektronik Ges.m.b.H.
of Tarsdorf, Austria, but similar units are available from Renishaw plc of the United Kingdom and Dr. Johannes Heidenhain GmbH of Germany. The unit shown is available in resolutions as small as 0.5 micrometer (µm), with guaranteed positioning accuracy as good as ±2 µm over a length of 300 millimeters. For the short length unit used in the BPA, one would expect the accuracy to be considerably better.
Position measurement block 470 interpolates the signals from read head 364 to determine changes in the position of read head 364 with respect to the scale inside scale body 362. Position measurement block 470 formats the position data into a form that is understood by computer 228. If the home position capability has not been used, then measurement block 470 will report a position relative to the position that the transducer assembly was in when the power was turned on. If the home capability has been used, then the position will be reported relative to the fixed home position. Whether the home position capability is used or not is a design decision which depends on whether motion errors are to be corrected. The method of correction for errors in the motion is discussed at length below in a sub-section entitled "Operation of Embodiments Using Arbitrary Camera Motion".
The existence of motion actuator 410 and motion controller 452 means that the user is not required to manually move the borescope between P1 and P2. This has the advantage of eliminating any chance that the user will accidentally misalign BPA 138, hence borescope 120, during the measurement process. It also has the advantage of eliminating the tedious rotation of the micrometer barrel 178 which is required during operation of the first embodiment.

Air cylinder 412 is a double action unit, which means that air pressure applied to one of the ports 420 will extend rod 418 while air pressure applied to the other port will retract rod 418. When a differential pressure is applied between the ports, rod 418 will move until it is stopped by some mechanical means. If there is no other mechanical stop, rod 418 simply moves to either its fully extended or fully retracted position.
Through the action of bushing 426 and attachment bracket 424, moving table 184 is constrained to move with actuation rod 418. The extent of motion of table 184 is controlled by the stops created by the combination of forward stop positioner 390 and end stop 392 and the combination of rearward stop positioner 388 and the rear end of moving table 184. For instance, in the forward motion direction, the limit to the motion of table 184 is determined when adjusting screw tip 402 of adjusting screw 398 contacts insert 393 of end stop 392. Since the limit positions of table 184 are determined by these mechanical stops, backlash in bushing 426 does not affect the accuracy or repeatability of this positioning. Thus, viewing positions P1 and P2 are solely determined by the position of these mechanical limit stops. The measurement of these positions, however, is subject to any backlash contained within position transducer 360, or within the attachment of the transducer to the remainder of the structure.
Considering now the forward stop positioner 390, operating handle 406 rotates cam 407 to either produce or remove a locking force due to contact between cam 407 and dovetail slide 404. Thus, when unlocked, bracket 394 can be slid back and forth along dovetail slide 404 until adjusting screw tip 402 is located to give the desired stop position. Handle 406 is then rotated to force cam 407 against slide 404 to lock bracket 394 in place. Adjusting screw 398 can then be rotated in fixed nut 396 with handle 400 to produce a fine adjustment of the stop position.
Once the positions of adjusting screws 398 of forward stop positioner 390 and rearward stop positioner 388 are set as appropriate for the desired perspective viewing positions P1 and P2, moving back and forth between these positions is a simple matter of reversing the differential pressure across air cylinder 412. Depending on the length of the air hoses which connect cylinder 412 to motion controller 452, the characteristics of air cylinder 412, and the mass of the assembly being supported by moving table 184, it may be necessary to connect a motion damper or shock absorber (not shown) between moving table 184 and BPA baseplate 162. This would be required if it is not possible to control the air pressure change to produce a smooth motion of table 184 between the stops at P1 and P2.
Stop pin hole 440 is used as follows. At the beginning of the measurement process, it makes sense to start with moving table 184 centered in its range of travel. Therefore, a stop pin (not shown) is inserted into hole 440 and computer 228 is instructed to cause motion controller 452 to apply air pressure to cylinder 412 to produce an actuation force which will cause moving table 184 to move backwards until it is stopped by the stop pin. At this point the user is ready to begin the measurement set up process.
If the home positioning capability of transducer 360 is to be used, after the instrument is powered up, but before measurements are made, computer 228 is instructed by the user to find the home position. Computer 228 then commands motion controller 452 to move actuator 410 back and forth over its full range of motion.
Computer 228 also commands position measurement block 470 to simultaneously look for the home position signature in the output signal from transducer 360. Once the home position is found, the offset of the position output data from position measurement block 470 is set so that a predetermined value corresponds to the fixed home position.
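Once the home signature has been detected during the sweep, the homing step reduces to choosing a single offset. A minimal sketch (the function names and the value 0.0 are mine, not part of the apparatus):

```python
# Sketch of the home-position logic just described: after the sweep
# detects the home signature, one offset is chosen so that the fixed
# home position reports a predetermined value.

HOME_VALUE = 0.0  # predetermined value assigned to the home position

def home_offset(raw_at_home):
    """Offset to add to all subsequent raw readings."""
    return HOME_VALUE - raw_at_home

def reported(raw, offset):
    """Position as reported to the computer after homing."""
    return raw + offset
```

After homing, positions are reported relative to the fixed home position rather than to the power-on position.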
In detail, the process of making a measurement of the distance between two points, both of which are contained within a relatively small portion of apparent field of view 312 as shown in Figures 8 and 9 (that is, measurement mode 1), is made up of the following steps in this second embodiment:
1. Translation stage 180 is centered in its range of travel by use of a stop pin as described above.
2. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating borescope 120 inside borescope clamp 140.
3. Borescope clamp 140 is locked with clamping screw 150 to secure the position and orientation of the borescope with respect to BPA 138.

4. Computer 228 is instructed to remove any differential air pressure across air cylinder 412. The stop pin is removed from hole 440. Moving table 184 is now free to move. The user moves table 184 rearward until the view on video screen 310 is approximately as shown in either Figure 8 or Figure 9.
5. Rearward stop positioner 388 is positioned so that the adjusting screw tip contacts the rear end surface of moving table 184. Stop positioner 388 is then locked at this position.
6. The user moves table 184 forward until the view on video screen 310 is approximately as shown in the opposite view of Figures 8 and 9. That is, if in step 4 the view in Figure 9 was attained, then in this step the view in Figure 8 is to be obtained.
7. Forward stop positioner 390 is adjusted so that the adjusting screw tip contacts end stop insert 393, and is then locked into position.
8. The computer is instructed to apply air pressure to move table 184 rearward. The view on video screen 310 is inspected and any fine adjustments to the position of the borescope are made by rotating the adjustment screw of rear stop positioner 388. This is position P2.
9. The computer is instructed to apply air pressure to move table 184 forward. The view on video screen 310 is inspected and any fine adjustments to the position of the borescope are made by rotating the screw of forward stop positioner 390. This is position P1.
10. Cursors 316 and 318 are then aligned with the selected points on object image 314 using the user interface provided by computer 228.
11. When each cursor is aligned correctly, computer 228 is commanded to store the cursor positions. The cursors can be aligned and the positions stored either sequentially or simultaneously, at the option of the user.
12. Computer 228 automatically takes a position reading from position measurement block 470. Computer 228 records this position reading as the position of P1.
13. Computer 228 is instructed to apply air pressure to cylinder 412 to move table 184 rearward. Steps 10 to 12 are repeated for P2.
14. The user commands the computer to calculate and display the true three-dimensional distance between the points selected by the cursors in steps 10 and 13. If desired, the computer can be commanded to also display the absolute positions of each of the two points in the coordinate system that was defined in the operation of the first embodiment.
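The automated portion of this sequence (steps 9 through 14) can be sketched as follows. The hardware callbacks and the triangulation routine `point_from` are hypothetical stand-ins for motion controller 452, position measurement block 470, and the mode-1 computation described for the first embodiment:

```python
# Sketch of steps 9-14 of the mode-1 procedure above.  All callbacks are
# hypothetical stand-ins; only the ordering mirrors the text.

def measure_mode_1(move_forward, move_rearward, read_position,
                   get_cursor_pair, point_from):
    """Return the 3-D distance between the two cursor-selected points."""
    move_forward()                    # step 9: table against forward stop, P1
    cursors_p1 = get_cursor_pair()    # steps 10-11: align cursors, store
    p1 = read_position()              # step 12: encoder reading at P1
    move_rearward()                   # step 13: table against rear stop, P2
    cursors_p2 = get_cursor_pair()    #          cursors re-aligned at P2
    p2 = read_position()
    d = p2 - p1                       # only the displacement enters the result
    a = point_from(cursors_p1[0], cursors_p2[0], d)  # 3-D point, first cursor
    b = point_from(cursors_p1[1], cursors_p2[1], d)  # 3-D point, second cursor
    return sum((ai - bi) ** 2 for ai, bi in zip(a, b)) ** 0.5
```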
The mode 2 measurement has a detailed procedure which is modified in a similar manner as compared to the detailed procedure given for the first embodiment.
In this second embodiment, the data acquired and the processing of those data are identical to that described for the first embodiment. If motion errors are to be corrected, the data processing is slightly more involved, and will be discussed below in the section entitled "Operation of Embodiments Using Arbitrary Camera Motion".

F. Description of a Third Embodiment

The mechanical portion of a third embodiment of my invention is shown in an overall perspective view in Figure 20 and in detailed views in Figures 21 through 27. This embodiment implements a new type of rigid borescope which I call an electronic measurement borescope (EMB). Figure 28 is an electronic functional block diagram of the EMB system.
In Figure 20 electronic measurement borescope 500 has a borescope probe tube 512 which itself contains an elongated viewing port 518 at the distal end. At the proximal end of probe tube 512 is located a fiber optic connector 128. Tube 512 is attached to a proximal housing 510, to which is mounted an electronic connector 502.
An electronic cable (not shown) connects EMB 500 to a system controller 450 as shown in Figure 28.
Figures 21, 22, and 23 are respectively a plan view and left and right side elevation views of the distal end of electronic measurement borescope 500. In these three views borescope probe tube 512 has been sectioned to allow viewing of the internal components.
In Figures 21 through 23 a miniature video camera 224 is shown mounted to a moving table 184 of a translation stage 180. Camera 224 is made up of a solid state imager 220 and an objective lens 121. Prism 123 redirects the field of view of camera 224 to the side so that the angle between the optical axis of the camera and the translation direction is approximately 90 degrees, or some other substantially side-looking angle as required for the desired application. Solid state imager 220 transmits and receives signals through imager cable 222.
In these figures, the hardware that mounts the lens and the prism has been omitted for clarity. In addition, schematic optical rays are shown in Figures 21 and 22 purely as an aid to understanding. The optical system shown for camera 224 is chosen for illustration purposes, and is not meant to represent the optics that would actually be used in electronic borescope 500. Such optical systems are well known in the art, and are not part of this invention.
Fixed base 182 of translation stage 180 is fastened to distal baseplate 514, which in turn is fastened to borescope probe tube 512.
The position of moving table 184 is controlled by a positioning cable 482, which is wrapped around a positioning pulley 484. Positioning cable 482 is clamped to moving table 184 through a distal motion clamp 486.
Pulley 484 is mounted to baseplate 514 through a pulley mounting shaft 485.
Motion clamp 486 supports a distal fiber clamp 492, which in turn supports an illumination fiber bundle 127.
Fiber bundle 127 is also supported and attached to moving table 184 by a fiber end clamp 494. Fiber end clamp 494 has internal provision for expanding the bundle of fibers at the end to form fiber output surface 129 (shown in Figure 23).
Fiber bundle 127 and imager cable 222 are both supported by two distal cable stabilizer clamps 490, which are in turn clamped to and supported by positioning cable 482. The more distal cable stabilizer clamp 490 is captured inside a distal stabilizer slot 491, which is itself attached to baseplate 514.
Also mounted to distal baseplate 514 is a transducer mounting bracket 367, which in turn supports a linear position transducer 360. Transducer 360 is attached to moving table 184 through a transducer operating rod 361 and a transducer attachment bracket 369. Position transducer cable 366 extends from the rear of the transducer
towards the proximal end of the borescope. Transducer cable 366 is clamped in transducer cable clamp 371 so that tension on cable 366 is not transferred to transducer 360. Clamp 371 is mounted to baseplate 514.
Figures 24 through 27 are respectively a plan view, a left side elevation view, a right side elevation view, and a proximal end elevation view of the proximal end of electronic measurement borescope 500. In these views proximal housing 510 has been sectioned to allow viewing of the internal components. In Figure 24, borescope probe tube 512 has been sectioned as well, for the same reason.
In Figure 24, imager cable 222, transducer cable 366, and actuator cable 411 have been shown cut short for clarity. In Figure 27, the same cables have been eliminated for the same reason.

In Figures 24 through 27 the proximal end of positioning cable 482 is wrapped around a positioning pulley 484. Pulley 484 is supported by a mounting shaft 485, which in turn is mounted to proximal baseplate 516 through a pulley support bracket 487.
The proximal end of fiber bundle 127 is attached to illumination fiber optic connector 128. The proximal ends of imager cable 222 and position transducer cable 366 are attached to electronic connector 502. Connector 502 is supported by proximal housing 510. Housing 510 also supports borescope probe tube 512 through bulkhead 498.
Cables 222 and 366 are clamped in bulkhead 498. Cable 366 is stretched taut between the distal and proximal ends of probe tube 512 before being clamped at both ends, while cable 222 is left slack as shown.
Clamped to positioning cable 482 is a proximal motion clamp 488. Clamp 488 is supported by a proximal translation stage 496, which is in turn mounted to proximal baseplate 516 through a proximal stage support bracket 499.
The position of proximal translation stage 496 is controlled by the action of actuator 410 through actuator attachment bracket 424. Bracket 424 is attached to the moving table of translation stage 496. Actuator 410 contains an actuator output shaft 413 which operates bracket 424 through an actuator attachment bushing 426.
Actuator 410 is attached to proximal baseplate 516 through an actuator mounting bracket 422.
Actuator 410 is shown as a motorized micrometer. Actuator electrical cable 411 connects actuator 410 to electronic connector 502.
As shown in Figure 28, electronically this embodiment is very similar to the second embodiment (compare Figure 14). The primary difference is that the video camera back 134 in Figure 14 has been split into solid state imager 220 and imager controller 221 in Figure 28.

G. Operation of the Third Embodiment

This third embodiment contains essentially the same elements as did the second embodiment, and from the user's standpoint the operation is virtually the same, except that now all operations are performed through the user interface of computer 228, and the user makes no mechanical adjustments at all, except for the initial positioning of EMB 500 with respect to the object to be inspected.

The key to this third embodiment is that the motion of actuator 410 is transferred to proximal translation stage 496, thence to positioning cable 482, and finally to moving table 184 at the distal end of the scope. As a result, camera 224 is moved a known distance along a straight line path, which allows one to make dimensional measurements as I have described in the first embodiment. This third embodiment has the advantage that the image quality does not depend on the length of the borescope, thus making it of most interest when the object to be inspected is a long distance from the inspection port.
The optical quality of objective lens 121 can be made higher than the optical quality of a rigid borescope.
However, solid state imager 220 will in general not have as high a resolution as do external video imagers such as video camera back 134, which was used in the first two (BPA) embodiments. Thus the tradeoffs in image quality between the BPA embodiments and this EMB cannot be encompassed by a simple statement.

Distal translation stage 180 is shown implemented in Figures 21 to 23 with a ball bearing slide. This could also be either a crossed roller slide or a dovetail slide. The slide selected will depend on the characteristics of the application of the EMB.
A dovetail slide can be made smaller than either of the other two options, so that the smallest EMB can be made if one were used. A dovetail slide would also have more friction than the other two options, and this would not always be a disadvantage. For instance, if the EMB were to be used in a high vibration environment, the extra friction of a dovetail slide would be valuable in damping oscillations of the translation stage position.
With this third embodiment, any error due to rotational motion of the translation stage will not act through a long lever arm, unlike with the first two (BPA) embodiments. Thus, the translation accuracy of the stage is less critical in this embodiment, which means that it is more feasible to use a less accurate ball or dovetail slide instead of a crossed roller slide.
The elimination of the long lever arm is a second reason why this third embodiment will be preferred when the object to be inspected is distant from the inspection port.
Because fiber bundle 127 is moved along with camera 224, the illumination of the camera's field of view does not change as the camera's position is changed. Both fiber bundle 127 and imager cable 222 must move with the camera, thus they are directly supported by positioning cable 482 to avoid putting unnecessary forces on moving table 184.
It is possible to provide a second pulley and cable arrangement to take up the load of fiber bundle 127 and imager cable 222, thus eliminating any stretching of positioning cable 482 due to that load, but that makes it more difficult to keep the assembly small, and there is little or no advantage when the position transducer is located at the distal end of the scope, as I have shown.
Distal cable stabilizer clamps 490 fasten fiber bundle 127 and imager cable 222 to positioning cable 482 to keep them out of the way of other portions of the system. Distal stabilizer slot 491 controls the orientation of the more distal stabilizer clamp 490 to ensure that fiber bundle 127 and cables 222 and 482 keep the desired relative positions near stage 180 under all conditions.

Fiber bundle 127 and imager cable 222 must have sufficient length to accommodate the required translation of camera 224. Position transducer cable 366 is of fixed length. Thus, transducer cable 366 is fixed at the proximal end of borescope 500 to bulkhead 498 and is clamped between bulkhead 498 and transducer cable clamp 371 with sufficient tension that it will remain suspended over the length of probe tube 512. Fiber bundle 127 and imager cable 222 are run over the top of transducer cable 366, so that transducer cable 366 acts to prevent fiber bundle 127 and imager cable 222 from contact with positioning cable 482. In this manner, unnecessary increases in the frictional load on positioning cable 482 due to contact with the other cables are avoided.
This simple scheme for keeping the cables apart will work only for a short EMB. For a longer EMB, one can place a second cable spacer and clamp similar to bulkhead 498 near the distal end of probe tube 512, but far enough behind the hardware shown in Figures 21 - 23 so that the cables can come together as shown there. Then all of the cables will be under tension between the proximal and distal ends of the EMB. In such a system, one
For very long EMBs, it will be necessary to support all of the cables 127, 222, 366, and 482 at several positions along the length of probe tube 512. in order to prevent them from sagging into each other and to prevent poci~ioning cable 482 from sagging into the wall of tube 512. Such support can be provided by using multiple 5 cable spacers fixed at d~plu~idte intervals along the inside of tube 512. These spacers must remain aligned in the correct angular oriPnt~tion so that the friction of cable 4~2 is lUi..;~ d The end of fiber bundle 127 is ~ nrlrrl as nece~sa~y in fiber end clamp ~94 so that the i~ min~tion will a~Pqn:l~ely cover the field of view of camera 224 at all mc<l~ul~..lent distances of interest. A lens could be used here as well to expand the illllmin~tion beam.
Viewport 518 is large enough to ensure that the field of view of camera 224 is unobstructed for all camera positions available with stage 180. Clearly, this viewport can be sealed with a window (not shown), if necessary, to keep the interior of the distal end of the EMB clean in dirty environments. The window could be either in the form of a flat, parallel plate or in the form of a cylindrical shell, with the axis of the cylinder oriented parallel to the direction of motion of moving table 184. In either case, the tolerances on the accuracy of the geometrical form and position of the window must be evaluated in terms of the effects of those errors on the measurement.
All camera lines of sight will be refracted by the window. This can cause three types of problems. First, the window could cause an increase in the optical aberrations of the camera, which will make the image of the object less distinct. In general this will be a problem only if a cylindrical window is placed with its axis far away from the optical axis of camera 224, or if the axes of the inner and outer cylindrical surfaces of the window are not coincident. Secondly, differences in how the line of sight is refracted over the field of view of the camera will change the distortion of the camera from what it would be without the window in place. This would cause a problem only if the distortion were not calibrated with the window in place. Third, differences in how the line of sight is refracted as the camera is moved to different positions would cause errors in the determination of the apparent positions of a point of interest. This is potentially the largest problem, but once again, it is easily handled by either fabricating and positioning the window to appropriate accuracies, or by a full calibration of the system with the window in place, using the calibration methods to be described later.
It is a design decision whether to locate position transducer 360 at the distal end of EMB 500, as I have shown, or whether to locate it at the proximal end of the scope. Either way will work as long as appropriate attention is paid to minimizing errors. For the distally mounted transducer, because of the small size required, it is not possible to achieve the level of accuracy in the transducer that one can get with the proximally mounted transducer shown in the second embodiment. However, if a proximally mounted transducer is used, one must carefully consider the errors in the transfer of the motion from the proximal to the distal end of the scope.
When it is mounted distally, transducer 360 must be small enough to fit in the space available and have sufficient precision for the purposes of the measurement. Suitable transducers include linear potentiometers or linear variable differential transformers (LVDTs). Note that both of these options are absolute position transducers, so that the issue of determining a home position does not exist if they are used.
Suitable linear potentiometers are available from Duncan Electronics of Tustin, California in the USA or Sfernice of Nice, France. Suitable LVDTs are available from Lucas Control System Products of Hampton, Virginia
These small, distally mounted ~rPnc~ cPrs must be calibrated. In fact, LVDT m In~-f~rh~rers provide ç~lihr~tion fixtures, using micrometers as standards. What matters most to the performance of the ul~dSult~
ulll~"~t is rçpP~t~t-ility. The repP~t~hility of small linear potPntionlptprs is generally I part in 104, or 0.0001 5 ~ r per Cf ~ . of travel. The rerP~t~b~ y of an LVDT is delf;--- ined by the signal lo noise ratio of the signal plocessi-,g electronics. A signal to noise ratio of 1 part in 105 is easily obtained with small signal bandwidth, and I part in 106 is quite feasiblc, though more expensive to obtain. These available levels of repe:lt~hility are quite concis~ with the purposes intended for the h.~llulllenl.
If the EMB is to be used over a large range of temperatures, it will be necessary to include a temperature transducer at the distal end of the scope, so that the temperature sensitive scale factor of the distal position transducer can be determined and taken into account in the measurement.
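One way such a correction could look, as a sketch only: the linear scale-factor model and the coefficient value below are assumptions for illustration; the text requires only that the temperature-sensitive scale factor be determined and applied.

```python
# Hypothetical linear temperature correction of raw transducer data.
# The reference temperature and coefficient are illustrative values,
# not figures from the text.

def corrected_position(raw, temp_c, ref_temp_c=20.0, coeff_per_c=1e-4):
    """Scale raw transducer data by an assumed linear temperature model."""
    scale = 1.0 + coeff_per_c * (temp_c - ref_temp_c)
    return raw * scale
```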
With the distally mounted position transducer, the only backlash that matters is the backlash between moving table 184 and position transducer 360 due to the necessary clearance between transducer operating rod 361 and transducer attachment bracket 369. This backlash will not be negligible in general, so that the measurement
procedure must use the anti-backlash elements of the measurement procedure detailed above in the description of the first embodiment. (Briefly, this means that the camera position is always determined with the camera having just moved in one particular direction.) Since the system shown in Figure 28 is a closed-loop positioning system, it is straightforward to implement anti-backlash procedures automatically in the positioning software, and the user then need not be concerned with them.
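One automatic anti-backlash rule fitting this description is to approach every reading position from the same side. A minimal sketch (the overshoot value and the `move_to` controller call are assumptions, not part of the apparatus):

```python
# Sketch of an automatic anti-backlash rule: every position is reached
# moving in the same direction, so any clearance between table and
# transducer is taken up the same way for every reading.

BACKLASH_CLEAR = 0.5  # assumed overshoot used to approach from one side, mm

def approach_from_below(move_to, target):
    """Arrive at `target` always moving in the + direction."""
    move_to(target - BACKLASH_CLEAR)  # first overshoot to the low side
    move_to(target)                   # then approach in the + direction
```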
The position transducer will not correctly measure the position of the camera if the measurement axis of the transducer is misaligned with the axis of camera motion. Such a misalignment causes a so-called "cosine" error, because the error depends on the cosine of the angular misalignment. This error is small for reasonable machining and assembly tolerances. For instance, if the misalignment is 10 milliradians (0.6 degrees), the error in the distance moved between camera positions is 1 part in 10^4. When necessary for very accurate work, this error can be determined and taken into account in the measurement by scaling the transducer position data accordingly.
The first two embodiments are also subject to this error, but in those cases the necessary mechanical tolerances are easier to achieve. Note that an instrument suffering from this error will systematically determine distances to be larger than they really are.
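The cosine-error scaling can be sketched as follows. The sign convention here, that the transducer reports the true travel scaled by cos(theta), is my assumption; the text specifies only the cosine dependence and the correction by rescaling.

```python
# Sketch of the cosine error discussed above, under the assumption that
# the transducer reports the true travel scaled by cos(theta), where
# theta is the angular misalignment in radians.

import math

def cosine_error_fraction(theta_rad):
    """Fractional error in the reported travel for misalignment theta."""
    return 1.0 - math.cos(theta_rad)

def corrected_travel(reported_travel, theta_rad):
    """Rescale transducer data to remove the cosine error."""
    return reported_travel / math.cos(theta_rad)

# 10 milliradians (about 0.6 degree): error on the order of 1 part in 10^4.
err = cosine_error_fraction(0.010)
```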
There could be a thermal drift of the camera position if positioning cable 482 has a different thermal coefficient of expansion than does probe tube 512, or if the instrument is subjected to a temperature gradient. Such a drift would not be a problem over the small time that it takes to make a measurement, because it is only the differential motion of the camera between viewing positions P1 and P2 that is important. In more general terms, it doesn't matter if there is a small variable offset between the proximal position commanded and the distal position achieved, as long as any such offset is constant over a measurement. As previously discussed, a large offset could be a problem if one desires to correct for errors in the motion of translation stage 180.
Of course, differential thermal expansion of positioning cable 482 and borescope tube 512 would cause a varying tension in cable 482. Thus, unless cable 482 and tube 512 are made of materials with the same expansion coefficient, it may be necessary to spring load pulley support bracket 487. Whether such spring loading is
A sipnifir~nt level of static friction (sticlion) in translation stage 180 would require that the EMB be r '~'~ with a distal position tldnsducu-, since otherwise there would be considerable uncertaintv added to the position of the camera. Dovetail slides tcnd to have cignifir~n~ stiction, so that use of a dovetail slide will 5 almost certainly require a distal position lldilsduuel. If the stiction is too severe, the position setability of the camera will be co~ ..,llused, which could make the in~ r~u~lldLhlg to use.
Clearly, the EMB could be implemented with another sort of motion actuator 410, for instance, an air cylinder.
I have shown a proximal translation stage 496 used between actuator 410 and positioning cable 482. Clearly, this is not strictly necessary, as cable 482 could be clamped directly to output shaft 413 of actuator 410, provided that output shaft 413 does not rotate and can sustain a small torque.
The EMB could also be implemented with a miniature motor and lead screw placed at the distal end. This eliminates the requirement for transfer of motion from the proximal to the distal end, but it then requires more space at the distal end. The advantage is that this could be used to embody an electronic measurement endoscope,
that is, a flexible measurement scope. Such a scope would be flexible, except for a relatively short rigid part at the distal end.

H. Description of a Fourth Embodiment
Figures 29 and 30 show respectively plan and left side elevation views of the distal end of a fourth mechanical embodiment of the invention, which I call the electronic measurement endoscope (EME). This fourth embodiment is similar to the third embodiment, except that the positioning pulley and cable system has been replaced here by a positioning wire 532, which is enclosed except at its distal and proximal ends by a positioning wire sheath 534.
In Figures 29 and 30 many of the same elements are shown as in the third embodiment, and only those elements most directly related to the discussion of this fourth embodiment are identified again.
The distal end of positioning wire 532 is clamped by distal positioning wire clamp 542. Clamp 542 is attached to the moving table of translation stage 180. Positioning wire sheath 534 is clamped to distal baseplate 514 with a distal sheath clamp 536.
The external housing of the endoscope now consists of two portions, a flexible endoscope envelope 538 and a distal rigid housing 540. Rigid housing 540 is attached to the end of flexible envelope 538 to form an endoscope which is flexible along most of its length, with a relatively short rigid section at its distal end.
Flexible envelope 538 includes the necessary hardware to allow the end of the endoscope to be steered to and held at a desired position under user control. Such constructions are well known in the art and are not part of this invention.
As in the third embodiment, imager cable 222 and illumination fiber bundle 127 are supported by and clamped to the element which transfers motion from the proximal to the distal end of the scope. Here cable 222 and fiber bundle 127 are clamped by a distal cable stabilizer clamp 490 which is itself clamped to positioning wire 532. Also as in the third embodiment, clamp 490 is captured inside distal stabilizer slot 491 to control its position and orientation.
CA 02263530 1999-02-16   WO 98/07001   PCT/US97/15206
As in the third embodiment, the distal end of illumination fiber bundle 127 is supported by distal fiber clamp 492 and fiber end clamp 494. In this embodiment fiber clamp 492 is attached to positioning wire clamp 542.
Imager cable 222, illumination fiber bundle 127, position transducer cable 366, and positioning wire sheath 534 all pass through and are clamped to distal end cable clamp 544, which is located at the proximal end of distal rigid housing 540. Positioning wire sheath 534 is positioned in the center of cable clamp 544, while the other three cables are arranged around it in close proximity. Positioned at suitable intervals within flexible endoscope envelope 538 are a number of cable centering members 546, through which all of the cables pass.
The position of stage 180 is monitored by linear position transducer 360, which is mounted to distal baseplate 514 with transducer mounting bracket 367.

I. Operation of the Fourth Embodiment
Clearly, if the proximal end of sheath 534 is clamped to proximal baseplate 516 of the third embodiment, and if actuator 410 is attached to positioning wire 532, then the motion of the actuator will be transferred to distal translation stage 180. Thus, the operation is identical to that of the third embodiment, except that this embodiment is now a flexible measurement endoscope which can be brought into position for measurements in a wider range of situations.
When this EME is steered to a desired position, flexible envelope 538 will necessarily be bent into a curve at one or more places along its length. Bending envelope 538 means that one side of the curve must attain a shorter length, and the opposite side a longer length, than the original length of the envelope. The same holds true for components internal to envelope 538, if these components have significant length and are not centered in envelope 538. Thus, in order to prevent the bending of the EME from affecting the position of translation stage 180, it is necessary to ensure that positioning wire 532 runs down the center of envelope 538. Cables 222, 366, and fiber bundle 127 are also run as close to the center of envelope 538 as feasible, to minimize the stress on these cables as the EME is steered.
This embodiment almost certainly requires the use of a distally located linear position transducer 360, as shown, because there is likely to be considerable stiction in the motion of positioning wire 532 inside sheath 534.
Imager cable 222 and illumination fiber bundle 127 must have sufficient length to reach the most distal position of stage 180. These, as well as cable 366, are clamped to housing 540 through distal end cable clamp 544 so that no forces can be transmitted from them to the measurement hardware inside housing 540. As the EME is bent, there will be small changes in the lengths of cables 222 and 366 and fiber bundle 127. Thus, there must be sufficient extra length of these cables stored at the proximal end, or throughout the length of the endoscope, so that no large forces are generated when the EME is bent.
When stage 180 is moved away from its most distal position, the portions of cable 222 and fiber bundle 127 which are contained within housing 540 will bend so as to store their now excess lengths in the portion of housing 540 behind the proximal end of baseplate 514.

J. Embodiments Using Other Camera Motions
1. Introduction
In the preferred embodiments I teach the use of straight line camera motion between viewing positions, with a fixed camera orientation, to perform the perspective measurement. The reasons that I prefer these embodiments are that they are simple and of obvious utility. However, my system is not restricted to the use of straight line camera motion or fixed camera orientation. Other camera motions are possible and can also be used when making a perspective measurement. Some of these more general camera motions will be useful for specific applications. Below, I show how to perform the measurement when using any arbitrary motion of the camera, and when using multiple cameras.
This extended method of perspective dimensional measurement that I teach here has an important application in improving the accuracy of the measurement made with my preferred embodiments. Even with the best available hardware, the motion of the camera will not conform to a perfect straight line translation. In this section, I show how to take such motion errors into account when they are known. In the calibration section I will show how to determine those errors.
2. Linear Camera Motion
Figure 31 depicts the geometry of a mode 2 perspective measurement of the distance between the points A and B being made with a camera moving along a linear path, but where the camera orientation does not remain constant as the camera position changes. Compare Figure 31 to Figure 13. In Figure 31, the points A and B are chosen to lie at considerably different distances (ranges) from the path of the camera in order to emphasize the differences that a variable camera orientation creates.
The situation shown in Figure 31 represents the geometry of a measurement which may be made with any number of different physical systems. For instance, the camera could be movable to any position along the path, and the camera could be rotatable to any orientation with respect to that path. Or, the camera rotation could be restricted, for instance, to be about an axis perpendicular to Figure 31. Another possibility is that the camera orientation is restricted to only a relatively small number of specific values, such as, for instance, the two specific orientations shown in the Figure. A third possibility is that the positions the camera can take are restricted to a relatively small number, either in combination with rotational freedom or in combination with restricted rotation.
If either the positions or the orientations of the camera are small in number, then one can use the well-known kinematic mounting principle to ensure that these positions and/or orientations are repeatable to a high degree of accuracy.
The basic concept of the measurement geometry shown in Figure 31 is that the camera is rotated towards the point of interest at any viewing position. This is useful, for instance, when the camera has a narrow field of view, and when one desires to use a long perspective baseline. One wants to use a long perspective baseline because it reduces the random error in the measurement. In fact, one can show that for optimum measurement results, one wants to set the baseline at a value that keeps the angle subtended at each point of interest by the two viewing positions approximately constant for points at various ranges from the instrument. This is just the situation shown in Figure 31, where the angles formed by the dot-dash lines are the same for both point A and point B.
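The constant-subtended-angle rule can be made concrete. For a point at range z viewed symmetrically from the two ends of a baseline d, the angle θ subtended by the viewing positions satisfies tan(θ/2) = (d/2)/z, so holding θ constant means scaling the baseline as d = 2 z tan(θ/2). A minimal sketch (the function name and the symmetric-viewing assumption are mine, not part of the specification):

```python
import math

def baseline_for_constant_angle(z, theta_deg):
    """Baseline d that keeps the angle subtended at a point at range z
    by the two viewing positions equal to theta (symmetric geometry)."""
    return 2.0 * z * math.tan(math.radians(theta_deg) / 2.0)

d_near = baseline_for_constant_angle(50.0, 20.0)
d_far = baseline_for_constant_angle(100.0, 20.0)
print(d_near, d_far)  # the required baseline scales linearly with range
```

Doubling the range doubles the baseline, which is why a wide accessible motion span (or a steerable camera) pays off for distant points.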


The disadvantage of the measurement geometry shown in Figure 31 is that it requires accurately known camera motion in two degrees of freedom rather than just one, as do my preferred embodiments. Its advantage is that it provides the smallest possible random measurement error.
It should also be clear to the reader that two cameras could be used to make the measurement depicted in Figure 31. If two cameras are used, it is still necessary to move one of the cameras with respect to the other to adjust the perspective baseline to the optimum value, when locating points at different ranges and relative positions. When viewing an inaccessible object, the preferred implementation is to mount both cameras to a single probe assembly, but it is also possible to mount each camera on a separate probe, just as long as the relative positions and orientations of the cameras are known accurately. I discuss below the parameters of the measurement geometry which must be known in order to make the measurement in the most general case.
One advantage of a two camera system is that the requirement for a second degree of freedom of motion can be eliminated under certain conditions, since the orientation of each of the cameras could be fixed at a suitable angle for the measurement, and the point of interest could then be brought into the field of view of each camera by translating the camera to an appropriate point along the path. This situation can be envisioned from Figure 31 by assuming that there is an upper camera, which is used at the viewing positions P2A and P2B, and a lower camera, which is used at the viewing positions P1A and P1B, and that both cameras can be moved along the single line of motion shown in the Figure.
A second advantage of a two camera system is that the measurement data can be acquired in the time necessary to scan one video frame, once the cameras are in position, if the digital video "frame grabber" technology mentioned earlier is used. Such quasi-instantaneous measurements are useful if the object is moving or vibrating. For the same reason, such a system could reduce the stability requirements on the mounting or support structure for the measurement instrument.
A disadvantage of a two camera implementation of the measurement shown in Figure 31 is that there will be a minimum perspective baseline set by the physical size of the cameras. If the camera orientations are fixed, the minimum perspective baseline implies a minimum measurement range. A second disadvantage of the fixed camera orientation variant of the two camera system is that there is also a maximum measurement range for camera fields of view smaller than a certain value, since there will always be a maximum value of the perspective baseline.
3. Circular Camera Motion
Figure 32 depicts a mode 1 perspective measurement being made with a camera moving along a curved path.
The curve is a section of a circular arc, with a radius of curvature Rp and center of curvature at C. The optical axis of the camera lies in the plane containing the circular path. The camera orientation is coupled to its translation along the path so that the optical axis of the camera always passes through C as the camera moves along the path.
The advantage of this arrangement, as compared to my preferred embodiments, is that a much larger perspective baseline can be used without losing the point of interest from the camera field of view, for objects located near the center of curvature, C, when the field of view of the camera is narrow. Thus, the measurement system shown in Figure 32 can potentially provide lower random measurement errors.

As is clear from Figure 32, there will usually be a maximum range for which perspective measurements can be made, as well as a minimum range, at any given value of the perspective baseline. In order to make measurements at large ranges, the system of Figure 32 requires a smaller baseline to be used than does a similar straight line motion system. For certain combinations of d, Rp, and camera field of view it is possible for both the minimum and maximum measurement ranges to decrease as d increases. Thus, this curved camera path system has the disadvantage, as compared to my preferred embodiments, of having a limited span of ranges over which measurements can be made.
This curved camera path system would be preferred in cases where only a small span of object ranges is of interest, and where there is plenty of space around the object to allow for the relatively large camera motions which are feasible. I consider the primary operating span of ranges of the circular camera path system shown in Figure 32 to be (0 < z < 2Rp).
Another disadvantage of the system shown in Figure 32 for the measurement of inaccessible objects is the difficulty of moving the required hardware into position through a small inspection port.
The method chosen for using a transducer to determine the camera's position along the path will depend on how this path is generated mechanically. For instance, if a circular path is generated by swinging the camera about a center point, then the position will probably be most conveniently transduced as an angular measurement.
If the path is generated by moving the camera along a circular track, then the position will probably be transduced as a linear position. The method of transducing the position of the camera becomes an issue when considering how to describe an arbitrary motion of the camera, as I discuss below.
Two cameras can be used with the circular camera path just as in the case of a linear camera path. In fact, mode 2 measurements can use up to four cameras to make the measurement, with either linear or circular camera motion. Multiple cameras can be used with any camera path, and in fact, there is no need for all cameras to follow the same path. The requirements, as will be shown, are simply that the relative positions and orientations, as well as the distortions and effective focal lengths, of all cameras be known.
A system using another potentially useful camera motion path is shown in Figure 33. Here the camera is moved in a circular arc, as in Figure 32, but now the camera is oriented to view substantially perpendicular to the plane of the arc. In Figure 33 a tiny video camera is placed at the tip of a rigid borescope, similar to my third and fourth preferred embodiments. This borescope has an end section with the capability of being erected to a position perpendicular to the main borescope tube. When this erection is accomplished, the camera looks substantially along the axis of the borescope. To make the perspective measurement, the borescope (or some distal portion of it) is rotated about its axis, thus swinging the camera in a circular path. In this case it is the rotation of the camera about the optical axis which is coupled to the translation of the camera. The camera position would be transduced by an angular measurement in this system.
An advantage of the system shown in Figure 33 is that it allows both large and small perspective baselines to be generated with an instrument that can be inserted through a small diameter inspection port. Of course, it still would require that there be considerable space in the vicinity of the objects to be inspected to allow for the large motions which can be generated.

The instrument shown in Figure 33 could combine the circular motion just described with an internal linear motion as in my fourth embodiment, to offer the capability of making measurements either to the side or in the forward direction.

K. Operation of Embodiments Using Arbitrary Camera Motion
1. Description of Arbitrary Camera Motion
I must first explain how I describe an arbitrary camera motion, before I can explain how to make a measurement using it. To make accurate measurements, the motion of the camera must be accurately known, either by constructing the system very accurately to a specific, known geometry, or by a process of calibration of the motion. If calibration is to be used to determine the motion of the camera, then that motion must be repeatable to the required level of precision, and the method of calibration must have the requisite accuracy.
In general, the true position of the camera along its path is described by a scalar parameter p. This parameter could be a distance or an angle, or some other parameter which is convenient for describing the camera position in a particular case.
The position of the camera is monitored by a transducer which produces an output η(p). Here, η is an output scalar quantity which is related to the true position along the path, p, by a calibration curve p(η).
The physical path of the camera in space is expressed as a vector in some convenient coordinate system.
That is, the physical position of the camera (more precisely, the position of the nodal point of the camera's optical system) in space is expressed as a vector, rc(p(η)) or rc(η), in a coordinate system that I call the external coordinate system or the global coordinate system.
Likewise, the orientation of the camera in space is expressed as a rotation matrix, which describes the orientation of the camera's internal coordinate system with respect to the global coordinate system. Thus, the camera's orientation at any point along its path is expressed as Rc(p(η)) or Rc(η). The matrix Rc transforms any vector expressed in the global coordinate system into that vector expressed in the camera's internal coordinate system.
The matrix Rc is the product of three individual rotation matrices, each of which represents the effect of rotation of the camera's coordinate system about a single axis:
Rc(η) = Rz(θz(η)) Ry(θy(η)) Rx(θx(η))     (31)
where θz, θy, and θx are the angles that the coordinate system has been rotated about the corresponding axes.
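The composition in Equation (31) can be checked numerically. The sketch below (helper names are mine) builds elemental rotations about the z, y, and x axes, composes them in the order of Equation (31), and verifies that the product is orthogonal, so that its inverse is simply its transpose:

```python
import math

def rz(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, s, 0.0], [-s, c, 0.0], [0.0, 0.0, 1.0]]

def ry(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0.0, -s], [0.0, 1.0, 0.0], [s, 0.0, c]]

def rx(t):
    c, s = math.cos(t), math.sin(t)
    return [[1.0, 0.0, 0.0], [0.0, c, s], [0.0, -s, c]]

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def rc(theta_z, theta_y, theta_x):
    # Equation (31): Rc = Rz Ry Rx, applied to global-system vectors.
    return matmul(rz(theta_z), matmul(ry(theta_y), rx(theta_x)))

R = rc(0.1, -0.2, 0.3)
Rt = [list(col) for col in zip(*R)]          # transpose
prod = matmul(R, Rt)
err = max(abs(prod[i][j] - (1.0 if i == j else 0.0))
          for i in range(3) for j in range(3))
print(err < 1e-12)  # orthogonal: inverse equals transpose
```

The transpose-as-inverse property is what later lets the relative rotations be inverted cheaply during measurement processing.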
Now, in general, the terms rc(η) and Rc(η) will not be independent, but will be coupled together by the geometry and construction of the perspective measurement system. An example was shown in Figure 32, where the camera's orientation is coupled to its position, so that the optical axis always passes through the center of curvature of the camera's path.
If two or more cameras are used, then each one will have a location and orientation expressed similarly. I will assume that the same global coordinate system is used to describe the motion of all cameras. This must be the case, but if at some point in the process separate global coordinate systems are used, and if the relationships between these separate global coordinate systems are known, then it is possible to express all of the camera motions in a single global coordinate system in the manner shown below for expressing one coordinate system in terms of another.

To summarize the relationship between camera motion and the measurement, what is required is that the positions and orientations of the camera(s) be accurately known relative to an external (global) coordinate system.
This coordinate system is fixed with respect to the instrument apparatus, but it has no inherent relationship to the object being measured. The position of the object in the global coordinate system becomes fixed only when the instrument is itself fixed at some convenient position with respect to the object. When this is done, the position of points on the object can be determined in the global coordinate system, or in some other closely related coordinate system which is also fixed with respect to the instrument.
2. The General Perspective Measurement Process
Above, I taught how to perform the perspective measurement when the camera undergoes a pure translation.
Now consider the case where the camera undergoes a rotation as well as a translation between viewing positions P1 and P2.
Considering again Figure 12, an arbitrary vector r1 can be expressed as:
r1 = d + r2     (32)
where the vector r2 is drawn from the origin of the P2 coordinate system to the end of vector r1. Any or all of these vectors could be expressed in either coordinate system. I choose to define them as being expressed in P1 coordinates. Then r2 = r1 − d can be re-expressed in the P2 coordinate system by using the transformation that the coordinates of a point undergo when a coordinate system is rotated about its origin.
It is a fact that there is no single set of coordinates that can be defined that will uniquely represent the general rotation of an object in three-dimensional space. That is, for any coordinate system one might choose, the order in which various sub-rotations are applied will affect the final orientation of the object. Thus, one must make a specific choice of elemental rotations, and the order in which they are applied, to define a general rotation.
The way that coordinate system rotations are most often defined is to begin with the two coordinate systems in alignment. Then one system is rotated about each of its coordinate axes in turn, to produce the final alignment of the rotated coordinate system. I define the procedure for rotating coordinate system P2, beginning with it aligned with P1, to obtain the rotated coordinate system P2, as the following:
1. Rotate P2 about x2 by an angle θx.
2. Rotate P2 about y2 by an angle θy.
3. Rotate P2 about z2 by an angle θz.
This procedure means that the effect of this rotation of the P2 coordinate system with respect to the P1 coordinate system on the coordinates of a point in space can be expressed in the P2 system as the product of a rotation matrix with the vector from the origin to the point. That is:
v2 = R v1     (33)
or
v2 = Rz(θz) Ry(θy) Rx(θx) v1     (34)
where

           |  cos θz   sin θz   0 |
Rz(θz)  =  | −sin θz   cos θz   0 |     (35)
           |    0        0      1 |

           | cos θy   0   −sin θy |
Ry(θy)  =  |   0      1      0    |
           | sin θy   0    cos θy |

           | 1      0        0    |
Rx(θx)  =  | 0   cos θx   sin θx  |
           | 0  −sin θx   cos θx  |

and where v1 is the vector between the origin of the P2 coordinate system and the point as expressed in the unrotated P1 coordinate system, and v2 is the same vector as expressed in the rotated P2 coordinate system. The transformation to go the other way, that is, to change coordinates expressed in the rotated P2 system to coordinates measured in the P1 system, is:
v1 = R⁻¹ v2     (36)
v1 = Rx⁻¹(θx) Ry⁻¹(θy) Rz⁻¹(θz) v2
v1 = Rx(−θx) Ry(−θy) Rz(−θz) v2
Consider again the perspective measurement process. At viewing position P1, visual coordinate location measurements are made in a coordinate system which depends on the orientation of the camera at that position. When the nodal point of the camera is moved to position dv1 as measured in the P1 coordinate system, the camera will rotate slightly, so that the visual coordinate system at position P2 is not the same as it was at position P1. I refer to the rotation matrix, which transforms coordinates measured in the visual coordinate system at P1 to coordinates measured in the visual coordinate system at P2, as R12. Clearly this rotation matrix is a product of three simple rotations, as detailed in (34) and (35). But, I have shown how to express measurements made in the P2 coordinate system in terms of the P1 coordinate system:
rv1 = R12⁻¹ rv2     (37)
so that Equations (12) can now be expressed as:
rv1 = zv1 av1     (38)
rv1 = R12⁻¹ zv2 av2 + dv1
and these can be solved as above to get:
rv1 = ½ [ [av1  R12⁻¹av2] [av1  −R12⁻¹av2]⁺ + I3 ] dv1     (39)
where, to repeat, dv1 is the translation of the camera nodal point between positions P1 and P2 as expressed in the P1 coordinate system. In analogy with (18) and (19) I define a new coordinate system:
rm = rv1 − ½ dv1     (40)
Then:
rm = ½ [av1  R12⁻¹av2] [av1  −R12⁻¹av2]⁺ dv1     (41)

In "-~,asu,~ mode 1, the eAl f ;~ t~l data obtained are four camera image position coordinates mltl~lm2-yl~r~2) for each objecl point of interest and the readings of the camera position lldnaduc~l, r11 and ~72, at the two viewing positions Pl and P2.
As e~ P~I for the first preferred e--ll,;: ' ~ , the data processing begins by correcting the ~-easu~d image point location data for distortion as eA~ aaed by Equation (22). Next, the data are scaled by the inverse of the effective focal length of the cc,~ c-d optical-video system as ~ aaed by Equation (23). Then, for each object point of interest, the visual location vectors aV, and ay2 are formed as ~ aed in Equation (24).
10 Next, the rl;~ f ~ vector between the two viewing positions is c~ late(l in global coordinates as:
dy(r12,r11) = rc(r12) - rc(~1l) (42) As stated previously, I call this vector the p~ ,e~ive displacement.
The relative rotation of the camera between the two viewing positions is c~lcul~t~d as:
Rlz(172~11) = Rc(712)Rc (171) (43) 15 Fqu~tic!n (43) simply says that the rotation of the camera between positions Pl and P2 is equivalent to the rotation of the camera in global coordinates at P2 minus the rotation it had al P1.
The pe~a~e~liv~ r~.. 11 is rc-cA~ aed in the camera internal coordinate system at P1 by taking into account the rotation of the camera at that point. That is:
dV, -- Rc(r1l ) ds(712~ ) (44) 20 The location of the object point of interest in the measurement coordinate system is then co--,ln~ed as:
rm = 2[avl Ri2 aV2]lav~ - Ril2 av2~ dvl (45) where the IllC~laUl~ coordinate system is parallel to the internal coordinate system of the camcra at Pl. with its origin located midway between P 1 and P2.
F:. (45) ~,A~csses how to locate a point in the measuremenl coordinale system under co",ylclcly general 25 con~litionc~ for any arbitrary motion of the camera, provided that the motion is known accurately in some global co-,ldi"a~ system. If the motion is l~r~~ 'le, it can be c~libr~t~ and thus known. Equation (45) is the fully three-tlin-~-- I least squares estimate for the location of the point.
To complete the mode 1 p~.al.e~L-ve ~' --- I Illedaul-,.lle~-l process. Equation (45) is used for both points A and B individually, then Equation (7) is used to calculate the distance between these points.
30 It is in",o,~nt to note that if multiple image position coordinate l.l~.dSul~luCuts are made for the same object points under exactly the same conditions in an attempt to lower the random error, then one should average the individual point location Ille~laul~ 5 given by Equation (45) before c~lc~ tine the distance between the points - using Fq~ tio~ (7). This gives a ct~tictic~lly unbiased estimate of the distance. If one instead c~lc~ t~c a distance estimate with each set of lu~dSul~ 5 and then averages them, one obtains what is known as an asymptotically biased estimate of the distance.

CA 02263~30 1999-02-16 W O 98/07001 rCTAJS97/15206 If two eameras are used, one simply uses each individual camera's distortion parameters to correct the image measured with that camera as in Equation (22). Then, the scaling by the inverse focal length is carried out for eaeh individual camera as expressed by Equahon (23) Then, for each object point of interest, the visual location vectors aVI and aV2 are formed as ~ aaed in Equation (24), where now the data in ay~ w ere detenninPd with one of the eameras and the data in aV2 were d~ d with the other The ~ -dil-del of the data processing is 5 identieal whether one or two cameras are uscd to make the ~ dau~ t The ~ulll~lly of Ill~a~uu~n._nl mode 2 for an arbitrary camera motion is depicted in Figure 34. The i."~ l data obtained are four camera image position couldilldlcs (~ 11 Y'Trz~ m2I Y'" 2) and the two readings of the camera position ~.dns.lu-,e~ and ~72, for each object point of interest In this mode of ~;---cnl, the camera positions are not the same for each point of interest, so thal there may be either three or 10 four camera positions used for each distance to be d~l~----incd Figure 34 depicts the situation when the distance between two points, A and B, is to be d~t~--u~.ed It is clear that this medaul~ llt mode makes sense only for a certain class of camera motion, object distance, object shape eollLIlldlions For instance, with the camera motion shown in Figure 32, and a more or less planar objeet located near the eenter of ~;ulvdL~ , C, there is little or no ability to view different portions of the object by moving the 15 earnera, so that there is no reason to use ~I-~asu.c-l-.,.l~ mode 2.
However, as shown in Figure 35~ there are other cih~fion5 when only the use of ~ dsu-~ e.~l mode 2 makes a daUII_IU~ feasible In Figure 35, the distance between two points on opposite sides of an object is desired. The objeet has a shape and an oripnt~tion such that both points cannot be viewed from any single camera position whieh is physieally acc~cci~le~ As shown in Figure 35, the ~o,~ ion of eireular camera motion and 20 ~ ,daul~l~lc;llL mode 2 allows this ~lledaul~,ll.~nl to be made This .I.casul~ ellt could also be made with an arbitrary camera rotation ~ ~bo~ f of the system shown in Figure 31.
Consider now that the ll-~,aau elll~nta depicted in Figure 34 or 35 are to be performed Say that the camera position ~Idu~s.h~cel readings are { 77IA, ~72A, 11IB, 1728 } at viewing positions {PlA, P2A, PlB, P2B} l~a~e~ ly.
Then, the actual camera positions in the global coordinate system are { rc(~71A)~ rc(712A)~ rc(7~ls), rc(~12s) }
25 ~c;~,c~Lively Likewise~ the orien~ nc of the camera's coordinate system ~vith respect to the global coordind1e system are { Rc(r1lA)~ Rc(r12A), RC(771B), RC(t12B) }
In Ill~a~ul~;lll~lll mode 2, the positions of the points A and B are each d~terrninrd indep,~ ~Pntly by the basic point location process ~ ,sed by Equation (45) to give rmA and rmg ~~I c~Lively According to that process, rmA
is d~ 1 in a Illed~ul~lllc llL coordinate system parallel to the coo-d---dle system of the camera at PlA, while 30 rmB is d~ d in a cooldinale system which is parallel to the camera coordinate system al PlB.
The location vectors for points A and B are then re~ ssed in the global coordinate system as:
rAG = Rc (7tIA) rmA (46) rBC = Rc (711B) rmB
and the vector between the origin of the A ..~a~ul~ ,nt ~oo.dind1e syslem and the origin of the B cooldilldle 35 system in global cooldinales is ~ql~ulqfed as (Figure 34) dAs = 2 [ rc(71lA) + rC(r12A)-- rC(~71s)-- rC(77~s)] (47) CA 02263~30 1999-02-16 W O 98/07001 PCT~US97/15206 Finally, the distance between points A and B is c~icnl~ as:
r = Irl = ¦dAs + rAc -- rl3GI (48) Once again, if two or more cameras are used, one need only correct the image locations for distortion and scale the image locations by the inverse focal length for each camera individually to perform the ~-leasu~ 7 just as long as the positions and orirnt~tionc of the cameras are all ~ ased in the same global coordinate system. And, S just as before, if multiple identical Ill~a~ul~ are made, one should calculate the average location vectors before r~lmll~tjng the distance with Equation (48), rather than averaging individual distance dr~Pnnin~tionc 3 ~pplin~tion of the General Process to Correction of Translation Stage ~ntatic n 1l Errors I have shown four preferred PmhorlimPnts of my ~ r~ where in each case, a single camera will be moved along a s.~ lly straight line. If the motion is a perfect tr~nC~ ln7 then in Fq~ ionC (43) and (44), Rc(~72) is 10 equal to Rc(r1l ), R~2 is the 3 x 3 identity matrix, and the direction of pelalJe~tive tlicpl~rPmPnt dg(r12,l1l ), hence dVI, remains the same for any (172,~11). In this case Rc is i(~Pn~ifiPd with the product Rz R~, which simply makes use of the fact that the orientation of a vector can always be ~ aaed by at most two angles, since a vector is not changed by a rotation about itself. Finally, with a perfect straight linc tranâlation of the camera, Equation (45) reduces to Equation (27).
As an example of the use of the general .lleas~ ent proceâs to correct for errors of motion, consider the third (EMB) and fourth (EME) preferred ~ bo~ nl~ Assume that the tranCl~tion stage has rotational errors, which have been cha ac~ ed with a ~ OIl process, which will be described later. As a result of this calibration, the tr~nC~ n stage rotational error R(77) is known in some calibration coordinate system. To simplify the r~libr:ltion task, I specify that the calibration coo-dil~dle system be the same as the global coordinate system, and I explain 20 later how to ensure this. I further specify that the global cou-Jii~dte system has its ~ axis along the nominal tr~nc1 ~ion direction, which is simply a matter of ~leri~ o~
The errors of translation stages are not well specified by their manufacturers. For typical small ball bearing translation stages, a comparison of various manufacturers' specifications is consistent with expected rotational errors of approximately 300 microradians and transverse translational errors of about 3 microns. A moment of thought will convince the reader that with these levels of error, the rotational error will contribute more to the measurement error of the system than will the translation error for any object distance larger than 10 mm. Thus, for measurements of objects which are at distances greater than 10 mm, it is reasonable to correct for only the rotational error of the translation stage. I now show how to do this.
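The 10 mm crossover claimed here can be checked with a line of arithmetic: a small stage rotation error θ displaces the apparent viewpoint by roughly θ·z at object distance z, while the transverse translation error is a fixed offset. A minimal sketch (the 300 microradian and 3 micron figures are the representative values quoted above, not specifications of any particular stage):

```python
ROT_ERR_RAD = 300e-6   # representative stage rotation error, 300 microradians
TRANS_ERR_M = 3e-6     # representative transverse translation error, 3 microns

def rotational_contribution_m(z_m):
    """Small-angle apparent transverse offset at object distance z (meters)."""
    return ROT_ERR_RAD * z_m

# The two error sources are equal at z = 10 mm; beyond that, rotation dominates.
for z in (0.005, 0.010, 0.100):
    rot_um = rotational_contribution_m(z) * 1e6
    print(f"z = {z * 1e3:5.1f} mm: rotation {rot_um:6.2f} um vs translation {TRANS_ERR_M * 1e6:3.1f} um")
```

At 10 mm both contributions are 3 microns; at 100 mm the rotational term is ten times larger, which is why correcting only the rotational error is reasonable for more distant objects.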
The image point location data are processed in the same manner as has already been described. The position of the camera nodal point can be expressed as:

rc(η) = [ p(η), 0, 0 ]^T   (49)

so that the perspective displacement in global coordinates is calculated as:
dg(η2,η1) = rc(η2) - rc(η1) = [ p(η2) - p(η1), 0, 0 ]^T   (50)

For any position of the camera, the rotational orientation of the camera can be expressed as:

Rc(η) = Rcg R(η)   (51)

WO 98/07001   PCT/US97/15206

where Rcg is the orientation of the camera with respect to the global coordinate system at some reference position where R(η) is defined to be the identity matrix. Both Rcg and the rotational error, R(η), are determined in the calibration process.
As the next step in measurement processing, then, the relative rotation of the camera between positions P1 and P2 is calculated as:
R12(η2,η1) = Rc(η2) Rc^-1(η1) = Rcg R(η2) R^-1(η1) Rcg^-1   (52)

Since the rotation matrices are orthogonal, their inverses are simply calculated as their transposes.
The perspective displacement is then transformed to the camera coordinate system at P1 as:
dV1 = Rcg R(η1) [ p(η2) - p(η1) ]   (53)

and finally the position of the point of interest is calculated by using results (52) and (53) in Equation (45), and distances between points of interest are calculated with Equation (7).
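The chain of Equations (49) through (53) can be sketched numerically. This is an illustration only, not the patent's software: it assumes standard right-handed rotation matrices and that the calibrated stage rotation errors R(η1), R(η2) are supplied as matrices.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def relative_rotation(R_cg, R1, R2):
    """Equation (52): R12 = Rcg R(eta2) R(eta1)^-1 Rcg^-1.
    Rotation matrices are orthogonal, so inverses are transposes."""
    return R_cg @ R2 @ R1.T @ R_cg.T

def displacement_in_camera(R_cg, R1, p1, p2):
    """Equation (53): dV1 = Rcg R(eta1) [p(eta2) - p(eta1), 0, 0]^T,
    using Equations (49)-(50): the global x axis lies along the nominal
    translation direction, so dg has only an x component."""
    d_g = np.array([p2 - p1, 0.0, 0.0])
    return R_cg @ R1 @ d_g

# With no stage rotational error, dV1 is just the baseline rotated into the
# camera frame by the fixed mounting orientation Rcg (a hypothetical value).
R_cg = rot_x(0.05)
d = displacement_in_camera(R_cg, np.eye(3), 0.0, 2.5)
```

Because every factor is a rotation, the baseline length |dV1| equals p(η2) - p(η1) regardless of the calibrated errors, which is a useful sanity check on an implementation.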
Note that the process I have just outlined implicitly includes the possibility of correction of position transducer errors, given by the calibration curve p(η).
If transverse translation errors of the stage are to be corrected, then the calibration process must determine these errors, and the correction data must be incorporated into the general measurement formalism given in the previous section in a similar manner to that shown here for rotational errors.
L. Calibration

I find it convenient to divide the calibrations of my measurement system into three classes, which I call optical calibration, alignment calibration, and motion calibration.
In optical calibration, the optical properties of the camera when it is considered simply as an image forming system are determined. In alignment calibration, additional properties of the camera which affect the dimensional measurements are determined. Both of these classes of calibration must be accomplished in order to make a measurement with my technique. Optical calibration has been briefly considered in some of the prior art of borescopic measurement, while alignment calibration is new.
Motion calibration is not necessarily required to make a measurement, but it may be required in order to make measurements to a specific level of accuracy. Whether this calibration is required or not is determined by the accuracy of the hardware which controls the motion of the camera.
1. Optical Calibration

There is a standard calibration technique known in the field of photogrammetry which is the preferred method of determining the optical calibration of the camera. The technique is discussed, for instance, in the following articles:
"Close-range eamera n~lihr~tinn", Photogrammetric Engineering, 37(8), 855-866, 1971.
"Accurate linear trrhnitlur for camera ealibration : ~lrrin~ lens distortion by solving an eigenvalue problem", Optical ~ngineering, 32(1), 138-149, 1993.
I will outline the equipment and procedure of this calibration here.
The equipment required is a field of calibration target points which are suitable for viewing by the camera to be calibrated. The relative positions of the target points must be known to an accuracy better than that to which the camera is expected to provide measurements. The number of calibration points should be at least twenty, and as many as several hundred points may be desired for very accurate work.
The calibration target field is viewed with the camera and the image point locations of the target points are measured in the usual way by aligning a video cursor with each point in turn, and commanding the computer to store the measured image point location. The geometry of this process is depicted in Figure 36.
It is important that the relative alignment of the camera and the calibration target field be such as to ensure that target points are located over a range of distances from the camera. If the target field is restricted to being at a single distance from the camera, the determination of the camera effective focal length will be less accurate than otherwise. Another requirement is that targets be distributed with reasonable uniformity over the whole camera field of view. There is no other requirement for alignment between the camera and the target points.
Assume that k target points have been located in the image plane of the camera. The measured coordinates of the jth image point are denoted as (x'imj, y'imj). The following (2 x k) matrix of the measured data is then formed:

rho' = [ x'im1  x'im2  ...  x'imk ]
       [ y'im1  y'im2  ...  y'imk ]   (54)

In Figure 36 the vector rok, which is the (unknown) position of the kth calibration object point in the camera coordinate system, can be written as:
rok = Rc(θz, θy, θx) [ rck - rc ]   (55)

where rck is the known position of the kth calibration point in the calibration target field internal coordinate system (the calibration coordinate system), rc is the unknown position of the camera's nodal point in the calibration coordinate system, and Rc is the unknown rotation of the camera's coordinate system with respect to the calibration coordinate system. As before, rotation matrix Rc is the product of three individual rotation matrices Rz(θz) Ry(θy) Rx(θx), each of which is a function of a single rotational angle about a single coordinate axis.
The kth ideal image position vector is defined as:

rhoimk = (i / zok) [ xok ]
                   [ yok ]   (56)

where i is the equivalent focal length of the camera. The (2 x k) ideal image position matrix is defined as:
rhoim = [ rhoim1  rhoim2  ...  rhoimk ]   (57)

Similarly, the image point coordinate error for the kth point is defined as:

rhoDk = [ fDx(rhok) ]
        [ fDy(rhok) ]   (58)

and the (2 x k) image coordinate error matrix is:
rhoD = [ rhoD1  rhoD2  ...  rhoDk ]   (59)

The error functions fDx and fDy define the image location errors which are to be considered in the camera calibration. A number of different error functions are used in the art. The following fairly general expression is given in the article "Multisensor vision-based approach to flexible feature measurement for inspection and reverse engineering", Optical Engineering, 32, 9, 2201-2215, 1993:

fDx(rhok) = x0 + (a1 + a2 |rhok|^2 + a3 |rhok|^4) ximk + a4 (|rhok|^2 + 2 ximk^2) + 2 a5 ximk yimk   (60)
fDy(rhok) = y0 + (a6 + a2 |rhok|^2 + a3 |rhok|^4) yimk + a5 (|rhok|^2 + 2 yimk^2) + 2 a4 ximk yimk

where, of course, |rhok|^2 = ximk^2 + yimk^2.

The following equation expresses the relationship between the measured image point positions and the ideal image point positions:

rho' = rhoim + rhoD   (61)

Using the quantities defined above, Equation (61) represents 2k equations in 15 unknowns. The unknowns are the three angles of the camera rotation θz, θy, θx, the three components of the camera location xc, yc, zc, the equivalent focal length, i, and the eight parameters of image position error x0, y0, a1, ..., a6. In order to obtain a solution, one must have k > 8. As I have stated above, one wants to use many more points than this to obtain the most accurate results.
As I previously stated, I call all eight of the image position error parameters "distortion", but only some of them relate to the optical field aberration which is usually referred to as distortion. The parameters x0 and y0 represent the difference between the center of the image measurement coordinate system and the position of the optical axis of the camera. Parameters a1 and a6 represent different scale factors in the x and y directions. Parameters a2 and a3 represent the standard axially symmetric optical Seidel aberration called distortion. Parameters a4 and a5 represent possible non-symmetrical distortion due to tilt of the camera focal plane and decentration of the elements in lenses.
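The error model of Equation (60) is straightforward to evaluate. A minimal sketch (parameter names follow the text; the numeric values in the comments are arbitrary illustrations, not calibration results):

```python
def image_error(x, y, params):
    """Evaluate (fDx, fDy) of Equation (60) at measured image point (x, y).
    params = (x0, y0, a1, a2, a3, a4, a5, a6): optical-axis offsets (x0, y0),
    scale factors (a1, a6), radial distortion (a2, a3), tilt/decentration
    terms (a4, a5)."""
    x0, y0, a1, a2, a3, a4, a5, a6 = params
    r2 = x * x + y * y                   # |rho_k|^2
    radial = a2 * r2 + a3 * r2 * r2      # radial polynomial shared by both axes
    fdx = x0 + (a1 + radial) * x + a4 * (r2 + 2 * x * x) + 2 * a5 * x * y
    fdy = y0 + (a6 + radial) * y + a5 * (r2 + 2 * y * y) + 2 * a4 * x * y
    return fdx, fdy

# With all parameters zero there is no error; with only x0, y0 nonzero the
# error is a pure offset of the image coordinate origin.
```

This is the function whose parameters the calibration fit of the next paragraphs estimates, and which is later reused to correct measured points via Equation (63).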
The overdetermined set of Equations (61) has no exact solution. Consequently, the calibration data processing determines best fit values for each of the 15 parameters by minimizing the length of an error vector. This is done by an iterative numerical process called non-linear least squares optimization. Specific algorithms to implement non-linear least squares optimization are well known in the art. They are discussed, for instance, in the book Numerical Recipes by William H. Press, et al., published by Cambridge University Press, 1st Ed. 1986. This book provides not only the theory behind the numerical techniques, but also working source code in Fortran that is suitable for use in an application program. A second edition of this book is available with working source code in C. Another book which is helpful is R. Fletcher, Practical Methods of Optimization, Vol. 1 -- Unconstrained Optimization, John Wiley and Sons, 1980.
A second option for implementation of the non-linear least squares optimization is to use one of the available "canned" numerical software packages such as that from Numerical Algorithms Group, Inc. of Downers Grove, Illinois, USA. Such a package can be licensed and incorporated into application programs, such as the program which controls computer 228. A third option is to use one of the proprietary high level mathematical analysis languages such as MATLAB, from The MathWorks, Inc. of Natick, Massachusetts, USA. These languages have high level operations which implement powerful optimization routines, and also have available compilers, which can produce portable C language code from the very high level source code. This portable C code can then be recompiled for the target system, computer 228.

The optimization process begins with approximate values for the fifteen unknowns and adjusts these iteratively to minimize the quantity:

Q = |rho' - rhoim - rhoD|^2   (62)

The ideal value of Q is zero; the optimization process attempts to find the smallest possible value.
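The shape of such an iteration can be sketched on a deliberately small problem. The following is not the patent's 15-parameter solver; it is a toy Gauss-Newton loop (one common non-linear least squares algorithm) fitting just an equivalent focal length and one radial distortion coefficient to synthetic projections.

```python
import numpy as np

def gauss_newton(residual, p0, n_iter=25, h=1e-7):
    """Minimal Gauss-Newton iteration: each step solves the linearized
    least-squares problem J * step = -r, with a forward-difference Jacobian."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = p.copy()
            dp[j] += h
            J[:, j] = (residual(dp) - r) / h
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p += step
    return p

# Synthetic target points (x, y, z) and a two-parameter camera model:
# focal length i and a single radial term a2 (a cut-down version of Eq. 60).
pts = np.array([[0.2, 0.1, 4.0], [-0.4, 0.3, 5.0], [0.5, -0.2, 6.0], [0.1, 0.4, 3.0]])

def project(p):
    i, a2 = p
    u = i * pts[:, 0] / pts[:, 2]
    v = i * pts[:, 1] / pts[:, 2]
    r2 = u * u + v * v
    return np.concatenate([u * (1 + a2 * r2), v * (1 + a2 * r2)])

measured = project([1.8, 0.05])                       # data from "true" camera
fitted = gauss_newton(lambda p: project(p) - measured, p0=[1.0, 0.0])
```

Here the residual vector plays the role of rho' - rhoim - rhoD, and the recovered parameters drive Q to essentially zero; the full calibration does the same with fifteen unknowns.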
The starting values for the unknowns are not critical in general, but the iterative process will converge faster if the starting values are not far away from the true values. Thus, it makes sense to perform the optical calibration with a specific alignment of the camera with respect to the calibration target array, so that the camera alignment parameters are approximately already known. It is also a good idea to use any available information about the approximate camera focal length and the distortion parameters in the starting values. For the borescopes I have tried, I have found that terms a4 and a5 in Equation (60) are not necessary; in fact, use of them seems to cause slow convergence in the optimization.

The first six calibration parameters, { θz, θy, θx, xc, yc, zc }, refer to the position and alignment of the camera as a whole. The other nine parameters are subsequently used to correct measured image point position data to ideal image point position data by:

[ xim ]   [ x'im - fDx(rho') ]
[ yim ] = [ y'im - fDy(rho') ]   (63)

which is another way of expressing Equations (22). After the image point positions are corrected, then the visual location vector used in the measurement Equations (27) and (45) is defined as:
aV = (1/i) [ rhoim ]
           [   i   ]   (64)

which is a more compact way of expressing Equations (23) and (24).
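Equations (63) and (64) together turn a raw cursor measurement into the visual location vector used by the measurement equations. A minimal sketch, assuming the distortion terms fDx, fDy have already been evaluated (for example from the Equation (60) model with calibrated parameters):

```python
def corrected_image_point(x_meas, y_meas, fdx, fdy):
    """Equation (63): subtract the image-position error evaluated at the
    measured point from the measured coordinates."""
    return x_meas - fdx, y_meas - fdy

def visual_location_vector(x_im, y_im, focal_i):
    """Equation (64): aV = (1/i) [x_im, y_im, i]^T, i.e. a ray through the
    corrected image point with unit z component."""
    return (x_im / focal_i, y_im / focal_i, 1.0)

# A corrected point at (0.3, -0.15) with i = 1.5 maps to approximately the
# ray direction (0.2, -0.1, 1.0).
```

The unit z component is what lets the later alignment-calibration equations treat each aV as an object-point direction scaled by an unknown reciprocal depth.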
2. Alignment Calibration

In the perspective measurement process the object of interest is viewed from two points in space, which are called P1 and P2. Recall that the vector connecting the camera nodal point at viewing position P1 to the camera nodal point at viewing position P2 is defined as the perspective displacement d. The essence of alignment calibration is to determine the orientation of the perspective displacement d with respect to the camera's internal coordinate system. Once d is known in the camera coordinate system, then the position of object points can be calculated using either Equation (27) or Equation (45), as appropriate.
Since the camera's position and orientation are estimated during the optical calibration procedure given in the previous sub-section, these data can be used to determine the alignment of d in the camera's coordinate system if that calibration is done twice, from two different viewing positions. In fact, these are exactly the calibration data that are needed to implement the perspective measurement for a general motion of the camera which was outlined in Equations (42) through (45). All one need do is to carry out the optical calibration procedure at the two measurement camera positions with a fixed calibration target array. This is, in fact, what is done in the photogrammetry field, and is what can be done with a general motion embodiment of my system.
In my preferred embodiments, there is considerable information available about the motion of the camera that would not be used if one were to proceed as I have just suggested. For instance, if the translation stage is accurate, then that means that the orientation of the camera does not change between P1 and P2, and, in fact, it doesn't change for any position of the camera along its path. For those embodiments where the geometry of the camera path is accurately known, such as the preferred embodiments, one can determine the alignment of d in the camera's coordinate system at one point on the path and thereby know it for any point on the path.
In addition, the perspective baseline |d| may be especially accurately known, depending on the performance of the position transducer and how accurately it is aligned with the motion of the translation stage. As a third possibility, it is possible to accurately measure rotational errors in the translation stage, as long as the motion of the stage is repeatable. All of this information can be taken into account in order to determine a better estimate of the orientation of d in the camera coordinate system, and thus, to achieve a more accurate measurement.
As a first example of alignment calibration, consider that two optical calibrations have been done at two positions along the camera path, as discussed above. The calibration data available for the camera position and orientation are then rc(η2), rc(η1), Rc(η2), and Rc(η1). Also consider that it is known that the camera moves along an accurate straight line.
The camera orientation in the calibration coordinate system is then estimated as:
Rc = (1/2) ( Rc(η2) + Rc(η1) )   (65)

Note that the difference between Rc(η2) and Rc(η1) gives an indication of the precision of this calibration.
The perspective displacement in calibration coordinates is estimated as:
dg(η2,η1) = rc(η2) - rc(η1)   (66)

and in camera coordinates it is estimated as:
dV1 = Rc dg(η2,η1)   (67)

Because there is no rotation of the camera, it is known that the orientation of this vector does not change with camera position.
In Equations (25) and (26) the measurement process was defined in terms of rotation matrices such that:
dV1 = Rz(θz) Ry(θy) [ d ]
                    [ 0 ]
                    [ 0 ]   (68)

where d = |dV1|. Writing the measured components of dV1 from Equation (67) as (dVx, dVy, dVz), one writes the following equation, using definitions (35):
              [ d ]     [  cosθy cosθz ]   [ dVx ]
Rz(θz) Ry(θy) [ 0 ] = d [ -cosθy sinθz ] = [ dVy ]   (69)
              [ 0 ]     [     sinθy    ]   [ dVz ]

Equation (69) can be solved for the rotation angles as:
θy = arcsin( dVz / d ),    θz = -arcsin( dVy / sqrt(dVx^2 + dVy^2) )   (70)

Thus, the final step of this alignment calibration process is to determine the two angles θy and θz with Equation (70). During the measurement process, these angles are used in Equation (26).
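The forward relation (69) and its inversion (70) can be sketched as below. This is an illustration under an assumed sign convention for Rz and Ry (the patent's definitions (35) are not reproduced in this excerpt); the roundtrip property, forward then inverse recovering the angles, is the point.

```python
import numpy as np

def displacement_components(d, theta_y, theta_z):
    """Forward form of Equation (69) under the assumed convention:
    (dVx, dVy, dVz) = d (cos ty cos tz, -cos ty sin tz, sin ty)."""
    return np.array([
        d * np.cos(theta_y) * np.cos(theta_z),
        -d * np.cos(theta_y) * np.sin(theta_z),
        d * np.sin(theta_y),
    ])

def alignment_angles(d_v):
    """Equation (70): recover (theta_y, theta_z) from the measured
    components of dV1."""
    d_vx, d_vy, d_vz = d_v
    d = np.linalg.norm(d_v)
    theta_y = np.arcsin(d_vz / d)
    theta_z = -np.arcsin(d_vy / np.hypot(d_vx, d_vy))
    return theta_y, theta_z
```

For the small alignment angles expected here, the arcsin branch is unambiguous, and |dV1| drops out of θz because d cosθy = sqrt(dVx^2 + dVy^2).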
As a second example of alignment calibration, consider that an optical calibration has been previously done, and now it is desired to do an alignment calibration. This would be the normal situation with the first and second preferred embodiments, since the alignment of the camera with respect to its translation may change whenever the borescope is clamped to the BPA. Consider also that the motion of the camera is known to be constrained to be along an accurate straight line (that is, any errors in the motion are known to be smaller than the corresponding level of error required of the measurement).
Once again, a calibration target array is viewed from two positions of the camera along its path of motion. According to Figure 36 and Equation (55), one can write:
rok1 = Rc [ rck - rc(η1) ]   (71)
rok2 = Rc [ rck - rc(η2) ]

The visual location vectors, which are calculated from the distortion corrected image position data according to Equation (64), can also be written as:
aVk1 = rok1 uk1   (72)
aVk2 = rok2 uk2

in terms of the object point coordinates, where uk1 and uk2 are scalars (the reciprocals of the z components of rok1 and rok2, since each aV has unit z component). One corrects the distortion of the measured data using Equation (63) with the distortion parameters obtained in the previous optical calibration.
Define the following quantities, where it is assumed that k calibration points are used:
AV1 = [ aV11  aV21  ...  aVk1 ]   (73)
AV2 = [ aV12  aV22  ...  aVk2 ]
rcm = [ rc1  rc2  ...  rck ]
1k = [ 1  1  ...  1 ]   (a 1 x k row of ones)
U1 = diag( u11, u21, ..., uk1 )
U2 = diag( u12, u22, ..., uk2 )

Then Equations (71) can be written as:
AV1 = Rc [ rcm - rc(η1) 1k ] U1   (74)
AV2 = Rc [ rcm - rc(η2) 1k ] U2

and Equations (74) can be combined into:
[ AV1  AV2 ] = Rc [ [ rcm  rcm ] - [ rc(η1) 1k  rc(η2) 1k ] ] U12   (75)

where U12 is:
U12 = [ U1  0  ]
      [ 0   U2 ]   (76)

Equation (75) represents 6k equations in 2k + 9 unknowns; thus, they can be solved for k > 3. The unknowns are U12 (2k unknowns) and Rc, rc(η1), and rc(η2), which each contain 3 unknowns. Equation (75) makes full use of the fact that Rc is constant, and is thus a more efficient way of estimating the nine unknowns of interest for the alignment calibration than was the previous case.
Equations (75) are solved by a similar non-linear least squares process as was used for Equation (61). Once the camera positions and orientation are estimated, one simply uses Equations (68) through (70) to determine the alignment angles, which are used during the measurement process.
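The structure of the Equation (75) fit can be made concrete by writing its residual: at the true camera pose and true reciprocal-depth scalars the stacked residual vanishes, which is what the least squares routine exploits. The rotation-matrix conventions and the reading of each uk as a reciprocal depth are assumptions of this sketch.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def residual(params, A1, A2, rcm):
    """Stacked residual of Equation (75): A_Vj - Rc (rcm - rc(eta_j) 1k) Uj,
    for j = 1, 2. params = (3 angles, rc(eta1), rc(eta2), u1, u2): 9 + 2k."""
    k = rcm.shape[1]
    tz, ty, tx = params[:3]
    rc1, rc2 = params[3:6], params[6:9]
    u1, u2 = params[9:9 + k], params[9 + k:9 + 2 * k]
    Rc = rot_z(tz) @ rot_y(ty) @ rot_x(tx)
    P1 = (Rc @ (rcm - rc1[:, None])) * u1   # each column scaled by its uk1
    P2 = (Rc @ (rcm - rc2[:, None])) * u2
    return np.concatenate([(A1 - P1).ravel(), (A2 - P2).ravel()])
```

With k points the residual has 6k entries against 2k + 9 parameters, matching the counting argument above; any non-linear least squares driver (such as the Gauss-Newton sketch earlier in this section) can minimize it.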
To improve the efficiency even further, in the case where d = |d| is considered accurately known, Equations (75) can be solved by a constrained least squares optimization rather than the unconstrained optimization I have so far discussed. Such numerical procedures are discussed in R. Fletcher, Practical Methods of Optimization, Vol. 2 -- Constrained Optimization, John Wiley and Sons, 1980. Most, if not all, of the "canned" numerical software offers routines for constrained optimization as well as unconstrained optimization, and so do the high level mathematical analysis languages. In this case, the constraint is:
|rc(η2) - rc(η1)| - d = 0   (77)

It is possible to use an inequality constraint as well, so that if it is known that there is a certain level of uncertainty in the determination of d, then Equation (77) could be replaced with:
|rc(η2) - rc(η1)| - d < εd   (78)

where εd is the known level of uncertainty in d.
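The constraints (77) and (78) are simple scalar functions of the two estimated camera positions. A sketch of the functions one would hand to a constrained optimizer (function names are illustrative, not from the patent; the tolerance check takes the natural two-sided reading of (78)):

```python
import numpy as np

def baseline_equality(rc1, rc2, d_known):
    """Equation (77): zero exactly when the estimated baseline length
    |rc(eta2) - rc(eta1)| equals the independently known d."""
    return np.linalg.norm(np.asarray(rc2) - np.asarray(rc1)) - d_known

def baseline_within_tolerance(rc1, rc2, d_known, eps_d):
    """Equation (78): True when the estimated baseline is within eps_d of d,
    eps_d being the known uncertainty in d."""
    return abs(baseline_equality(rc1, rc2, d_known)) < eps_d
```

An equality-constrained solver would drive `baseline_equality` to zero during the Equation (75) fit; with an uncertain transducer one relaxes this to the inequality form instead.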
As a third example of alignment calibration, I now consider the case where there are rotational errors in the motion of the camera in my third and fourth preferred embodiments. I have already explained how to make the measurement in this case, in the sub-section entitled "Application of the General Process to Correction of Translation Stage Rotational Errors". In motion calibration sub-section 3 below, I will explain how to determine the rotational errors. Here, I explain how to take these errors into account during the alignment calibration.
For alignment calibration of the EMB or EME with a known stage rotational error, it is necessary to determine the static alignment of the camera with respect to the translation direction in the presence of this error. Recall that at any point along the camera's path:
Rc(η) = Rcg R(η)   (51)

where now R(η) is known from the motion calibration process.
Once again, a calibration target array is viewed from two positions of the camera along its path of motion. According to Figure 36 and Equation (55), one can write:
rok1 = Rcg R(η1) [ rck - rc(η1) ]   (79)
rok2 = Rcg R(η2) [ rck - rc(η2) ]
These are extended just as Equations (71) were, to obtain:
AV1 = Rcg R(η1) [ rcm - rc(η1) 1k ] U1   (80)
AV2 = Rcg R(η2) [ rcm - rc(η2) 1k ] U2

and Equations (80) can be combined into:

[ AV1  AV2 ] = Rcg [ [ R(η1) rcm  R(η2) rcm ] - [ R(η1) rc(η1) 1k  R(η2) rc(η2) 1k ] ] U12   (81)

which is the same optimization problem as was Equation (75). This is handled exactly the same way, to estimate Rcg, rc(η1), and rc(η2). With Rcg, the rotation of the camera at any point in its path is known as Rc(η) from Equation (51). I have assumed that the rotation of the stage does not affect the offset of the stage, so that the measurement in this case is accomplished with Equations (49) through (53), Equation (45), and finally Equation (7).
3. Motion Calibration

For the third alignment calibration case above, the rotational errors of the translation stage must have been previously determined in a motion calibration procedure. Preferably, this motion calibration is done at the factory, for a subassembly of the EMB or EME. These calibration data are then incorporated into the software of the complete measurement scope that is constructed using the particular subassembly in question.
The small rotation errors of a linear translation stage can be conveniently measured using a pair of electronic tooling autocollimators, as depicted in Figure 37. Each of these autocollimators is internally aligned so that its optical axis is accurately parallel to the mechanical axis of its precision ground cylindrical housing. Such instruments are available from, for example, Davidson Optronics of West Covina, California, USA or Micro-Radian Instruments of San Marcos, California, USA.
In Figure 37, two collimator V-blocks 602 are mounted to a flat stage calibration baseplate 600. The two precision machined V-blocks 602 are located with precision pins so that their axes are accurately perpendicular, to normal machining tolerances. The two V-blocks 602 thus define the directions of a Cartesian coordinate system, which is defined as indicated on Figure 37.
An EMB subassembly V-block 606 is also mounted to baseplate 600 and located with pins, so that its axis is accurately parallel to the x axis defined by V-blocks 602. Also installed on baseplate 600 is actuator mounting block 608.
The autocollimators 604 are installed into their respective V-blocks and are both rotated about their respective axes so that their measurement y axes are oriented accurately perpendicular to the mounting plate.
With the autocollimators installed and aligned, EMB translation stage subassembly 550 is placed into V-block 606. An enlarged view of a portion of this subassembly is shown in Figure 38. Subassembly 550 consists of distal baseplate 514 (see Figures 21 - 23) to which is mounted translation stage 180, and transducer mounting bracket 367. Translation stage 180 is composed of fixed base 182 and moving table 184. Transducer 360 is mounted in bracket 367, and its operating rod 361 is mounted to transducer attachment bracket 369. Bracket 369 is in turn mounted to moving table 184.
The procedure given here assumes that translation stage 180 has been mounted to distal baseplate 514 so that the axis of translation is oriented parallel to the cylindrical axis of the distal baseplate. This alignment need only be accurate to normal machining tolerances, as I will discuss later. If the specific design of the hardware is different than I show for the preferred embodiment, it is necessary to use some other appropriate method of ensuring that the axis of translation of stage 180 is oriented parallel to the x axis defined by the calibration hardware, and that this orientation is accurate to normal machining tolerances.


Subassembly 550 is rotated about its axis to make the top surface of distal baseplate 514 nominally parallel to baseplate 600, and then it is clamped into position. For purposes of clarity, the clamp is not shown in Figure 37.
Stage operating arm 614 is then attached to moving table 184. Actuator 610 is installed in mounting block 608, and actuator operating rod 612 is attached to operating arm 614. Thus, the stage can now be moved back and forth over its range of travel, and a function of its position, η(p), can be read at the output of position transducer 360.
Stage 180 is moved to the mid range of its travel by the use of actuator 610. Mirror platform 618 is then attached to moving table 184. Mirror platform 618 has mounted to it two mirror mounts 620, which in turn hold a longitudinal mirror 622 and a transverse mirror 624.
Mirror mounts 620 are then adjusted to tilt each of the mirrors in two angles so as to center the return beams in autocollimators 604, as determined by the angular readouts of the autocollimators (not shown).
Translation stage 180 is then moved to one end of its travel using actuator 610. Calibration data are then recorded by moving stage 180 toward the other end of its travel range in a series of suitably small steps in distance.
The output of position transducer 360, η, is recorded at each step position, as are the angular readings of the autocollimators. Note that one need not be concerned with the actual distance stage 180 is moved between steps, unless one is also intending to calibrate transducer 360 at the same time.
The readings from the autocollimator viewing along the x axis will be (2θy, 2θz), where the positive direction for the angles is counter-clockwise when the view is along the axis from positive coordinates toward the origin (i.e., the right hand rule). The readings from the autocollimator viewing along the z axis will be (2θy, -2θx). The rotational error of the stage at any point can be expressed as:
R(η) = Rz(θz) Ry(θy) Rx(θx)   (82)

It is more efficient to record and store the three angles θz(η), θy(η), θx(η) and calculate R(η) whenever it is needed. When the calibration data are used in a measurement procedure, it will be necessary to interpolate between the stored values of η to estimate the rotation angles at the actual values of η used in the particular measurement. Such interpolation procedures are well known in the art.
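Storing the three angle curves and interpolating, as suggested, can be sketched as follows. Linear interpolation via numpy is an assumption of this sketch; the text only says interpolation procedures are well known in the art.

```python
import numpy as np

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

class StageRotationCalibration:
    """Holds sampled theta_z(eta), theta_y(eta), theta_x(eta) from the
    autocollimator procedure and rebuilds R(eta) = Rz Ry Rx per Equation (82)
    at arbitrary transducer readings eta."""
    def __init__(self, etas, theta_z, theta_y, theta_x):
        self.etas = np.asarray(etas, float)
        self.tz = np.asarray(theta_z, float)
        self.ty = np.asarray(theta_y, float)
        self.tx = np.asarray(theta_x, float)

    def rotation(self, eta):
        tz = np.interp(eta, self.etas, self.tz)
        ty = np.interp(eta, self.etas, self.ty)
        tx = np.interp(eta, self.etas, self.tx)
        return rot_z(tz) @ rot_y(ty) @ rot_x(tx)
```

The returned matrix is what Equations (51) through (53) consume as R(η) during a measurement; since the stored angles are only hundreds of microradians, the interpolated matrix stays extremely close to the identity.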
An error analysis shows that the angles measured during this calibration process will be a mixture of the components of the true rotational angles, if the calibration geometry is not perfectly aligned with the translation direction of the stage. However, the level of the mixed components is proportional to the error in the geometry, and thus will be small. For instance, if the angles defining the calibration geometry were all in error by three degrees (which is much larger than one would expect, using normal machining tolerances), the measured stage rotation angles would be 10% in error in the worst case. Since it is unlikely that the repeatability of a translation stage will be much more than ten times better than its absolute angular accuracy, this level of error is appropriate for calibration of the stage angular errors. Thus, use of a calibration geometry which is determined by precision machining is adequate to perform the calibration measurements.

M. Eliminating Routine Alignment Calibrations in BPA Embodiments
There is an inconvenience with the first and second embodiments as taught above, which is that a new alignment calibration might have to be performed each time a new measurement situation is set up. In alignment calibration, the orientation of the borescope's measurement coordinate system with respect to the motion provided by the BPA is determined. With a standard borescope, this orientation may not be well controlled, and thus every time the borescope is repositioned with respect to the BPA, there is the logical requirement for a new alignment calibration. Of course, whether a new calibration would actually be required in any specific instance depends on the accuracy required of the dimensional measurement in that instance. And of course, whether or not avoiding the inconvenience of the requirement for routine alignment calibrations is worth the additional structure and procedure described here will be determined solely by the user's application.

I describe here modifications to the borescope, to the BPA, and to the calibration and measurement procedures which work together to eliminate the need for routine alignment calibrations in borescope/BPA embodiments of my perspective measurement system. The user of my system may select from one of the subsequently described combinations of modifications as required to improve the accuracy of the measurements made and/or to make the system more convenient to use.
1. Detailed Explanation of the Problem

A first difficulty with the first and second embodiments of my system is depicted in Figure 39. Here, the lens tube of the borescope is not perfectly straight. Thus, when the borescope is clamped to the BPA at different points along its length, the geometrical relationship between the perspective displacement d and the visual coordinate system changes. This means that, for accurate work, an alignment calibration must be performed whenever the borescope is clamped at different positions along its length.
A second difficulty is depicted in Figures 40A and 40B. Coordinate axes parallel to the visual coordinate system are drawn in Figure 40 to make it easier to visualize the geometrical relationships. In these Figures the borescope is shown aligned along a mechanical axis (A - A). The Figure is drawn in the plane which contains the mechanical axis and which is also parallel to the perspective displacement d.
In Figure 40B the borescope has been rotated by 180 degrees about the mechanical axis with respect to its position in Figure 40A. In Figure 40A, the component of the visual x axis that is perpendicular to the page is directed into the page. In Figure 40B, the component of the visual x axis that is perpendicular to the page is directed out of the page.
The orientation of d with respect to the visual coordinate system is not the same in Figures 40A and 40B. (This may be most clear when considering the visual z axis.) Thus, when the axis of mechanical rotation of the borescope is not parallel to the perspective displacement, the orientation of the perspective displacement in the visual coordinate system will change when the borescope is rotated about that mechanical axis. For the system shown in Figures 4 and 5, the mechanical axis of rotation is determined by the V groove of lower V block 142 of the BPA. This means that an alignment calibration must be performed whenever the borescope is clamped at a new angular orientation with respect to the BPA, unless the V groove is accurately aligned along the translation axis of the translation stage.
A third difficulty is caused by the characteristics of the lens tube of a standard borescope. The envelope of the lens tube is typically made of thin wall stainless steel tubing. Such an envelope is unlikely to be perfectly circular at any position along its length, and it has already been discussed how unlikely it is to be straight. Rotation of such a geometrically imperfect envelope in a V groove will lead to a varying orientation of d with respect to visual coordinates, even if the V groove were aligned with d and the clamping position along the lens tube were unchanged. Once again, the situation is that if the borescope is moved with respect to the BPA, then alignment calibration must be repeated, at least for accurate work.
One approach to addressing these problems would be to characterize the alignment of the perspective displacement with respect to the visual coordinate system as a function of the position and orientation of the borescope with respect to the BPA. While this would work in theory, the amount of calibration effort necessary and the likelihood of poor repeatability of borescope orientation, due to the characteristics of the lens tube envelope, make this approach unattractive.
2. Description of a First Variant of Borescope/BPA Embodiments
Figure 41 shows a first modification to my BPA embodiments which solves these problems. In Figure 41, clamp 140 is shown in the open position in order to better show the modifications.
A portion of borescope lens tube 124 has been enclosed by a metrology sleeve or calibration sleeve 650.
Calibration sleeve 650 is comprised of a thick-walled cylindrical tube 652 with sleeve ferrules 654 attached at either end. Sleeve nuts 656 screw on to ferrules 654 to clamp the assembly to lens tube 124 at any selected position along lens tube 124.
The outer diameter of cylindrical tube 652 is fabricated to be accurately circular and straight. This is typically done by a process known as centerless grinding. Tube 652 is preferably made of a rather hard material, for instance high carbon steel coated with hard chrome, or case-hardened stainless steel. On the other hand, upper V block 144 is preferably made of a somewhat softer material, for instance, low carbon steel, aluminum or brass.
Because of these relative hardnesses, and because of the thick wall of tube 652, it is no longer necessary to use a layer of resilient material to line upper V block 144, and thus it is not shown in Figure 41. This also means that a much higher clamping pressure can be used in this system than could be used in the original system of Figure 4.
Calibration sleeve 650 lies in the V groove in lower V block 142. The dimensions of the V grooves in both lower V block 142 and upper V block 144 have been modified from those shown in previous figures in order to clamp the larger diameter of tube 652. In order for the groove in lower V block 142 to act as the position reference for sleeve 650, and hence, ultimately, for video borescope 120, hinge 148 is now fabricated with an appropriate amount of play, so that the groove in upper V block 144 takes a position and orientation which is determined by sleeve 650 when clamp 140 is closed.
The groove in lower V block 142 is accurately aligned to the translation axis of stage 180 to a predetermined tolerance using one of the methods to be described later.
An alternative embodiment of a calibration sleeve is shown in Figure 42. There a strain-relieving calibration sleeve 660 is shown attached to video borescope 120. At the distal end, sleeve 660 is attached to borescope lens tube 124 with the same ferrule (654) and nut (656) system that was shown in Figure 41. At the proximal end, sleeve 660 is attached to the body of the endoscope through a torque transferring clamping collar 658. In the embodiment that was shown in Figure 41, the overhanging torque due to the proximal (rear) portion of borescope 120 is concentrated on the small diameter lens tube 124 at the point at which lens tube 124 exits ferrule 654. Video endoscope systems vary in the size and weight of their proximal portions, and it is probable that in some cases, the overhanging torque will exceed the capacity of lens tube 124 to resist bending. In this alternative embodiment, collar 658 transfers this torque to a more robust portion of the endoscope. As shown in Figure 42, with the generic video borescope 120, collar 658 is securely clamped to illumination adapter 126; this clamping can be done with any of several common and well-known techniques. Collar 658 is constructed so as to provide the necessary operating clearance for fiber optic connector 128. Depending on the design of the borescope being used, it may be that some other portion of the borescope will be the most suitable attachment point for collar 658.
3. Operation of the First Variant of Borescope/BPA Embodiments
Consider Figure 41 once again. In use, calibration sleeve 650 is semi-permanently attached to borescope 120.
When nuts 656 are tightened, sleeve ferrules 654 grab tightly without marring or denting the surface of lens tube 124, fixing the relative locations of lens tube 124 and the outer cylindrical surface of sleeve 650. Since the visual coordinate system is fixed with respect to the outer envelope of the borescope, the outer surface of sleeve 650 is fixed with respect to the visual coordinate system. I call the assembly of borescope 120 and calibration sleeve 650 the perspective measurement assembly.
The perspective measurement assembly can be located at any position inside clamp 140, and can be clamped in that position, as long as a significant length of sleeve 650 is contained within the clamp. The action of placing sleeve 650 in the V groove in lower V block 142 constrains four degrees of freedom of the motion of sleeve 650.
The two unconstrained motions are rotation about the axis of the sleeve, and translation along that axis.
Translation is, of course, limited to a range of distances over which a significant length of the sleeve will be contained inside the clamp. Since the borescope is clamped inside the sleeve, its motion is similarly constrained and controlled, as is the motion of the visual coordinate system. These two degrees of freedom are precisely those necessary to allow borescope 120 to view objects at different positions with respect to BPA 138 (Figure 4).
Since the groove in lower V block 142 is accurately aligned with the translation axis of stage 180, and since the outer surface of sleeve 650 is accurately cylindrical, the relative orientations of d and the visual coordinate system do not change as the perspective measurement assembly is rotated or translated in lower V block 142. Note that there need be no particular orientation of the visual coordinate system with respect to the axis of the cylindrical outer surface of sleeve 650. The only requirements for there to be a constant relative orientation between d and the visual coordinate system are that the surface of sleeve 650 be accurately cylindrical, and that the axis of the locating V groove be accurately directed along d.
For making measurements on objects at widely differing depths inside enclosure 110 (Figure 4), sleeve 650 can be moved on lens tube 124, but when it is moved, a new alignment calibration will be required, in general.
The range of depths that can be accommodated by a perspective measurement assembly without recalibration is determined by the length of sleeve 650. For many users a limited range of available measurement depths is not a problem because their objects of interest are confined to a small range of depths inside the enclosure.
Calibration sleeve 650 could be made nearly as long as lens tube 124. This suggests another option for eliminating the need for routine alignment calibrations. I call this option the metrology borescope. A metrology borescope, a new instrument, is a rigid borescope built with a lens tube which is thicker, stiffer, and harder than normal. The outer envelope of lens tube 124 of a metrology borescope is precision fabricated to be accurately cylindrical. Such a scope does not need calibration sleeve 650 in order to provide accurate perspective dimensional measurements with only a single alignment calibration.
Standard borescopes, with their thin envelopes, tend to get bent in use. A small bend does not ruin a borescope for visual inspection, but it would ruin the accuracy of any calibrated perspective measurement assembly.
Since the metrology borescope is more resistant to such bending, it is the superior technical solution.
An additional advantage of the system shown in Figure 41 over that shown in Figure 4 is that borescopes with different lens tube diameters can be fitted with appropriate calibration sleeves of the same outer diameter. Thus, when the calibration sleeve is placed into lower V block 142, the centerline of the borescope is always at the same position with respect to the BPA, which is not the case when different diameter borescopes are directly inserted into the V block. Keeping this centerline at a constant position makes the mounting of the BPA with respect to enclosure 110 and inspection port 112 (Figure 4) less complicated when borescopes of different diameters are to be accommodated.
I have already stated that the V groove in lower V block 142 is accurately aligned with the translation axis of translation stage 180. I now explain exactly what this means, and then how that condition can be achieved.
A V groove is made up of two bearing surfaces which, ideally, are sections of flat planes. If these surfaces are perfect, then the corresponding planes will intersect in a straight line. It is when this line of intersection is parallel to the translation axis of stage 180 that it can be said that the V groove is accurately aligned with the translation. The purpose of the V groove is to locate the cylindrical outside diameter of the calibration sleeve accurately and repeatably. By locating a cylindrical object accurately, I mean that for a short section of a perfect cylinder, the orientation of the axis of the cylindrical section does not depend on where along the length of the V groove the cylindrical section happens to bear, and that there is a continuous single line contact between each bearing surface and the cylindrical section, no matter where that section happens to lie along the V groove, and no matter how long that section is.
A V groove will serve to locate a cylindrical object accurately even if the bearing surfaces are not planar, just so long as three conditions hold. First, each of the bearing surfaces must either have a symmetry about a straight line axis or must be perfectly planar. Second, the straight line axis of one surface must be parallel to the axis or plane of the other surface. Third, surfaces with symmetry about a straight line axis must either be convex or have a sufficiently large radius of curvature that there is only one line of contact between the cylindrical object and the surface.
This means, for instance, that two accurately cylindrical bodies can serve to accurately locate a third cylinder just as long as the axes of the first two cylinders are parallel, and such a system could be used instead of the preferred V groove.
It is also possible to form two physical lines of contact by cutting a cylindrical groove into a plane surface or into a larger radius cylindrical groove, for example. These physical lines can serve to accurately locate a cylinder, but only if the cylindrical groove is oriented accurately parallel to the plane surface or cylinder into which it is cut.
If the cylindrical groove is not so oriented, the contact lines formed thereby will not be straight and will not serve to accurately locate a cylindrical body.
In order to locate the calibration sleeve repeatably, it is necessary to pay adequate attention to maintaining the cleanliness of both the outer surface of the calibration sleeve and of the locating surface on the BPA, whether that surface is embodied as a V groove or as some other appropriate geometry.

To maintain the accuracy of the perspective measurement, one must maintain the orientation of the visual coordinate system with respect to the outer surface of the calibration sleeve, and one must also maintain the alignment of the BPA reference surface with respect to the perspective displacement. In order to maintain these geometrical relationships over a wide range of operating temperatures, one must pay careful attention to the effects of differential thermal expansion, especially in those embodiments which use an alignable BPA reference surface.
4. How to Achieve Accurate Alignment of the BPA Reference Surface
Any discussion of accuracy must include a definition of the size of errors which are allowed while still justifying the label accurate. In my perspective measurement system, the error of interest is the error in the dimensional measurement being made. As far as the alignment of the system is concerned, an unknown error in the orientation of d with respect to the visual coordinate system will cause a systematic error in the distance measurement.
Analysis shows that a misalignment of d will cause a systematic measurement error which will vary linearly with the distance (range) between the object being measured and the nodal point of the borescope optical system.
That is, this systematic error in a distance measurement can be expressed as a fraction of the range to the object, for example, 1 part in 1000, or as an error angle, e.g. 1 milliradian. In detail, the error in any given measurement depends on the position of the object in the apparent field of view of the borescope in each of the two views, and on the fractional portion of the field of view subtended by the distance being measured.
In the worst case, the error in the measured distance is approximately equal to the angular error in the orientation of d times the range to the object. That is, a 1 milliradian angular error in the orientation of d corresponds approximately to a worst case distance measurement error of 1 part in 1000 of the range.
A given level of acceptable systematic measurement error will correspond to an acceptable level of misalignment. For the purposes of this discussion, I will define two levels of acceptable error. I call a Class 1 measurement one that is accurate to 1 part in 1000 of the range. I call a Class 2 measurement one that is accurate to 1 part in 10,000 of the range. These acceptable error levels are consistent with the random error capabilities of the perspective measurement system when it is implemented with standard endoscopy equipment. A random error of 1 part in 1000 of the range is fairly straightforward to achieve using a standard video borescope, while achieving 1 part in 10,000 random error requires either (1) the use of a high resolution borescope optical system and a high resolution video camera back and some averaging of measurements, or (2) the averaging of a large number of measurements.
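As a concrete check of the error model just stated, the relationship between misalignment, range, and measurement class can be sketched as follows (the function names and the sample range are illustrative, not part of the disclosed apparatus):

```python
# Worst-case systematic error caused by a misalignment of the perspective
# displacement d: error ~= (alignment error angle, rad) * (range to object).
def worst_case_error(range_to_object: float, misalignment_rad: float) -> float:
    return misalignment_rad * range_to_object

def measurement_class(misalignment_rad: float) -> str:
    """Class 1 = 1 part in 1000 of the range; Class 2 = 1 part in 10,000."""
    if misalignment_rad <= 1e-4:
        return "Class 2"
    if misalignment_rad <= 1e-3:
        return "Class 1"
    return "worse than Class 1"

# A 1 milliradian misalignment at a 100 mm range: ~0.1 mm worst-case error,
# i.e. 1 part in 1000 of the range, the Class 1 limit.
print(worst_case_error(100.0, 1e-3))
print(measurement_class(1e-3))
```

This is simply the small-angle statement that the systematic error scales linearly with range, as the analysis above describes.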
The achievement of a misalignment of 1 milliradian, i.e., 0.001 cm. per cm., is straightforward by use of precision machining techniques as long as translation stage 180 has been fabricated with accurate mechanical references to its translation axis. If it has not been so fabricated, one proceeds as follows.
Usually, the top surface of moving table 184 of stage 180 (Figure 5) is guaranteed by the manufacturer to be parallel to the translation to within a specified tolerance. Often, this tolerance is 0.1 milliradian. If the top of the moving table has not been accurately aligned with the translation, then one can measure the pitch of the top surface by suspending a dial indicator above the stage and indicating on the top surface of moving table 184 as it translates below. This known pitch can then be compensated for in the machining of lower V block 142.

CA 02263~30 1999-02-16 W O98/07001 PCT~US97/15206 If there is not a convenient reference for the direction of the translation axis as measured in the plane of the top surface of moving table 184, suitable reference holes are easily made by mnuntin)a the stage on a drilling machine and using the motion of the stage itself to determine the relative positions of the holes.
Once stage 180 has been characterized and/or modified, lower V block 142 is fabricated with standard techniques while paying particular attention to two key factors. First, the bottom surface of lower V block 142 must be oriented accurately parallel to the translation axis of the fabrication machine when the V groove is cut into its upper surface (or tilted to offset the pitch of the top of moving table 184, measured as discussed immediately above). Secondly, the V groove, and any reference holes, are machined with a fixed tool spindle location and with the machine tool moving lower V block 142 only along a single translation axis. This guarantees that the V groove will be parallel to the line between the centers of the reference holes to an accuracy determined by the straightness of the machine tool translation axis.
The achievement of a misalignment appropriate to Class 2 measurements, i.e. 0.1 milliradian, by precision machining is possible, but difficult and expensive. One way to make it more feasible is to do the final grinding of the V groove into block 142 with block 142 mounted to the translation stage. The stage motion itself is used to provide the necessary motion of the block with respect to the grinding wheel. The disadvantage of this approach is that the length of the V groove is limited to somewhat less than the length of travel of the stage. The advantage is that the alignment of the V groove with the translation will be accurate to within the accuracy of translation of the stage.
For Class 2 accuracy, it may be preferable to align the V with respect to the translation of the stage. One way to accomplish this alignment is to use shims to adjust the position of lower block 142 in pitch with respect to the top of moving table 184 and in yaw with respect to a reference surface attached to the table top. A second way is to split lower block 142 into two plates with a variable relative alignment in pitch and yaw. Such a device would be similar to and work on the same principle as the Model 36 Multi-axis Tilt Platform sold by the Newport Corporation of Irvine, CA, USA. The upper plate of this assembly is steered with respect to the lower plate in pitch and yaw through the use of adjusting screws, while the lower plate is conventionally attached to the top of moving table 184.
A rig for determining the alignment of the V groove to the translation of the stage is depicted in Figure 43.
Here is shown a front elevation view of a translation stage 180, to which is attached a split lower V block 143.
Split lower V block 143 is constructed as discussed in the previous paragraph. As before, upper V block 144 acts as a clamp; the screw or mechanism which provides clamping force is not shown. A reference cylinder 700 is clamped into split lower V block 143 so that a suitable length of cylinder 700 extends out of the clamp towards the observer. Reference cylinder 700 is selected to be straight and circular to a very high degree of accuracy. A pair of dial indicators 702 are mounted to the work surface by conventional means which are not shown. Indicators 702 are suspended over reference cylinder 700 and disposed to either side of it. Sensing feet 704 of dial indicators 702 contact the shaft at the same distance from the clamp as measured along cylinder 700. Sensing feet 704 have a flat surface over most of their diameter in order to avoid errors due to the imperfect alignment of the indicator axis with the axis of reference cylinder 700.
To determine the desired alignment, stage 180 is translated back and forth along its length of travel, and the readings of the dial indicators are monitored. Errors in pitch of the V groove are indicated by changes in the average of the two dial indicator readings. Errors in yaw are indicated by changes in the difference of the two readings. The alignment of the V groove is perfect when the dial indicator readings do not change as the stage is translated. The arrangement of dial indicators 702 is not restricted to that shown in Figure 43. One could orient one of the indicators so that it was suspended vertically above reference cylinder 700, and the other could be oriented horizontally. Then one indicator would directly indicate pitch, while the other would directly indicate yaw. The only requirement is that the two indicators not be oriented in exactly the same direction, and for best sensitivity and most convenience, there should be a right angle between their orientations. One can check for the presence of geometrical imperfections in the combination of reference cylinder 700 and the V groove in lower V block 143 by loosening the clamp, rotating reference cylinder 700 about its axis to another angular position, tightening the clamp, and redoing the check. One can also repeat the test at different positions along the length of cylinder 700 to check for errors in its straightness. A good reference for the theory and practice of making such measurements is Handbook of Dimensional Measurement, 2nd Edition, by Francis T. Farago, Industrial Press, New York, 1982.
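The pitch/yaw interpretation of the two indicator readings can be sketched numerically as follows (a minimal illustration under a small-angle assumption; the readings, travel, and function names are hypothetical):

```python
# Interpreting the rig of Figure 43: as the stage translates, a change in the
# AVERAGE of the two dial-indicator readings indicates pitch error of the V
# groove, and a change in the DIFFERENCE indicates yaw error.  Readings and
# travel below are hypothetical sample values.
def pitch_yaw_indicators(a_readings, b_readings, travel):
    """Return (pitch, yaw_indicator) for readings taken at the start and end
    of `travel`.  Pitch uses the small-angle approximation; the yaw figure is
    the raw difference change per unit travel (converting it to an absolute
    angle also requires the transverse contact geometry)."""
    avg_change = ((a_readings[1] + b_readings[1]) / 2.0
                  - (a_readings[0] + b_readings[0]) / 2.0)
    diff_change = ((a_readings[1] - b_readings[1])
                   - (a_readings[0] - b_readings[0]))
    return avg_change / travel, diff_change / travel

# Over 50 mm of travel, both readings rise by 0.005 mm and their difference is
# unchanged: a pure pitch error of about 0.1 milliradian, and zero yaw signal.
pitch, yaw = pitch_yaw_indicators((0.000, 0.005), (0.000, 0.005), 50.0)
print(pitch, yaw)
```

The 0.1 milliradian result of this sample case is exactly the Class 2 alignment tolerance discussed earlier in this section.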
One could directly indicate on the plane surfaces of the V groove rather than using a reference cylinder as I have shown. But in that case, one would be measuring imperfections in these surfaces as well as their alignment, when what one really cares about is how the existing V groove acts to locate a cylinder. Since accurate reference cylinders are readily available, I prefer the method I have shown.
Of course, it must be kept in mind that one cannot expect to determine errors in the geometry of cylinder 700 or in the V groove in lower V block 143 to a level better than that provided by the straightness and repeatability of motion of translation stage 180. Since the purpose of this test rig is to align the V groove with respect to this motion, errors in this motion do not affect the validity of the results.
One can check for the integrity of the rotation of a cylinder in a V block by mounting a mirror on an adjustable mirror mount so that the mirror is approximately perpendicular to and centered on the axis of the cylinder. This process is depicted in Figure 44. In Figure 44 a laser 710 produces a laser beam 716. Laser beam 716 is reflected from a mirror which is part of mirror mount assembly 712. The beam reflected from the mirror is allowed to impact a viewing screen 714.
The mirror mount is adjusted to produce the smallest motion of the laser spot as the cylinder is rotated in the V block. Any residual motion of the spot, which cannot be reduced by adjustment of the angular orientation of the mirror, is due to non-constant angular orientation of the cylinder as it rotates while maintaining contact with the V block. The variation of the orientation of the axis of the cylinder can be sensed to within a few tenths of a milliradian in this way. A sensitivity on the order of a microradian can be achieved, once the mirror has been aligned as shown here, by viewing the mirror with an autocollimator which has been aligned nearly perpendicular to the mirror, and again rotating the cylinder.
It is possible to conceive of a motion of the cylinder in the V block that is not perfect, but is such that the mirror remains at a constant angular orientation while the cylinder is being rotated. (One way is for the cylinder to wobble as it rotates.) What is important about such a situation is that any motion which causes an error in the perspective measurement will also cause an error when being tested by the technique depicted in Figure 44.
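The sensitivity of the optical-lever test of Figure 44 follows from the familiar doubling rule: a mirror tilt of theta deflects the reflected beam by 2*theta. A short sketch (the screen distance and resolvable spot motion are assumed values, not from the disclosure):

```python
# Optical lever (Figure 44): a mirror tilt of theta deflects the reflected
# beam by 2*theta, so the laser spot on a screen at distance L moves by about
# 2*theta*L.  Screen distance and spot resolution are assumed values.
def spot_motion(tilt_rad, screen_distance):
    return 2.0 * tilt_rad * screen_distance

def detectable_tilt(spot_resolution, screen_distance):
    """Smallest mirror tilt giving a resolvable spot motion."""
    return spot_resolution / (2.0 * screen_distance)

# With a screen 1000 mm away, a 0.25 milliradian tilt moves the spot 0.5 mm,
# consistent with sensing a few tenths of a milliradian as stated in the text.
print(spot_motion(2.5e-4, 1000.0))
print(detectable_tilt(0.5, 1000.0))
```

The autocollimator variant reaches microradian sensitivity because it replaces the screen with a long effective lever arm and a calibrated angular readout.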
5. Description of a Second Variant of Borescope/BPA Embodiments

A second modification to BPA embodiments is shown in Figure 45. This differs from the first modification in that there is now an angle scale or protractor 670 attached to cylindrical tube 652. A protractor pointer 672 is attached to a pointer mounting bar 673 which is in turn attached to lower V block 142. Pointer 672 has sufficient length to enable the angular orientation of the perspective measurement assembly to be determined no matter where it is located in its range of translation with respect to clamp 140.
In this embodiment, the V groove in lower block 142 need not be accurately aligned with the perspective displacement. Another option would be to use the strain-relieving calibration sleeve 660 as depicted in Figure 42. Then an angular scale could be advantageously marked on the outer diameter of collar 658.
6. Operation of the Second Variant of Borescope/BPA Embodiments
It was shown in Figure 40 that the alignment of the perspective displacement, d, in the visual coordinate system is a function of the rotation of the perspective measurement assembly about the axis of the cylindrical surface of the calibration sleeve. In this second embodiment, the acquisition of an additional piece of information during the measurement, and an additional step in alignment calibration, enable one to calculate the alignment of d, and thus make an accurate perspective measurement, despite the presence of a misalignment between the axis of the calibration sleeve and the perspective displacement. I will explain the operation of the measurement in this section, and the necessary additional calibration of the system in the next section.
Figure 46 is similar to Figure 40 but it contains additional information. As before, a visual coordinate system (x_v, y_v, z_v) is defined by the x and y axes of the video camera focal plane, and the optical axis of the borescope. In Figure 46 coordinate axes parallel to the visual coordinate system are shown in the field of view of the borescope.
As before, the Figure is drawn in the plane which contains the axis of mechanical borescope rotation, A-A, and which is parallel to the perspective displacement d. None of the visual coordinate axes x_v, y_v, z_v are necessarily contained in the plane of the Figure. Again as before, in Figure 46A, the component of the visual z axis that is perpendicular to the page should be visualized as being directed into the page, while it should be visualized as being directed out of the page in Figure 46B.
One may define a borescope mechanical coordinate system which rotates with the borescope, which has a fixed relationship with respect to the visual coordinate system, and which has its x axis parallel to A-A as follows:

(1.) The x_m axis is oriented along A-A.
(2.) The y_m direction is chosen to be perpendicular to both the optical axis, z_v, and to x_m. This can be expressed mathematically as:

    y_m = ( z_v × x_m ) / | z_v × x_m |    (83)

(3.) Finally, the z_m axis is chosen to be perpendicular to both the x_m and y_m axes in the usual way as:

    z_m = ( x_m × y_m ) / | x_m × y_m |    (84)

The mechanical coordinate system (x_m, y_m, z_m) is depicted in Figure 46. One important implication of this definition is that the optical axis, z_v, is guaranteed to lie in the (x_m, z_m) plane.
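The construction of Equations (83) and (84) is a pair of normalized cross products, and can be sketched as follows (the tilt of the optical axis used here is an illustrative value):

```python
import math

# Equations (83)-(84): construct the mechanical coordinate system from the
# optical axis z_v and the mechanical rotation axis x_m by normalized cross
# products.  The small tilt used below is an illustrative value.
def cross(a, b):
    return [a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0]]

def normalize(v):
    n = math.sqrt(sum(c * c for c in v))
    return [c / n for c in v]

def mechanical_axes(z_v, x_m):
    y_m = normalize(cross(z_v, x_m))          # Equation (83)
    z_m = normalize(cross(x_m, y_m))          # Equation (84)
    return y_m, z_m

x_m = [1.0, 0.0, 0.0]                         # mechanical axis along A-A
z_v = normalize([0.01, 0.0, 1.0])             # optical axis, slightly tilted
y_m, z_m = mechanical_axes(z_v, x_m)

# y_m is perpendicular to z_v by construction, so the optical axis is
# guaranteed to lie in the (x_m, z_m) plane, as stated in the text.
print(y_m, z_m, abs(sum(a * b for a, b in zip(y_m, z_v))) < 1e-12)
```

The final printed check confirms the stated implication: y_m has no component along the optical axis.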
Also shown in Figure 46 is a translation coordinate system, (x_t, y_t, z_t), which has a fixed orientation with respect to the translation stage. The x_t axis is defined to lie along the perspective displacement d. For the moment, the directions of the y_t and z_t axes are taken to be arbitrary, but the (x_t, y_t, z_t) system is defined to be a conventional right-handed Cartesian coordinate system.
For the purposes of this discussion, all of these coordinate systems will be assumed to have origins at the same point in space, although they are drawn separated in Figure 46 for clarity.
What one needs is an expression for d in the visual coordinate system. This expression will depend on the rotation of the borescope about the mechanical axis A-A. The parameter for this rotation is taken to be the angle φ_x.
As mentioned previously with regard to the general perspective measurement process, in order to discuss rotations in three dimensions one must carefully define what procedure is being used for a series of sub-rotations.
I define the specific procedure for rotating the mechanical coordinate system to align it with the translation coordinate system as follows:

(a.) Rotate the m coordinate system about x_m until y_m lies in the (x_t, y_t) plane.
(b.) Rotate the m coordinate system about y_m until z_m coincides with z_t.
(c.) Rotate the m coordinate system about z_m until x_m coincides with x_t (and y_m coincides with y_t).

Mathematically, this procedure can be expressed as:
    v_t = R_z(φ_z) R_y(φ_y) R_x(φ_x) v_m = R v_m    (85)

where the 3 x 3 matrices R have been defined in Equations (33-35). In Equation (85), v_t and v_m are 3 x 1 matrices which contain the components of any arbitrary vector as expressed in the translation and mechanical coordinate systems respectively. The angles φ_x, φ_y, and φ_z are the angles by which the coordinate system is rotated in each step of the procedure. At each step, the angle is measured in the coordinate system that is being rotated. The positive direction of rotation is defined by the right hand rule.
Step (a.) of the procedure implicitly states that φ_x = 0 when y_m lies in the (x_t, y_t) plane. In the embodiment shown in Figure 45, the orientation of the perspective measurement assembly for which φ_x = 0 is defined by the scale on protractor 670. Together, these two facts mean that it is the location of the zero point on the scale of protractor 670 which defines the orientation of the y_t and z_t axes. The orientation of these axes can no longer be considered arbitrary.
The inverse transformation to Equation (85), that is, the procedure for rotating the translation coordinate system to align it with the mechanical system, can be expressed as:

    v_m = R_x(-φ_x) R_y(-φ_y) R_z(-φ_z) v_t = R_x^-1(φ_x) R_y^-1(φ_y) R_z^-1(φ_z) v_t    (86)

Recall that the mechanical coordinate system was defined so that the visual z axis is confined to the mechanical (x, z) plane. The relationship of the visual and mechanical coordinate systems is depicted in Figure 47. Because of the way the relationship between these two coordinate systems was defined, there are only two rotation angles necessary to align one with the other. The specific procedure for rotating the mechanical coordinate system so that it is aligned with the visual coordinate system is simply:
(a.) Rotate about the mechanical y axis by angle θ_y.
(b.) Rotate about the mechanical z axis by angle θ_z.

In mathematical terms this is:

    r_v = R_z(θ_z) R_y(θ_y) r_m    (87)

Angle θ_y represents a rotation of the optical axis with respect to the mechanical z axis; this rotation is confined to the mechanical (x, z) plane. Angle θ_z represents a rotation of the visual coordinate system about the optical axis.
Combining Equations (86) and (87), one can express the relationship between a vector as expressed in the translation and visual coordinate systems as:

    r_v = R_z(θ_z) R_y(θ_y) R_x^-1(φ_x) R_y^-1(φ_y) R_z^-1(φ_z) r_t    (88)

Since the displacement vector, as expressed in translation coordinates, is simply d = d (1, 0, 0)^T, one has:

    d_v = R_z(θ_z) R_y(θ_y) R_x^-1(φ_x) R_y^-1(φ_y) R_z^-1(φ_z) (d, 0, 0)^T    (89)

Therefore, to determine the three-dimensional position of a point of interest with this second modification, one determines the visual location vectors a_v1 and a_v2 as usual. One also records the reading of protractor 670 as indicated by pointer 672. This is the angle φ_x. One then uses the four angles (φ_y, φ_z, θ_y, θ_z), as determined in an alignment calibration, in Equation (89) to determine the displacement vector as expressed in visual coordinates.
This alignment calibration is discussed below. Finally, one uses Equation (19) to determine the position of the point.
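The evaluation of Equation (89) can be sketched numerically as follows, assuming the standard right-handed rotation matrices for R_x, R_y, and R_z (Equations (33-35) lie outside this excerpt) and using the fact that the inverse of a rotation by an angle is a rotation by the negative of that angle; all angle values are illustrative:

```python
import math

# Equation (89): express the perspective displacement d in visual coordinates.
# Standard right-handed rotation matrices are assumed for R_x, R_y, R_z
# (Equations (33-35) lie outside this excerpt); angle values are illustrative.
def Rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def d_visual(d, phi_x, phi_y, phi_z, theta_y, theta_z):
    """d_v = Rz(theta_z) Ry(theta_y) Rx^-1(phi_x) Ry^-1(phi_y) Rz^-1(phi_z)
    applied to (d, 0, 0)^T.  phi_x is the protractor reading; the other four
    angles come from the alignment calibration."""
    v = [d, 0.0, 0.0]
    for R in (Rz(-phi_z), Ry(-phi_y), Rx(-phi_x), Ry(theta_y), Rz(theta_z)):
        v = matvec(R, v)
    return v

# With all five angles zero, d_v is simply d along the visual x axis.
print(d_visual(1.0, 0.0, 0.0, 0.0, 0.0, 0.0))
```

Note that the matrices are applied right-to-left, matching the order in which they appear in Equation (89), and that the length of d_v always equals d, since each factor is a pure rotation.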
There are many other ways that the rotation of the perspective measurement assembly with respect to the BPA could be measured. For instance, the rotation could be sensed with an optical or an electrical transducer, and the user would then avoid having to read a scale manually. It is also possible to attach the protractor to the BPA and the pointer to the perspective measurement assembly to achieve the same result as does the preferred embodiment shown in Figure 45. In addition, the angle scale could be read more precisely when necessary by using a conventional vernier scale index instead of the simple pointer 672.
It is important to consider how accurately one must determine ψ in order to achieve the accuracy desired in the perspective measurement. Assume that the misalignment of the mechanical axis with respect to the translation axis is small enough that the sines of the angles φ_y and φ_z can be replaced with the angles themselves. Then it can be shown, by differentiation of Equation (89), that the worst case error component in d_v/d is √2 φ times the error in ψ. If we take φ_y and φ_z to have equal magnitudes and call that magnitude φ, then the worst case error component in d_v is √2 φ Δψ d. Thus, any combination of misalignment, φ, and rotational measurement error, Δψ, that forms the same product will create the same level of systematic error in the perspective measurement.
As an example, assume that the misalignment of the mechanical axis with respect to the translation is 10 milliradians (0.57 degrees), a value easily achieved with non-precision fabrication techniques. In this case, to achieve a perspective measurement to Class 1 accuracy (1 part in 1000 of the range) the allowable error in the rotation of the perspective measurement assembly is 71 milliradians, or 4.1 degrees. For Class 2 accuracy under the same conditions, the measurement of the rotation must be ten times more accurate.
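The error budget above can be checked with a few lines of arithmetic. This sketch uses the small-angle result quoted in the text (worst-case relative error √2·φ·Δψ); the function name is mine.

```python
import numpy as np

def allowable_rotation_error(misalignment_rad, relative_accuracy):
    """Solve sqrt(2) * phi * dpsi = relative_accuracy for dpsi,
    the allowable error in the rotation reading psi."""
    return relative_accuracy / (np.sqrt(2.0) * misalignment_rad)

# 10 mrad misalignment, Class 1 accuracy (1 part in 1000 of the range):
dpsi = allowable_rotation_error(0.010, 1.0e-3)
print(round(dpsi * 1000), "mrad, or", round(np.degrees(dpsi), 1), "degrees")
# prints: 71 mrad, or 4.1 degrees
```

This reproduces the 71 milliradian (4.1 degree) figure in the text; tightening the target to Class 2 (1 part in 10000) scales Δψ down by exactly ten.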
7. Calibration of a System using the Second Modification

In the discussion of calibration above, it was shown how to calibrate both the optical parameters of the borescope, and the relative alignment of the visual and translation coordinate systems. The assumption there was that the axis was directed exactly along the translation direction, or that there would be no rotation of the borescope between calibration and measurement. The alignment calibration determines the two alignment angles θ_z and θ_y of the translation with respect to the visual coordinate system.
If the borescope is translated from one viewing position to a second viewing position, and if the location of the nodal point is determined in the same calibration coordinate system at both positions, then the alignment of the displacement vector in the visual coordinate system can be determined from Equation (67) as:

d_v = R_c [ r_c(η_2) - r_c(η_1) ]    (90)

where η_1 and η_2 are parameters denoting the translation position at the first and second viewing positions.
Equation (90) expresses the standard alignment calibration process. The result is specific to the particular orientation, ψ_1, that the perspective measurement assembly has during the alignment calibration, if the mechanical axis of rotation of the perspective measurement assembly is not aligned with the translation axis.
To perform the alignment calibration for a system using the second modification, this standard process is performed twice, with the perspective measurement assembly being rotated in the clamp of the BPA between these two alignment calibrations.
The preferred rotation between the two alignment calibrations is approximately 180 degrees. In other words, a standard alignment calibration is performed with, for instance, the calibration target array serving as the object of interest in Figure 4. Then, the perspective measurement assembly is rotated 180 degrees inside the clamp of the BPA and the calibration target array is moved to the other side of the BPA so that the targets can again be viewed, and a second alignment calibration is performed.
In terms of the rotation angles defined in Equations (86) and (87), one can write the directions of the perspective displacements in the visual coordinates for these two alignment calibrations as:

d_vA = d_A / |d_A| = R_z(θ_z) R_y(θ_y) R_z^-1(ψ_1) R_y^-1(φ_y) R_z^-1(φ_z) (1, 0, 0)^T    (91)
d_vB = d_B / |d_B| = R_z(θ_z) R_y(θ_y) R_z^-1(ψ_2) R_y^-1(φ_y) R_z^-1(φ_z) (1, 0, 0)^T

In Equations (91) the known quantities are the rotation angles of the perspective measurement assembly, ψ_1 and ψ_2, and the direction vectors d_vA and d_vB (which are known from use of Equation (90) as a result of the two individual alignment calibrations). The unknowns are the four alignment angles θ_z, θ_y, φ_z, and φ_y. Since the length of both direction vectors is fixed at unity, there are four independent equations in four unknowns.

WO 98/07001 PCT/US97/15206

Equations (91) can be rewritten as:
d_vA = Q R_z^-1(ψ_1) s    (92)
d_vB = Q R_z^-1(ψ_2) s

where matrix Q is a function of θ_z and θ_y, and where vector s is a function of φ_z and φ_y. The first equation can be solved for s to give:
s = R_z(ψ_1) Q^-1 d_vA    (93)

and this can be substituted in the second equation to give:
d_vB = Q R_z^-1(ψ_2) R_z(ψ_1) Q^-1 d_vA    (94)

Equation (94) represents two non-linear equations in two unknowns. It can be solved for θ_z and θ_y by an iterative numerical procedure, such as Newton's method. In fact, (94) can be solved by a non-linear optimization process similar to that described above in the discussion of optical calibration.
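The iterative solution of Equation (94) can be sketched as a small Newton/Gauss-Newton loop. This is an illustrative implementation only: it assumes the same right-handed rotation convention as before, uses a finite-difference Jacobian, and needs a reasonable starting guess (θ_z becomes poorly determined as θ_y approaches zero). All names are mine.

```python
import numpy as np

def Rz(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def Ry(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def solve_thetas(d_vA, d_vB, psi1, psi2, guess=(0.0, 0.0)):
    """Solve Equation (94), d_vB = Q Rz(psi2)^-1 Rz(psi1) Q^-1 d_vA with
    Q = Rz(theta_z) Ry(theta_y), for (theta_z, theta_y)."""
    def residual(x):
        Q = Rz(x[0]) @ Ry(x[1])
        return Q @ Rz(psi2).T @ Rz(psi1) @ Q.T @ d_vA - d_vB

    x = np.asarray(guess, dtype=float)
    for _ in range(50):
        f = residual(x)
        if np.linalg.norm(f) < 1e-13:
            break
        J = np.empty((3, 2))
        h = 1e-7
        for j in range(2):                       # finite-difference Jacobian
            e = np.zeros(2)
            e[j] = h
            J[:, j] = (residual(x + e) - f) / h
        # Gauss-Newton step: 3 residual components, 2 unknowns
        x = x - np.linalg.lstsq(J, f, rcond=None)[0]
    return x
```

Only two of the three residual components are independent (both direction vectors are unit length), which is why a least-squares step over all three components is a convenient way to take the Newton update.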
Once these two angles are known, they can be substituted into Equation (93) to solve for φ_z and φ_y. This latter solution is straightforward. The vector s can be written explicitly as:
s = ( cos(φ_y)cos(φ_z), cos(φ_y)sin(φ_z), -sin(φ_y) )^T    (95)

so that the z component of s will give φ_y easily.
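Inverting the explicit form of s in Equation (95) is direct; a minimal sketch (function name mine):

```python
import numpy as np

def phis_from_s(s):
    """Invert Equation (95),
    s = (cos(phi_y)cos(phi_z), cos(phi_y)sin(phi_z), -sin(phi_y)):
    the z component yields phi_y, and the x and y components then
    yield phi_z."""
    phi_y = -np.arcsin(s[2])
    phi_z = np.arctan2(s[1], s[0])
    return phi_y, phi_z
```

Using `arctan2` on the y and x components keeps the sign of φ_z correct in all quadrants (the common cos(φ_y) factor cancels, assuming cos(φ_y) > 0, which holds for small misalignments).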
I note for completeness that one can also calibrate such a system with a combination of mechanical and optical techniques. One can use the test rig of Figure 43 to directly measure the angles φ_y and φ_z that the mechanical rotation axis makes with respect to the translation axis. When one does this, one is also inherently defining the specific orientation of the translation y and z axes, so that one must then set the zero point on protractor 670 to correspond with this specific orientation, and also to define a plane which contains the optical axis of the borescope. Once these conditions have been satisfied, one then can use Equation (89) to determine the alignment angles θ_y and θ_z of the translation with respect to the visual coordinate system using standard alignment calibration data, in the same manner as was discussed above.
8. Additional Ramifications and Embodiments
The improved system I have described is also applicable to any single camera, linear motion embodiment of the perspective measurement system, if the camera is given a similar freedom to rotate about an axis which is not aligned with the linear motion. Figures 40, 46, and 47 apply just as well to this case as to the borescope/BPA embodiment discussed in detail. The same measurements, the same equations, and the same expanded alignment calibration as I have disclosed can be used to perform an accurate perspective measurement with such an embodiment. Although the improved system has been described with reference to particular embodiments, it will be understood by those skilled in the art that it is capable of a variety of alternative embodiments.
For example, one may use a pair of spherical bodies attached to and arranged so as to surround the borescope and disposed with some separation along its length, instead of the cylindrical calibration sleeve of my preferred embodiment. This structure would allow the borescope the required two degrees of motional freedom (when located in the V groove, but not clamped in position) and yet would provide the required orientation control when used in conjunction with the BPA.
It has been mentioned that there are any number of alternative groove shapes that can be used instead of my preferred V groove for the BPA reference surface. One could also use two separate short V grooves to locate the calibration sleeve, unlike the single long V groove of my preferred embodiment. In this case, the two V grooves would have to be accurately aligned with respect to each other, but this construction could save weight.
Another alternative would be to use a cylindrical reference surface on the BPA and a V groove mounted on the borescope. This would work just as well as the preferred embodiments in terms of the accuracy of the measurement. The disadvantage is that the centerline of the borescope would move with respect to the BPA as the borescope was rotated, thus making it more difficult to perform the measurement through a small inspection port as shown in Figure 4.
The reference surface on the borescope does not have to be mounted over the lens tube, as it is in my preferred embodiment. Depending on the detailed construction of the individual borescope and on the need for a translational degree of freedom in the application, it is possible to provide the reference surface somewhere on the body of the borescope. The advantage is that there is then less of the length of the borescope lens tube dedicated to the support of the borescope, and thus more of the length is useable for reaching deep into an enclosure.
It is also possible to provide systems which have only the rotational degree of freedom, for those applications in which the depth of the object of interest is fixed. One simple example is that a specific region of the lens tube envelope could be marked as the region to be clamped into the BPA. If the borescope is always clamped at this same position, then there will be no change in alignment because of curvature of the lens tube envelope. This simple system is still subject to lack of repeatability in the alignment because of non-circularity of the lens tube, but it may be adequate for certain applications. The system of using complementary reference surfaces to provide a repeatable relative alignment between a borescope and a borescope positioner could also be used with other, less capable, measurement systems which were known prior to my perspective measurement system to allow more flexibility in aligning the view to objects of interest.


Conclusion, Ramifications, and Scope

Accordingly, the reader will see that the dimensional measurement system of this invention has many advantages over the prior art. My system provides more accurate measurements than hitherto available, because I show how to arrange the measurement to minimize the inherent random errors, and also because I show how to determine and take into account the actual geometry of, and any systematic errors in, the hardware. My system provides measurements at lower cost than previously available because I correctly teach how to add the measurement capability to current, widely available, visual inspection hardware. In addition, my system provides a more flexible measurement technique than previously known, in that I teach how to make measurements that are simply impossible with the prior art. Using my invention, it is possible to build special purpose measurement systems to meet any number of specific measurement requirements that are currently not being adequately addressed.
Although the invention has been described with reference to particular embodiments, it will be understood by those skilled in the art that the invention is capable of a variety of alternative embodiments within the spirit and scope of the appended claims.

Claims (22)

  1. A method of perspective measurement of the three-dimensional size of an inaccessible object using a camera having a field of view, said camera being translated along a substantially straight line from a first viewing position to a second viewing position, characterized in that the first and second viewing positions are selected so that a single point on the object is viewed at an apparent angular position near one edge of the field of view at the first viewing position, and at substantially the same apparent angle on the other side of the field of view at the second viewing position, thereby minimizing the random error in the measurement.
2. A method of perspective measurement of the three-dimensional size of an inaccessible object using a camera, said camera being translated along a substantially straight line from a first viewing position to a second viewing position, characterized in that any errors in the translational motion of the camera are determined in a calibration process and in that these errors are then also taken into account in the measurement result.
3. A method of perspective measurement of the three-dimensional distances between selected points on an inaccessible object using a rigid borescope, said borescope being translated along a substantially straight line from a first viewing position to a second viewing position, wherein said borescope forms a first optical image of said object at said first viewing position and a second optical image of said object at said second viewing position, and wherein each of said selected points on said object is individually located in each of said first and second images, characterized by the use of a fully three-dimensional least squares estimation procedure to determine the measurement result.
4. An apparatus for measuring three-dimensional distances between individual user selected points on an inaccessible object, comprising at least one probe body and additionally characterized by:
    (a) one or more cameras located near the distal ends of said at least one probe body, said cameras forming images of said selected points on said object;
    (b) motion means for moving at least one of said one or more cameras with respect to its probe body, said motion means providing a plurality of relative camera positions for each of said cameras;
    (c) orientation means for providing a relative spatial orientation for each of said cameras at each of said relative positions;
    (d) position determination means, for determining the relative positions of each of said one or more cameras, said position determination means also producing camera position data which is provided to a computing means;
    (e) orientation determination means, for determining the relative orientations of each of said one or more cameras, said orientation determination means also producing camera orientation data which is provided to the computing means;
    (f) image measurement means, for measuring the positions of said images of said user selected points on said object, said image measurement means also producing point position data which is provided to the computing means; and (g) computing means that receives said camera position data and said camera orientation data and said point position data, said computing means being adapted to calculate the three-dimensional distances between said user selected points on said inaccessible object.
5. The apparatus of claim 4 characterized in that the orientation means is combined with the motion means such that said relative orientations are predetermined functions of said relative positions, thereby making it possible to eliminate the use of said orientation determination means except during calibration, or in that said plurality of relative camera positions constitutes a set of fixed relative positions, thereby making it possible to eliminate the use of said position determination means except during calibration.
6. The apparatus of claims 4 or 5 characterized in that said relative camera positions all lie along a substantially straight line, or along a substantially circular arc.
7. The apparatus as claimed in any one of claims 4, 5 or 6 characterized in that said relative camera spatial orientations are all substantially the same.
8. The apparatus of claim 6 characterized in that said relative camera positions lie along said circular arc, said circular arc having a center of curvature, and in that each of said cameras has an optical axis, and in that the orientation of each of said one or more cameras is coupled to its position along the arc so that said optical axis is always substantially aligned with said center of curvature of the arc, or in that the orientation of each of said one or more cameras is such that said optical axis is aligned substantially perpendicular to the plane containing the arc.
9. A method of determining the three-dimensional distance between a pair of points on an object, characterized by the steps of:
    (a) providing one or more cameras, each of which has an internal coordinate system and an effective focal length, and further providing a plurality of relative camera positions for each of said cameras, wherein each of said cameras has a spatial orientation at each of said relative positions, and wherein said relative positions and said spatial orientations are determined in an external coordinate system, such that said relative camera positions form camera location vectors in said external coordinate system;
    (b) acquiring a first image of a first point of said pair of points on the object with one of said one or more cameras located at a first viewing position, said camera having a first spatial orientation at said first viewing position, thereby defining a first measurement coordinate system which is coincident with the internal coordinate system of said camera at said first viewing position;
    (c) acquiring a second image of said first point of said pair of points on the object with one of said one or more cameras located at a second viewing position, said camera having a second spatial orientation at said second viewing position, thereby defining a second measurement coordinate system which is coincident with the internal coordinate system of said camera at said second viewing position;

    (d) measuring the coordinates of said first image of said first point in said first measurement coordinate system and measuring the coordinates of said second image of said first point in said second measurement coordinate system;
    (e) correcting the measured coordinates of the first image of said first point to adjust for any distortion of the camera located at the first viewing position, and correcting the measured coordinates of the second image of said first point to adjust for any distortion of the camera located at the second viewing position, thereby producing first and second final first point image coordinates for said first and second viewing positions in said first and second measurement coordinate systems;
    (f) multiplying the first final first point image coordinates by the mathematical inverse of the effective focal length of the camera located at the first viewing position and multiplying the second final first point image coordinates by the mathematical inverse of the effective focal length of the camera located at the second viewing position, to determine the mathematical tangents of the angles at which said first point is viewed in said first and second measurement coordinate systems;
    (g) forming a least squares estimate of the three dimensional coordinates of said first point in a first temporary measurement coordinate system, thereby forming an estimate of the vector location of said first point in said first temporary measurement coordinate system, using said mathematical tangents of the viewing angles of said first point in said first and second measurement coordinate systems and the relationships between said first and second camera viewing positions and said first and second camera spatial orientations determined in said external coordinate system, wherein said first temporary coordinate system has an origin and wherein said origin has a vector location in said external coordinate system;
    (h) calculating a vector location of said first point in said external coordinate system by adjusting the vector location of said first point in said first temporary measurement coordinate system according to said first and second camera spatial orientations;
    (i) acquiring a first image of a second point of said pair of points on the object with one of said one or more cameras located at a third viewing position, said camera having a third spatial orientation at said third viewing position, thereby defining a third measurement coordinate system which is coincident with the internal coordinate system of said camera at said third viewing position;
    (j) acquiring a second image of said second point of said pair of points on the object with one of said one or more cameras located at a fourth viewing position, said camera having a fourth spatial orientation at said fourth viewing position, thereby defining a fourth measurement coordinate system which is coincident with the internal coordinate system of said camera at said fourth viewing position, and wherein at least one of said third and fourth viewing positions is different from either of said first and second viewing positions; (k) measuring the coordinates of said first image of said second point in said third measurement coordinate system and measuring the coordinates of said second image of said second point in said fourth measurement coordinate system;

    (l) correcting the measured coordinates of the first image of said second point to adjust for any distortion of the camera located at the third viewing position, and correcting the measured coordinates of the second image of said second point to adjust for any distortion of the camera located at the fourth viewing position, thereby producing first and second final second point image coordinates for said third and fourth viewing positions in said third and fourth measurement coordinate systems;
    (m) multiplying the first final second point image coordinates by the mathematical inverse of the effective focal length of the camera located at the third viewing position and multiplying the second final second point image coordinates by the mathematical inverse of the effective focal length of the camera located at the fourth viewing position, to determine the mathematical tangents of the angles at which said second point is viewed in said third and fourth measurement coordinate systems;
    (n) forming a least squares estimate of the three dimensional coordinates of said second point in a second temporary measurement coordinate system, thereby forming an estimate of the vector location of said second point in said second temporary measurement coordinate system, using said mathematical tangents of the viewing angles of said second point in said third and fourth measurement coordinate systems and the relationships between said third and fourth camera viewing positions and said third and fourth camera spatial orientations determined in said external coordinate system, wherein said second temporary coordinate system has an origin and wherein said origin has a vector location in said external coordinate system;
    (o) calculating a vector location of said second point in said external coordinate system by adjusting the vector location of said second point in said second temporary measurement system according to said third and fourth camera spatial orientations;
    (p) calculating the vector location of the origin of the first temporary coordinate system by forming the average of the camera location vectors for the first and second camera viewing positions;
    (q) calculating the vector location of the origin of the second temporary coordinate system by forming the average of the camera location vectors for the third and fourth camera viewing positions;
    (r) calculating a vector from the origin of the second temporary coordinate system to the origin of the first temporary coordinate system by subtracting the vector location of the origin of the second temporary coordinate system from the vector location of the origin of the first temporary coordinate system;
    (s) calculating the vector from the second point of said pair of points to the first point of said pair of points with the equation r = dAB + rAG - rBG
    wherein dAB is the vector from the origin of the second temporary coordinate system to the origin of the first temporary coordinate system, rAG is said vector location of said first point in said external coordinate system, and rBG is said vector location of said second point in said external coordinate system; and calculating the distance between said pair of points by calculating the length of the vector r.
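Steps (p) through (s) of the method above reduce to a few vector operations. The sketch below assumes all vectors are already expressed in the external coordinate system; the function and argument names are illustrative only.

```python
import numpy as np

def pair_distance(cam1, cam2, cam3, cam4, r_AG, r_BG):
    """Claim 9, steps (p)-(s): the temporary-system origins are the
    averages of the camera location vectors, the origin-to-origin vector
    is d_AB, the point-to-point vector is r = d_AB + r_AG - r_BG, and the
    measured distance is the length of r."""
    origin_A = (np.asarray(cam1, float) + np.asarray(cam2, float)) / 2.0  # step (p)
    origin_B = (np.asarray(cam3, float) + np.asarray(cam4, float)) / 2.0  # step (q)
    d_AB = origin_A - origin_B                                            # step (r)
    r = d_AB + np.asarray(r_AG, float) - np.asarray(r_BG, float)          # step (s)
    return np.linalg.norm(r)
```

Because r_AG and r_BG are referred to their own temporary-system origins, adding the origin-to-origin vector d_AB places both points in a common frame before the difference is taken.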
10. An apparatus for measuring three-dimensional distances between selected points on an inaccessible object,
    wherein the apparatus includes a rigid borescope which is fastened to a linear motion means, said linear motion means having a range of travel and which also constrains the borescope to move along a substantially straight line, said apparatus further comprising a driving means which controls the position of the linear motion means within its range of travel and also a position measurement means for indicating the position of said linear motion means, characterized by the use of a linear motion means selected from the group consisting of crossed roller slides and ball slides and air bearing slides and dovetail slides, wherein the linear motion means is preferably a linear translation stage, the driving means is preferably an actuator, and the position measurement means is preferably a linear position transducer attached to said translation stage.
11. An apparatus for measuring three-dimensional distances between selected points on an inaccessible object, wherein the apparatus includes a rigid borescope which is fastened to a linear motion means, said linear motion means having a range of travel and which also constrains the borescope to move along a substantially straight line, said apparatus further comprising a driving means which controls the position of the linear motion means within its range of travel and also a position measurement means for indicating the position of said linear motion means, characterized by the use of a lead screw and nut as a driving means and, optionally, wherein both the driving means and the position measurement means are embodied in a micrometer.
12. An apparatus as claimed in either of claim 10 or claim 11 wherein said borescope has a field of view, and wherein said borescope includes a video imaging means, and wherein said video imaging means is comprised of a video sensor optically coupled to said borescope, and wherein said video sensor has different spatial resolutions along its two sensing axes, characterized in that said video sensor is rotationally oriented with respect to said borescope such that its higher spatial resolution axis is aligned substantially parallel to the projection of the linear motion of the borescope as observed in the field of view, thereby obtaining the highest precision in the distance measurement.
13. An electronic measurement borescope apparatus for measuring three-dimensional distances between selected points on an inaccessible object, characterized by (a) a video camera, including an imaging lens and a solid state imager, for producing video images of the object, and a video monitor, for displaying said video images;
    (b) a linear translation means, for moving the video camera with a substantially constant orientation along a substantially straight line, said linear translation means and camera being disposed at the distal end of a rigid probe, and said linear motion means also having a range of travel;
    (c) an actuating means, for moving the linear translation means to any position within its range of travel;
    (d) a position measurement means, for determining the position of the linear translation means within said range of travel, whereby the position of the video camera is also determined, said position measurement means also producing position measurement data, said position measurement means also having a first data transfer means for supplying the camera position data to a computing means; (e) a video cursor means, for displaying variable position cursors on said video images, said video cursor means having a second data transfer means for supplying the spatial positions of said variable position cursors to the computing means; and (f) said computing means having a user interface, said user interface being in communication with said video cursor means and said second data transfer means such that a user can manipulate said video cursor means until said variable position cursors are aligned with the images of said selected points on said inaccessible object, and further such that said spatial positions of said variable position cursors are supplied to the computing means at user command, and further such that said computing means receives the camera position data through said first data transfer means, and further such that said computing means calculates and displays the three-dimensional distance between the selected points on said inaccessible object.
14. An apparatus as claimed in claim 13, characterized in that the actuating means is a motorized micrometer driving a positioning cable, said cable being looped around a pair of idler pulleys and being attached to the linear translation means, or in that the actuating means is a motorized micrometer located at the distal end of said rigid probe, said motorized micrometer being attached to the linear translation means.
15. An electronic measurement endoscope apparatus for measuring three-dimensional distances between selected points on an inaccessible object, characterized by:
    (a) a video camera, including an imaging lens and a solid state imager, for producing video images of the object, and a video monitor, for displaying said video images;
    (b) a linear translation means, for moving the video camera with a substantially constant orientation along a substantially straight line, said linear translation means also having a range of travel, and said linear translation means and camera being disposed internally into a rigid housing, said rigid housing being disposed at the distal end of a flexible endoscope housing;
    (c) an actuating means, for moving the linear translation means to any position within its range of travel;

    (d) a position measurement means, for determining the position of the linear translation means within said range of travel, whereby the position of the video camera is also determined, said position measurement means also producing position measurement data, said position measurement means also having a first data transfer means for supplying the position measurement data to a computing means;
    (e) a video cursor means, for displaying variable position cursors on said video image, said video cursor means having a second data transfer means for supplying the spatial positions of said variable position cursors to the computing means; and (f) said computing means having a user interface, said user interface being in communication with said video cursor means and said second data transfer means such that a user can manipulate said video cursor means until said variable position cursors are aligned with the images of said selected points on said inaccessible object, and further such that said spatial positions of said variable position cursors are supplied to the computing means at user command, and further such that said computing means receives the camera position data through said first data transfer means, and further such that said computing means calculates and displays the three-dimensional distances between the selected points on said inaccessible object.
16. An apparatus as claimed in claim 15 characterized in that the actuating means is a positioning wire encased in a sheath, which is driven by a motorized micrometer, or in that the actuating means is a motorized micrometer located at the distal end of the apparatus, said motorized micrometer being attached to the linear translation means.
17. An apparatus as claimed in any one of claims 13 to 16 wherein said video camera has a field of view, and wherein an illumination means for illuminating said field of view is carried by the linear translation means, characterized in that the illumination of said field of view remains substantially constant as said camera is moved.
18. An apparatus for making measurements of the three-dimensional distances between selected points on an object, said apparatus including a camera and a support means, whereby said camera can be moved along a substantially straight translational axis from a first viewing position to a second viewing position as part of the measurement process, and whereby said camera can also be rotated about a rotational axis for alignment with objects of interest prior to a measurement, said rotational axis being at an arbitrary alignment with said translational axis, characterized by:
    (a) a means for measurement of an angle of rotation of said camera about said rotational axis; and (b) a means for incorporating said measurement of said angle of rotation into said measurement of three - dimensional distances.
  19. An apparatus as claimed in claim 18 characterized in that said means for measurement of an angle of rotation has a first portion which rotates with said camera and also has a second portion which is fixed to said support means, wherein said camera is preferably a substantially side-looking rigid borescope, said borescope having a lens tube envelope and said lens tube envelope having an outer surface and wherein said support means preferably comprises a borescope positioning assembly and wherein said rotational axis is preferably defined by the engagement of a first reference surface attached to said borescope with a second reference surface attached to said borescope positioning assembly, whereby said first reference surface is preferably a cylinder and said second reference surface is preferably a V groove, and said cylindrical first reference surface is said outer surface of said lens tube envelope or is a calibration sleeve attached to said borescope.
  20. An apparatus for making measurements of the three-dimensional distances between selected points on an object, said apparatus including a substantially side-looking rigid borescope which can be moved along a substantially straight translational axis from a first viewing position to a second viewing position as part of the measurement process, wherein said borescope can also be rotated about a rotational axis for alignment with objects of interest prior to a measurement, characterized by the arrangement of said rotational axis to be accurately aligned with said translational axis.
  21. An apparatus as claimed in claim 20 wherein said borescope has a lens tube envelope and said lens tube envelope has an outer surface, characterized in that said borescope is preferably moved along said translational axis by a borescope positioning assembly and said rotational axis is preferably defined by the engagement of a first reference surface attached to said borescope with a second reference surface attached to said borescope positioning assembly, whereby said first reference surface is preferably a cylinder and said second reference surface is preferably a V groove, and said cylindrical first reference surface is said outer surface of said lens tube envelope or is a calibration sleeve attached to said borescope.
  22. An apparatus for measuring three-dimensional distances between individual user-selected points on an inaccessible object, wherein the apparatus includes a rigid borescope which is fastened to a linear motion means, said borescope forming an image of said points on said object and said linear motion means having a range of travel and constraining the borescope to move along a substantially straight line, said apparatus further comprising a position measurement means for indicating the position of said linear motion means within its range of travel, and also an image measurement means for determining the positions of said selected points in said image, said apparatus further comprising a computation means for incorporating said positions of said selected points in said image and said position of said linear motion means into a calculation of said three-dimensional distance, characterized by a computation means that is programmed to perform a fully three-dimensional least squares estimation procedure to determine the measurement result.
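The "fully three-dimensional least squares estimation" recited in claim 22 can be illustrated with a short sketch (not taken from the patent; the function name and the ray construction are assumptions made here): each cursor measurement at each camera position defines a viewing ray, and the target point is estimated as the point minimizing the sum of squared perpendicular distances to all rays, via the 3x3 normal equations.

```python
import numpy as np

def triangulate_rays(origins, directions):
    """Least-squares estimate of the point closest to a set of viewing rays.

    Ray i starts at origins[i] and runs along directions[i]. Minimizes the
    sum of squared perpendicular distances to the rays by accumulating the
    projectors P_i = I - u_i u_i^T and solving A x = b, where
    A = sum(P_i) and b = sum(P_i @ p_i).
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, u in zip(origins, directions):
        u = np.asarray(u, float)
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)   # projector onto the plane normal to u
        A += P
        b += P @ np.asarray(p, float)
    return np.linalg.solve(A, b)

# Camera moved 10 mm along a straight translational axis (the x axis here);
# each viewing position yields a ray toward the same target point.
target = np.array([4.0, 3.0, 25.0])
origins = [np.zeros(3), np.array([10.0, 0.0, 0.0])]
directions = [target - p for p in origins]
estimate = triangulate_rays(origins, directions)
```

In the claimed apparatus the ray directions would come from the cursor positions in the two video images through the camera's calibrated perspective model; here they are constructed directly from a known target for illustration. The least-squares formulation degrades gracefully when measurement noise makes the rays skew rather than exactly intersecting.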
CA002263530A 1996-08-16 1997-08-08 Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects Abandoned CA2263530A1 (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US08/689,993 US6009189A (en) 1996-08-16 1996-08-16 Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects
US08/689,993 1996-08-16
US08/871,289 US6121999A (en) 1997-06-09 1997-06-09 Eliminating routine alignment calibrations in perspective dimensional measurements
US08/871,289 1997-06-09
PCT/US1997/015206 WO1998007001A1 (en) 1996-08-16 1997-08-08 Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects

Publications (1)

Publication Number Publication Date
CA2263530A1 true CA2263530A1 (en) 1998-02-19

Family

ID=27104515

Family Applications (1)

Application Number Title Priority Date Filing Date
CA002263530A Abandoned CA2263530A1 (en) 1996-08-16 1997-08-08 Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects

Country Status (4)

Country Link
AU (1) AU4168497A (en)
CA (1) CA2263530A1 (en)
GB (1) GB2333595B (en)
WO (1) WO1998007001A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103649641A (en) * 2011-05-05 2014-03-19 西门子能量股份有限公司 Inspection system for a combustor of a turbine engine
CN114537705A (en) * 2022-04-25 2022-05-27 成都飞机工业(集团)有限责任公司 Airplane flaring conduit belt error assembly method and device, storage medium and equipment

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE10031719A1 (en) 2000-06-29 2002-01-10 Leica Microsystems Illumination device and coordinate measuring device with an illumination device
US7349083B2 (en) 2004-07-21 2008-03-25 The Boeing Company Rotary borescopic optical dimensional mapping tool
CA2597891A1 (en) 2007-08-20 2009-02-20 Marc Miousset Multi-beam optical probe and system for dimensional measurement
WO2011083217A2 (en) * 2010-01-06 2011-07-14 Mathias Lubin Video endoscope
WO2015095727A2 (en) 2013-12-20 2015-06-25 Barnet Corbin Surgical system and related methods
GB201803286D0 (en) 2018-02-28 2018-04-11 3D Oscopy Ltd Imaging system and method
US10712290B2 (en) * 2018-04-30 2020-07-14 General Electric Company Techniques for control of non-destructive testing devices via a probe driver
US10670538B2 (en) * 2018-04-30 2020-06-02 General Electric Company Techniques for control of non-destructive testing devices via a probe driver
CN112587124B (en) * 2020-12-29 2024-02-09 苏州半鱼健康科技服务有限公司 Measuring device and measuring method for measuring three-dimensional data of spine
CN114018154B (en) * 2021-11-09 2024-05-28 国网河北省电力有限公司经济技术研究院 Work well space positioning and orientation method and device and electronic equipment
CN114800056A (en) * 2022-04-30 2022-07-29 徐德富 Method for machining and mounting high-form-position precision part
CN115597516B (en) * 2022-10-26 2024-10-29 华能澜沧江水电股份有限公司 High-precision geological crack monitoring method and system based on unmanned aerial vehicle image
CN116394068B (en) * 2023-06-09 2023-09-29 成都飞机工业(集团)有限责任公司 Method for automatically measuring AC axis zero positioning precision of five-axis linkage numerical control machine tool

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE69515649T2 (en) * 1994-12-28 2000-11-16 Keymed (Medical & Industrial Equipment) Ltd., Southend-On-Sea DIGITAL MEASURING DOSCOPE WITH HIGH-RESOLUTION ENCODER


Also Published As

Publication number Publication date
GB2333595B (en) 2001-03-21
GB9903366D0 (en) 1999-04-07
WO1998007001A1 (en) 1998-02-19
GB2333595A (en) 1999-07-28
AU4168497A (en) 1998-03-06

Similar Documents

Publication Publication Date Title
CA2263530A1 (en) Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects
US6009189A (en) Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects
EP1692996B1 (en) Image orienting coupling assembly
US10682039B2 (en) Video endoscopic device
US6459481B1 (en) Simple system for endoscopic non-contact three-dimentional measurement
US10105199B2 (en) Mounting system that maintains stability of optics as temperature changes
US7161741B1 (en) Focusing systems for perspective dimensional measurements and optical metrology
US5061995A (en) Apparatus and method for selecting fiber optic bundles in a borescope
US7134992B2 (en) Gravity referenced endoscopic image orientation
US5573492A (en) Digitally measuring scopes using a high resolution encoder
CN107666849A (en) Medical optics connector system
US6121999A (en) Eliminating routine alignment calibrations in perspective dimensional measurements
US5231539A (en) Nodal-point adjusting retroreflector prism and method
WO1996020389A1 (en) Digitally measuring scopes using a high resolution encoder
CN116807362A (en) Mirror structure of stereoscopic high-definition laparoscope and adjusting method
EP0495351B1 (en) Position measuring device for endoscope
EP1234159A1 (en) An optical position detector
US6100972A (en) Digital measuring scope with thermal compensation
US5044728A (en) Device for correcting perspective distortions
US7084971B2 (en) Technoscope
CN116539627B (en) Endoscope probe, three-dimensional measurement endoscope and flaw detection method
JP2003505725A (en) Monocular mirror or endoscope with offset mask
CN111417835B (en) Imaging incident angle tracker
JP3478729B2 (en) Advanced structure of stereoscopic endoscope
JPS5860720A (en) Length measuring endoscope

Legal Events

Date Code Title Description
EEER Examination request
FZDE Discontinued