WO1998007001A1 - Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects - Google Patents

Apparatus and method for making accurate three-dimensional size measurements of inaccessible objects

Info

Publication number
WO1998007001A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
measurement
position
means
coordinate system
Prior art date
Application number
PCT/US1997/015206
Other languages
French (fr)
Inventor
David F. Schaack
Original Assignee
Schaack David F
Priority date
Filing date
Publication date
Priority to US08/689,993 (patent US6009189A)
Priority to US08/871,289 (patent US6121999A)
Application filed by Schaack David F
Publication of WO1998007001A1

Classifications

    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B23/00 Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24 Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00147 Holding or positioning arrangements
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/00163 Optical arrangements
    • A61B1/00174 Optical arrangements characterised by the viewing angles
    • A61B1/00183 Optical arrangements characterised by the viewing angles for variable viewing angles
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B5/06 Devices, other than using radiation, for detecting or locating foreign bodies; determining position of probes within or on the body of the patient
    • A61B5/065 Determining position of the probe employing exclusively positioning means located on or in the probe, e.g. using position sensors arranged on the probe
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Detecting, measuring or recording for diagnostic purposes; Identification of persons
    • A61B5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B5/107 Measuring physical dimensions, e.g. size of the entire body or parts thereof
    • A61B5/1076 Measuring physical dimensions, e.g. size of the entire body or parts thereof for measuring dimensions inside body cavities, e.g. using catheters
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical means
    • G01B11/02 Measuring arrangements characterised by the use of optical means for measuring length, width or thickness
    • G PHYSICS
    • G02 OPTICS
    • G02B OPTICAL ELEMENTS, SYSTEMS, OR APPARATUS
    • G02B23/00 Telescopes, e.g. binoculars; Periscopes; Instruments for viewing the inside of hollow bodies; Viewfinders; Optical aiming or sighting devices
    • G02B23/24 Instruments or systems for viewing the inside of hollow bodies, e.g. fibrescopes
    • G02B23/2407 Optical details
    • G02B23/2423 Optical details of the distal end
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B1/00 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor
    • A61B1/04 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances
    • A61B1/05 Instruments for performing medical examinations of the interior of cavities or tubes of the body by visual or photographical inspection, e.g. endoscopes; Illuminating arrangements therefor combined with photographic or television appliances characterised by the image sensor, e.g. camera, being in the distal end portion

Abstract

Spatial locations of individual points on an inaccessible object are determined by measuring two images acquired with one or more cameras which can be moved to a plurality of positions and orientations which are accurately determined relative to the instrument. Once points are located, distances are easily calculated. This new system offers accurate measurements with any convenient geometry, and with existing endoscopic apparatus. It also provides for the measurement of distances which cannot be contained within any single camera view. Systematic errors are minimized by use of a complete and robust set of calibration procedures. A standard measurement procedure automatically adjusts the measurement geometry to reduce random errors. A least squares calculation uses all of the image location and calibration data to derive the true three-dimensional positions of the selected object points. This calculation is taught explicitly for any camera geometry and motion.

Description

APPARATUS AND METHOD FOR MAKING ACCURATE THREE-DIMENSIONAL SIZE MEASUREMENTS OF INACCESSIBLE OBJECTS

Technical Field

This invention relates to optical metrology, specifically to the problem of making non-contact dimensional measurements of inaccessible objects which are viewed through an endoscope.

Background Art

A Introduction

In the past several decades, the use of optical endoscopes has become common for the visual inspection of inaccessible objects, such as the internal organs of the human body or the internal parts of machinery. These visual inspections are performed in order to assess the need for surgery or equipment tear-down and repair; thus, the results of the inspections are accorded a great deal of importance. Accordingly, there has been much effort to improve the art in the field of endoscopes.

Endoscopes are long and narrow optical systems, typically circular in cross-section, which can be inserted through a small opening in an enclosure to give a view of the interior. They almost always include a source of illumination which is conducted along the interior of the scope from the outside (proximal) end to the inside (distal) end, so that the interior of the chamber can be viewed even if it contains no illumination. Endoscopes come in two basic types: the flexible endoscopes (fiberscopes and videoscopes) and the rigid borescopes. Flexible scopes are more versatile, but borescopes can provide higher image quality, are less expensive, and are easier to manipulate, and they are thus generally preferred in those applications for which they are suited.

While endoscopes (both flexible and rigid) can give the user a relatively clear view of an inaccessible region, there is no inherent ability for the user to make a quantitative measurement of the size of the objects he or she is viewing. There are many applications for which the size of an object, such as a tumor in a human body or a crack in a machine part, is a critically important piece of information. Thus, there have been a number of inventions directed toward obtaining quantitative size information along with the view of the object through the endoscope. The problem is that the accuracy to which the size of defects can be determined is poor with the currently used techniques. Part of the reason is that the magnification at which the defect is being viewed through the borescope is unknown. The other part of the problem is that the defects occur on surfaces which are curved in three dimensions, while the view through the endoscope is strictly two-dimensional. Many concepts have been proposed and patented for addressing the need to make quantitative measurements through endoscopes. Only some of these concepts address the need to make the measurement in three dimensions. Few of these concepts have been shown to provide a useful level of measurement accuracy at a practical cost.

Probably the simplest approach to obtaining quantitative object size information is to attach a physical scale to the distal end of the endoscope, and to place this scale in contact with the object to be measured. The problems with this are that it is often not possible to insert the scale through the available access hole; that the objects of interest are almost never flat and oriented in the correct plane so that the scale can lie against them; that it is often not possible to manipulate the end of the endoscope into the correct position to make the desired measurement; and that it is often not permissible to touch the objects of interest.

These problems have driven work toward the invention of non-contact measurement techniques. There have been a number of systems patented which are based on the principle of optical perspective or, more fundamentally, on the principle of triangulation.

B Non-Contact Measurements Using Triangulation and Perspective

What I mean by "use of perspective" is the use of two or more views of an object, obtained from different viewing positions, for dimensional measurement of the object. By "dimensional measurement" I mean the determination of the true three-dimensional (height, width, and depth) distance between two or more selected points on the object. To perform a perspective dimensional measurement, the apparent positions of each of the selected points on the object are determined in each of the views. This is the same principle used in stereoscopic viewing, but here I am concerned with making quantitative measurements of object dimensions, rather than obtaining a view of the object containing qualitative depth cues. As I will teach, given sufficient knowledge about the relative locations, orientations, and imaging properties of the viewing optical systems, one can accurately determine the locations of the selected points in a measurement coordinate system. Once these locations are known, one then simply calculates the desired distances between points by use of the well-known Pythagorean Theorem.
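The final step described above can be sketched numerically. The following is a minimal illustration (the coordinates and names are hypothetical, not taken from the patent) of computing the distance between two object points once their three-dimensional locations in the measurement coordinate system are known:

```python
import math

def distance_3d(p1, p2):
    """Three-dimensional (Pythagorean) distance between two located
    object points, each given as (x, y, z) in the measurement
    coordinate system."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

# Two hypothetical located points (units arbitrary):
print(distance_3d((0.0, 0.0, 10.0), (3.0, 4.0, 10.0)))  # → 5.0
```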

Perspective is related to and based on triangulation, but triangulation is also the principle behind making any measurement of distance using the measurement of angles.

The earliest related art of which I am aware is described in US Patent 4,207,594 (1980) to Morris and Grant. The basic approach of this work is to measure the linear field of view of a borescope at the object, then scale the size of the object as measured with video cursors to the size of the field of view as measured with the same cursors. The linear field of view is measured by determining the difference in borescope insertion depth between alignment of the two opposite edges of the field of view with some selected point on the object.

A major problem with this approach is that it cannot determine the depth of the object. In fact, the patent specifies that the user has to know the angle that the plane of the object makes with respect to the plane perpendicular to the borescope line of sight. This information is almost never available in any practical measurement situation. A second problem is that this technique gives valid results only if the optical axis of the borescope is oriented precisely perpendicular to the length of the borescope.

In US Patent 4,702,229 (1987), Zobel describes a rigid borescope free to slide back and forth between two fixed positions inside an outer mounting tube, to measure the dimensions of an object. As with Morris and Grant, Zobel does not teach the use of the principle of perspective, and thus discusses only the measurement of a flat object oriented perpendicular to the borescope line of sight.

US Patent 4,820,043 (1989) to Diener describes a measurement scope after Zobel (4,702,229), with the addition of an electronic transducer on the measurement scale, an instrumented steering prism at the distal end, and a calculator. The principle is that once the distance to the object is determined by the translation of the borescope proper according to Zobel, the object size can be determined by making angular size measurements with the steerable prism. Again, there is no consideration of measurement of the depth of the object.

US Patent 4,935,810 (1990) to Nonami and Sonobe shows explicitly a method to measure the true three-dimensional distance between two points on an object by using two views of the object from different perspectives. They use two cameras, separated by a fixed distance, mounted in the tip of an endoscope, where both cameras are aligned with their optical axes parallel to the length of the endoscope. The fixed distance between the cameras causes the measurement error to be rather large for most applications, and also places a limit on how close an object to be measured can be to the endoscope. In addition, the two cameras must be precisely matched in optical characteristics in order for their technique to give an accurate measurement, and such matching is difficult to do.

All four of these prior art systems assume that the measurement system is accurately built to a particular geometry. This is a significant flaw, since none of these patents teach one how to achieve this perfect geometry. That is, if one attempts to use the technique taught by Nonami and Sonobe, for instance, one must either independently develop a large body of techniques to enable one to build the system accurately to the required geometry, or one must accept measurements of poor accuracy. None of these inventors teach how to calibrate their systems; what is more, one cannot correct for errors in the geometry of these systems by a calibration process, because none of these systems include any provision for incorporating calibration data into the measurement results.

In US Patent 5,575,754 (1996), Konomura teaches a system of perspective dimensional measurement which is based on moving a rigid borescope along a straight line in a manner similar to Zobel, but now the borescope is moved by a variable distance to obtain the two views of the object. Konomura recognizes that using a variable distance between the viewing positions allows one to obtain lower measurement errors, in general. Konomura also recognizes the necessity of compensating for the effects of certain aspects of the actual measurement geometry being used, thus implicitly allowing for incorporation of some calibration data into the measurement result. The patent does not teach how to do the calibration, and unfortunately, the compensation technique that is taught is both incomplete and incorrect. In addition, the motion technique taught by Konomura is not inherently of high precision; that is, it is not suitable for making dimensional measurements of high accuracy. Konomura's apparatus for holding the borescope allows the scope to be rotated with respect to the apparatus in order to align the view with objects of interest, but there is no consideration given to the repeatability of borescope positioning which is necessary in order to assure accuracy in the measurement.

All of these measurement techniques are limited to objects which are small enough to be completely contained within the field of view of the endoscope. In addition, there are other applications of interest which simply cannot be addressed by any of these techniques, for instance where the object has a shape and an orientation such that the two ends of a dimension of interest cannot both be viewed from any single position.

Disclosure of the Invention

While the prior art in this area is extensive, there remains a need for a measurement system which can provide truly accurate dimensional measurements of an inaccessible object. By "truly accurate", I mean that the level of accuracy should be limited only by the technology of mechanical metrology and by the unavoidable random errors made by the most careful user. There also remains a need for a usefully accurate measurement at low cost. By "usefully accurate", I mean that the accuracy of the measurement should be adequate for the purposes of most common industrial applications. By "low cost", I mean that with some embodiments, the user should be able to add this measurement capability to his or her existing remote visual inspection capability with a lower incremental expenditure than is required with the prior art. There also remains a need for a measurement system which can be applied to a wider range of situations than has been addressed in the prior art.

Meeting these goals inherently requires that the measurement should not depend on an apparatus being built precisely to a particular geometry, nor to particular optical characteristics. Instead, the measurement system must be sufficiently complete and comprehensive so that the apparatus is capable of being calibrated, and there must exist a sufficient set of methods to perform this calibration.

In one aspect, therefore, my invention provides a method of locating an object point of interest in three-dimensional space using one or more cameras which can be moved among any of a plurality of predetermined relative viewing positions and orientations. "Predetermined" in this case means that these quantities are determined before the measurement result is calculated, and that the measurement requires no auxiliary information from or about the object. Because the camera(s) can be moved, the viewing positions that are used to perform a particular point location can be chosen during the measurement, according to the requirements of the particular measurement being performed. The apparent locations of the images of the point as viewed from two different positions are measured and, using the predetermined geometry of the system and the predetermined optical characteristics of the camera(s), a fully three-dimensional, least squares estimate of the location of the point is calculated. The geometry of the measurement is completely general, and there are identified a complete set of parameters which can be calibrated in order to ensure an accurate measurement. A complete set of calibration methods is taught. This aspect of my invention enables one to accurately locate a point using whatever measurement geometry is most advantageous for the application.
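The least squares location step can be illustrated in miniature. The sketch below is an assumption on my part rather than the patent's explicit formulation: it finds the point minimizing the summed squared perpendicular distances to the viewing rays reconstructed from the measured image locations. All function and variable names are hypothetical.

```python
import numpy as np

def triangulate_least_squares(origins, directions):
    """Least-squares estimate of the 3-D point closest to a set of
    viewing rays.  Ray i passes through origins[i] (the camera nodal
    point for that view) along the vector directions[i].  Minimizes the
    sum of squared perpendicular distances from the point to the rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for c, u in zip(origins, directions):
        u = u / np.linalg.norm(u)
        P = np.eye(3) - np.outer(u, u)  # projector onto plane normal to the ray
        A += P
        b += P @ c
    return np.linalg.solve(A, b)

# Two hypothetical viewing positions separated by a baseline along x,
# with both rays aimed exactly at the true point (1, 0, 10):
c1, c2 = np.array([-2.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
p_true = np.array([1.0, 0.0, 10.0])
p_est = triangulate_least_squares([c1, c2], [p_true - c1, p_true - c2])
```

With noisy image data the two rays no longer intersect, and the same solve returns the compromise point closest to both rays, which is the practical value of the least squares formulation.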

In another aspect, the invention provides a method in which the motion(s) of the camera(s) is (are) constrained to one of a variety of specific paths. According to this aspect, different camera paths have advantages for different measurement applications. Also, according to this aspect, it is possible to determine the orientation(s) of the camera(s) with respect to its (their) path(s) of motion in a calibration procedure, and to take this (these) orientation(s) into account in the determination of the location of the point of interest. In addition, according to this aspect, it is possible to determine errors in the actual motion(s) with respect to the ideal motion(s) and to also take these errors into account in locating the point. Thus, this aspect allows one, for instance, to accurately locate a point using existing endoscopic hardware which was not originally designed to make measurements, and which is not built according to the assumptions and requirements of the prior art.

In another aspect, the method of locating a point of interest is used to determine the three-dimensional distances between points of interest on a remote object, where all the points of interest can be contained within a single view of the camera(s) being used. This aspect allows one to, for instance, perform an improved perspective dimensional measurement under conditions similar to those addressed by the prior art.

In another aspect, the method of locating a point of interest is used to determine the three-dimensional distance between a pair of points on an object, where the two points of the pair cannot necessarily be contained within any single view of the camera being used. This aspect allows one to perform a new mode of perspective dimensional measurement which has the capability of accurately measuring distances which are impossible to measure at all in the prior art. This aspect also offers the capability of performing the most precise dimensional measurements achievable with my system.

In another aspect, my invention provides a method of locating a point of interest in three-dimensional space using a single camera, subjected to a substantially pure translation between two viewing positions, in which the first and second viewing positions are selected so that the point of interest is viewed first near the edge of one side of the field of view and secondly near the edge of the opposite side of the field of view. This aspect allows one to automatically obtain one of the key conditions required for achievement of the lowest random error (highest precision) in the perspective dimensional measurement.
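Why this edge-to-edge condition helps can be seen from the elementary parallax relation for a pure translation. This is the textbook stereo formula, shown here for illustration only and not as the patent's full least squares method; the numbers and names are hypothetical.

```python
def depth_from_disparity(baseline_d, focal_i, x_img_1, x_img_2):
    """For a camera translated by baseline_d along its image-plane x axis,
    the distance z to the point along the optical axis follows from the
    disparity between the two measured image coordinates:
        z = focal_i * baseline_d / (x_img_1 - x_img_2)
    Viewing the point near opposite edges of the field maximizes the
    disparity, so a fixed image-measurement error perturbs z less."""
    return focal_i * baseline_d / (x_img_1 - x_img_2)

# Hypothetical values: effective focal length 8 mm, baseline 4 mm,
# image coordinates +2.0 mm and +1.6 mm in the two views:
z = depth_from_disparity(4.0, 8.0, 2.0, 1.6)  # z ≈ 80 mm
```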

In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a borescope supported by a linear motion means, a driving means which controls the position of the linear motion means, and a position measurement means which determines the position of the linear motion means. Here the improvement is that a linear motion means is used which provides a motion of very high accuracy. In an embodiment, one may select the driving means to be an actuator, for instance an air cylinder. Also the position measurement means may be embodied as a linear position transducer.

In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a borescope supported by a linear motion means, a driving means which controls the position of the linear motion means, a position measurement means which determines the position of the linear motion means, and wherein the improvement is the use of a lead screw and matching nut as the driving means. In an embodiment, one may embody both the driving means and the position measurement means as a micrometer.

In another aspect, the invention provides apparatus according to the previous two aspects, but wherein the borescope includes a video camera, and wherein the video camera is correctly rotationally oriented with respect to the borescope in order to satisfy a second key condition required for the achievement of measurements of the highest feasible precision.

In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a video camera mounted on a linear translation means, and wherein this assembly is mounted on the distal end of a rigid probe to form an electronic measurement borescope. This aspect thus provides a self-contained measurement system, one which can provide measurements of much higher accuracy than those provided by prior art systems.

In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object, wherein the apparatus includes a video camera mounted on a linear translation means, and wherein this assembly is mounted at the distal end of a flexible housing to form an electronic measurement endoscope. This aspect also provides a self-contained measurement system with high measurement accuracy, but in this case in a flexible package that can reach inaccessible objects under a wider range of conditions.

In another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object which comprises a camera and a support means for moving the camera along a straight translation axis, wherein the camera can also be rotated about a rotation axis for convenient alignment with an object of interest, and wherein the improvement comprises a means for measuring an angle of rotation about the rotation axis and also a means for incorporating the measured angle into the result of the perspective measurement. The rotation in this case is made prior to (and not during) the measurement process. This aspect then allows a user to make accurate dimensional measurements while allowing rotation of the camera for convenient alignment, while also requiring only infrequent calibrations.

In still another aspect, the invention provides an apparatus for measuring the three-dimensional distances between points on an inaccessible object which comprises a substantially side-looking borescope, where the borescope can be translated along a straight line and where the borescope can also be rotated about a rotational axis, wherein the improvement comprises the arrangement of the rotation axis to be accurately aligned with the translation axis. This aspect also enables a user to make accurate dimensional measurements while allowing rotation of the borescope, while also requiring only infrequent alignment calibrations.

Further objects, advantages, and features of my system will become apparent from a consideration of the following description and the accompanying schematic drawings.

Brief Description of the Drawings

Figure 1 shows the definitions of various quantities related to a rigid borescope.
Figure 2 depicts the change in perspective when viewing a point in space from two different positions.
Figure 3 depicts the imaging of a point in space with a camera.
Figure 4 is a perspective view of the mechanical portion of a first embodiment of the invention and its use in a typical measurement situation.
Figure 5 is a detailed perspective view of the mechanical portion of the first embodiment of the invention.
Figure 6 is a cross-sectional view of a portion of the structure shown in Figure 5.
Figure 7 is a block diagram of the electronics of the first embodiment of the invention.
Figure 8 is a view of the video monitor as seen by the user during the first stage of a first distance measurement procedure.
Figure 9 is a view of the video monitor as seen by the user during the second stage of a first distance measurement procedure.
Figure 10 shows two views of the video monitor as seen by the user during the first stage of a second distance measurement procedure.
Figure 11 shows the two views of the video monitor as seen by the user during the second stage of a second measurement procedure.
Figure 12 shows a general relationship between the viewing coordinate systems at the two viewing positions.
Figure 13 depicts a second mode of the dimensional measurement process taught by the present invention.
Figure 14 is a block diagram of the electronics of a second embodiment of the invention.
Figure 15 is a front view of the mechanical portion of a second embodiment of the invention.
Figure 16 is a plan view of the mechanical portion of a second embodiment of the invention.
Figure 17 is a rear view of the mechanical portion of a second embodiment of the invention.
Figure 18 is a left side elevation view of the mechanical portion of a second embodiment of the invention.
Figure 19 is a right side elevation view of the mechanical portion of a second embodiment of the invention.
Figure 20 is a perspective view of the mechanical portion of a third embodiment of the invention.
Figure 21 is a plan view of the internal structures at the distal end of the third embodiment.
Figure 22 is a left side elevation view of the internal structures at the distal end of the third embodiment.
Figure 23 is a right side elevation view of the internal structures at the distal end of the third embodiment.
Figure 24 is a plan view of the internal structures at the proximal end of the third embodiment.
Figure 25 is a left side elevation view of the internal structures at the proximal end of the third embodiment.
Figure 26 is a right side elevation view of the internal structures at the proximal end of the third embodiment.
Figure 27 is a proximal end elevation view of the internal structures at the proximal end of the third embodiment.
Figure 28 is a block diagram of the electronics of the third embodiment.
Figure 29 is a plan view of the internal structures at the distal end of a fourth embodiment.
Figure 30 is a left side elevation view of the internal structures at the distal end of the fourth embodiment.
Figure 31 depicts the perspective measurement mode 2 process when a camera moves in a straight-line path, but when the orientation of the camera is not fixed.
Figure 32 depicts the perspective measurement mode 1 process when a camera is constrained to a circular path which lies in the plane of the camera optical axis.
Figure 33 shows an endoscope which implements a circular camera path where the camera view is perpendicular to the plane of the path.
Figure 34 depicts the measurement mode 2 process with a general motion of the camera.
Figure 35 depicts the measurement of a distance with a combination of circular camera motion and measurement mode 2.
Figure 36 illustrates a group of calibration target points being viewed with a camera located at an unknown position and orientation.
Figure 37 illustrates the process of calibration of rotational errors of the translation stage used in the third and fourth embodiments.
Figure 38 shows an enlarged view of the components mounted to the translation stage during the calibration process depicted in Figure 37.
Figure 39 represents an example of the change in alignment between a perspective displacement vector d and a borescope's visual coordinate system that can occur if the borescope lens tube is not straight.
Figure 40 depicts the change in alignment between the perspective displacement and the visual coordinate system that can occur if the borescope is rotated about an axis that is not parallel to the perspective displacement.
Figure 41 is a perspective view of a first variant of borescope BPA embodiments of the invention.
Figure 42 is a perspective view of an embodiment of a strain-relieving calibration sleeve.
Figure 43 is an end elevation view of a test rig for determining the alignment of a V groove with respect to the translation axis of a translation stage.
Figure 44 depicts the process of determining the alignment errors caused by imperfections in the geometry when a cylinder rotates in a V groove.
Figure 45 is a perspective view of the mechanical portion of a second variant of borescope BPA embodiments of the invention.
Figure 46 depicts the relationships between the three Cartesian coordinate systems used in analyzing the effects of a misalignment of the borescope axis of rotation with respect to the perspective displacement.
Figure 47 shows the relationship of the borescope visual and mechanical coordinate systems.

Best Modes for Carrying Out the Invention

A Explanation of the Prior Art of Perspective Measurement and its Limitations

In order to clarify the discussion of the perspective measurement and the problems in the prior art, I will carefully define the terms and processes being used. Figure 1 depicts the distal end of a rigid borescope 2 together with a representation of its conical optical field of view 4. Field of view 4 is defined by a nodal point 10 of the borescope optical system and a cone that has its apex there. The "nodal point" of an optical system is that point on the optical axis of the system for which an optical ray incident at the point is not deviated by the system.

The axis of conical field of view 4 is assumed to coincide with the optical axis 8. Figure 1 is drawn in the plane which both contains optical axis 8 and which is also parallel to the mechanical centerline of the borescope 6. The apex angle, 11, of the field of view cone is denoted as FOV, half that angle, 12, is denoted as HFOV, and the "viewing angle" of the borescope with respect to the centerline of the scope, 14, is denoted as VA. Viewing angle 14 is defined to be positive for rotation away from the borescope centerline, to match standard industry practice.

The change in perspective when viewing a point in space from two different positions is depicted in Figure 2. A right-handed global Cartesian coordinate system is defined by the unit vectors x̂, ŷ, and ẑ. A particular point of interest, P, at r = x x̂ + y ŷ + z ẑ, is viewed first from position P1, then from position P2. The coordinate system has been defined so that these viewing positions are located on the x axis, equally spaced on either side of the coordinate origin. I call the distance d between the viewing positions the perspective baseline, and I call the vector d = d x̂ the perspective displacement. According to the known perspective measurement technique, viewing coordinate systems are set up at P1 and P2, and both of these coordinate systems are aligned parallel to the global coordinates defined in Figure 2.

As part of the perspective measurement, the object point of interest is imaged onto the flat focal plane of a camera, as depicted in Figure 3. In Figure 3, a point 16 is imaged with a lens that has a nodal point 10. An image plane 18 is set up behind nodal point 10, with the distance from the plane to the nodal point being denoted as i. This distance is measured along a perpendicular to image plane 18, and is often referred to as the effective focal length of the camera. The nodal point is taken as the origin of a Cartesian coordinate system, where the z axis is defined as that perpendicular to the image plane that passes through the nodal point. The z axis is the optical axis of the camera.

In the model of Figure 3, the camera lens is considered as a paraxial thin lens. According to paraxial optics, rays that strike the nodal point of the lens pass through it undeviated. It is important to realize that any imaging optical system, including that of an endoscope, can be represented as a camera as shown in Figure 3.

For object point 16 at (x, y, z) one can write these coordinates in standard spherical polar coordinates about the nodal point as

x = o' sinθ cosφ
y = o' sinθ sinφ
z = o' cosθ    (1)

where o' is the distance from the object point to the nodal point, and the polar angle θ is shown in Figure 3.

By the properties of the nodal point, the angles of the transmitted ray will remain the same, and one can write the image point location 20 as

x_im = -i' sinθ cosφ
y_im = -i' sinθ sinφ
z_im = -i    (2)

But i = i' cosθ, so that

x_im = -(i sinθ cosφ)/cosθ = -(i x)/(o' cosθ) = -(i x)/z    (3)

y_im = -(i sinθ sinφ)/cosθ = -(i y)/(o' cosθ) = -(i y)/z    (4)

That is, the transverse coordinates of the image point, (x_im, y_im), are directly proportional to the transverse coordinates of the object point.
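Equations (3) and (4) describe the standard pinhole (paraxial) projection. Purely as an illustration, the short Python sketch below implements this projection and checks it against the spherical polar form of Equation (1); the function and variable names are my own, not part of this specification:

```python
import math

def project(x, y, z, i):
    """Paraxial (pinhole) projection of an object point (x, y, z) onto an
    image plane at effective focal length i behind the nodal point,
    per Equations (3) and (4): x_im = -i*x/z, y_im = -i*y/z."""
    if z <= 0:
        raise ValueError("object point must lie in front of the camera (z > 0)")
    return (-i * x / z, -i * y / z)

# Consistency check: build the object point from the spherical polar form
# of Equation (1), then verify the projected image coordinates equal
# -i*tan(theta)*cos(phi) and -i*tan(theta)*sin(phi).
o, theta, phi, i = 100.0, math.radians(20.0), math.radians(30.0), 10.0
x = o * math.sin(theta) * math.cos(phi)
y = o * math.sin(theta) * math.sin(phi)
z = o * math.cos(theta)
x_im, y_im = project(x, y, z, i)
assert abs(x_im + i * math.tan(theta) * math.cos(phi)) < 1e-12
assert abs(y_im + i * math.tan(theta) * math.sin(phi)) < 1e-12
```

The minus signs express the image inversion that occurs behind the nodal point.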

When considering the performance of a real optical system, as opposed to a paraxial model, the image of an object point will be blurred by what are called point aberrations, and it will be displaced by what are called field aberrations. I define the location of the image point to be the location of the centroid of the blur spot, and I refer to the extent to which Equations (3) and (4) do not hold for the image point centroid as the distortion of the optical system. Clearly, consideration of the distortion of the optical system is important for making accurate measurements, and this was recognized in some of the prior art. I will later show how to determine the distortion and how to take it into account in the measurement.
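The determination of the distortion itself is deferred to a later section of this specification. Simply to fix ideas, the sketch below applies a generic radial polynomial correction to a measured centroid; this particular model, and the coefficients k1 and k2, are my illustrative assumption, not the method taught here:

```python
def undistort(x_im, y_im, k1, k2=0.0):
    """Correct a measured image-point centroid (x_im, y_im) with a simple
    radial polynomial model: each coordinate is scaled by
    (1 + k1*r^2 + k2*r^4), where r is the distance of the centroid from
    the optical axis.  This model is an illustrative assumption, not the
    distortion calibration taught in this specification."""
    r2 = x_im * x_im + y_im * y_im
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return (x_im * scale, y_im * scale)
```

With k1 = k2 = 0 the point is returned unchanged, recovering the ideal proportionality of Equations (3) and (4).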

Considering now the view from position P1 in Figure 2, one may write

x_im1 = -(i x_v1)/z_v1
y_im1 = -(i y_v1)/z_v1    (5)

where (x_v1, y_v1, z_v1) are the coordinates of the point of interest in the viewing coordinate system at P1. Similar expressions in terms of (x_v2, y_v2, z_v2) hold for the view at P2. Using the facts that x_v1 = x + d/2, x_v2 = x - d/2, y_v1 = y_v2 = y, and z_v1 = z_v2 = z, the solution of the four equations for the position of the point P in global coordinates is

z = -(i d)/(x_im1 - x_im2)

x = (d/2) (x_im1 + x_im2)/(x_im1 - x_im2)

y = -(z/i) y_im1 = -(z/i) y_im2    (6)

To make a measurement of the true, three-dimensional distance between two points A and B in space, one has simply to measure the three-dimensional position (x, y, z) of each point according to (6) and then to calculate the distance between them by the well-known formula

d_AB = [(x_A - x_B)^2 + (y_A - y_B)^2 + (z_A - z_B)^2]^(1/2)    (7)
This is the perspective measurement process taught both by Nonami and Sonobe in US Patent 4,935,810 and by Konomura in US Patent 5,575,754. Nonami and Sonobe use two cameras, one located at each of points P1 and P2 in Figure 2, while Konomura uses a single camera, translated along a straight line from P1 to P2.
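Under the idealized assumptions of this derivation (camera axes at both positions perfectly aligned with the global coordinates), Equations (5) through (7) reduce to a few lines of code. The Python sketch below is my own illustration of the prior-art computation, not the corrected method developed later in this specification:

```python
import math

def triangulate(x_im1, y_im1, x_im2, y_im2, d, i):
    """Recover the global position (x, y, z) of an object point from its
    image coordinates in two views separated by a perspective baseline d
    along the x axis, per Equation (6); i is the effective focal length."""
    denom = x_im1 - x_im2
    if denom == 0.0:
        raise ValueError("no parallax: the point is at infinite range")
    z = -i * d / denom
    x = (d / 2.0) * (x_im1 + x_im2) / denom
    y = -(z / i) * y_im1          # y_im2 is redundant in the ideal case
    return (x, y, z)

def distance(p_a, p_b):
    """Equation (7): true three-dimensional distance between two points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_a, p_b)))
```

For example, a point at (2, 3, 50) viewed with d = 10 and i = 10 yields image coordinates (-1.4, -0.6) and (0.6, -0.6), from which triangulate recovers (2.0, 3.0, 50.0) exactly.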

Unfortunately, this known process has severe limitations. First, it will give accurate results only when the optical (z) axes of the cameras at both viewing positions are perfectly aligned along the global z axis. Secondly, it also requires that the x axes of the cameras at both viewing positions be aligned perfectly along the perspective displacement.

In the case of Nonami and Sonobe, they do not teach how to achieve these necessary conditions. In addition, for their system the two cameras must be identical in both distortion and effective focal length in order to give an accurate measurement. They do not teach how to achieve those conditions either. As an additional difficulty with their system, Nonami and Sonobe deal with the redundancy inherent in the final equation of (6) by specifically teaching that only three of the four available apparent point location measurements should be used for each point location determination. In fact, they go to a great deal of trouble to ensure that only one of the image point y position measurements can be used. This amounts to throwing information away, and in general, considering the effects of measurement errors, it is not a good idea.

In the case of Konomura, where there is only one camera, which is a borescope, the first necessary condition means that the viewing angle of the borescope must be accurately equal to 90° in order for the measurement to be accurate. Konomura realizes that this is a problem and teaches the use of the following equation for the case where the viewing angle is not 90°:

d' = d cos(VA - 90°)    (8)

Unfortunately, use of Equation (8) does not correctly take into account the change in the perspective measurement which occurs when the camera viewing angle is not equal to 90°. In addition, Konomura makes no provision for ensuring that the measurement x axis of the camera is aligned with the perspective displacement. Since the camera is a standard video borescope, in which the video sensor could be attached to the borescope proper at any rotational angle, there is little likelihood that the x axis of the video sensor will be aligned to the perspective displacement.

Konomura has nothing to say about how to handle the redundant equations in (6).
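For contrast with the prior art's practice of discarding one measurement, a least-squares treatment of the redundancy is straightforward: since ideally y_im1 = y_im2, the best estimate of y uses their mean. The sketch below is my own illustration of that point, not a method taught by either reference:

```python
def triangulate_lsq(x_im1, y_im1, x_im2, y_im2, d, i):
    """As Equation (6), but using all four image measurements: the two
    redundant y readings, each ideally equal to -i*y/z, are averaged
    (their least-squares estimate) instead of one being thrown away."""
    denom = x_im1 - x_im2
    if denom == 0.0:
        raise ValueError("no parallax: the point is at infinite range")
    z = -i * d / denom
    x = (d / 2.0) * (x_im1 + x_im2) / denom
    y = -(z / i) * 0.5 * (y_im1 + y_im2)   # average the redundant pair
    return (x, y, z)
```

When measurement noise perturbs the two y readings in opposite directions, the averaged estimate cancels part of the error that a single reading would retain.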

B Description of a First Embodiment

Figure 4 shows a view of the mechanical portion of a basic embodiment of my system and its use in a typical measurement situation. In Figure 4, an object 100 with a damaged area or feature of interest 102 is being viewed with a video borescope system 120. Object 100 is completely enclosed by an enclosure 110. In Figure 4 only a small portion of the wall of enclosure 110 is shown. The borescope has been inserted through an inspection port 112 in the wall of enclosure 110.

The borescope is supported by, and its position is controlled by, a mechanical assembly that I call the borescope positioning assembly (BPA), which is denoted by 138 in Figure 4.

Several features of video borescope system 120 are shown in Figure 4 to enable a better understanding of my system. The configuration shown is meant to be generic, and should not be construed as defining a specific video borescope to be used.

Conical field of view 122 represents the angular extent of the field visible through the borescope. The small diameter, elongated lens tube 124 comprises the largest portion of the length of the borescope. The remainder of the borescope is comprised successively of an illumination interface adapter 126, a focusing ring 130, a video adapter 132, and a video camera back or video sensor 134. Video camera back 134 represents every element of a closed circuit television camera, except for the lens. Video adapter 132 acts to optically couple the image formed by the borescope onto the image sensing element of video camera back 134, as well as serving as a mechanical coupling.

Illumination adapter 126 provides for the connection of an illumination fiber optic cable (not shown) to the borescope through a fiber optic connector 128. The illumination (not shown) exits lens tube 124 near the apex of field of view cone 122 to illuminate objects contained within cone 122. A camera connector 136 connects video camera back 134 to its controller (not shown) through a cable which is also not shown.

The portion of BPA 138 which directly supports the borescope is a clamp assembly 140, which clamps lens tube 124 at any convenient position along its length, thereby supporting the weight of borescope 120 and determining its position and orientation. BPA 138 is itself supported by a structure which is attached to enclosure 110 or to some other structure which is fixed in position with respect to object 100. This support structure is not part of the present invention.

BPA 138 is shown in more detail in Figure 5. Lens tube 124 has been removed from clamp 140 in this view for clarity. Clamp 140 is comprised of a lower V-block 142, an upper V-block 144, a hinge 148, and a clamping screw 150. The upper V-block is lined with a layer of resilient material 146, in order that the clamping pressure on lens tube 124 can be evenly distributed over a substantial length of the tube.

Lower V-block 142 is attached to moving table 184 of a translation stage or slide table 180. Translation stage 180 is a standard component commercially available from several vendors, and it provides for a smooth motion of moving table 184 which is precisely constrained to a straight line. Translation stage 180 consists of moving table 184 and a fixed base 182, connected by crossed roller bearing slides 186. Fixed base 182 is attached to a BPA baseplate 162.

The bearings in translation stage 180 could also be either ball bearings or a dovetail slide. Such stages are also commercially available, and are generally considered to be less precise than those using crossed roller bearings, though they do have advantages, including lower cost. Translation stage 180 could also be an air bearing stage, which may offer even more motion accuracy than does the crossed roller bearing version, although at a considerable increase in system cost and complexity.

Also attached to BPA baseplate 162 is a micrometer mounting block 166. Mounting block 166 supports a micrometer 168. Micrometer 168 has an extension shaft 170, a rotating drum 178, and a distance scale 172. As drum 178 is rotated, a precision screw inside the micrometer rotates inside a precision nut, thus changing the distance between the end of extension shaft 170 and mounting block 166. Of course, micrometer 168 could be a digital unit, rather than the traditional analog unit shown.

Micrometer extension shaft 170 is connected to an actuator arm 174 through a bushing 176. Actuator arm 174 is mounted to moving table 184. Bushing 176 allows for a slight amount of non-parallel motion between micrometer extension shaft 170 and moving table 184, at the cost of allowing some backlash in the relative motions of table 184 and shaft 170. Micrometer scale 172 can be read to determine the position of moving table 184 within its range of motion.

Figure 6 shows a detailed view of bushing 176 and the interface between micrometer extension shaft 170 and actuator arm 174. Shaft 170 is captured within bushing 176 so that arm 174 will follow position changes of shaft 170 in either direction, with the previously mentioned small amount of backlash.

Figure 7 shows a block diagram of the electronic portion of this first embodiment. Figure 7 represents the electronics of a standard, known borescope video system except for the addition of a cursor controller 230 and a computer 228. In Figure 7, an illumination controller 200 is connected to the borescope through a fiber optic cable 206 as has previously been described. Video camera back 134 is connected to camera controller 212 through camera control cable 135 as has also been described. For the known system, the video signal out of the camera controller is connected to a video monitor 214 and, optionally, to a video recorder 216, through a video cable 137 as shown by the broken line in Figure 7. In this embodiment, the video signal from camera controller 212 is instead sent to cursor controller 230. The video signal as modified by cursor controller 230 is then supplied to video monitor 214 and to video recorder 216. Use of video recorder 216 is optional, though its use makes it possible for the user to repeat measurements or to make additional measurements at some later time, without access to the original measurement situation.

Figure 8 shows a view of the video monitor as seen by the user. On video screen 310 there is seen a circular image of the borescope field of view, which I call the apparent field of view, 312. Inside apparent field of view 312 is shown an image of the object under inspection 314. Superimposed on video screen 310, and hence on image 314, are a pair of cross-hairs, fiducial marks, or cursors, 316 (Cursor A) and 318 (Cursor B). These cursors can be moved to any portion of the video screen, and can be adjusted in length, brightness, and line type as required for best alignment with points of interest on image 314. Note that these cursors do not need to be cross-hairs; other easily discernible shapes could also be produced and used as well.

The generation of video cursors is well known by those familiar with the art, so is not part of this invention. The functions of cursor controller 230 are controlled by computer 228 (Figure 7). Computer 228 has a user interface that allows manipulation of the cursor positions as desired. It also provides a means for the user to indicate when a given cursor is aligned appropriately, so that an internal record of the cursor position can be made. It provides means for the user to input numerical data as read from micrometer scale 172. In addition, computer 228 contains software which implements algorithms, to be described, which combine these numerical data appropriately to derive the true three-dimensional distance between points selected by the user. Finally, computer 228 provides a display means, whereby the distance(s) determined is (are) displayed to the user. Clearly, this display could be provided directly on video screen 310, a technology which is now well known, or it could be provided on the front panel or on a separate display screen of computer 228.

The system described by Konomura in US patent 5,575,754 is similar to mine in that it also allows one to move a borescope to various positions along a straight line path and to view and select points on the image of the object on a video screen. However, Konomura uses a cylinder sliding within a cylinder, driven by a rack and pinion, to do the positioning of the borescope. There are two basic problems with Konomura's positioning mechanism which are overcome in my system.

The first problem with Konomura's mechanism is that it is difficult and expensive to achieve an adequate accuracy of straight line travel with a cylinder sliding inside a cylinder, as compared to the translation stage of my preferred embodiment, which is widely available at reasonable cost and which provides exceedingly accurate motion. The accuracy of the straight line motion directly affects the accuracy of the perspective measurement. The second problem is that Konomura's use of a rack and pinion to drive the position of the borescope means that that position will tend to slip if the borescope is not oriented exactly horizontal, due to the weight of the moving assembly. Konomura makes no provision for holding the position of the borescope. With my micrometer drive, or more generally, with a lead screw drive, the high mechanical advantage means that there would be no tendency for the position to slip even if the lead screw were supporting the full weight of the moving assembly.

There are fundamental reasons why the translation stage or slide table of my preferred embodiment provides a more accurate straight line motion than does a cylinder sliding within a cylinder. First, the translation stage uses rolling friction rather than the sliding friction of Konomura's system. This means that there is much less tendency to alternating stick and slip motion ("stiction"). Secondly, the translation stage makes use of the principle of averaging of mechanical errors. The ways and rollers of slides 186 of stage 180 are produced to very tight tolerances to begin with. Then, the ways and rollers are heavily preloaded so that, for instance, any rollers that are slightly larger than the average undergo an elastic deformation as they roll along the ways. Thus, the motion of moving table 184 is determined by an average of the positions that would be determined by errors in the ways and the individual rollers. One cannot use a large preload to average out errors in a cylinder sliding within a cylinder, because then the friction would become too high. This is especially true because of the large surface contact area involved. I mentioned above that a dovetail slide could also be used in my system. Such a slide can be preloaded to average motion errors without the friction becoming too high simply because the surface contact area is suitably small.

C Operation of the First Embodiment

The view of the object shown in Figure 8 has the problem that it is a two-dimensional projection of a three-dimensional situation. Clearly the combination of cursor controller 230 and computer 228 is capable of making relative measurements of the apparent size of features on object image 314, as is well known. But, because there is no information on distance, and because the distance may vary from point to point in the image, there is no way to determine the true dimensions of object feature 102 from image 314.

The solution offered by my perspective measurement system is to obtain a second view of the object, as shown in Figure 9. This second view is obtained by translating video borescope 120 a known distance along an accurate straight line path using BPA 138, described above.

The following discussion of the operation of my system assumes that it is the distance between two points on the object which is to be determined. As will become clear, it is straightforward to extend the measurement process to as many points as desired. In the case of more than two points, the distances between all other points and one particular reference point could be determined by the process described below. Additionally, the distances between all of the points taken as pairs could be determined from the same data gathered during this process.

I will now outline a first mode of distance measurement operation. As was shown in Figure 8, to begin the process the borescope is aligned with the object to produce a view where the points of interest are located substantially on one side of the field of view. In that view, cursor A (316) and cursor B (318) are aligned with the two points of interest, respectively, as shown in Figure 8. When the cursors are aligned correctly, the user indicates this fact through the user interface of computer 228, and computer 228 records the locations of cursors A and B. The user also then enters the position of moving table 184 as indicated on micrometer distance scale 172.

Using micrometer 168, the user then repositions the borescope to obtain a second view of object 100. As shown in Figure 9, the user selects a second position of the borescope to bring the points of interest to substantially the other side of the borescope field of view as compared to where they were in the first view. The cursors are then once again used to locate the positions of the points of interest, cursor A for point A and cursor B for point B. In Figure 9, Cursor B (318) is shown temporarily moved to an out of the way position to avoid the possibility of confusion when the user is aligning cursor A with Point A. The user has the option of aligning and recording the cursor positions one at a time, if desired. When the cursors are positioned correctly, or when each cursor is positioned, if they are being used one at a time, the user indicates that fact through the user interface of computer 228. The user then enters the new position of moving table 184 as indicated on micrometer distance scale 172.

With the data entered into computer 228 (two cursor position measurements for each point of interest and two borescope position measurements), the user then commands the computer to calculate and display the true three-dimensional distance between the points which were selected by the cursors. The computer combines the measured data with calibration data to determine this distance in a software process to be described further below. The calibration data can be obtained either before the measurement or after the measurement, at the option of the user. In the latter case, computer 228 will store the acquired data for future computation of the measured distance. Also, in the case of post-measurement calibration, the user has the option of directing computer 228 to use preliminary or previously obtained calibration data to provide an approximate indication of the distance immediately after the measurement, with the final distance determination to depend on a future calibration.

The measurement process just outlined is that expected to be the one most generally useful and convenient. However, there is no requirement to use two separate cursors to determine the apparent positions of two points on the object, because one cursor would work perfectly well as long as the cursor position data for each point of interest are kept organized properly. In addition, it may be that the distances between more than a single pair of points are desired. In this case, there are just more data to keep track of, and nothing fundamental has changed.

I now outline a second mode of distance measurement operation. Consider measurement of the distance between two points which are so far apart that both points cannot lie on substantially the same side of apparent field of view 312. Figures 10 and 11 show an example of this situation, where the three-dimensional distance between the two ends of an elliptical feature is to be determined. Figures 10A and 10B show the two steps involved in the determination of the three-dimensional location of the first end of the elliptical feature. Figures 11A and 11B show the two steps involved in the determination of the three-dimensional location of the second end of the elliptical feature. In this mode of distance measurement, a point of interest on the object is first brought to a location on one side of apparent field of view 312 and a cursor is aligned with it. The cursor position and micrometer position data are then stored. The view is then changed to bring the point of interest to the other side of apparent field of view 312, and the cursor position and micrometer position data are once again stored. This same process is carried out sequentially for each point of interest on the object. After all of the cursor and micrometer position data are gathered, the computer is instructed to calculate the desired distances between points.

Note that in this second mode of distance measurement operation, the two points of interest could be located so far apart that they could not both be viewed at the same time. In this case the measurement still could be made. That is, there is no requirement that the distance to be measured be completely containable within apparent field of view 312. The only limit is that two suitable views of each point be obtainable within the translation range of BPA 138. This is a capability of my system that was not conceived of in the prior art.

In detail, the process of making a measurement of the distance between two points, both of which are contained within a relatively small portion of apparent field of view 312 as shown in Figures 8 and 9 (I call this measurement mode 1), is made up of the following steps:

1. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating borescope 120 inside borescope clamp 140.

2. Borescope clamp 140 is locked with clamping screw 150 to secure the position and orientation of the borescope with respect to BPA 138.

3. Micrometer drum 178 is rotated to select a first view of the object, with both points of interest located substantially on one side of apparent field of view 312, such as that shown in Figure 8. The approximate position of the micrometer as read from scale 172 is noted.

4. Micrometer drum 178 is rotated to select a second view of the object, such as that shown in Figure 9. This step ensures that a suitable view is, in fact, obtainable within the range of motion of micrometer 168, and that, for instance, the view is not blocked by intervening objects.

5. Micrometer drum 178 is then rotated back again to approximately the position selected for the first view. At this point, the rotation of the micrometer is again reversed so that the micrometer is being rotated in the direction that is necessary to move from the first view to the second view. After a sufficient reverse rotation to ensure that the backlash of bushing 176 has been taken up, the micrometer rotation is halted. This is now the selected viewing position for the first view.

6. Cursors 316 and 318 are then aligned with the selected points on object image 314 using the user interface provided by computer 228.

7. When each cursor is aligned correctly, computer 228 is commanded to store the cursor positions. The cursors can be aligned and the positions stored either sequentially or simultaneously, at the option of the user.

8. The user reads micrometer scale 172 and enters the reading into the computer with the user interface provided.

9. Micrometer drum 178 is now carefully rotated in the direction necessary to move from the position of the first view to the position of the second view. This rotation stops when the user judges the second view to be satisfactory for the purposes of the measurement desired, such as that shown in Figure 9.

10. The user repeats steps 6, 7, and 8.

11. The user commands the computer to calculate and display the true three-dimensional distance between the points selected by the cursors in steps 6 and 10. If desired, the computer can be commanded to also display the absolute positions of each of the two points. These absolute positions are defined in a coordinate system to be described below.
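The bookkeeping of measurement mode 1 can be summarized in code: the perspective baseline d of Equation (6) is simply the difference of the two micrometer readings. The Python sketch below assumes a hypothetical data layout (cursor positions already converted to image-plane units) and ignores the calibration and alignment corrections developed later in this specification:

```python
def mode1_distance(view1, view2, i):
    """Combine the data recorded in measurement mode 1.  Each view is a
    tuple (micrometer_reading, cursor_A_xy, cursor_B_xy), with cursor
    positions in image-plane units; i is the effective focal length.
    Returns the three-dimensional distance between points A and B.
    The layout and names are illustrative only."""
    s1, a1, b1 = view1
    s2, a2, b2 = view2
    d = s2 - s1                      # perspective baseline from micrometer
    points = []
    for (x1, y1), (x2, y2) in ((a1, a2), (b1, b2)):
        denom = x1 - x2
        z = -i * d / denom           # triangulation per Equation (6)
        x = (d / 2.0) * (x1 + x2) / denom
        y = -(z / i) * y1
        points.append((x, y, z))
    (xa, ya, za), (xb, yb, zb) = points
    # Equation (7): distance between the two recovered points
    return ((xa - xb) ** 2 + (ya - yb) ** 2 + (za - zb) ** 2) ** 0.5
```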

In detail, the process of making a measurement of the distance between two points, when they cannot both be contained within a relatively small portion of the apparent field of view 312 as shown in Figures 10 and 11 (I call this measurement mode 2), is made up of the following steps:

1. Computer 228 is instructed to command cursor controller 230 to produce a single cursor. While it is not absolutely necessary to use a single cursor, I believe that the use of a single cursor helps avoid unnecessary confusion on the part of the user.

2. The user adjusts micrometer 168 to approximately the midpoint of its range by rotating drum 178.

3. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating borescope 120 inside borescope clamp 140. The two points of interest are identified, and the borescope is positioned so that the center of apparent field of view 312 is located approximately equidistant between the two points of interest.

4. Borescope clamp 140 is locked with clamping screw 150 to secure the position and orientation of the borescope with respect to BPA 138.

5. Micrometer drum 178 is rotated to select a first view of the first point of interest. The first view is selected so that the point of interest is located substantially on one side of apparent field of view 312, such as that shown in Figure 10A. The approximate position of the micrometer as read from scale 172 is noted.

6. Micrometer drum 178 is rotated to select a second view of the first point. The second view is selected so that the point of interest is located substantially on the other side of apparent field of view 312 from where it was in the first view, such as that shown in Figure 10B. This step ensures that a suitable view is, in fact, obtainable within the range of motion of micrometer 168, and that, for instance, the view is not blocked by intervening objects.

7. Steps 5 and 6 are repeated for the second point of interest, as depicted in Figures 11A and 11B. This step ensures that suitable views are, in fact, obtainable for the second point of interest with the borescope alignment chosen in step 3.

8. Micrometer drum 178 is then rotated to approximately the position selected for the first view of the first point of interest (step 5). At this point, the user makes sure that the micrometer is being rotated in the same direction that is necessary to move from the first view to the second view of the first point of interest. After a sufficient rotation to ensure that the backlash of bushing 176 has been taken up, the micrometer rotation is halted. This is now the selected position for the first view of the first point of interest.

9. The cursor is then aligned with the first point of interest on object image 314 using the user interface provided by computer 228.

10. When the cursor is aligned correctly, computer 228 is commanded to store the cursor position.

11. The user reads micrometer scale 172 and enters the reading into the computer with the user interface provided.

12. Micrometer drum 178 is now carefully rotated in the direction necessary to move from the position of the first view to the position of the second view. This rotation stops when the user judges the second view to be satisfactory for the purposes of the measurement desired.

13. The user repeats steps 9, 10, and 11.

14. Micrometer drum 178 is rotated to obtain the first view of the second point of interest, which was selected during step 7. The user repeats step 8 for this first view of the second point of interest.

15. The user repeats steps 9 to 13 for the second point of interest.

16. The user commands the computer to calculate and display the true three-dimensional distance between the points. If desired, the computer can be commanded to also display the absolute positions of each of the two points, in the coordinate system to be defined below.

I have repeatedly emphasized that the user should position the points of interest first near one edge of apparent field of view 312, then near the other edge, during the measurement. The reason is that analysis of the errors in the measurement shows that there is, in general, an optimum perspective baseline to be used, and that this optimum baseline differs for each individual measurement. The primary requirement on the perspective baseline is that it be chosen to be proportional to the range of the object from the camera. A secondary, and less important, requirement is that the perspective baseline be chosen to correspond to an optimum measurement viewing angle of the borescope being used. While the exact optimum measurement viewing angle depends on the detailed characteristics of the borescope, for most borescopes the optimum angle will be somewhere near the edge of the field of view, although not exactly at the edge. Use of the procedure as specified above automatically ensures that both of these requirements on the perspective baseline are being achieved for the particular measurement being attempted.

It should be clear that the measurement process and mechanical hardware I have defined could be used with borescopes other than video borescope 120 as I have described it. For instance, the video borescope could be implemented with a tiny video camera and lens located at the distal end of a rod or tube without changing this system of measurement at all. In such an "electronic borescope" there would be no need of a lens train to conduct the image from the distal end to the proximal end. While flexible electronic endoscopes built this way are currently available, I am not aware of a rigid borescope like this. However, when one considers that optical components keep getting more expensive while solid state imagers keep getting less expensive, and that the resolution of solid state imagers keeps increasing, it seems likely that electronic borescopes will be used at some future time, especially in the longer lengths, where the optical performance of ordinary borescopes is degraded. (I will later describe an electronic measurement borescope; here I am speaking of an electronic borescope that contains no inherent measurement capability.)

This system could also be used with a visual borescope, that is, one with no video at all, requiring only that the borescope eyepiece contains an adjustable fiducial mark with a position readout (a device commonly called a "filar micrometer"). Such an embodiment of the system, while feasible, would have the strong disadvantage of requiring the manual transcription of fiducial position data, which would be a source of errors. It also would have the disadvantage of requiring the user to perform a delicate and precise task, namely accurately aligning the fiducial mark with a selected point on the image of the object, while under the physical stress of looking through the borescope. (In general the borescope would be located at a position awkward for the user.) And, of course, such a visual measurement borescope would not be a standard component, unlike the video borescope I have discussed. It is also clear that the video system could be a digital video system as well as the analog system I have discussed. That is, the video signal could be digitized into discrete "pixels" with a video "frame grabber" and all video processing could be done digitally. Konomura's system is implemented with digital video. Such systems are more expensive than my simple analog system, and there is another disadvantage that is more subtle, but is important.

A detailed analysis shows that the perspective measurement process is much more sensitive to errors made in locating the image point in the direction of the perspective displacement than in the direction perpendicular to the perspective displacement. This means that for best measurement accuracy, one wants to arrange the system so that the smallest feasible errors are made along the direction of the perspective displacement as projected onto the image plane. For standard video systems there is a difference in resolution between the horizontal and vertical video directions, with the horizontal direction having the higher resolution. This higher resolution applies not only to the video sensor itself, but also to the cursor position resolution.

In a digital system, the horizontal cursor resolution is limited by the number of horizontal pixels in each line of video, which is typically about 512 and certainly no more than 1024 for a standard system. In an analog system, the horizontal cursor resolution is not limited to any particular value, since it is a matter of timing. It is straightforward to build an analog cursor positioning system which provides a cursor resolution of nearly 4000 positions across the video field. This higher horizontal cursor resolution available to my analog system is valuable in minimizing the error in the measurement. As I previously mentioned, the prior art perspective measurement assumes that the viewing camera optical axis is oriented perpendicular to the perspective displacement, that is, along the z axis in Figure 2. It also assumes that the horizontal and vertical axes of the camera are oriented along the x and y directions in that Figure. Clearly, in view of Figures 1 and 4, these assumptions are not adequate if one wants to use any substantially side-looking borescope without any specific alignment between the optical axis and the centerline of the borescope or without any specific rotational orientation of video camera back 134 with respect to field of view 122.

Because of the higher available resolution in the horizontal video direction, one wants to arrange things so that the perspective displacement is viewed along the horizontal video direction. Thus, in order to achieve the most precise measurements possible with my system, the user should prepare the borescope before making measurements by rotating video camera back 134 about the axis of borescope 120 so that the horizontal video direction of camera back 134 is approximately aligned to the plane in which the optical axis of field of view 122 lies. (This assumes that there is no additional rotation of the image about the optical axis inside the borescope. If there is such an additional rotation, then one rotates the camera back to align the horizontal video direction with the projected direction of the perspective displacement as seen at the position of the video sensor.)

This alignment will ensure that measurements are made with the smallest feasible random error. But, in order to obtain the random error reducing properties of this alignment, it is not necessary that it be very accurate. Thus, this preparatory alignment is not a formal part of the measurement procedure, nor of the calibration of the system, which is discussed later. In any case, whether this preparatory alignment is performed or not, my calibration procedure determines the actual alignment of the camera, and my data processing procedure takes that alignment correctly into account in the measurement.

In the measurement processes that were described above, the experimental data obtained are four image position coordinates (x'_im1, y'_im1, x'_im2, y'_im2) for each object point of interest and the reading of the micrometer at each viewing position. I now explain how to combine these measured quantities, together with calibration data, in an optimum way to determine the distance between the two points of interest.

Figure 12 shows a generalized perspective measurement situation. Here, two viewing coordinate systems are set up, each of which is determined by the x and y axes of the camera focal plane, and their mutual perpendicular z = x × y. In Figure 12 a first coordinate system has its origin at the first observation point, P1, and a second coordinate system has its origin at the second observation point, P2. Because there may be a rotation of the camera in moving between P1 and P2, the coordinate axes at P1 and P2 are not parallel, in general. These coordinate systems are denoted by the subscripts 1 and 2. That is, the P1 coordinates of a point are expressed as (x1, y1, z1) while the coordinates of the same point, as expressed in the P2 system, are (x2, y2, z2). The P2 coordinate system has its origin at d in the P1 system.

To accomplish the perspective measurement, the arbitrary point P is viewed first in the P1 coordinate system, then in the P2 coordinate system.

Because in this first embodiment I use a translation stage which provides a high degree of precision in the motion of the camera, I assume for now that there is no rotation of the camera in the translation between P1 and P2. In this case, the coordinate axes of the two systems in Figure 12 are parallel. The partial generalization here is that the perspective displacement between P1 and P2, d, can be at any arbitrary orientation with respect to the viewing coordinate axes.

In the discussion of the prior art above, the following relationships between the camera image plane coordinates and the corresponding object point coordinates were determined:

x_im = i (x/z) ,  y_im = i (y/z)    (11)

where i is the distance from the nodal point of the optical system to the image plane. Similar equations hold for the observations at both camera positions. These image point data can be written in vector form as

r = (x, y, z)^T = z ( x_im/i , y_im/i , 1 )^T

or

r = z a_v ,  where a_v = ( x_im/i , y_im/i , 1 )^T

The vector a_v, which I call the visual location vector, contains the image point location data for the measurement of the apparent location of a point P from a given viewing position. These data, of course, are assumed to have been corrected for camera distortion as was previously explained and will be discussed in detail later. The distance, z, is unknown. When one measures the apparent locations of P from two viewing positions, separated by a vector d, one has two vector equations:

r1 = z1 a_v1 ,  r1 = r2 + d = z2 a_v2 + d    (12)

where r1 is the location of a point as expressed in the coordinate system which has its origin at P1, and r2 is the location of the same point as expressed in the coordinate system tied to P2.

Expressions (12) represent 6 equations in 4 unknowns. The four unknowns are the three components of r1 (or r2) and the distance z2 (or z1).

Subtracting the two Equations (12), one obtains

z1 a_v1 − z2 a_v2 = d    (13)

which can be written as a matrix equation

[ a_v1  −a_v2 ] (z1, z2)^T = d    (14)

Expression (14) represents three equations in two unknowns. When there are more equations than unknowns, the system of equations is called over-determined, and there is in general no exact solution. However, because the coefficients of the equations are experimentally determined quantities that contain noise, one wouldn't want an exact solution, even if one happened to be available. What one wants is a solution that "best fits" the data in some sense. The standard criterion for "best" is that the sum of the squares of the deviations of the solution from the measured data is minimized. This is the so-called least squares solution or least squares estimate.

The least squares solution of the over-determined system of Equations (14) can be simply expressed by introducing the left pseudo-inverse of the data matrix:

(z1, z2)^T = [ a_v1  −a_v2 ]^LI d    (15)

Adding the two Equations (12), one obtains

2 r1 − d = [ a_v1  a_v2 ] (z1, z2)^T    (16)

Substituting (15) into (16):

r1 = (1/2) ( [ a_v1  a_v2 ] [ a_v1  −a_v2 ]^LI + I3 ) d    (17)

where I3 is the identity matrix of dimension 3. Equation (17) gives a three-dimensional least squares estimate for the location of the point of interest, P, as expressed in the coordinate system at viewing position P1, for the visual location vectors a_v1 and a_v2 measured at viewing positions P1 and P2 respectively.

To aid in the comparison of expression (17) to the prior art result (6), introduce an auxiliary coordinate system into expression (17). Recall that (6) refers to a coordinate system which is defined such that the origin lies exactly halfway between the two observation positions. Therefore, define

r_m = r1 − (1/2) d    (18)

Then

r_m = (1/2) [ a_v1  a_v2 ] [ a_v1  −a_v2 ]^LI d    (19)

This is the simple, general expression for the location of a point of interest, given experimentally determined apparent positions, when the perspective displacement d is oriented in some arbitrary direction. Expression (19) is correct and complete as long as the motion of the camera between the two viewing positions is a pure translation. An important conclusion from expression (19) is that the determination of the position of a point, r_m, from the measured data requires only the knowledge of the perspective displacement vector d, as expressed in the P1 coordinate system, and the image distance or effective focal length, i (from (11)). Of course, the image point position data incorporated in visual location vectors a_v1 and a_v2 must have been corrected for the distortion of the optical system before being used in (19), as was previously explained.
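The least squares estimate of Equation (19) can be sketched numerically. Below is a pure-Python version that writes out the left pseudo-inverse for the 3 × 2 data matrix [a_v1 −a_v2]; the function name and the list-based vector representation are my own conventions for illustration, not the patent's.

```python
# Sketch of Equation (19): least-squares location of a point from two views,
# assuming a pure translation d between the viewing positions.

def locate_point(av1, av2, d):
    """Return r_m = 0.5 [av1 av2] [av1 -av2]^LI d (midpoint coordinates)."""
    # 3 x 2 data matrix A = [av1  -av2]
    A = [[av1[i], -av2[i]] for i in range(3)]
    # Left pseudo-inverse A^LI = (A^T A)^-1 A^T, written out for 2 x 2 A^T A
    ata = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
           for i in range(2)]
    det = ata[0][0] * ata[1][1] - ata[0][1] * ata[1][0]
    inv = [[ ata[1][1] / det, -ata[0][1] / det],
           [-ata[1][0] / det,  ata[0][0] / det]]
    atd = [sum(A[k][i] * d[k] for k in range(3)) for i in range(2)]
    # Least-squares ranges (z1, z2), as in Equation (15)
    z = [inv[i][0] * atd[0] + inv[i][1] * atd[1] for i in range(2)]
    # r_m = 0.5 (z1 av1 + z2 av2)  -- Equation (19)
    return [0.5 * (z[0] * av1[i] + z[1] * av2[i]) for i in range(3)]
```

For example, with d = (1, 0, 0) and a point seen at a_v1 = (0.07, 0.03, 1) and a_v2 = (−0.03, 0.03, 1), the function returns approximately (0.2, 0.3, 10): the point sits 10 baseline units in front of the midpoint of the two viewing positions.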

To compare (19) to the prior art result, note that the left pseudo-inverse of a matrix can be written as

A^LI = (A^T A)^(−1) A^T    (20)

and specify that d is directed along the x axis, as was assumed in the derivation of (6). When one also assumes that y_im1 = y_im2, one finds that (19) reduces to

r_m = [ d / (x_im1 − x_im2) ] ( (x_im1 + x_im2)/2 ,  y_im ,  i )^T    (21)

Clearly, (21) is identical to (6) for the case where y_im1 = y_im2.

The optimum way to use the four measurements, according to the least squares criterion, is given by (19). It reduces to result (21) for the case where d is directed along the x axis and when the two images are located at the same y positions. If the measured y_im1 does not equal y_im2 but the difference is small, it can be shown that the least squares result is the same as using the average value of y_im1 and y_im2 in (21) in place of y_im (but only when d is directed along the x axis).

Now consider measurement mode 1. Assume that two points of interest, A and B, are viewed from two camera positions, P1 and P2. The distance between P1 and P2 is d, and is simply calculated from the experimental data as d = l2 − l1, where l1 and l2 are the micrometer readings at viewing positions P1 and P2 respectively. Considering now the determination of the location of either one of the points, one next corrects the measured image position data for distortion. As I discuss further in the calibration section, I use the term distortion to refer to any deviation of the image position from the position that it would have if the camera were perfect. This is a much more general definition than is often used, where the term refers only to a particular type of optical field aberration.

Of course, this distortion correcting step is performed only if the distortion is large enough to affect the accuracy of the measurement, but this will be the case when using any standard borescope or almost any other type of camera to perform the perspective measurement. As is further described in the calibration section, one can write the image position coordinates as

x_im = x'_im − f_Dx(x'_im, y'_im)    (22)
y_im = y'_im − f_Dy(x'_im, y'_im)

where (x'_im, y'_im) are the experimental measurements and (x_im, y_im) are the distortion corrected versions. The same equations apply to the data at both camera positions; that is, both x'_im1 and x'_im2 are subjected to the same correction function f_Dx, and both y'_im1 and y'_im2 are corrected with f_Dy. The distortion correction functions f_Dx and f_Dy are determined in a calibration process which is described in the calibration section. This calibration process is known in the art.

Next, the data are scaled by the inverse of the effective focal length of the combined optical-video system. That is, the data (x_im1, y_im1, x_im2, y_im2) are multiplied by the factor necessary to generate the equivalent true values of the tangents of the viewing angles:

x_im1 / i = tan θ_x1 ,  y_im1 / i = tan θ_y1    (23)

and likewise for the other two measurements for this point on the object from position P2. The equivalent focal length, i, is preferably determined in the same calibration process as is the distortion, as will be described later in the calibration section.
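The two data-conditioning steps, Equations (22) and (23), can be sketched as below. The radial polynomial standing in for f_Dx and f_Dy is purely an assumed illustration; in the patent these correction functions come from the calibration process, and all names here are illustrative.

```python
# Hedged sketch of Equations (22)-(23): distortion correction of the raw
# image readings, followed by scaling by the effective focal length i.

def correct_and_scale(x_meas, y_meas, i_eff, k1=0.0):
    # Equation (22): subtract the distortion functions from the raw data.
    # The simple radial model below is an ASSUMED stand-in for f_Dx, f_Dy.
    r2 = x_meas * x_meas + y_meas * y_meas
    x_im = x_meas - k1 * x_meas * r2
    y_im = y_meas - k1 * y_meas * r2
    # Equation (23): divide by i to obtain the tangents of the viewing angles.
    return x_im / i_eff, y_im / i_eff
```

With k1 = 0 (no distortion) the function is a pure focal-length scaling, which is the minimum conditioning any perspective measurement requires.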

Next, two visual location vectors are formed from the scaled, distortion corrected image position measurements. These vectors are

a_v1 = ( x_im1/i ,  y_im1/i ,  1 )^T    (24)
a_v2 = ( x_im2/i ,  y_im2/i ,  1 )^T

The perspective displacement is formed by placing the perspective baseline (the measured distance between viewing positions P1 and P2) as the first element of a vector:

d_b = ( d ,  0 ,  0 )^T    (25)

The perspective displacement is then transformed to the viewing coordinate system defined by the camera at P1 by multiplication of d_b by a pair of 3 × 3 rotation matrices R_y and R_z:

d_v1 = R_z R_y d_b    (26)

The multiplications in Equation (26) are standard matrix multiplications of, for instance, a 3 × 3 matrix with a 3 × 1 vector. Rotation matrices R_y and R_z describe the effects of a rotation of the coordinate system about the y axis and about the z axis respectively. They are each defined in a standard way as a function of a single rotation angle. The definitions of the rotation matrices, and the calibration process for determination of the rotation angles, are given later. The alignment calibration process that I define there to determine these rotation angles is new. The location of the point being determined is then calculated according to Equation (19) as

r_m = (1/2) [ a_v1  a_v2 ] [ a_v1  −a_v2 ]^LI d_v1    (27)

The process ending with the calculation expressed in Equation (27) is performed for the data obtained on points A and B in turn, and then Equation (7) is used to calculate the desired distance between points A and B.
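The coordinate handling of Equations (25) and (26) can be sketched as follows. The patent defines R_y and R_z only later, so the right-handed sign convention used below is an assumption, and the function names are illustrative.

```python
import math

# Hedged sketch of Equations (25)-(26): place the measured baseline on the
# x axis, then rotate it into the camera frame at P1 with Rz Ry.

def rot_y(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]]

def rot_z(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]]

def mat_vec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def displacement_in_camera_frame(d, ang_y, ang_z):
    """Equation (26): d_v1 = Rz Ry d_b, with d_b = (d, 0, 0)^T from (25)."""
    return mat_vec(rot_z(ang_z), mat_vec(rot_y(ang_y), [d, 0.0, 0.0]))
```

With both calibration angles zero, d_v1 reduces to (d, 0, 0), which is exactly the aligned geometry that the prior art assumed; the rotations preserve the baseline length for any angles.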

Measurement mode 2 is depicted in Figure 13. Here there are up to a total of four viewing positions used. The fields of view of the camera at each position are indicated by the solid lines emanating from the viewing position, while the camera optical axes are denoted by the dot-dash lines. Dashed lines indicate schematically the angles at which points A and B are viewed from each position.

Because of the accurate motion provided by translation stage 180, the viewing positions all lie along a straight line and the viewing coordinate systems are all parallel. Figure 13 is drawn as the projection of a three-dimensional situation onto the x, z plane of the camera. Thus, the viewing positions and the line along which they are drawn do not necessarily lie in the plane of the Figure, nor do the points of interest A and B.

Point of interest A is viewed from positions P1A and P2A with perspective baseline dA, while point B is viewed from P1B and P2B with perspective baseline dB. The experimental data obtained during the mode 2 measurement process are the four image point coordinates for each of the points A and B, and the four viewpoint positions along the camera motion axis: l1A, l2A, l1B, and l2B. Note that two of the viewing positions could be coincident, so that a total of three different viewing positions would be used, and this mode would still be distinct from mode 1.

Vectors rA and rB are determined using the perspective baselines dA = l2A − l1A and dB = l2B − l1B, as has just been described for measurement mode 1. The distance between the coordinate origins for the measurements of A and B is then calculated as

dAB = (l1B + l2B)/2 − (l1A + l2A)/2    (28)

Next, vector dAB in the camera coordinate system is calculated as

dAB = R_z R_y ( dAB ,  0 ,  0 )^T    (29)

Finally, the desired distance between points A and B, r, is calculated as

r = |r| = | dAB + rA − rB | = √(r^T r)    (30)

where the vertical lines indicate the magnitude (length) of a vector.
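The mode 2 computation of Equations (28) through (30) can be sketched as below. The inputs r_a and r_b stand for the midpoint-frame point locations already found via Equation (27), and `rot` stands for the combined R_z R_y alignment matrix from calibration; all names are illustrative rather than the patent's.

```python
import math

# Hedged sketch of Equations (28)-(30) for measurement mode 2.

def mode2_distance(r_a, r_b, l1a, l2a, l1b, l2b, rot):
    # Equation (28): scalar separation of the two midpoint coordinate origins.
    d_ab = 0.5 * (l1b + l2b) - 0.5 * (l1a + l2a)
    # Equation (29): rotate (d_ab, 0, 0)^T into the camera frame; only the
    # first column of `rot` matters, since the other components are zero.
    d_vec = [rot[i][0] * d_ab for i in range(3)]
    # Equation (30): magnitude of dAB + rA - rB.
    r = [d_vec[i] + r_a[i] - r_b[i] for i in range(3)]
    return math.sqrt(sum(c * c for c in r))
```

As a sanity check: if A and B are seen at the same midpoint-frame location but the two baseline midpoints are 5 units apart along the motion axis, the reported distance is 5.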

Measurement mode 2 can have a lower random error than does measurement mode 1 because the points of interest can be brought to the optimum apparent angular position in each of the views, whereas the apparent angular positions chosen for the points in measurement mode 1 are necessarily a compromise.

In contrast to the prior art, my data processing procedure correctly takes into account the general geometry of the perspective measurement. Because of this, it is possible to define a complete set of parameters which can be calibrated in order to obtain an accurate measurement no matter what measurement geometry is used. Thus, it is only my measurement system which can make an accurate measurement with a standard, off the shelf, video borescope. In addition, I make use of all of the available measurement data in an optimum way, to produce a measurement with lower error than otherwise would be provided. Finally, my system provides a new measurement mode (mode 2) which allows one to measure objects which are too large to be measured with prior art systems, and which provides the lowest feasible random measurement error.

D. Description of a Second Embodiment

Figure 14 shows a block diagram of the electronic portion of a second embodiment of my system. The new elements added as compared to the first embodiment are a position transducer 360, a motion actuator 410, a motion controller 452, and a position measurement block 470. The latter two blocks are combined with cursor controller 230 and computer 228 into a block called the system controller 450. Position transducer 360 is connected to position measurement block 470 by a position transducer cable 366. Motion actuator 410 is connected to motion controller 452 with an actuator cable assembly 428.

This second embodiment of the electronics could be built with the capability of completely automatic operation of the position of borescope 120. That is, borescope 120 could be positioned anywhere within the range of travel of translation stage 180 (Figure 5) under control of computer 228 upon operator command. In this case, the user would only have to command some initial position for translation stage 180, then align and clamp borescope 120 appropriately as described above for the operation of the first embodiment, and then never have to touch any of the mechanical hardware again during the measurement process. The two viewing positions, P1 and P2, as described previously, would be selected by the user by moving stage 180 under computer control.

Such automatic positioning of borescope 120 could be closed-loop positioning. That is, the computer would position the borescope by moving the borescope until a particular desired position was indicated by the combination of transducer 360 and position measurement block 470.

In fact, the same commercial vendors who supply translation stages often supply complete positioning systems which combine a translation stage with the motion control and position measurement blocks shown in Figure 14. Most often these systems use an actuator comprising an electric motor, either a dc motor or a stepping motor, driving a precision lead screw. That is, the actuator is essentially a motorized micrometer. Clearly, there are a number of different actuators and position transducers that can be used in any such system.

What I consider the best mode for implementing this second embodiment of the invention is somewhat different than the system I have just described. I believe that a system can be built at lower cost and be at least as convenient to operate if it is built as I will now describe. Since the primary use of the mechanical subsystem is to move borescope 120 (Figure 4) back and forth between two positions, this embodiment is directed toward making that process simple and quick. Generally speaking, it takes a long time for a motor driven translation stage to move between positions spaced a significant distance apart. The second embodiment of BPA 138 is shown in Figures 15 through 19. Figure 15 is a front view, Figure 16 is a top view, Figure 17 is a rear view, while Figures 18 and 19 are left and right side views respectively.

The same borescope clamp assembly 140 as was used in the first embodiment is also used in this second embodiment. As before, lens tube 124 has been removed from clamp 140 in these views for clarity. Clamp 140 is comprised of a lower V-block 142, an upper V-block 144, a hinge 148, and a clamping screw 150. The upper V-block is lined with a layer of resilient material 146, for the same reason given in the description of the first embodiment.

Also, just as in the first embodiment, lower V-block 142 is attached to the moving table 184 of a translation stage or slide table 180. The translation stage consists of a moving table 184 and a fixed base 182, connected by crossed roller bearing slides 186. Fixed base 182 is attached to a BPA baseplate 162. The differences between this second embodiment of BPA 138 and the first embodiment are contained in the methods by which moving table 184 is positioned and how its position is determined. In this second embodiment an air cylinder 412 is mounted to an actuator mounting bracket 422 which is in turn mounted to baseplate 162. Air cylinder 412, which is shown best in Figure 18, has two air ports 420 and an extension rod 418. Air hoses (not shown) are connected to ports 420 and are contained within actuator cable assembly 428, which was shown on the block diagram, Figure 14. The air hoses convey air pressure from motion controller 452 (Figure 14). Extension rod 418 is connected to an actuator attachment bracket 424 through an actuator attachment bushing 426. Bracket 424 is fastened to moving table 184 as is best shown in Figures 16 and 17.

On the other side of the moving table / borescope clamp assembly from air cylinder 412 is mounted a linear position transducer 360. Position transducer 360 consists of a linear scale body 362 and a scale read head 364, which are attached to each other as an integral assembly, but which are free to move with respect to one another within limits along one direction. Attached to read head 364 is a position transducer cable 366 which connects to system controller 450 as was shown in Figure 14. Scale body 362 is mounted to moving table 184 through a scale body mounting bracket 363. Read head 364 is mounted to BPA baseplate 162 through a read head mounting bracket 365.

Attached to the upper side of actuator attachment bracket 424 is a dovetail slide 404. Mounted on dovetail slide 404, as best shown in Figures 16 and 18, is an adjusting nut bracket 394. Bracket 394 contains a fixed nut 396 which in turn contains an adjusting screw 398. Adjusting screw 398 has an adjusting screw knob 400 and an adjusting screw tip 402 disposed at opposite ends of its length. Bracket 394 also contains a bracket position locking handle 406. Locking handle 406 is connected to a locking cam 407 mounted inside bracket 394. Locking cam 407 is shown only in Figure 17.

Dovetail slide 404 and adjusting nut bracket 394 and the items contained therein form a subassembly known as the forward stop positioner 390. An exactly similar assembly, called the rearward stop positioner 388, is mounted to the BPA baseplate behind translation stage fixed base 182. Rearward stop positioner 388 is best shown in Figures 16, 17 and 19.

Depending on the position of moving table 184, adjusting screw tip 402 of adjusting screw 398 of forward stop positioner 390 can contact end stop insert 393 of end stop 392, as best shown in Figures 16 and 18. Similarly, the rearward stop positioner 388 is aligned so that the tip of its adjusting screw can contact the rear end of moving table 184, as can be best visualized from Figures 16 and 17. In Figure 16 is shown a stop pin hole 440, the purpose of which will be explained below.

Although the overall length of BPA 138 could be made shorter if read head 364 were mounted to moving table 184 and scale body 362 were mounted to baseplate 162, I have chosen to mount the unit as shown because then cable 366 does not move with table 184. Either way will work, of course.

E. Operation of the Second Embodiment

As stated above, the differences between this second embodiment and the first embodiment relate to how the borescope is moved and how the position of the borescope is determined.

The inclusion of position transducer 360 and position measurement block 470 as shown in Figure 14 means that the user of the instrument is no longer responsible for making position readings and transcribing them into computer 228. When the user indicates that the cursors are positioned as desired, as was described in the operation of the first embodiment, the computer will now automatically command a camera position measurement from position measurement block 470 and will automatically store this datum. Note that position transducer 360 need not be an absolute encoder of position. From Equation (28) (and the similar expression for measurement mode 1, which is not a display equation) it is clear that the measurement depends only on the distance moved between viewing positions. A constant value can be added to the encoded position without changing the measurement in any way. Thus, position transducer 360 together with position measurement block 470 need only produce a position value that has an offset which is constant over the period of a measurement. This offset need not be the same from measurement to measurement. This means that transducer 360 can be what is called an incremental distance encoder, and this is what will be described.
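The offset invariance just described can be checked in one line: the mode 1 baseline is d = l2 − l1, so any constant encoder offset cancels exactly. The function name below is illustrative only.

```python
# The mode 1 baseline uses only the difference of the two transducer readings
# (d = l2 - l1), so a constant encoder offset cancels; illustrative names only.

def baseline(l1, l2):
    return l2 - l1
```

This is why an incremental encoder, whose zero point is arbitrary at power-up, suffices for the basic measurement modes.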

As I will explain later with regard to other embodiments, if one wants to correct for errors in the camera motion, or if one wants to use a camera motion that is not constrained to a perfect straight line, then it is necessary to know the absolute position of the camera with respect to some fixed reference point. The distance encoder that I describe here has what is known as a "home" position capability. The home position allows one to use the incremental encoder as an absolute encoder when and if required.

Position transducer 360 contains a precision magnetic or optical pattern formed on a plate inside scale body 362. Read head 364 reads the pattern and thereby produces signals which change according to changes in relative position between read head 364 and scale body 362. The unit depicted here is sold by RSF Elektronik Ges.m.b.H. of Tarsdorf, Austria, but similar units are available from Renishaw plc of the United Kingdom and Dr. Johannes Heidenhain GmbH of Germany. The unit shown is available in resolutions as small as 0.5 micrometer (μm), with guaranteed positioning accuracy as good as ±2 μm over a length of 300 millimeters. For the short length unit used in the BPA, one would expect the accuracy to be considerably better.

Position measurement block 470 interprets the signals from read head 364 to determine changes in the position of read head 364 with respect to the scale inside scale body 362. Position measurement block 470 formats the position data into a form that is understood by computer 228. If the home position capability has not been used, then measurement block 470 will report a position relative to the position that the transducer assembly was in when the power was turned on. If the home capability has been used, then the position will be reported relative to the fixed home position. Whether the home position capability is used or not is a design decision which depends on whether motion errors are to be corrected. The method of correction for errors in the motion is discussed at length below in a sub-section entitled "Operation of Embodiments Using Arbitrary Camera Motion".

The existence of motion actuator 410 and motion controller 452 means that the user is not required to manually move the borescope between P1 and P2. This has the advantage of eliminating any chance that the user will accidentally misalign BPA 138, hence borescope 120, during the measurement process. It also has the advantage of eliminating the tedious rotation of the micrometer barrel 178 which is required during operation of the first embodiment.

Air cylinder 412 is a double action unit, which means that air pressure applied to one of the ports 420 will extend rod 418, while air pressure applied to the other port will retract rod 418. When a differential pressure is applied between the ports, rod 418 will move until it is stopped by some mechanical means. If there is no other mechanical stop, rod 418 simply moves to either its fully extended or fully retracted position.

Through the action of bushing 426 and attachment bracket 424, moving table 184 is constrained to move with extension rod 418. The extent of motion of table 184 is controlled by the mechanical stops created by the combination of forward stop positioner 390 and end stop 392 and the combination of rearward stop positioner 388 and the rear end of moving table 184. For instance, in the forward motion direction, the limit to the motion of table 184 is reached when adjusting screw tip 402 of adjusting screw 398 contacts insert 393 of end stop 392. Since the limit positions of table 184 are determined by these mechanical stops, backlash in bushing 426 does not affect the accuracy or repeatability of this positioning. Thus, viewing positions P1 and P2 are solely determined by the position of these mechanical limit stops. The measurement of these positions, however, is subject to any backlash contained within position transducer 360, or within the attachments of the transducer to the remainder of the structure.

Considering now the forward stop positioner 390, operating handle 406 rotates cam 407 to either produce or remove a locking force due to contact between cam 407 and dovetail slide 404. Thus, when unlocked, bracket 394 can be slid back and forth along dovetail slide 404 until adjusting screw tip 402 is located to give the desired stop position. Handle 406 is then rotated to force cam 407 against slide 404 to lock bracket 394 in place. Adjusting screw 398 can then be rotated in fixed nut 396 with handle 400 to produce a fine adjustment of the stop position.

Once the positions of adjusting screws 398 of forward stop positioner 390 and rearward stop positioner 388 are set as appropriate for the desired perspective viewing positions P1 and P2, moving back and forth between these positions is a simple matter of reversing the differential pressure across air cylinder 412. Depending on the length of the air hoses which connect cylinder 412 to motion controller 452, the characteristics of air cylinder 412, and the mass of the assembly being supported by moving table 184, it may be necessary to connect a motion damper or shock absorber (not shown) between moving table 184 and BPA baseplate 162. This would be required if it is not possible to control the air pressure change to produce a smooth motion of table 184 between the stops at P1 and P2.

Stop pin hole 440 is used as follows. At the beginning of the measurement process, it makes sense to start with moving table 184 centered in its range of travel. Therefore, a stop pin (not shown) is inserted into hole 440 and computer 228 is instructed to cause motion controller 452 to apply air pressure to cylinder 412 to produce an actuation force which will cause moving table 184 to move backwards until it is stopped by the stop pin. At this point the user is ready to begin the measurement set up process.

If the home positioning capability of transducer 360 is to be used, after the instrument is powered up, but before measurements are attempted, computer 228 is instructed by the user to find the home position. Computer 228 then commands motion controller 452 to move actuator 410 back and forth over its full range of motion. Computer 228 also commands position measurement block 470 to simultaneously look for the home position signature in the output signal from transducer 360. Once the home position is found, the offset of the position output data from position measurement block 470 is set so that a predetermined value corresponds to the fixed home position.
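The home-referencing sequence described above can be sketched in Python. This is an illustrative toy model only: the class and function names, the index-pulse convention, and all numeric values are assumptions for demonstration, not details from the patent.

```python
# Illustrative sketch: a toy transducer with a "home" signature at a fixed
# but initially unknown raw position, and a search routine that sweeps the
# full travel and re-zeros the readout where the signature is found.

class ToyTransducer:
    def __init__(self, home_raw_mm):
        self.home_raw_mm = home_raw_mm   # where the home signature fires
        self.offset = 0.0                # applied to all subsequent readings

    def read_raw(self, true_pos_mm):
        return true_pos_mm               # ideal transducer: raw = true

    def home_seen(self, true_pos_mm):
        return abs(true_pos_mm - self.home_raw_mm) < 1e-9

    def read(self, true_pos_mm):
        return self.read_raw(true_pos_mm) + self.offset

def find_home(transducer, travel_mm, steps=1000, home_value=0.0):
    """Sweep the actuator over its full travel; when the home signature
    is detected, set the offset so the readout equals home_value there."""
    for i in range(steps + 1):
        x = travel_mm * i / steps
        if transducer.home_seen(x):
            transducer.offset = home_value - transducer.read_raw(x)
            return True
    return False

t = ToyTransducer(home_raw_mm=6.5)   # toy: home pulse at 6.5 mm
found = find_home(t, travel_mm=13.0) # 13 mm full travel, as in an LVDT range
print(found, t.read(6.5))            # True 0.0
```

After the search, every position reading is reported relative to the fixed home position, which is the behavior the text ascribes to position measurement block 470.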

In detail, the process of making a measurement of the distance between two points, both of which are contained within a relatively small portion of apparent field of view 312 as shown in Figures 8 and 9 (that is, measurement mode 1), is made up of the following steps in this second embodiment:

1. Translation stage 180 is centered in its range of travel by use of a stop pin as described above.

2. A specific area of interest on object image 314 is located in apparent field of view 312 by sliding and rotating borescope 120 inside borescope clamp 140.

3. Borescope clamp 140 is locked with clamping screw 1 0 to secure the position and orientation of the borescope with respect to BPA 138.

4. Computer 228 is instructed to remove any differential air pressure across air cylinder 412. The stop pin is removed from hole 440. Moving table 184 is now free to move. The user moves table 184 rearward until the view on video screen 310 is approximately as shown in either Figure 8 or Figure 9.

5. Rearward stop positioner 388 is positioned so that the adjusting screw tip contacts the rear end surface of moving table 184. Stop positioner 388 is then locked at this position.

6. The user moves table 184 forward until the view on video screen 310 is approximately as shown in the opposite view of Figures 8 and 9. That is, if in step 4, the view in Figure 9 was attained, then in this step, the view in Figure 8 is to be obtained.

7. Forward stop positioner 390 is adjusted so that the adjusting screw tip contacts end stop insert 393, and is then locked into position.

8. The computer is instructed to apply air pressure to move table 184 rearward. The view on video screen 310 is inspected and any fine adjustments to the position of the borescope are made by rotating the adjustment screw of rear stop positioner 388. This is position P2.

9. The computer is instructed to apply air pressure to move table 184 forward. The view on video screen 310 is inspected and any fine adjustments to the position of the borescope are made by rotating the adjustment screw of forward stop positioner 390. This is position P1.

10. Cursors 316 and 318 are then aligned with the selected points on object image 314 using the user interface provided by computer 228.

11. When each cursor is aligned correctly, computer 228 is commanded to store the cursor positions. The cursors can be aligned and the positions stored either sequentially, or simultaneously, at the option of the user.

12. Computer 228 automatically commands a position reading from position measurement block 470. Computer 228 records this position reading as the position of P1.

13. Computer 228 is instructed to apply air pressure to cylinder 412 to move table 184 rearward. Steps 10 to 12 are repeated for P2.

14. The user commands the computer to calculate and display the true three-dimensional distance between the points selected by the cursors in steps 10 and 13. If desired, the computer can be commanded to also display the absolute positions of each of the two points in the coordinate system that was defined in the operation of the first embodiment.
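For the idealized case of a distortion-free pinhole camera translated along a single axis, the distance computation of step 14 reduces to textbook parallax triangulation. The sketch below illustrates that principle only; the focal length, units, image coordinates, and coordinate conventions are assumed toy values, not the patent's actual calibrated camera model.

```python
import math

# Pinhole-triangulation sketch (assumptions: ideal distortion-free camera
# with focal length f, optical axis along z, translated along x by a
# baseline known from the position transducer).

def triangulate(u1, v1, u2, v2, baseline, f):
    """Recover (X, Y, Z) of a point from its image coordinates (u, v)
    observed at positions P1 and P2 separated by `baseline` along x."""
    disparity = u1 - u2
    Z = f * baseline / disparity   # range from parallax
    X = u1 * Z / f                 # lateral offsets, relative to P1
    Y = v1 * Z / f
    return (X, Y, Z)

def point_distance(p, q):
    """True three-dimensional distance between two triangulated points."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

# Two toy points imaged with f = 10 mm and a 5 mm baseline:
f, b = 10.0, 5.0
A = triangulate(u1=2.0, v1=1.0, u2=1.5, v2=1.0, baseline=b, f=f)
B = triangulate(u1=1.0, v1=0.5, u2=0.5, v2=0.5, baseline=b, f=f)
print(A, B, point_distance(A, B))
# A = (20, 10, 100), B = (10, 5, 100); separation is sqrt(125) ~ 11.18
```

Note how the baseline enters the range estimate directly: any error in the transducer's measurement of the camera travel scales all recovered coordinates, which is why the stop repeatability and transducer backlash discussed above matter.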

The mode 2 measurement has a detailed procedure which is modified in a similar manner as compared to the detailed procedure given for the first embodiment.

In this second embodiment, the data acquired and the processing of that data are identical to that described for the first embodiment. If motion errors are to be corrected, the data processing is slightly more involved, and will be discussed below in the section entitled "Operation of Embodiments Using Arbitrary Camera Motion".

F. Description of a Third Embodiment

The mechanical portion of a third embodiment of my invention is shown in an overall perspective view in Figure 20 and in detailed views in Figures 21 through 27. This embodiment implements a new type of rigid borescope which I call an electronic measurement borescope (EMB). Figure 28 is an electronic functional block diagram of the EMB system.

In Figure 20, electronic measurement borescope 500 has a borescope probe tube 512 which itself contains an elongated viewing port 518 at the distal end. At the proximal end of probe tube 512 is located a fiber optic connector 128. Tube 512 is attached to a proximal housing 510, to which is mounted an electronic connector 502. An electronic cable (not shown) connects EMB 500 to a system controller 450 as shown in Figure 28.

Figures 21, 22, and 23 are respectively a plan view and left and right side elevation views of the distal end of electronic measurement borescope 500. In these three views borescope probe tube 512 has been sectioned to allow viewing of the internal components.

In Figures 21 through 23, a miniature video camera 224 is shown mounted to a moving table 184 of a translation stage 180. Camera 224 is made up of a solid state imager 220 and an objective lens 121. Prism 123 redirects the field of view of camera 224 to the side so that the angle between the optical axis of the camera and the translation direction is approximately 90 degrees, or some other substantially side-looking angle as required for the desired application. Solid state imager 220 transmits and receives signals through imager cable 222.

In these figures, the hardware that mounts the lens and the prism has been omitted for clarity. In addition, schematic optical rays are shown in Figures 21 and 22 purely as an aid to understanding. The optical system shown for camera 224 is chosen for illustration purposes, and is not meant to represent the optics that would actually be used in electronic borescope 500. Such optical systems are well known in the art, and are not part of this invention.

Fixed base 182 of translation stage 180 is fastened to distal baseplate 514, which in turn is fastened to borescope probe tube 512.

The position of moving table 184 is controlled by a positioning cable 482, which is wrapped around a positioning pulley 484. Positioning cable 482 is clamped to moving table 184 through a distal motion clamp 486. Pulley 484 is mounted to baseplate 514 through a pulley mounting shaft 485.

Motion clamp 486 supports a distal fiber clamp 492, which in turn supports an illumination fiber bundle 127. Fiber bundle 127 is also supported and attached to moving table 184 by a fiber end clamp 494. Fiber end clamp 494 has internal provision for expanding the bundle of fibers at the end to form fiber output surface 129 (shown in Figure 23).

Fiber bundle 127 and imager cable 222 are both supported by two distal cable stabilizer clamps 490, which are in turn clamped to and supported by positioning cable 482. The more distal cable stabilizer clamp 490 is captured inside a distal stabilizer slot 491, which is itself attached to baseplate 514.

Also mounted to distal baseplate 514 is a transducer mounting bracket 367, which in turn supports a linear position transducer 360. Transducer 360 is attached to moving table 184 through a transducer operating rod 361 and a transducer attachment bracket 369. Position transducer cable 366 extends from the rear of the transducer towards the proximal end of the borescope. Transducer cable 366 is clamped in transducer cable clamp 371 so that tension on cable 366 is not transferred to transducer 360. Clamp 371 is mounted to baseplate 514.

Figures 24 through 27 are respectively a plan view, a left side elevation view, a right side elevation view, and a proximal end elevation view of the proximal end of electronic measurement borescope 500. In these views proximal housing 510 has been sectioned to allow viewing of the internal components. In Figure 24, borescope probe tube 512 has been sectioned as well, for the same reason.

In Figure 24, imager cable 222, transducer cable 366, and actuator cable 411 have been shown cut short for clarity. In Figure 27, the same cables have been eliminated, for the same reason.

In Figures 24 through 27 the proximal end of positioning cable 482 is wrapped around a positioning pulley 484. Pulley 484 is supported by a mounting shaft 485, which in turn is mounted to proximal baseplate 516 through a pulley support bracket 487.

The proximal end of fiber bundle 127 is attached to illumination fiber optic connector 128. The proximal ends of imager cable 222 and position transducer cable 366 are attached to electronic connector 502. Connector 502 is supported by proximal housing 510. Housing 510 also supports borescope probe tube 512 through bulkhead 498. Cables 222 and 366 are clamped in bulkhead 498. Cable 366 is stretched taut between the distal and proximal ends of probe tube 512 before being clamped at both ends, while cable 222 is left slack as shown.

Clamped to positioning cable 482 is a proximal motion clamp 488. Clamp 488 is supported by a proximal translation stage 496, which is in turn mounted to proximal baseplate 516 through a proximal stage support bracket 499. The position of proximal translation stage 496 is controlled by the action of actuator 410 through actuator attachment bracket 424. Bracket 424 is attached to the moving table of translation stage 496. Actuator 410 contains an actuator output shaft 413 which operates bracket 424 through an actuator attachment bushing 426. Actuator 410 is attached to proximal baseplate 516 through an actuator mounting bracket 422.

Actuator 410 is shown as a motorized micrometer. Actuator electrical cable 411 connects actuator 410 to electronic connector 502.

As shown in Figure 28, electronically this embodiment is very similar to the second embodiment (compare Figure 14). The primary difference is that the video camera back 134 in Figure 14 has been split into solid state imager 220 and imager controller 221 in Figure 28.

G. Operation of the Third Embodiment

This third embodiment contains essentially the same elements as the second embodiment, and from the user's standpoint the operation is virtually the same, except that now all operations are performed through the user interface of computer 228, and the user makes no mechanical adjustments at all, except for the initial positioning of EMB 500 with respect to the object to be inspected. The key to this third embodiment is that the motion of actuator 410 is transferred to proximal translation stage 496, thence to positioning cable 482, and finally to moving table 184 at the distal end of the scope. As a result, camera 224 is moved a known distance along a straight line path, which allows one to make dimensional measurements as I have described in the first embodiment. This third embodiment has the advantage that the image quality does not depend on the length of the borescope, thus making this of most interest when the object to be inspected is a long distance from the inspection port.

The optical quality of objective lens 121 can be made higher than the optical quality of a rigid borescope. However, solid state imager 220 will in general not have as high a resolution as do external video imagers such as video camera back 134, which was used in the first two (BPA) embodiments. Thus the tradeoffs in image quality between the BPA embodiments and this EMB cannot be encompassed by a simple statement.

Distal translation stage 180 is shown implemented in Figures 21 to 23 with a ball bearing slide. This could also be either a crossed roller slide or a dovetail slide. The slide selected will depend on the characteristics of the application of the EMB. A dovetail slide can be made smaller than either of the other two options, so that the smallest EMB can be made if one were used. A dovetail slide would also have more friction than the other two options, and this would not always be a disadvantage. For instance, if the EMB were to be used in a high vibration environment, the extra friction of a dovetail slide would be valuable in damping oscillations of the translation stage position.

With this third embodiment, any error due to rotational motion of the translation stage will not act through a long lever arm, unlike with the first two (BPA) embodiments. Thus, the translation accuracy of the stage is less critical in this embodiment, which means that it is more feasible to use a less accurate ball or dovetail slide instead of a crossed roller slide.

The elimination of the long lever arm is a second reason why this third embodiment will be preferred when the object to be inspected is distant from the inspection port. Because fiber bundle 127 is moved along with camera 224, the illumination of the camera's field of view does not change as the camera's position is changed. Both fiber bundle 127 and imager cable 222 must move with the camera; thus they are directly supported by positioning cable 482 to avoid putting unnecessary forces on moving table 184.

It is possible to provide a second pulley and cable arrangement to take up the load of fiber bundle 127 and imager cable 222, thus eliminating any stretching of positioning cable 482 due to that load, but that makes it more difficult to keep the assembly small, and there is little or no advantage when the position transducer is located at the distal end of the scope, as I have shown.

Distal cable stabilizer clamps 490 fasten fiber bundle 127 and imager cable 222 to positioning cable 482 to keep them out of the way of other portions of the system. Distal stabilizer slot 491 controls the orientation of the more distal stabilizer clamp 490 to ensure that fiber bundle 127 and cables 222 and 482 keep the desired relative positions near stage 180 under all conditions.

Fiber bundle 127 and imager cable 222 must have sufficient length to accommodate the required translation of camera 224. Position transducer cable 366 is of fixed length. Thus, transducer cable 366 is fixed at the proximal end of borescope 500 to bulkhead 498 and is clamped between bulkhead 498 and transducer cable clamp 371 with sufficient tension that it will remain suspended over the length of probe tube 512. Fiber bundle 127 and imager cable 222 are run over the top of transducer cable 366 so that transducer cable 366 acts to prevent fiber bundle 127 and imager cable 222 from contact with positioning cable 482. In this manner, unnecessary increases in the frictional load on positioning cable 482 due to contact with the other cables are avoided.

This simple scheme for keeping the cables apart will work only for a short EMB. For a longer EMB, one can place a second cable spacer and clamp similar to bulkhead 498 near the distal end of probe tube 512, but far enough behind the hardware shown in Figures 21 - 23 so that the cables can come together as shown there. Then all of the cables will be under tension between the proximal and distal ends of the EMB. In such a system, one could also use a long separating member, placed between positioning cable 482 and the other cables, to ensure that they do not come into contact.

For very long EMBs, it will be necessary to support all of the cables 127, 222, 366, and 482 at several positions along the length of probe tube 512 in order to prevent them from sagging into each other and to prevent positioning cable 482 from sagging into the wall of tube 512. Such support can be provided by using multiple cable spacers fixed at appropriate intervals along the inside of tube 512. These spacers must remain aligned in the correct angular orientation, so that the friction of cable 482 is minimized.

The end of fiber bundle 127 is expanded as necessary in fiber end clamp 494 so that the illumination will adequately cover the field of view of camera 224 at all measurement distances of interest. A lens could be used here as well to expand the illumination beam. Viewport 518 is large enough to ensure that the field of view of camera 224 is unobstructed for all camera positions available with stage 180. Clearly, this viewport can be sealed with a window (not shown), if necessary, to keep the interior of the distal end of the EMB clean in dirty environments. The window could be either in the form of a flat, parallel plate or in the form of a cylindrical shell, with the axis of the cylinder oriented parallel to the direction of motion of moving table 184. In either case, the tolerances on the accuracy of the geometrical form and position of the window must be evaluated in terms of the effects of those errors on the measurement.

All camera lines of sight will be refracted by the window. This can cause three types of problems. First, the window could cause an increase in the optical aberrations of the camera, which will make the image of the object less distinct. In general this will be a problem only if a cylindrical window is placed with its axis far away from the optical axis of camera 224, or if the axes of the inner and outer cylindrical surfaces of the window are not coincident. Secondly, differences in how the line of sight is refracted over the field of view of the camera will change the distortion of the camera from what it would be without the window in place. This would cause a problem only if the distortion were not calibrated with the window in place. Third, differences in how the line of sight is refracted as the camera is moved to different positions would cause errors in the determination of the apparent positions of a point of interest. This is potentially the largest problem, but once again, it is easily handled by either fabricating and positioning the window to appropriate accuracies, or by a full calibration of the system with the window in place, using the calibration methods to be described later.

It is a design decision whether to locate position transducer 360 at the distal end of EMB 500, as I have shown, or whether to locate it at the proximal end of the scope. Either way will work as long as appropriate attention is paid to minimizing errors. For the distally mounted transducer, because of the small size required, it is not possible to achieve the level of accuracy in the transducer that one can get with the proximally mounted transducer shown in the second embodiment. However, if a proximally mounted transducer is used, one must carefully consider the errors in the transfer of the motion from the proximal to the distal end of the scope.

When it is mounted distally, transducer 360 must be small enough to fit in the space available and have sufficient precision for the purposes of the measurement. Suitable transducers include linear potentiometers or linear variable differential transformers (LVDTs). Note that both of these options are absolute position transducers, so that the issue of determining a home position does not exist if they are used.

Suitable linear potentiometers are available from Duncan Electronics of Tustin, California in the USA or Sfernice of Nice, France. Suitable LVDTs are available from Lucas Control System Products of Hampton, Virginia in the USA. For instance, model 249 XS-B from Lucas is 4.75 mm diameter by 48 mm long for a measurement range of at least 13 mm.

These small, distally mounted transducers must be calibrated. In fact, LVDT manufacturers provide calibration fixtures, using micrometers as standards. What matters most to the performance of the measurement instrument is repeatability. The repeatability of small linear potentiometers is generally 1 part in 10^4, or 0.0001 centimeter per centimeter of travel. The repeatability of an LVDT is determined by the signal to noise ratio of the signal processing electronics. A signal to noise ratio of 1 part in 10^5 is easily obtained with small signal bandwidth, and 1 part in 10^6 is quite feasible, though more expensive to obtain. These available levels of repeatability are quite consistent with the purposes intended for the instrument.

If the EMB is to be used over a large range of temperatures, it will be necessary to include a temperature transducer at the distal end of the scope, so that the temperature sensitive scale factor of the distal position transducer can be determined and taken into account in the measurement.

With the distally mounted position transducer, the only backlash that matters is the backlash between moving table 184 and position transducer 360 due to the necessary clearance between transducer operating rod 361 and transducer attachment bracket 369. This backlash will not be negligible, in general, so that the measurement procedure must use the anti-backlash elements of the measurement procedure detailed above in the description of the first embodiment. (Briefly, this means that the camera position is always determined with the camera having just moved in one particular direction.) Since the system shown in Figure 28 is a closed-loop positioning system, it is straightforward to implement anti-backlash procedures automatically in the positioning software, and the user then need not be concerned with them.

The position transducer will not correctly measure the position of the camera if the measurement axis of the transducer is misaligned with the axis of camera motion. Such a misalignment causes a so-called "cosine" error, because the measured displacement is proportional to the cosine of the angular misalignment. This error is small for reasonable machining and assembly tolerances. For instance, if the misalignment is 10 milliradians (0.6 degrees), the error in the distance moved between camera positions is 1 part in 10^4. When necessary for very accurate work, this error can be determined and taken into account in the measurement, by scaling the transducer position data accordingly. The first two embodiments are also subject to this error, but in those cases the necessary mechanical tolerances are easier to achieve. Note that an instrument suffering from this error will systematically determine distances to be larger than they really are.
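The magnitude of the cosine error, and the scaling correction mentioned above, can be checked numerically. This is an illustrative calculation only (the 13 mm travel is a toy value), not code from the patent:

```python
import math

# If the transducer axis is misaligned by angle theta from the motion axis,
# the transducer reads d*cos(theta) when the camera actually moves d.

def cosine_error_fraction(theta_rad):
    """Fractional discrepancy between true and measured travel."""
    return 1.0 - math.cos(theta_rad)

theta = 0.010                        # 10 milliradians, about 0.6 degrees
frac = cosine_error_fraction(theta)
print(frac)                          # ~5e-5, the order quoted in the text

# When theta is known, a reading can be rescaled to recover the true travel:
measured = 13.0 * math.cos(theta)    # what the transducer would report
corrected = measured / math.cos(theta)
print(corrected)                     # recovers ~13.0 mm
```

Because the error grows with the square of the misalignment angle (1 − cos θ ≈ θ²/2 for small θ), modest machining tolerances keep it negligible, as the text notes.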

There could be a thermal drift of the camera position if positioning cable 482 has a different temperature coefficient of expansion than does probe tube 512, or if the instrument is subjected to a temperature gradient. Such a drift would not be a problem over the small time that it takes to make a measurement, because it is only the differential motion of the camera between viewing positions P1 and P2 that is important. In more general terms, it doesn't matter if there is a small variable offset between the proximal position commanded and the distal position achieved, as long as any such offset is constant over a measurement. As previously discussed, a large offset could be a problem if one desires to correct for errors in the motion of translation stage 180.

Of course, differential thermal expansion of positioning cable 482 and borescope tube 512 would cause a varying tension in cable 482. Thus, unless cable 482 and tube 512 are made of materials with the same expansion coefficient, it may be necessary to spring load pulley support bracket 487. Whether such spring loading is necessary depends on the length of tube 512 and the temperature range over which the EMB must operate, as well as the difference in temperature coefficients.
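A back-of-envelope estimate shows the scale of this effect. The material coefficients and dimensions below are generic illustrative values (roughly steel versus aluminum); the patent does not specify the materials:

```python
# Differential expansion of a cable anchored inside a tube, for a uniform
# temperature change. Coefficients are in 1/degree C; lengths in mm.

def differential_expansion_mm(length_mm, alpha_cable, alpha_tube, delta_t_c):
    """Cable-minus-tube length change for a temperature change delta_t_c."""
    return length_mm * (alpha_cable - alpha_tube) * delta_t_c

# 500 mm tube, steel-like cable (12e-6 /C) inside an aluminum-like tube
# (23e-6 /C), over a 40 C swing:
mismatch = differential_expansion_mm(500.0, 12e-6, 23e-6, 40.0)
print(mismatch)   # ~ -0.22 mm: the tube outgrows the cable, changing tension
```

A length mismatch of a few tenths of a millimeter is easily absorbed by a spring-loaded pulley bracket, which is why that is the remedy suggested above.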

A significant level of static friction (stiction) in translation stage 180 would require that the EMB be implemented with a distal position transducer, since otherwise there would be considerable uncertainty added to the position of the camera. Dovetail slides tend to have significant stiction, so that use of a dovetail slide will almost certainly require a distal position transducer. If the stiction is too severe, the position settability of the camera will be compromised, which could make the instrument frustrating to use.

Clearly, the EMB could be implemented with another sort of motion actuator 410, for instance an air cylinder.

I have shown a proximal translation stage 496 used between actuator 410 and positioning cable 482. Clearly, this is not strictly necessary, as cable 482 could be clamped directly to output shaft 413 of actuator 410, provided that output shaft 413 does not rotate and can sustain a small torque.

The EMB could also be implemented with a miniature motor and lead screw placed at the distal end. This eliminates the requirement for transfer of motion from the proximal to the distal end, but it then requires more space at the distal end. The advantage is that this could be used to embody an electronic measurement endoscope, that is, a flexible measurement scope. Such a scope would be flexible, except for a relatively short rigid part at the distal end.

H. Description of a Fourth Embodiment

Figures 29 and 30 show respectively plan and left side elevation views of the distal end of a fourth mechanical embodiment of the invention, which I call the electronic measurement endoscope (EME). This fourth embodiment is similar to the third embodiment, except that the positioning pulley and cable system has been replaced here by a positioning wire 532, which is enclosed except at its distal and proximal ends by a positioning wire sheath 534.

In Figures 29 and 30 many of the same elements are shown as in the third embodiment, and only those elements most directly related to the discussion of this fourth embodiment are identified again.

The distal end of positioning wire 532 is clamped by distal positioning wire clamp 542. Clamp 542 is attached to the moving table of translation stage 180. Positioning wire sheath 534 is clamped to distal baseplate 514 with a distal sheath clamp 536.

The external housing of the endoscope now consists of two portions, a flexible endoscope envelope 538 and a distal rigid housing 540. Rigid housing 540 is attached to the end of flexible envelope 538 to form an endoscope which is flexible along most of its length, with a relatively short rigid section at its distal end. Flexible envelope 538 includes the necessary hardware to allow the end of the endoscope to be steered to and held at a desired position under user control. Such constructions are well known in the art and are not part of this invention.

As in the third embodiment, imager cable 222 and illumination fiber bundle 127 are supported by and clamped to the element which transfers motion from the proximal to the distal end of the scope. Here cable 222 and fiber bundle 127 are clamped by a distal cable stabilizer clamp 490, which is itself clamped to positioning wire 532. Also as in the third embodiment, clamp 490 is captured inside distal stabilizer slot 491 to control its position and orientation. As in the third embodiment, the distal end of illumination fiber bundle 127 is supported by distal fiber clamp 492 and fiber end clamp 494. In this embodiment, fiber clamp 492 is attached to positioning wire clamp 542.

Imager cable 222, illumination fiber bundle 127, position transducer cable 366, and positioning wire sheath 534 all pass through and are clamped to distal end cable clamp 544, which is located at the proximal end of distal rigid housing 540. Positioning wire sheath 534 is positioned in the center of cable clamp 544, while the other three cables are arranged around it in close proximity. Positioned at suitable intervals within flexible endoscope envelope 538 are a number of cable centering members 546, through which all of the cables pass.

The position of stage 180 is monitored by linear position transducer 360, which is mounted to distal baseplate 514 with transducer mounting bracket 367.

I. Operation of the Fourth Embodiment

Clearly, if the proximal end of sheath 534 is clamped to proximal baseplate 516 of the third embodiment, and if actuator 410 is attached to positioning wire 532, then the motion of the actuator will be transferred to distal translation stage 180. Thus, the operation is identical to that of the third embodiment, except that this embodiment is now a flexible measurement endoscope which can be brought into position for measurements in a wider range of situations.

When this EME is steered to a desired position, flexible envelope 538 will necessarily be bent into a curve at one or more places along its length. Bending envelope 538 means that one side of the curve must attain a shorter length, and the opposite side a longer length, than the original length of the envelope. The same holds true for components internal to envelope 538, if these components have significant length and are not centered in envelope 538. Thus, in order to prevent the bending of the EME from affecting the position of translation stage 180, it is necessary to ensure that positioning wire 532 runs down the center of envelope 538. Cables 222, 366, and fiber bundle 127 are also run as close to the center of envelope 538 as feasible, to minimize the stress on these cables as the EME is steered.

This embodiment almost certainly requires the use of a distally located linear position transducer 360, as shown, because there is likely to be considerable stiction in the motion of positioning wire 532 inside sheath 534. Imager cable 222 and illumination fiber bundle 127 must have sufficient length to reach the most distal position of stage 180. These, as well as cable 366, are clamped to housing 540 through distal end cable clamp 544 so that no forces can be transferred from them to the measurement hardware inside housing 540. As the EME is bent, there will be small changes in the lengths of cables 222 and 366 and fiber bundle 127. Thus, there must be sufficient extra length of these cables stored at the proximal end, or throughout the length of the endoscope, so that no large forces are generated when the EME is bent.

When stage 180 is moved away from its most distal position, the portion of cable 222 and fiber bundle 127 which are contained within housing 540 will bend so as to store their now excess lengths in the portion of housing 540 behind the proximal end of baseplate 514.

J. Embodiments Using Other Camera Motions

1. Introduction

In the preferred embodiments, I teach the use of straight line camera motion between viewing positions, with a fixed camera orientation, to perform the perspective measurement. The reasons that I prefer these embodiments are that they are simple and of obvious usefulness. However, my system is not restricted to the use of straight line camera motion or fixed camera orientation. Other camera motions are possible and can also be used when making a perspective measurement. Some of these more general camera motions will be useful for specific applications. Below, I show how to perform the measurement when using any arbitrary motion of the camera, and when using multiple cameras.

This generalized method of perspective dimensional measurement that I teach here has an important application in improving the accuracy of the measurement made with my preferred embodiments. Even with the best available hardware, the motion of the camera will not conform to a perfect straight line translation. In this section, I show how to take such motion errors into account when they are known. In the calibration section I will show how to determine those errors.

2 Linear Camera Motion

Figure 31 depicts the geometry of a mode 2 perspective measurement of the distance between the points A and B being made with a camera moving along a linear path, but where the camera orientation does not remain constant as the camera position changes. Compare Figure 31 to Figure 13. In Figure 31, the points A and B are chosen to lie at considerably different distances (ranges) from the path of the camera in order to emphasize the differences that a variable camera orientation creates. The situation shown in Figure 31 represents the geometry of a measurement which may be made with any number of different physical systems. For instance, the camera could be movable to any position along the path, and the camera could be rotatable to any orientation with respect to that path. Or, the camera rotation could be restricted, for instance, to be about an axis perpendicular to Figure 31. Another possibility is that the camera orientation is restricted to only a relatively small number of specific values, such as, for instance, the two specific orientations shown in the Figure. A third possibility is that the positions the camera can take are restricted to a relatively small number, either in combination with rotational freedom or in combination with restricted rotation. If either the positions or the orientations of the camera are small in number, then one can use the well-known kinematic mounting principle to ensure that these positions and/or orientations are repeatable to a high degree of accuracy.

The basic concept of the measurement geometry shown in Figure 31 is that the camera is rotated towards the point of interest at any viewing position. This is useful, for instance, when the camera has a narrow field of view, and when one desires to use a long perspective baseline. One wants to use a long perspective baseline because it minimizes the random error in the measurement. In fact, one can show that for optimum measurement results, one wants to set the baseline at a value that keeps the angle subtended at each point of interest by the two viewing positions approximately constant for points at various ranges from the instrument. This is just the situation shown in Figure 31, where the angles formed by the dot-dash lines are the same for both point A and point B. The disadvantage of the measurement geometry shown in Figure 31 is that it requires accurately known camera motion in two degrees of freedom, rather than just one, as do my preferred embodiments. Its advantage is that it provides the smallest possible random measurement error.
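As a worked example of the constant-subtended-angle rule just stated: for a point at range z viewed symmetrically from the two ends of a baseline b, the subtended angle φ satisfies tan(φ/2) = (b/2)/z, so holding φ constant means scaling the baseline with range. The sketch below is illustrative only; the symmetric-viewing assumption and the function name are mine, not the specification's.

```python
import math

def baseline_for_constant_angle(z, phi):
    """Baseline length b that makes the two viewing positions subtend the
    angle phi at a point at range z, assuming the point lies on the
    perpendicular bisector of the baseline (illustrative geometry only)."""
    return 2.0 * z * math.tan(phi / 2.0)

# Keeping phi fixed, the required baseline grows in proportion to the range:
b_near = baseline_for_constant_angle(50.0, math.radians(10.0))
b_far = baseline_for_constant_angle(200.0, math.radians(10.0))
```

Quadrupling the range quadruples the baseline needed to hold the subtended angle, which is why a long-range measurement with this geometry demands a long, accurately known camera motion.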

It should also be clear to the reader that two cameras could be used to make the measurement depicted in Figure 31. If two cameras are used, it is still necessary to move one of the cameras with respect to the other to adjust the perspective baseline to the optimum value when locating points at different ranges and relative positions. When viewing an inaccessible object, the preferred implementation is to mount both cameras to a single probe assembly, but it is also possible to mount each camera on a separate probe, just as long as the relative positions and orientations of the cameras are known accurately. I discuss below the parameters of the measurement geometry which must be known in order to make the measurement in the most general case. One advantage of a two camera system is that the requirement for a second degree of freedom of motion can be eliminated under certain conditions, since the orientation of each of the cameras could be fixed at a suitable angle for the measurement, and the point of interest could then be brought into the field of view of each camera by translating the camera to an appropriate point along the path. This situation can be envisioned from Figure 31 by assuming that there is an upper camera, which is used at the viewing positions P2A and P2B, and a lower camera, which is used at the viewing positions P1A and P1B, and that both cameras can be moved along the single line of motion shown in the Figure.

A second advantage of a two camera system is that the measurement data can be acquired in the time necessary to scan one video frame, once the cameras are in position, if the digital video "frame grabber" technology mentioned earlier is used. Such quasi-instantaneous measurements are useful if the object is moving or vibrating. For the same reason, such a system could reduce the stability requirements on the mounting or support structure for the measurement instrument.

A disadvantage of a two camera implementation of the measurement shown in Figure 31 is that there will be a minimum perspective baseline set by the physical size of the cameras. If the camera orientations are fixed, the minimum perspective baseline implies a minimum measurement range. A second disadvantage of the fixed camera orientation variant of the two camera system is that there is also a maximum measurement range for camera fields of view smaller than a certain value, since there will always be a maximum value of the perspective baseline.

3 Circular Camera Motion

Figure 32 depicts a mode 1 perspective measurement being made with a camera moving along a curved path. The curve is a section of a circular arc, with a radius of curvature Rp and center of curvature at C. The optical axis of the camera lies in the plane containing the circular path. The camera orientation is coupled to its translation along the path so that the optical axis of the camera always passes through C as the camera moves along the path.

The advantage of this arrangement, as compared to my preferred embodiments, is that a much larger perspective baseline can be used without losing the point of interest from the camera field of view, for objects located near the center of curvature, C, when the field of view of the camera is narrow. Thus, the measurement system shown in Figure 32 can potentially provide lower random measurement errors. As is clear from Figure 32, there will usually be a maximum range for which perspective measurements can be made, as well as a minimum range, at any given value of the perspective baseline. In order to make measurements at large ranges, the system of Figure 32 requires a smaller baseline to be used than does a similar straight line motion system. For certain combinations of d, Rp, and camera field of view, it is possible for both the minimum and maximum measurement ranges to decrease as d increases. Thus, this curved camera path system has the disadvantage, as compared to my preferred embodiments, of having a limited span of ranges over which measurements can be made.

This curved camera path system would be preferred in cases where only a small span of object ranges are of interest, and where there is plenty of space around the object to allow for the relatively large camera motions which are feasible. I consider the primary operating span of ranges of the circular camera path system shown in Figure 32 to be (0 < z < 2Rp).

Another disadvantage of the system shown in Figure 32 for the measurement of inaccessible objects is the difficulty of moving the required hardware into position through a small inspection port.

The method chosen for using a transducer to determine the camera's position along the path will depend on how this path is generated mechanically. For instance, if a circular path is generated by swinging the camera about a center point, then the position will probably be most conveniently transduced as an angular measurement. If the path is generated by moving the camera along a circular track, then the position will probably be transduced as a linear position. The method of transducing the position of the camera becomes an issue when considering how to describe an arbitrary motion of the camera, as I discuss below.

Two cameras can be used with the circular camera path just as in the case of a linear camera path. In fact, mode 2 measurements can use up to four cameras to make the measurement, with either linear or circular camera motion. Multiple cameras can be used with any camera path, and in fact, there is no need for all cameras to follow the same path. The fundamental requirements, as will be shown, are simply that the relative positions and orientations, as well as the distortions and effective focal lengths, of all cameras be known.

A system using another potentially useful camera motion path is shown in Figure 33. Here the camera is moved in a circular arc, as in Figure 32, but now the camera is oriented to view substantially perpendicular to the plane of the arc. In Figure 33 a tiny video camera is placed at the tip of a rigid borescope, similar to my third and fourth preferred embodiments. This borescope has an end section with the capability of being erected to a position perpendicular to the main borescope tube. When this erection is accomplished the camera looks substantially along the axis of the borescope. To make the perspective measurement, the borescope (or some distal portion of it) is rotated about its axis, thus swinging the camera in a circular path. In this case it is the rotation of the camera about the optical axis which is coupled to the translation of the camera. The camera position would be transduced by an angular measurement in this system.

An advantage of the system shown in Figure 33 is that it allows both large and small perspective baselines to be generated with an instrument that can be inserted through a small diameter inspection port. Of course, it still would require that there be considerable space in the vicinity of the objects to be inspected to allow for the large motions which can be generated. The instrument shown in Figure 33 could combine the circular motion just described with an internal linear motion, as in my fourth embodiment, to offer the capability of making measurements either to the side or in the forward direction.

K Operation of Embodiments Using Arbitrary Camera Motion

1 Description of Arbitrary Camera Motion

I must first explain how I describe an arbitrary camera motion, before I can explain how to make a measurement using it. To make accurate measurements, the motion of the camera must be accurately known, either by constructing the system very accurately to a specific, known geometry, or by a process of calibration of the motion. If calibration is to be used to determine the motion of the camera, then that motion must be repeatable to the required level of precision, and the method of calibration must have the requisite accuracy.

In general, the true position of the camera along its path is described by a scalar parameter p. This parameter could be a distance or an angle, or some other parameter which is convenient for describing the camera position in a particular case.

The position of the camera is monitored by a transducer which produces an output η(p). Here, η is an output scalar quantity which is related to the true position along the path, p, by a calibration curve p(η).

The geometrical path of the camera in space is expressed as a vector in some convenient coordinate system. That is, the physical position of the camera (more precisely, the position of the nodal point of the camera's optical system) in space is expressed as a vector, rc(p(η)) or rc(η), in a coordinate system that I call the external coordinate system or the global coordinate system. Likewise, the orientation of the camera in space is expressed as a rotation matrix, which describes the orientation of the camera's internal coordinate system with respect to the global coordinate system. Thus, the camera's orientation at any point along its path is expressed as Rc(p(η)) or Rc(η). The matrix Rc transforms any vector expressed in the global coordinate system into that vector expressed in the camera's internal coordinate system. The matrix Rc is the product of three individual rotation matrices, each of which represents the effect of rotation of the camera's coordinate system about a single axis:

Rc(η) = Rz(θz(η)) Ry(θy(η)) Rx(θx(η)) (31)

where θz, θy, and θx are the angles that the coordinate system has been rotated about the corresponding axes.
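The factored rotation of Equation (31) can be sketched directly in code. The elemental matrices below are the standard right-handed rotation forms; the function names are my own illustration, not anything from the specification.

```python
import numpy as np

def rot_x(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

def rot_y(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, 0.0, s], [0.0, 1.0, 0.0], [-s, 0.0, c]])

def rot_z(t):
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def camera_rotation(theta_z, theta_y, theta_x):
    """Rc = Rz(theta_z) Ry(theta_y) Rx(theta_x), as in Equation (31)."""
    return rot_z(theta_z) @ rot_y(theta_y) @ rot_x(theta_x)
```

Because each factor is orthogonal with unit determinant, so is the product; this is what later permits computing the inverses of these matrices simply as their transposes.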

Now, in general, the terms rc(η) and Rc(η) will not be independent quantities, but will be coupled together by the geometry and construction of the perspective measurement system. An example was shown in Figure 32, where the camera's orientation is coupled to its position, so that the optical axis always passes through the center of curvature of the camera's path.

If two or more cameras are used, then each one will have a location and orientation expressed similarly. I will assume that the same global coordinate system is used to describe the motion of all cameras. This must be the case; but if at some point in the process separate global coordinate systems are used, and if the relationships between these separate global coordinate systems are known, then it is possible to express all of the camera motions in a single global coordinate system in the manner shown below for expressing one coordinate system in terms of another.

To summarize the relationship between camera motion and the measurement: what is required is that the positions and orientations of the camera(s) be accurately known relative to an external (global) coordinate system. This coordinate system is fixed with respect to the instrument apparatus, but it has no inherent relationship to the object being measured. The position of the object in the global coordinate system becomes fixed only when the instrument is itself fixed at some convenient position with respect to the object. When this is done, the position of points on the object can be determined in the global coordinate system, or in some other closely related coordinate system which is also fixed with respect to the instrument apparatus.

2 The General Perspective Measurement Process

Above, I taught how to perform the perspective measurement when the camera undergoes a pure translation. Now consider the case where the camera undergoes a rotation as well as a translation between viewing positions P1 and P2.

Considering again Figure 12, an arbitrary vector r1 can be expressed as:

r2 = r1 − d (32)

where the vector r2 is drawn from the origin of the P2 coordinate system to the end of vector r1. Any or all of these vectors could be expressed in either coordinate system; I choose to define them as being expressed in P1 coordinates. Then r2 = r1 − d can be re-expressed in the P2 coordinate system by using the transformation that the coordinates of a point undergo when a coordinate system is rotated about its origin.

It is a fact that there is no single set of coordinates that can be defined that will uniquely represent the general rotation of an object in three dimensional space. That is, for any coordinate system one might choose, the order in which various sub-rotations are applied will affect the final orientation of the object. Thus, one must make a specific choice of elemental rotations, and the order in which they are applied, to define a general rotation.

The way that coordinate system rotations are most often defined is to begin with the two coordinate systems in alignment. Then one system is rotated about each of its coordinate axes in turn, to produce the final alignment of the rotated coordinate system. I define the procedure for rotating coordinate system P2, beginning with it aligned with P1, to obtain the rotated coordinate system P2 as the following:

1 Rotate P2 about x2 by an angle θx

2 Rotate P2 about y2 by an angle θy

3 Rotate P2 about z2 by an angle θz

This procedure means that the effect of this rotation of the P2 coordinate system with respect to the P1 coordinate system on the coordinates of a point in space can be expressed in the P2 system as the product of a rotation matrix with the vector from the origin to the point. That is:

v2 = R v1 (33)

or

v2 = Rz(θz) Ry(θy) Rx(θx) v1 (34)

where

R = Rz(θz) Ry(θy) Rx(θx) (35)
and where v1 is the vector between the origin of the P2 coordinate system and the point as expressed in the unrotated P1 coordinate system, and v2 is the same vector as expressed in the rotated P2 coordinate system.

The transformation to go the other way, that is, to change coordinates as measured in the rotated P2 system to coordinates measured in the P1 system, is:

v1 = R⁻¹ v2 (36)
v1 = Rx⁻¹(θx) Ry⁻¹(θy) Rz⁻¹(θz) v2
v1 = Rx(−θx) Ry(−θy) Rz(−θz) v2

Consider again the perspective measurement process. At viewing position P1, visual coordinate location measurements are made in a coordinate system which depends on the orientation of the camera at that position. When the nodal point of the camera is moved to position dv1, as measured in the P1 coordinate system, the camera will rotate slightly, so that the visual coordinate system at position P2 is not the same as it was at position P1. I refer to the rotation matrix, which transforms coordinates measured in the visual coordinate system at P1 to coordinates measured in the visual coordinate system at P2, as R12. Clearly this rotation matrix is a product of three simple rotations, as detailed in (34) and (35). But, I have shown how to express measurements made in the P2 coordinate system in terms of the P1 coordinate system:

rv1 = R12 rv2 (37)

so that Equations (12) can now be expressed as:

rv1 = zv1 av1 (38)
rv1 = zv2 R12 av2 + dv1

and these can be solved as above to get:

rv1 = ½ [ [ av1  R12 av2 ] [ av1 − R12 av2 ]u + I3 ] dv1 (39)

where, to repeat, dv1 is the translation of the camera nodal point between positions P1 and P2 as expressed in the P1 coordinate system.

In analogy with (18) and (19) I define a new coordinate system:

rm = rv1 − ½ dv1 (40)

Then:

rm = ½ [ av1  R12 av2 ] [ av1 − R12 av2 ]u dv1 (41)
Comparing (41) to (19) shows that any camera rotation between viewing positions P1 and P2 is easily taken into account in the perspective measurement, as long as that rotation is known.
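The point location of Equation (41) reduces to a few lines of linear algebra. I read the "]u" superscript as the Moore-Penrose pseudoinverse of the 3 × 2 matrix (i.e., a least squares solve); that reading, the function name, and the synthetic check below are my own illustration rather than the patent's code.

```python
import numpy as np

def locate_point(a_v1, a_v2, R12, d_v1):
    """Equation (41): rm = 0.5 [a_v1  R12 a_v2] [a_v1  -R12 a_v2]^u d_v1,
    taking ]^u to mean the Moore-Penrose pseudoinverse."""
    b = R12 @ a_v2
    pair = np.column_stack([a_v1, b])      # 3 x 2 matrix [a_v1  R12 a_v2]
    diff = np.column_stack([a_v1, -b])     # 3 x 2 matrix [a_v1  -R12 a_v2]
    return 0.5 * pair @ np.linalg.pinv(diff) @ d_v1

# Synthetic check: pure translation (R12 = I), point at r in the P1 frame.
r = np.array([0.3, 0.2, 5.0])
d = np.array([1.0, 0.0, 0.0])      # perspective displacement in P1 frame
a1 = r / r[2]                      # visual location vector at P1
rv2 = r - d
a2 = rv2 / rv2[2]                  # visual location vector at P2
rm = locate_point(a1, a2, np.eye(3), d)
```

With noise-free data the least squares solve is exact, and rm comes out at r − d/2, i.e., the point expressed in the measurement coordinate system whose origin sits midway between the two viewing positions.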

In measurement mode 1, the experimental data obtained are four camera image position coordinates (x'im1, y'im1, x'im2, y'im2) for each object point of interest, and the readings of the camera position transducer, η1 and η2, at the two viewing positions P1 and P2.

As explained for the first preferred embodiment, the data processing begins by correcting the measured image point location data for distortion as expressed by Equation (22). Next, the data are scaled by the inverse of the effective focal length of the combined optical-video system as expressed by Equation (23). Then, for each object point of interest, the visual location vectors av1 and av2 are formed as expressed in Equation (24). Next, the displacement vector between the two viewing positions is calculated in global coordinates as:

dg(η2,η1) = rc(η2) − rc(η1) (42)

As stated previously, I call this vector the perspective displacement.

The relative rotation of the camera between the two viewing positions is calculated as:

R12(η2,η1) = Rc(η2) Rc⁻¹(η1) (43)

Equation (43) simply says that the rotation of the camera between positions P1 and P2 is equivalent to the rotation of the camera in global coordinates at P2 minus the rotation it had at P1.

The perspective displacement is re-expressed in the camera internal coordinate system at P1 by taking into account the rotation of the camera at that point. That is:

dv1 = Rc(η1) dg(η2,η1) (44)

The location of the object point of interest in the measurement coordinate system is then computed as:

rm = ½ [ av1  R12 av2 ] [ av1 − R12 av2 ]u dv1 (45)

where the measurement coordinate system is parallel to the internal coordinate system of the camera at P1, with its origin located midway between P1 and P2.

Equation (45) expresses how to locate a point in the measurement coordinate system under completely general conditions, for any arbitrary motion of the camera, provided that the motion is known accurately in some global coordinate system. If the motion is repeatable, it can be calibrated, and thus known. Equation (45) is the fully three-dimensional least squares estimate for the location of the point.

To complete the mode 1 perspective dimensional measurement process, Equation (45) is used for both points A and B individually, then Equation (7) is used to calculate the distance between these points. It is important to note that if multiple image position coordinate measurements are made for the same object points under exactly the same conditions in an attempt to lower the random error, then one should average the individual point location measurements given by Equation (45) before calculating the distance between the points using Equation (7). This gives a statistically unbiased estimate of the distance. If one instead calculates a distance estimate with each set of measurements and then averages them, one obtains what is known as an asymptotically biased estimate of the distance.

If two cameras are used, one simply uses each individual camera's distortion parameters to correct the image measured with that camera as in Equation (22). Then, the scaling by the inverse focal length is carried out for each individual camera as expressed by Equation (23). Then, for each object point of interest, the visual location vectors av1 and av2 are formed as expressed in Equation (24), where now the data in av1 were determined with one of the cameras and the data in av2 were determined with the other. The remainder of the data processing is identical whether one or two cameras are used to make the measurement.
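The averaging rule just stated is easy to demonstrate with simulated noisy point locations; the noise level and geometry below are arbitrary illustrative choices of mine.

```python
import numpy as np

rng = np.random.default_rng(0)
A_true = np.array([0.0, 0.0, 100.0])
B_true = np.array([10.0, 0.0, 100.0])      # true distance = 10

# Repeated point-location results with zero-mean measurement noise.
A_meas = A_true + rng.normal(0.0, 2.0, size=(10000, 3))
B_meas = B_true + rng.normal(0.0, 2.0, size=(10000, 3))

# Unbiased: average the point locations first, then take one distance.
d_unbiased = np.linalg.norm(A_meas.mean(axis=0) - B_meas.mean(axis=0))

# Asymptotically biased: average the individual distance estimates.
d_biased = np.mean(np.linalg.norm(A_meas - B_meas, axis=1))
```

With zero-mean location noise, d_unbiased converges to the true distance, while d_biased settles above it: the norm is a convex function of the noisy difference vector, which is the asymptotic bias described above.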

The geometry of measurement mode 2 for an arbitrary camera motion is depicted in Figure 34. The experimental data obtained are four camera image position coordinates (x'im1, y'im1, x'im2, y'im2) and the two readings of the camera position transducer, η1 and η2, for each object point of interest. In this mode of measurement, the camera positions are not the same for each point of interest, so that there may be either three or four camera positions used for each distance to be determined.

Figure 34 depicts the situation when the distance between two points, A and B, is to be determined. It is clear that this measurement mode makes sense only for a certain class of camera motion, object distance, and object shape combinations. For instance, with the camera motion shown in Figure 32, and a more or less planar object located near the center of curvature, C, there is little or no ability to view different portions of the object by moving the camera, so that there is no reason to use measurement mode 2.

However, as shown in Figure 35, there are other situations when only the use of measurement mode 2 makes a measurement feasible. In Figure 35, the distance between two points on opposite sides of an object is desired. The object has a shape and an orientation such that both points cannot be viewed from any single camera position which is physically accessible. As shown in Figure 35, the combination of circular camera motion and measurement mode 2 allows this measurement to be made. This measurement could also be made with an arbitrary camera rotation embodiment of the system shown in Figure 31.

Consider now that the measurements depicted in Figure 34 or 35 are to be performed. Say that the camera position transducer readings are { η1A, η2A, η1B, η2B } at viewing positions { P1A, P2A, P1B, P2B } respectively. Then, the actual camera positions in the global coordinate system are { rc(η1A), rc(η2A), rc(η1B), rc(η2B) } respectively. Likewise, the orientations of the camera's coordinate system with respect to the global coordinate system are { Rc(η1A), Rc(η2A), Rc(η1B), Rc(η2B) }.

In measurement mode 2, the positions of the points A and B are each determined independently by the basic point location process expressed by Equation (45), to give rmA and rmB respectively. According to that process, rmA is determined in a measurement coordinate system parallel to the coordinate system of the camera at P1A, while rmB is determined in a coordinate system which is parallel to the camera coordinate system at P1B. The location vectors for points A and B are then re-expressed in the global coordinate system as:

rAG = Rc⁻¹(η1A) rmA (46)
rBG = Rc⁻¹(η1B) rmB

and the vector between the origin of the A measurement coordinate system and the origin of the B measurement coordinate system in global coordinates is calculated as (Figure 34):

dAB = ½ [ rc(η1A) + rc(η2A) ] − ½ [ rc(η1B) + rc(η2B) ] (47)

Finally, the distance between points A and B is calculated as:

r = |r| = |dAB + rAG − rBG| (48)
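Equations (46) through (48) chain together as below. This is my illustrative sketch (argument names mine), using the convention stated earlier that Rc maps global coordinates into the camera frame, so its transpose maps measurement coordinates back to global.

```python
import numpy as np

def mode2_distance(rmA, rmB, Rc_1A, Rc_1B, rc_1A, rc_2A, rc_1B, rc_2B):
    """Distance between points A and B per Equations (46)-(48)."""
    rAG = Rc_1A.T @ rmA                                   # Equation (46)
    rBG = Rc_1B.T @ rmB
    dAB = 0.5 * (rc_1A + rc_2A) - 0.5 * (rc_1B + rc_2B)   # Equation (47)
    return np.linalg.norm(dAB + rAG - rBG)                # Equation (48)

# Check: A at global (1, 0, 5) located from stations (0,0,0) and (2,0,0);
# B at global (11, 0, 5) located from stations (10,0,0) and (12,0,0).
I3 = np.eye(3)
dist = mode2_distance(np.array([0.0, 0.0, 5.0]), np.array([0.0, 0.0, 5.0]),
                      I3, I3,
                      np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0]),
                      np.array([10.0, 0.0, 0.0]), np.array([12.0, 0.0, 0.0]))
```

Each point is expressed in its own midpoint-origin measurement frame; Equation (47) supplies the offset between those two origins so the two location vectors can be differenced in a common frame.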

Once again, if two or more cameras are used, one need only correct the image locations for distortion and scale the image locations by the inverse focal length for each camera individually to perform the measurement, just as long as the positions and orientations of the cameras are all expressed in the same global coordinate system. And, just as before, if multiple identical measurements are made, one should calculate the average location vectors before calculating the distance with Equation (48), rather than averaging individual distance determinations.

3 Application of the General Process to Correction of Translation Stage Rotational Errors

I have shown four preferred embodiments of my apparatus, where in each case, a single camera will be moved along a substantially straight line. If the motion is a perfect translation, then in Equations (43) and (44), Rc(η2) is equal to Rc(η1), R12 is the 3 × 3 identity matrix, and the direction of the perspective displacement dg(η2,η1), hence dv1, remains the same for any (η2,η1). In this case Rc is identified with the product Rz Ry, which simply makes use of the fact that the orientation of a vector can always be expressed by at most two angles, since a vector is not changed by a rotation about itself. Finally, with a perfect straight line translation of the camera, Equation (45) reduces to Equation (27).

As an example of the use of the general measurement process to correct for errors of motion, consider the third (EMB) and fourth (EME) preferred embodiments. Assume that the translation stage has rotational errors, which have been characterized with a calibration process, which will be described later. As a result of this calibration, the translation stage rotational error R(η) is known in some calibration coordinate system. To simplify the calibration task, I specify that the calibration coordinate system be the same as the global coordinate system, and I explain later how to ensure this. I further specify that the global coordinate system has its x axis along the nominal translation direction, which is simply a matter of definition.

The errors of translation stages are not well specified by their manufacturers. For typical small ball bearing translation stages, a comparison of various manufacturers' specifications is consistent with expected rotational errors of approximately 300 microradians and transverse translational errors of about 3 microns. A moment of thought will convince the reader that with these levels of error, the rotational error will contribute more to the measurement error of the system than will the translation error for any object distance larger than 10 mm. Thus, for measurements of objects which are at distances greater than 10 mm, it is reasonable to correct for only the rotational error of the translation stage. I now show how to do this.

The image point location data are processed in the same manner as has already been described. The position of the camera nodal point can be expressed as:

rc(η) = [ p(η)  0  0 ]ᵀ = P(η) (49)

so that the perspective displacement in global coordinates is calculated as:

dg(η2,η1) = rc(η2) − rc(η1) = P(η2) − P(η1) (50)

For any position of the camera, the rotational orientation of the camera can be expressed as:

Rc(η) = Rcg R(η) (51)

where Rcg is the orientation of the camera with respect to the global coordinate system at some reference position where R(η) is defined to be the identity matrix. Both Rcg and the rotational error, R(η), are determined in the calibration process.

As the next step in measurement processing, then, the relative rotation of the camera between positions P1 and P2 is calculated as:

R12(η2,η1) = Rc(η2) Rc⁻¹(η1) = Rcg R(η2) R⁻¹(η1) Rcg⁻¹ (52)

Since the rotation matrices are orthogonal, their inverses are simply calculated as their transposes. The perspective displacement is then transformed to the camera coordinate system at P1 as:

dv1 = Rcg R(η1) [ P(η2) − P(η1) ] (53)

and finally the position of the point of interest is calculated by using results (52) and (53) in Equation (45), and distances between points of interest are calculated with Equation (7). Note that the process I have just outlined implicitly includes the possibility of correction of position transducer errors, given by the calibration curve p(η).
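Equations (49) through (53) can be sketched as a short pipeline. Here p, R_err, and Rcg stand for the calibration products described in the text (the calibrated position curve, the stage rotational error matrix, and the reference camera orientation); the function itself and its name are my illustration.

```python
import numpy as np

def corrected_measurement_inputs(eta1, eta2, p, R_err, Rcg):
    """Form the inputs to Equation (45) for a nominally straight stage with
    calibrated rotational errors, per Equations (49)-(53)."""
    P1 = np.array([p(eta1), 0.0, 0.0])   # Equation (49): x along travel
    P2 = np.array([p(eta2), 0.0, 0.0])
    # Equation (52); orthogonal matrices invert by transposition.
    R12 = Rcg @ R_err(eta2) @ R_err(eta1).T @ Rcg.T
    # Equation (53): perspective displacement in the camera frame at P1.
    d_v1 = Rcg @ R_err(eta1) @ (P2 - P1)
    return R12, d_v1
```

With an error-free stage (R_err always the identity) and the camera aligned with the global axes, this collapses to the pure-translation case: R12 is the identity and d_v1 points along x, recovering Equation (27) behavior.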

If transverse translation errors of the stage are to be corrected, then the calibration process must determine these errors, and the correction data must be incorporated into the general measurement formalism given in the previous section, in a similar manner to that shown here for rotational errors.

L Calibration

I find it convenient to divide the calibrations of my measurement system into three classes, which I call optical calibration, alignment calibration, and motion calibration.

In optical calibration, the optical properties of the camera, when it is considered simply as an image forming system, are determined. In alignment calibration, additional properties of the camera which affect the dimensional measurement are determined. Both of these classes of calibration must be accomplished in order to make a measurement with my technique. Optical calibration has been briefly considered in some of the prior art of endoscopic measurements, while alignment calibration is new.

Motion calibration is not necessarily required to make a measurement, but it may be required in order to make measurements to a specific level of accuracy. Whether this calibration is required or not is determined by the accuracy of the hardware which controls the motion of the camera.

1 Optical Calibration

There is a standard calibration technique known in the field of photogrammetry which is the preferred method of performing the optical calibration of the camera. The technique is discussed, for instance, in the following articles:

"Close-range camera calibration", Photogrammetnc Engineering, 37(8), 855-866, 1971

"Accurate linear technique for camera calibration considenng lens distortion by solving an eigenvalue problem", Optical Engineering, 32(1), 138-149, 1993 I will outline the equipment and procedure of this calibration here The equipment required is a field of calibration target points hich are suitable for viewing by the camera to be calibrated The relative positions of the target points must be known to an accuracy better than that to which d e camera is expected to provide measurements The number of calibration points should be at least twenty, and as many as several hundred points may be desired for very accurate work

The calibration target field is viewed with the camera and the image point locations of the target points are determined in the usual way, by aligning a video cursor with each point in turn and commanding the computer to store the measured image point location. The geometry of this process is depicted in Figure 36. It is important that the relative alignment of the camera and the calibration target field be such as to ensure that target points are located over a range of distances from the camera. If the target field is restricted to being at a single distance from the camera, the determination of the camera effective focal length will be less accurate than otherwise. Another requirement is that targets be distributed with reasonable uniformity over the whole camera field of view. There is no other requirement for alignment between the camera and the target points.

Assume that k target points have been located in the image plane of the camera. The measured coordinates of the jth image point are denoted as (x'imj, y'imj). The following (2 × k) matrix of the measured data is then formed:

rho' = [ x'im1  x'im2  …  x'imk ]
       [ y'im1  y'im2  …  y'imk ]   (54)

In Figure 36 the vector r_ok, which is the (unknown) position of the kth calibration object point in the camera coordinate system, can be written as

    r_ok = Rc(θz, θy, θx) [ r_ck - r_c ]     (55)

where r_ck is the known position of the kth calibration point in the calibration target field internal coordinate system (the calibration coordinate system), r_c is the unknown position of the camera's nodal point in the calibration coordinate system, and Rc is the unknown rotation of the camera's coordinate system with respect to the calibration coordinate system. As before, rotation matrix Rc is the product of three individual rotation matrices Rz(θz)Ry(θy)Rx(θx), each of which is a function of a single rotational angle about a single coordinate axis.

The kth ideal image position vector is defined as

    rho_imk = (i / z_ok) [ x_ok ]
                         [ y_ok ]     (56)

where i is the equivalent focal length of the camera and (x_ok, y_ok, z_ok) are the components of r_ok. The (2 x k) ideal image position matrix is defined as

    rho_im = [ rho_im1  rho_im2  ...  rho_imk ]     (57)

Similarly, the image point coordinate error for the kth point is defined as

    rho_Dk = [ f_Dx(rho'_k) ]
             [ f_Dy(rho'_k) ]     (58)

and the (2 x k) image coordinate error matrix is

    rho_D = [ rho_D1  rho_D2  ...  rho_Dk ]     (59)

The error functions f_Dx and f_Dy define the image location errors which are to be considered in the camera calibration. A number of different error functions are used in the art. The following fairly general expression is given in the article "Multicamera vision-based approach to flexible feature measurement for inspection and reverse engineering", Optical Engineering, 32(9), 2201-2215, 1993:

    f_Dx(rho') = x_0 + a_1 x' + a_2 x' |rho'|^2 + a_3 x' |rho'|^4 + a_4 (|rho'|^2 + 2 x'^2) + 2 a_5 x' y'
    f_Dy(rho') = y_0 + a_6 y' + a_2 y' |rho'|^2 + a_3 y' |rho'|^4 + a_5 (|rho'|^2 + 2 y'^2) + 2 a_4 x' y'     (60)

where, of course, |rho'_k|^2 = x'_imk^2 + y'_imk^2.

The following equation expresses the relationship between the measured image point positions and the ideal image point positions:

    rho' = rho_im + rho_D     (61)

Using the quantities defined above, Equation (61) represents 2k equations in 15 unknowns. The unknowns are the three angles of the camera rotation θz, θy, θx; the three components of the camera location xc, yc, zc; the equivalent focal length, i; and the eight parameters of image position error x_0, y_0, a_1, ..., a_6. In order to obtain a solution, one must have k ≥ 8. As I have stated above, one wants to use many more points than this to obtain the most accurate results. As I previously stated, I call all eight of the image position error parameters "distortion", but only some of them relate to the optical field aberration which is usually referred to as distortion. The parameters x_0 and y_0 represent the difference between the center of the image measurement coordinate system and the position of the optical axis of the camera. Parameters a_1 and a_6 represent different scale factors in the x and y directions. Parameters a_2 and a_3 represent the standard axially symmetric optical Seidel aberration called distortion. Parameters a_4 and a_5 represent possible non-symmetrical distortion due to tilt of the camera focal plane and decentration of the elements in lenses.
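As a concrete illustration, the eight-parameter error model of Equation (60), in the form reconstructed here, can be sketched in code. The function name and argument order are illustrative only, not part of the original disclosure:

```python
def image_error(xp, yp, x0, y0, a1, a2, a3, a4, a5, a6):
    """Eight-parameter image position error model (cf. Equation (60)):
    (x0, y0): offset of the optical axis from the image coordinate center
    (a1, a6): separate scale factors in x and y
    (a2, a3): axially symmetric (Seidel) distortion
    (a4, a5): non-symmetric distortion from focal-plane tilt / decentration
    """
    r2 = xp * xp + yp * yp  # |rho'|^2
    f_dx = (x0 + a1 * xp + a2 * xp * r2 + a3 * xp * r2 * r2
            + a4 * (r2 + 2 * xp * xp) + 2 * a5 * xp * yp)
    f_dy = (y0 + a6 * yp + a2 * yp * r2 + a3 * yp * r2 * r2
            + a5 * (r2 + 2 * yp * yp) + 2 * a4 * xp * yp)
    return f_dx, f_dy
```

During optimization these ten quantities are evaluated for every measured point; only x0, y0 and a1 through a6 are adjusted.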

The overdetermined set of Equations (61) has no exact solution. Consequently, the calibration data processing determines best fit values for each of the 15 parameters by minimizing the length of an error vector. This is done by an iterative numerical process called non-linear least squares optimization. Specific algorithms to implement non-linear least squares optimization are well known in the art. They are discussed, for instance, in the book Numerical Recipes by William H. Press, et al., published by Cambridge University Press, 1st Ed. 1986. This book provides not only the theory behind the numerical techniques but also working source code in Fortran that is suitable for use in an application program. A second edition of this book is available with working source code in C. Another book which is helpful is R. Fletcher, Practical Methods of Optimization, Vol. 1 - Unconstrained Optimization, John Wiley and Sons, 1980.

A second option for implementation of the non-linear least squares optimization is to use one of the available "canned" numerical software packages such as that from Numerical Algorithms Group, Inc. of Downers Grove, Illinois, USA. Such a package can be licensed and incorporated into application programs, such as the program which controls computer 228. A third option is to use one of the proprietary high level mathematical analysis languages such as MATLAB®, from The MathWorks, Inc. of Natick, Massachusetts, USA. These languages have high level operations which implement powerful optimization routines, and also have available compilers, which can produce portable C language code from the very high level source code. This portable C code can then be recompiled for the target system, computer 228.

The optimization process begins with approximate values for the fifteen unknowns and adjusts these iteratively to minimize the quantity

    Q = |rho' - rho_im - rho_D|^2     (62)

The ideal value of Q is zero; the optimization process attempts to find the smallest possible value.
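As an illustration of the iterative process, the following is a minimal Gauss-Newton sketch with a finite-difference Jacobian; a production implementation would use one of the library routines discussed above, and the toy exponential fit at the end is purely illustrative:

```python
import numpy as np

def gauss_newton(residuals, p0, tol=1e-10, max_iter=100):
    """Minimize Q = |residuals(p)|^2 by Gauss-Newton iteration with a
    finite-difference Jacobian (a sketch of the non-linear least squares
    step only)."""
    p = np.asarray(p0, dtype=float)
    for _ in range(max_iter):
        r = residuals(p)
        J = np.empty((r.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-7 * max(1.0, abs(p[j]))
            J[:, j] = (residuals(p + dp) - r) / dp[j]
        # Solve the linearized least squares problem for the update step.
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        p = p + step
        if np.linalg.norm(step) < tol:
            break
    return p

# Toy usage: fit (a, b) in y = a * exp(b * t) to noiseless samples.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
p = gauss_newton(lambda p: p[0] * np.exp(p[1] * t) - y, [1.0, 0.0])
```

In the calibration problem the residual vector is the stacked set of 2k image coordinate errors and p holds the fifteen unknowns.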

The starting values for the unknowns are not critical in general, but the iterative process will converge faster if the starting values are not far away from the true values. Thus, it makes sense to perform the optical calibration with a specific alignment of the camera with respect to the calibration target array, so that the camera alignment parameters are approximately already known. It is also a good idea to use any available information about the approximate camera focal length and the distortion parameters in the starting values. For the borescopes I have tried, I have found that terms a_4 and a_5 in Equation (60) are not necessary; in fact, use of them seems to cause slow convergence in the optimization.

The first six calibration parameters, { θz, θy, θx, xc, yc, zc }, refer to the position and alignment of the camera as a whole. The other nine parameters are subsequently used to correct measured image point position data to ideal image point position data by

    rho_im = rho' - rho_D = [ x' - f_Dx(rho') ]
                            [ y' - f_Dy(rho') ]     (63)

which is another way of expressing Equations (22). After the image point positions are corrected, the visual location vector used in the measurement Equations (27) and (45) is defined as

    a_v = [ rho_im / i ]
          [     1      ]     (64)

which is a more compact way of expressing Equations (23) and (24).

2. Alignment Calibration

In the perspective measurement process the object of interest is viewed from two points in space, which are called P1 and P2. Recall that the vector connecting the camera nodal point at viewing position P1 to the camera nodal point at viewing position P2 is defined as the perspective displacement d. The essence of alignment calibration is to determine the orientation of the perspective displacement, d, with respect to the camera's internal coordinate system. Once d is known in the camera coordinate system, then the position of object points can be calculated using either Equation (27) or Equation (45), as appropriate.
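Every measurement, and every alignment calibration below, begins by converting a measured image point into a visual location vector. A minimal sketch, assuming the a_v = r_o / z_o form of the visual location vector used in the reconstruction of Equations (64) and (72), with illustrative names throughout:

```python
def visual_location(x_meas, y_meas, i_focal, distortion):
    """Correct a measured image point for distortion (cf. Equation (63))
    and form the visual location vector (cf. Equation (64), assumed here
    to be a_v = r_o / z_o).  `distortion` is a callable returning
    (f_Dx, f_Dy) for the point, e.g. the model of Equation (60)."""
    f_dx, f_dy = distortion(x_meas, y_meas)
    x_im, y_im = x_meas - f_dx, y_meas - f_dy
    return (x_im / i_focal, y_im / i_focal, 1.0)

# With zero distortion, an image point (2, 1) at equivalent focal
# length i = 2 maps to the direction (1.0, 0.5, 1.0).
a_v = visual_location(2.0, 1.0, 2.0, lambda x, y: (0.0, 0.0))
```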

Since the camera's position and orientation are estimated during the optical calibration procedure given in the previous sub-section, these data can be used to determine the alignment of d in the camera's coordinate system if that calibration is done twice, from two different viewing positions. In fact, these are exactly the calibration data that are needed to implement the perspective measurement for a general motion of the camera which was outlined in Equations (42) through (45). All one need do is to carry out the optical calibration procedure at the two measurement camera positions with a fixed calibration target array. This is, in fact, what is done in the photogrammetry field, and is what can be done with a general motion embodiment of my system.

In my preferred embodiments, there is considerable information available about the motion of the camera that would not be used if one were to proceed as I have just suggested. For instance, if the translation stage is accurate, then the orientation of the camera does not change between P1 and P2; in fact, it doesn't change for any position of the camera along its path. For those embodiments where the geometry of the camera path is accurately known, such as the preferred embodiments, one can determine the alignment of d in the camera's coordinate system at one point on the path and thereby know it for any point on the path.

In addition, the perspective baseline |d| may be especially accurately known, depending on the performance of the position transducer, and how accurately it is aligned with the motion of the translation stage. As a third possibility, it is possible to accurately measure rotational errors in the translation stage, as long as the motion of the stage is repeatable. All of this information can be taken into account in order to determine a better estimate of the orientation of d in the camera coordinate system, and thus, to achieve a more accurate measurement.

As a first example of alignment calibration, consider that two optical calibrations have been done at two positions along the camera path, as discussed above. The calibration data available for the camera position and orientation are then r_c(η2), r_c(η1), Rc(η2), and Rc(η1). Also consider that it is known that the camera moves along an accurate straight line.

The camera orientation in the calibration coordinate system is then estimated as

    Rc = (1/2) [ Rc(η2) + Rc(η1) ]     (65)

Note that the difference between Rc(η2) and Rc(η1) gives an indication of the precision of this calibration. The perspective displacement in calibration coordinates is estimated as

    d_g(η2, η1) = r_c(η2) - r_c(η1)     (66)

and in camera coordinates it is estimated as

    d_v = Rc d_g(η2, η1)     (67)

Because there is no rotation of the camera, it is known that the orientation of this vector does not change with camera position.
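This first alignment calibration, in the form of Equations (65) through (67) as reconstructed here, reduces to a few lines of linear algebra. A sketch, assuming the calibration rotations are supplied as 3x3 matrices:

```python
import numpy as np

def displacement_in_camera_coords(rc1, rc2, Rc1, Rc2):
    """Estimate the camera rotation as the average of the two optical
    calibrations (cf. Equation (65); meaningful only when Rc1 and Rc2
    nearly agree), form the perspective displacement in calibration
    coordinates (Equation (66)), and rotate it into camera coordinates
    (cf. Equation (67))."""
    Rc = 0.5 * (np.asarray(Rc1, float) + np.asarray(Rc2, float))
    d_g = np.asarray(rc2, float) - np.asarray(rc1, float)
    return Rc @ d_g
```

The difference between Rc1 and Rc2, which this sketch averages away, is exactly the precision indicator mentioned above.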

In Equations (25) and (26) the measurement process was defined in terms of rotation matrices such that

    d_v1 = Rz(θz) Ry(θy) [ d  0  0 ]^T     (68)

where d = |d_v1|. Writing the measured components of d from Equation (67) as (d_vx, d_vy, d_vz), one writes the following equation, using definitions (35):

    [ d_vx ]
    [ d_vy ] = Rz(θz) Ry(θy) [ d  0  0 ]^T     (69)
    [ d_vz ]
Equation (69) can be solved for the rotation angles as

    θy = -arcsin( d_vz / d ),    θz = arctan( d_vy / d_vx )     (70)

Thus, the final step of this alignment calibration process is to determine the two angles θy and θz with Equation (70). During the measurement process, these angles are used in Equation (26).
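Under the convention assumed in the reconstruction of Equations (68) through (70) here (baseline along the rotated x axis, standard right-handed rotation matrices), the angle recovery can be sketched and checked by a round trip:

```python
import math

def alignment_angles(d_vx, d_vy, d_vz):
    """Solve Equation (69) for the alignment angles (cf. Equation (70)),
    assuming d_v = d Rz(theta_z) Ry(theta_y) x_hat, so that
    d_vx = d cos(ty) cos(tz), d_vy = d cos(ty) sin(tz), d_vz = -d sin(ty)."""
    d = math.sqrt(d_vx ** 2 + d_vy ** 2 + d_vz ** 2)
    theta_y = -math.asin(d_vz / d)
    theta_z = math.atan2(d_vy, d_vx)
    return theta_y, theta_z

# Round trip: build d_v from known angles, then recover them.
ty, tz, d = 0.2, 0.3, 2.0
d_v = (d * math.cos(ty) * math.cos(tz),
       d * math.cos(ty) * math.sin(tz),
       -d * math.sin(ty))
```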

As a second example of alignment calibration, consider that an optical calibration has been previously done, and now it is desired to do an alignment calibration. This would be the normal situation with the first and second preferred embodiments, since the alignment of the camera with respect to its translation may change whenever the borescope is clamped to the BPA. Consider also that the motion of the camera is known to be constrained to be along an accurate straight line (that is, any errors in the motion are known to be smaller than the corresponding level of error required of the measurement).

Once again, a calibration target array is viewed from two positions of the camera along its path of motion.

According to Figure 36 and Equation (55), one can write

    r_ok1 = Rc [ r_ck - r_c(η1) ]
    r_ok2 = Rc [ r_ck - r_c(η2) ]     (71)

The visual location vectors, which are calculated from the distortion corrected image position data according to Equation (64), can also be written as

    a_vk1 = r_ok1 / z_k1
    a_vk2 = r_ok2 / z_k2     (72)

in terms of the object point coordinates. One corrects the distortion of the measured data using Equation (63) with the distortion parameters obtained in the previous optical calibration.

Define the following quantities, where it is assumed that k calibration points are used:

    A_v1 = [ a_v11  a_v21  ...  a_vk1 ]
    A_v2 = [ a_v12  a_v22  ...  a_vk2 ]     (73)
    R_ca = [ r_c1  r_c2  ...  r_ck ]
    1_k  = [ 1  1  ...  1 ]   (k components)
    U1 = diag( u_11, u_21, ..., u_k1 )
    U2 = diag( u_12, u_22, ..., u_k2 )

where u_kj = 1 / z_kj. Then Equations (71) can be written as

    A_v1 = Rc [ R_ca - r_c(η1) 1_k ] U1
    A_v2 = Rc [ R_ca - r_c(η2) 1_k ] U2     (74)

and Equations (74) can be combined into:

    [ A_v1  A_v2 ] = Rc [ [ R_ca  R_ca ] - [ r_c(η1) 1_k  r_c(η2) 1_k ] ] U12     (75)

where U12 is

    U12 = [ U1  0  ]
          [ 0   U2 ]     (76)

Equation (75) represents 6k equations in 2k + 9 unknowns; thus, it can be solved for k ≥ 3. The unknowns are U12 (2k unknowns) and Rc, r_c(η1), and r_c(η2), which each contain 3 unknowns. Equation (75) makes full use of the fact that Rc is constant, and is thus a more efficient way of estimating the nine unknowns of interest for the alignment calibration than was the previous case.

Equations (75) are solved by a similar nonlinear least squares process as was used for Equation (61). Once the camera positions and orientation are estimated, one simply uses Equations (68) through (70) to determine the alignment angles, which are used during the measurement process.

To improve the efficiency even further, in the case where d = |d| is considered accurately known, Equations (75) can be solved by a constrained least squares optimization rather than the unconstrained optimization I have so far discussed. Such numerical procedures are discussed in R. Fletcher, Practical Methods of Optimization, Vol. 2 - Constrained Optimization, John Wiley and Sons, 1980. Most, if not all, of the "canned" numerical software offers routines for constrained optimization as well as unconstrained optimization, and so do the high level mathematical analysis languages.

In this case, the constraint is

    |r_c(η2) - r_c(η1)| - d = 0     (77)

It is possible to use an inequality constraint as well, so that if it is known that there is a certain level of uncertainty in the determination of d, then Equation (77) could be replaced with

    | |r_c(η2) - r_c(η1)| - d | ≤ ε     (78)

where ε is the known level of uncertainty in d.
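One simple way to impose the baseline constraint without a true constrained solver is a quadratic penalty added to the objective Q of Equation (62); this is a sketch of that alternative, not the constrained routines cited above:

```python
def penalized_objective(Q, rc1, rc2, d_known, weight=1.0e6):
    """Add a quadratic penalty enforcing Equation (77),
    |rc(eta2) - rc(eta1)| - d = 0, to the unconstrained objective Q.
    A large weight drives the optimizer toward the constraint surface."""
    baseline = sum((b - a) ** 2 for a, b in zip(rc1, rc2)) ** 0.5
    gap = baseline - d_known
    return Q + weight * gap * gap
```

For the inequality form of Equation (78), the penalty would be applied only when |gap| exceeds the uncertainty ε.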

As a third example of alignment calibration, I now consider the case where there are rotational errors in the motion of the camera in my third and fourth preferred embodiments. I have already explained how to make the measurement in this case, in the sub-section entitled "Application of the General Process to the Correction of Translation Stage Rotational Errors". In motion calibration sub-section 3 below, I will explain how to determine the rotational errors. Here, I explain how to take these errors into account during the alignment calibration.

For alignment calibration of the EMB or EME with a known stage rotational error it is necessary to determine the static alignment of the camera with respect to the translation direction in the presence of this error. Recall that at any point along the camera's path

    Rc(η) = Rcg R(η)

where now R(η) is known from the motion calibration process.

Once again, a calibration target array is viewed from two positions of the camera along its path of motion. According to Figure 36 and Equation (55), one can write

    r_ok1 = Rcg R(η1) [ r_ck - r_c(η1) ]
    r_ok2 = Rcg R(η2) [ r_ck - r_c(η2) ]     (79)

These are extended just as Equations (71) were, to obtain

    A_v1 = Rcg R(η1) [ R_ca - r_c(η1) 1_k ] U1
    A_v2 = Rcg R(η2) [ R_ca - r_c(η2) 1_k ] U2     (80)

and Equations (80) can be combined into

    [ A_v1  A_v2 ] = Rcg [ [ R(η1) R_ca  R(η2) R_ca ] - [ R(η1) r_c(η1) 1_k  R(η2) r_c(η2) 1_k ] ] U12     (81)

which is the same optimization problem as was Equation (75). This is handled exactly the same way to estimate Rcg, r_c(η1), and r_c(η2). With Rcg, the rotation of the camera at any point in its path is known as Rc(η) from Equation (51). I have assumed that the rotation of the stage does not affect the offset of the stage, so that the measurement in this case is accomplished with Equations (49) through (53), Equation (45), and finally Equation (7).

3. Motion Calibration

For the third alignment calibration case above, the rotational errors of the translation stage must have been previously determined in a motion calibration procedure. Preferably, this motion calibration is done at the factory, for a subassembly of the EMB or EME. These calibration data are then incorporated into the software of the complete measurement scope that is constructed using the particular subassembly in question.

The small rotation errors of a linear translation stage can be conveniently measured using a pair of electronic tooling autocollimators as depicted in Figure 37. Each of these autocollimators is internally aligned so that its optical axis is accurately parallel to the mechanical axis of its precision ground cylindrical housing. Such instruments are available from, for example, Davidson Optronics of West Covina, California, USA or Micro-Radian Instruments of San Marcos, California, USA.

In Figure 37, two collimator V-blocks 602 are mounted to a flat stage calibration baseplate 600. The two precision machined V-blocks 602 are located with precision pins so that their axes are accurately perpendicular, to normal machining tolerances. The two V-blocks 602 thus define the directions of a Cartesian coordinate system, which is defined as indicated on Figure 37. An EMB Subassembly V-block 606 is also mounted to baseplate 600 and located with pins, so that its axis is accurately parallel to the x axis defined by V-blocks 602. Also installed on baseplate 600 is actuator mounting block 608.

The autocollimators 604 are installed into their respective V-blocks and are both rotated about their respective axes so that their measurement y axes are oriented accurately perpendicular to the mounting plate. With the autocollimators installed and aligned, EMB translation stage subassembly 550 is placed into V-block 606. An enlarged view of a portion of this subassembly is shown in Figure 38. Subassembly 550 consists of distal baseplate 514 (see Figures 21-23) to which is mounted translation stage 180, and transducer mounting bracket 367. Translation stage 180 is composed of fixed base 182 and moving table 184. Transducer 360 is mounted in bracket 367, and its operating rod 361 is mounted to transducer attachment bracket 369. Bracket 369 is in turn mounted to moving table 184.

The procedure given here assumes that translation stage 180 has been mounted to distal baseplate 514 so that the axis of translation is oriented parallel to the cylindrical axis of the distal baseplate. This alignment need only be accurate to normal machining tolerances, as I will discuss later. If the specific design of the hardware is different than I show for the preferred embodiment, it is necessary to use some other appropriate method of ensuring that the axis of translation of stage 180 is oriented parallel to the x axis defined by the calibration hardware, and that this orientation is accurate to normal machining tolerances. Subassembly 550 is rotated about its axis to make the top surface of distal baseplate 514 nominally parallel to baseplate 600 and then it is clamped into position. For purposes of clarity, the clamp is not shown in Figure 37. Stage operating arm 614 is then attached to moving table 184. Actuator 610 is installed in mounting block 608, and actuator operating rod 612 is attached to operating arm 614. Thus, the stage can now be moved back and forth over its range of travel and a function of its position, η(p), can be read at the output of position transducer 360.

Stage 180 is moved to the mid range of its travel by the use of actuator 610. Mirror platform 618 is then attached to moving table 184. Mirror platform 618 has mounted to it two mirror mounts 620, which in turn hold a longitudinal mirror 622 and a transverse mirror 624.

Mirror mounts 620 are then adjusted to tilt each of the mirrors in two angles so as to center the return beams in autocollimators 604, as determined by the angular readouts of the autocollimators (not shown).

Translation stage 180 is then moved to one end of its travel using actuator 610. Calibration data are then recorded by moving stage 180 toward the other end of its travel range in a series of suitably small steps in distance. The output of position transducer 360, η, is recorded at each step position, as are the angular readings of the autocollimators. Note that one need not be concerned with the actual distance stage 180 is moved between steps, unless one is also intending to calibrate transducer 360 at the same time.

The readings from the autocollimator viewing along the x axis will be (2 θy, 2 θz), where the positive direction for the angles is counter-clockwise when the view is along the axis from positive coordinates toward the origin (i.e., the right hand rule). The readings from the autocollimator viewing along the z axis will be (2 θy, -2 θx). The rotational error of the stage at any point can be expressed as

    R(η) = Rz(θz) Ry(θy) Rx(θx)     (82)
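A sketch of evaluating R(η) from Equation (82), including the linear interpolation between stored calibration angles described below (the storage format and interpolation scheme are illustrative):

```python
import math

def rotation_at(eta, etas, th_z, th_y, th_x):
    """Linearly interpolate the stored stage rotation-error angles at eta
    and form R(eta) = Rz(theta_z) Ry(theta_y) Rx(theta_x) (Equation (82)).
    `etas` holds the transducer readings at the calibration steps, sorted."""
    def interp(ys):
        for i in range(len(etas) - 1):
            if etas[i] <= eta <= etas[i + 1]:
                t = (eta - etas[i]) / (etas[i + 1] - etas[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
        raise ValueError("eta outside the calibrated range")

    az, ay, ax = interp(th_z), interp(th_y), interp(th_x)
    cz, sz = math.cos(az), math.sin(az)
    cy, sy = math.cos(ay), math.sin(ay)
    cx, sx = math.cos(ax), math.sin(ax)
    Rz = [[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]]
    Ry = [[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]]
    Rx = [[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]]

    def matmul(A, B):
        # 3x3 matrix product.
        return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
                for i in range(3)]

    return matmul(matmul(Rz, Ry), Rx)
```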

It is more efficient to record and store the three angles θz(η), θy(η), θx(η) and calculate R(η) whenever it is needed. When the calibration data are used in a measurement procedure, it will be necessary to interpolate between the stored values of η to estimate the rotation angles at the actual values of η used in the particular measurement. Such interpolation procedures are well known in the art.

An error analysis shows that the angles measured during this calibration process will be a mixture of the components of the true rotational angles, if the calibration geometry is not perfectly aligned with the translation direction of the stage. However, the level of the mixed components is proportional to the error in the geometry, and thus will be small. For instance, if the angles determining the calibration geometry were all in error by three degrees (which is much larger than one would expect, using normal machining tolerances), the measured stage rotation angles would be 10% in error in the worst case. Since it is unlikely that the repeatability of a translation stage will be much more than ten times better than its absolute angular accuracy, this level of error is appropriate for calibration of the stage angular errors. Thus, use of a calibration geometry which is determined by precision machining is adequate to perform the calibration measurements.

M. Eliminating Routine Alignment Calibrations in BPA Embodiments

There is an inconvenience with the first and second embodiments as taught above, which is that a new alignment calibration might have to be performed each time a new measurement situation is set up. In alignment calibration, the orientation of the borescope's measurement coordinate system with respect to the motion provided by the BPA is determined. With a standard borescope, this orientation may not be well controlled, and thus every time the borescope is repositioned with respect to the BPA, there is the logical requirement for a new alignment calibration. Of course, whether a new calibration would actually be required in any specific instance depends on the accuracy required of the dimensional measurement in that instance. And of course, whether or not avoiding the inconvenience of the requirement for routine alignment calibrations is worth the additional structure and procedure described here will be determined solely by the user's application. I describe here modifications to the borescope, to the BPA, and to the calibration and measurement procedures which work together to eliminate the need for routine alignment calibrations in borescope/BPA embodiments of my perspective measurement system. The user of my system may select from one of the subsequently described combinations of modifications as required to improve the accuracy of the measurements made and/or to make the system more convenient to use.

1. Detailed Explanation of the Problem

A first difficulty with the first and second embodiments of my system is depicted in Figure 39. Here, the lens tube of the borescope is not perfectly straight. Thus, when the borescope is clamped to the BPA at different points along its length, the geometrical relationship between the perspective displacement d and the visual coordinate system changes. This means that, for accurate work, an alignment calibration must be performed whenever the borescope is clamped at different positions along its length.

A second difficulty is depicted in Figures 40A and 40B. Coordinate axes parallel to the visual coordinate system are drawn in Figure 40 to make it easier to visualize the geometrical relationships. In these Figures the borescope is shown aligned along a mechanical axis (A - A). The Figure is drawn in the plane which contains the mechanical axis and which is also parallel to the perspective displacement d. In Figure 40B the borescope has been rotated by 180 degrees about the mechanical axis with respect to its position in Figure 40A. In Figure 40A, the component of the visual x axis that is perpendicular to the page is directed into the page. In Figure 40B, the component of the visual x axis that is perpendicular to the page is directed out of the page.

The orientation of d with respect to the visual coordinate system is not the same in Figures 40A and 40B. (This may be most clear when considering the visual z axis.) Thus, when the axis of mechanical rotation of the borescope is not parallel to the perspective displacement, the orientation of the perspective displacement in the visual coordinate system will change when the borescope is rotated about that mechanical axis. For the system shown in Figures 4 and 5, the mechanical axis of rotation is determined by the V groove of lower V block 142 of the BPA. This means that an alignment calibration must be performed whenever the borescope is clamped at a new angular orientation with respect to the BPA, unless the V groove is accurately aligned along the translation axis of the translation stage.

A third difficulty is caused by the characteristics of the lens tube of a standard borescope. The envelope of the lens tube is typically made of thin wall stainless steel tubing. Such an envelope is unlikely to be perfectly circular at any position along its length, and it has already been discussed how unlikely it is to be straight. Rotation of such a geometrically imperfect envelope in a V groove will lead to a varying orientation of d with respect to visual coordinates even if the V groove were aligned with d and the clamping position along the length of the tube were unchanged. Once again, the situation is that if the borescope is moved with respect to the BPA, then alignment calibration must be repeated, at least for accurate work.

One approach to addressing these problems would be to characterize the alignment of the perspective displacement with respect to the visual coordinate system as a function of the position and orientation of the borescope with respect to the BPA. While this would work in theory, the amount of calibration effort necessary and the likelihood of poor repeatability of borescope orientation, due to the characteristics of the lens tube envelope, make this approach unattractive.

2. Description of a First Variant of Borescope/BPA Embodiments

Figure 41 shows a first modification to my BPA embodiments which solves these problems. In Figure 41, clamp 140 is shown in the open position in order to better show the modifications.

A portion of borescope lens tube 124 has been enclosed by a metrology sleeve or calibration sleeve 650. Calibration sleeve 650 is comprised of a thick-walled cylindrical tube 652 with sleeve ferrules 654 attached at either end. Sleeve nuts 656 screw on to ferrules 654 to clamp the assembly to lens tube 124 at any selected position along lens tube 124.

The outer diameter of cylindrical tube 652 is fabricated to be accurately circular and straight. This is typically done by a process known as centerless grinding. Tube 652 is preferably made of a rather hard material, for instance high carbon steel coated with hard chrome, or case-hardened stainless steel. On the other hand, upper V block 144 is preferably made of a somewhat softer material, for instance, low carbon steel, aluminum, or brass. Because of these relative hardnesses, and because of the thick wall of tube 652, it is no longer necessary to use a layer of resilient material to line upper V block 144, and thus it is not shown in Figure 41. This also means that a much higher clamping pressure can be used in this system than could be used in the original system of Figure 4. Calibration sleeve 650 lies in the V groove in lower V block 142. The dimensions of the V grooves in both lower V block 142 and upper V block 144 have been modified from those shown in previous figures in order to clamp the larger diameter of tube 652. In order for the groove in lower V block 142 to act as the position reference for sleeve 650, and hence, ultimately, for video borescope 120, hinge 148 is now fabricated with an appropriate amount of play, so that the groove in upper V block 144 takes a position and orientation which is determined by sleeve 650 when clamp 140 is closed. The groove in lower V block 142 is accurately aligned to the translation axis of stage 180 to a predetermined tolerance using one of the methods to be described later.

An alternative embodiment of a calibration sleeve is shown in Figure 42. There a strain-relieving calibration sleeve 660 is shown attached to video borescope 120. At the distal end, sleeve 660 is attached to borescope lens tube 124 with the same ferrule (654) and nut (656) system that was shown in Figure 41. At the proximal end, sleeve 660 is attached to the body of the endoscope through a torque transferring clamping collar 658. In the embodiment that was shown in Figure 41, the overhanging torque due to the proximal (rear) portion of borescope 120 is concentrated on the small diameter lens tube 124 at the point at which lens tube 124 exits ferrule 654. Video endoscope systems vary in the size and weight of their proximal portions, and it is probable that in some cases, the overhanging torque will exceed the capacity of lens tube 124 to resist bending. In this alternative embodiment, collar 658 transfers this torque to a more robust portion of the endoscope. As shown in Figure 42, with the generic video borescope 120, collar 658 is securely clamped to illumination adapter 126; this clamping can be done with any of several common and well-known techniques. Collar 658 is constructed so as to provide the necessary operating clearance for fiber optic connector 128. Depending on the design of the borescope being used, it may be that some other portion of the borescope will be the most suitable attachment point for collar 658.

3. Operation of the First Variant of Borescope/BPA Embodiments

Consider Figure 41 once again. In use, calibration sleeve 650 is semi-permanently attached to borescope 120. When nuts 656 are tightened, sleeve ferrules 654 grab tightly without marring or denting the surface of lens tube 124, fixing the relative locations of lens tube 124 and the outer cylindrical surface of sleeve 650. Since the visual coordinate system is fixed with respect to the outer envelope of the borescope, the outer surface of sleeve 650 is fixed with respect to the visual coordinate system. I call the assembly of borescope 120 and calibration sleeve 650 the perspective measurement assembly.

The perspective measurement assembly can be located at any position inside clamp 140, and can be clamped in that position, as long as a significant length of sleeve 650 is contained within the clamp. The action of placing sleeve 650 in the V groove in lower V block 142 constrains four degrees of freedom of the motion of sleeve 650. The two unconstrained motions are rotation about the axis of the sleeve, and translation along that axis. Translation is, of course, limited to a range of distances over which a significant length of the sleeve will be contained inside the clamp. Since the borescope is clamped inside the sleeve, its motion is similarly constrained and controlled, as is the motion of the visual coordinate system. These two degrees of freedom are precisely those necessary to allow borescope 120 to view objects at different positions with respect to BPA 138 (Figure 4).

Since the groove in lower V block 142 is accurately aligned with the translation axis of stage 180, and since the outer surface of sleeve 650 is accurately cylindrical, the relative orientations of d and the visual coordinate system do not change as the perspective measurement assembly is rotated or translated in lower V block 142. Note that there need be no particular orientation of the visual coordinate system with respect to the axis of the cylindrical outer surface of sleeve 650. The only requirements for there to be a constant relative orientation between d and the visual coordinate system are that the surface of sleeve 650 be accurately cylindrical, and that the axis of the locating V groove be accurately directed along d.

For making measurements on objects at widely differing depths inside enclosure 110 (Figure 4), sleeve 650 can be moved on lens tube 124, but when it is moved, a new alignment calibration will in general be required. The range of depths that can be accommodated by a perspective measurement assembly without recalibration is determined by the length of sleeve 650. For many users a limited range of available measurement depths is not a problem, because their objects of interest are confined to a small range of depths inside the enclosure.

Calibration sleeve 650 could be made nearly as long as lens tube 124. This suggests another option for eliminating the need for routine alignment calibrations; I call this option the metrology borescope. A metrology borescope, a new instrument, is a rigid borescope built with a lens tube which is thicker, stiffer, and harder than normal. The outer envelope of lens tube 124 of a metrology borescope is precision fabricated to be accurately cylindrical. Such a scope does not need calibration sleeve 650 in order to provide accurate perspective dimensional measurements with only a single alignment calibration. Standard borescopes, with their thin lens tubes, tend to get bent in use. A small bend does not ruin a borescope for visual inspection, but it would ruin the accuracy of any calibrated perspective measurement assembly. Since the metrology borescope is more resistant to such bending, it is the superior technical solution.

An additional advantage of the system shown in Figure 41 over that shown in Figure 4 is that borescopes with different lens tube diameters can be fitted with appropriate calibration sleeves of the same outer diameter. Thus, when the calibration sleeve is placed into lower V block 142, the centerline of the borescope is always at the same position with respect to the BPA, which is not the case when different diameter borescopes are directly inserted into the V block. Keeping this centerline at a constant position makes the mounting of the BPA with respect to enclosure 110 and inspection port 112 (Figure 4) less complicated when borescopes of different diameters are to be accommodated.

I have already stated that the V groove in lower V block 142 is accurately aligned with the translation axis of translation stage 180. I now explain exactly what this means, and then how that condition can be achieved.

A V groove is made up of two bearing surfaces which, ideally, are sections of flat planes. If these surfaces are perfect, then the corresponding planes will intersect in a straight line. It is when this line of intersection is parallel to the translation axis of stage 180 that it can be said that the V groove is accurately aligned with the translation. The purpose of the V groove is to locate the cylindrical outside diameter of the calibration sleeve accurately and repeatably. By locating a cylindrical object accurately, I mean that for a short section of a perfect cylinder, the orientation of the axis of the cylindrical section does not depend on where along the length of the V groove the cylindrical section happens to bear, and that there is a continuous single line of contact between each bearing surface and the cylindrical section, no matter where that section happens to lie along the V groove, and no matter how long that section is.

A V groove will serve to locate a cylindrical object accurately even if the bearing surfaces are not planar, just so long as three conditions hold. First, each of the bearing surfaces must either have a symmetry about a straight line axis or must be perfectly planar. Second, the straight line axis of one surface must be parallel to the axis or plane of the other surface. Third, surfaces with symmetry about a straight line axis must either be convex or have a sufficiently large radius of curvature that there is only one line of contact between the cylindrical object and the surface.

This means, for instance, that two accurately cylindrical bodies can serve to accurately locate a third cylinder just as long as the axes of the first two cylinders are parallel, and such a system could be used instead of the preferred V groove. It is also possible to form two physical lines of contact, by cutting a cylindrical groove into a plane surface or into a larger radius cylindrical groove, for example. These physical lines can serve to accurately locate a cylinder, but only if the cylindrical groove is oriented accurately parallel to the plane surface or cylinder into which it is cut. If the cylindrical groove is not so oriented, the contact lines formed thereby will not be straight and will not serve to accurately locate a cylindrical body.

In order to locate the calibration sleeve repeatably, it is necessary to pay appropriate attention to maintaining the cleanliness of both the outer surface of the calibration sleeve and of the locating surface on the BPA, whether that surface is embodied as a V groove or as some other appropriate geometry. To maintain the accuracy of the perspective measurement, one must maintain the orientation of the visual coordinate system with respect to the outer surface of the calibration sleeve, and one must also maintain the alignment of the BPA reference surface with respect to the perspective displacement. In order to maintain these geometrical relationships over a wide range of operating temperatures, one must pay careful attention to the effects of differential thermal expansion, especially in those embodiments which use an alignable BPA reference surface.

4. How to Achieve Accurate Alignment of the BPA Reference Surface

Any discussion of "accuracy" must include a definition of the size of errors which are allowed while still justifying the label "accurate". In my perspective measurement system, the error of interest is the error in the dimensional measurement being made. As far as the alignment of the system is concerned, an unknown error in the orientation of d with respect to the visual coordinate system will cause a systematic error in the distance measurement.

Analysis shows that a misalignment of d will cause a systematic measurement error which varies linearly with the distance (range) between the object being measured and the nodal point of the borescope optical system. That is, this systematic error in a distance measurement can be expressed as a fraction of the range to the object, for example, 1 part in 1000, or as an error angle, e.g., 1 milliradian. In detail, the error in any given measurement depends on the position of the object in the apparent field of view of the borescope in each of the two views, and on the fractional portion of the field of view subtended by the distance being measured.

In the worst case, the error in the measured distance is approximately equal to the angular error in the orientation of d times the range to the object. That is, a 1 milliradian angular error in the orientation of d corresponds approximately to a worst case distance measurement error of 1 part in 1000 of the range. A given level of acceptable systematic measurement error will correspond to an acceptable level of misalignment. For the purposes of this discussion, I will define two levels of acceptable error. I call a "Class 1" measurement one that is accurate to 1 part in 1000 of the range; I call a "Class 2" measurement one that is accurate to 1 part in 10,000 of the range. These acceptable error levels are consistent with the random error capabilities of the perspective measurement system when it is implemented with standard endoscopy equipment. A random error of 1 part in 1000 of the range is fairly straightforward to achieve using a standard video borescope, while achieving 1 part in 10,000 random error requires either (1) the use of a high resolution borescope optical system and a high resolution video camera back and some averaging of measurements, or (2) the averaging of a large number of measurements.
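The worst-case rule and the two accuracy classes just defined can be sketched numerically. The helper names below are illustrative, not from the patent; the sketch simply encodes error ≈ (angular error in d) × (range).

```python
# Illustrative helpers (not from the patent text): encode the worst-case rule
# that systematic distance error ~ (angular error in d) x (range), and the
# Class 1 / Class 2 accuracy levels defined above.
def worst_case_distance_error(range_to_object, misalignment_rad):
    """Worst-case systematic error in a measured distance."""
    return misalignment_rad * range_to_object

def accuracy_class(misalignment_rad):
    """Class 1: 1 part in 1000 of range; Class 2: 1 part in 10,000."""
    if misalignment_rad <= 1e-4:      # 0.1 milliradian
        return "Class 2"
    if misalignment_rad <= 1e-3:      # 1 milliradian
        return "Class 1"
    return "below Class 1"

# A 1 milliradian misalignment at a 500 mm range gives roughly a 0.5 mm
# worst-case systematic error, i.e. 1 part in 1000 of the range.
print(worst_case_distance_error(500.0, 1e-3))
print(accuracy_class(1e-3))
```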

The achievement of a misalignment of 1 milliradian, i.e., 0.001 cm per cm, is straightforward by use of precision machining techniques, as long as translation stage 180 has been fabricated with accurate mechanical references to its translation axis. If it has not been so fabricated, one proceeds as follows.

Usually, the top surface of moving table 184 of stage 180 (Figure 5) is guaranteed by the manufacturer to be parallel to the translation to within a specified tolerance; often, this tolerance is 0.1 milliradian. If the top of the moving table has not been accurately aligned with the translation, then one can measure the pitch of the top surface by suspending a dial indicator above the stage and indicating on the top surface of moving table 184 as it translates below. This known pitch can then be compensated for in the machining of lower V block 142. If there is not a convenient reference for the direction of the translation axis as measured in the plane of the top surface of moving table 184, suitable reference holes are easily made by mounting the stage on a drilling machine and using the motion of the stage itself to determine the relative positions of the holes.

Once stage 180 has been characterized and/or modified, lower V block 142 is fabricated with standard machining techniques while paying particular attention to two key factors. First, the bottom surface of lower V block 142 must be oriented accurately parallel to the translation axis of the fabrication machine when the V groove is cut into its upper surface (or tilted to offset the pitch of the top of moving table 184, measured as discussed immediately above). Secondly, the V groove, and any reference holes, are machined with a fixed tool spindle location and with the machine tool moving lower V block 142 only along a single translation axis. This guarantees that the V groove will be parallel to the line between the centers of the reference holes to an accuracy determined by the straightness of the machine tool translation axis.

The achievement of a misalignment appropriate to Class 2 measurements, i.e., 0.1 milliradian, by precision machining is possible, but difficult and expensive. One way to make it more feasible is to do the final grinding of the V groove into block 142 with block 142 mounted to the translation stage. The stage motion itself is used to provide the necessary motion of the block with respect to the grinding wheel. The disadvantage of this approach is that the length of the V groove is limited to somewhat less than the length of travel of the stage. The advantage is that the alignment of the V groove with the translation will be accurate to within the accuracy of translation of the stage.

For Class 2 accuracy, it may be preferable to align the V groove with respect to the translation of the stage. One way to accomplish this alignment is to use shims to adjust the position of lower block 142 in pitch with respect to the top of moving table 184 and in yaw with respect to a reference surface attached to the table top. A second way is to split lower block 142 into two plates with a variable relative alignment in pitch and yaw. Such a device would be similar to, and work on the same principles as, the Model 36 Multi-axis Tilt Platform sold by the Newport Corporation of Irvine, CA, USA. The upper plate of this assembly is steered with respect to the lower plate in pitch and yaw through the use of adjusting screws, while the lower plate is conventionally attached to the top of moving table 184.

A rig for determining the alignment of the V groove to the translation of the stage is depicted in Figure 43. Here is shown a front elevation view of a translation stage 180, to which is attached a split lower V block 143. Split lower V block 143 is constructed as discussed in the previous paragraph. As before, upper V block 144 acts as a clamp; the screw or mechanism which provides clamping force is not shown. A reference cylinder 700 is clamped into split lower V block 143 so that a suitable length of cylinder 700 extends out of the clamp towards the observer. Reference cylinder 700 is selected to be straight and circular to a very high degree of accuracy. A pair of dial indicators 702 are mounted to the work surface by conventional means which are not shown. Indicators 702 are suspended over reference cylinder 700 and disposed to either side of it. Sensing feet 704 of dial indicators 702 contact the shaft at the same distance from the clamp as measured along cylinder 700. Sensing feet 704 have a flat surface over most of their diameter in order to avoid errors due to the imperfect alignment of the indicator measurement axis with the axis of reference cylinder 700.

To determine the desired alignment, stage 180 is translated back and forth along its length of travel and the readings of the dial indicators are monitored. Errors in pitch of the V groove are indicated by changes in the average of the two dial indicator readings; errors in yaw are indicated by changes in the difference of the two readings. The alignment of the V groove is perfect when the dial indicator readings do not change as the stage is translated.
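The indicator-reading logic just described can be sketched in a few lines of code (function and parameter names are illustrative, not from the patent): a change in the average of the two readings over the travel signals pitch error, a change in their difference signals yaw error.

```python
def groove_alignment_check(readings_start, readings_end, tol=0.0005):
    """readings_*: (left, right) dial indicator readings, in the same units,
    taken at the two ends of the stage travel.  A change in the average of
    the two readings indicates pitch error of the V groove; a change in
    their difference indicates yaw error.  tol is an illustrative threshold."""
    pitch_signal = (sum(readings_end) - sum(readings_start)) / 2.0
    yaw_signal = ((readings_end[0] - readings_end[1])
                  - (readings_start[0] - readings_start[1]))
    aligned = abs(pitch_signal) <= tol and abs(yaw_signal) <= tol
    return pitch_signal, yaw_signal, aligned

# Perfect alignment: neither indicator reading changes as the stage translates.
print(groove_alignment_check((0.120, 0.080), (0.120, 0.080)))
```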

The arrangement of dial indicators 702 is not restricted to that shown in Figure 43. One could orient one of the indicators so that it was suspended vertically above reference cylinder 700, while the other was oriented horizontally. Then one indicator would directly indicate pitch, while the other would directly indicate yaw. The only requirement is that the two indicators not be oriented in exactly the same direction; for best sensitivity and most convenience, there should be a right angle between their orientations.

One can check for the presence of geometrical imperfections in the combination of reference cylinder 700 and the V groove in lower V block 143 by loosening the clamp, rotating reference cylinder 700 about its axis to another angular position, tightening the clamp, and redoing the check. One can also repeat the test at different positions along the length of cylinder 700 to check for errors in its straightness. A good reference for the theory and practice of making such measurements is Handbook of Dimensional Measurement, 2nd Edition, by Francis T. Farago, Industrial Press, New York, 1982.

One could directly indicate on the plane surfaces of the V groove rather than using a reference cylinder as I have shown. But in that case one would be measuring imperfections in these surfaces as well as their alignment, when what one really cares about is how the existing V groove acts to locate a cylinder. Since accurate reference cylinders are readily available, I prefer the method I have shown.

Of course, it must be kept in mind that one cannot expect to determine errors in the geometry of cylinder 700 or in the V groove in lower V block 143 to a level better than that provided by the straightness and repeatability of motion of translation stage 180. Since the purpose of this test rig is to align the V groove with respect to this motion, errors in this motion do not affect the validity of the results.

One can check the integrity of the rotation of a cylinder in a V block by mounting a mirror on an adjustable mirror mount so that the mirror is approximately perpendicular to, and centered on, the axis of the cylinder. This process is depicted in Figure 44. In Figure 44, a laser 710 produces a laser beam 716. Laser beam 716 is reflected from a mirror which is part of mirror mount assembly 712. The beam reflected from the mirror is allowed to impact a viewing screen 714.

The mirror mount is adjusted to produce the smallest motion of the laser spot as the cylinder is rotated in the V block. Any residual motion of the spot, which cannot be reduced by adjustment of the angular orientation of the mirror, is due to non-constant angular orientation of the cylinder as it rotates while maintaining contact with the V block. The variation of the orientation of the axis of the cylinder can be sensed to within a few tenths of a milliradian in this way. A sensitivity on the order of a microradian can be achieved, once the mirror has been aligned as shown here, by viewing the mirror with an autocollimator which has been aligned nearly perpendicular to the mirror, and again rotating the cylinder.
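The sensitivity of this laser test follows from the standard optical-lever rule, which the text implies but does not spell out: tilting the mirror by an angle a deflects the reflected beam by 2a. The sketch below (illustrative names, assumed geometry) converts residual spot motion into an estimated cylinder wobble.

```python
def cylinder_wobble_from_spot(spot_motion, screen_distance):
    """Small-angle optical lever: a mirror tilt of a deflects the reflected
    beam by 2a, so the indicated wobble is ~ spot_motion / (2 * distance).
    Both arguments in the same length units; result in radians."""
    return spot_motion / (2.0 * screen_distance)

# 1 mm of residual spot motion on a screen 2.5 m away indicates a wobble of
# about 0.2 milliradian -- "a few tenths of a milliradian", as stated above.
print(cylinder_wobble_from_spot(0.001, 2.5))
```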

It is possible to conceive of a motion of the cylinder in the V block that is not perfect, but is such that the mirror remains at a constant angular orientation while the cylinder is being rotated. (One way is for the cylinder to wobble as it rotates.) What is important about such a situation is that any motion which causes an error in the perspective measurement will also cause an error when being tested by the technique depicted in Figure 44.

5. Description of a Second Variant of Borescope/BPA Embodiments

A second modification to BPA embodiments is shown in Figure 45. This differs from the first modification in that there is now an angle scale or protractor 670 attached to cylindrical tube 652. A protractor pointer 672 is attached to a pointer mounting bar 673, which is in turn attached to lower V block 142. Pointer 672 has sufficient length to enable the angular orientation of the perspective measurement assembly to be determined no matter where it is located in its range of translation with respect to clamp 140. In this embodiment, the V groove in lower block 142 need not be accurately aligned with the perspective displacement.

Another option would be to use the strain-relieving calibration sleeve 660 as depicted in Figure 42. Then an angular scale could be advantageously marked on the outer diameter of collar 658.

6. Operation of the Second Variant of Borescope/BPA Embodiments

It was shown in Figure 40 that the alignment of the perspective displacement d in the visual coordinate system is a function of the rotation of the perspective measurement assembly about the axis of the cylindrical surface of the calibration sleeve. In this second embodiment, the acquisition of an additional piece of information during the measurement, and an additional step in alignment calibration, enable one to calculate the alignment of d, and thus make an accurate perspective measurement, despite the presence of a misalignment between the axis of the calibration sleeve and the perspective displacement. I will explain the operation of the measurement in this section, and the necessary additional calibration of the system in the next section.

Figure 46 is similar to Figure 40, but it contains additional information. As before, a visual coordinate system (xv, yv, zv) is defined by the x and y axes of the video camera focal plane, and the optical axis of the borescope. In Figure 46, coordinate axes parallel to the visual coordinate system are shown in the field of view of the borescope. As before, the Figure is drawn in the plane which contains the axis of mechanical borescope rotation, A-A, and which is parallel to the perspective displacement, d. None of the visual coordinate axes xv, yv, zv is necessarily contained in the plane of the Figure. Again as before, in Figure 46A the component of the visual x axis that is perpendicular to the page should be visualized as being directed into the page, while it should be visualized as being directed out of the page in Figure 46B. One may define a borescope mechanical coordinate system which rotates with the borescope, which has a fixed relationship with respect to the visual coordinate system, and which has its x axis parallel to A-A, as follows:

(1) The xm axis is oriented along A-A.

(2) The ym direction is chosen to be perpendicular to both the optical axis, zv, and to xm. This can be expressed mathematically as

    ym = (zv x xm) / |zv x xm|    (83)

(3) Finally, the zm axis is chosen to be perpendicular to both the xm and ym axes in the usual way, as

    zm = (xm x ym) / |xm x ym|    (84)

The mechanical coordinate system (xm, ym, zm) is depicted in Figure 46. One important implication of this definition is that the optical axis zv is guaranteed to lie in the (zm, xm) plane. Also shown in Figure 46 is a translation coordinate system (xt, yt, zt), which has a fixed orientation with respect to the translation stage. The xt axis is defined to lie along the perspective displacement, d. For the moment, the directions of the yt and zt axes are taken to be arbitrary, but the (xt, yt, zt) system is defined to be a conventional right-handed Cartesian coordinate system.
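Equations (83) and (84) can be evaluated directly. The sketch below (pure Python, illustrative names) builds the mechanical axes from an assumed optical axis zv and rotation axis xm, and confirms that zv then has no ym component, i.e. that it lies in the (zm, xm) plane as stated above.

```python
import math

def cross(a, b):
    """Vector cross product of two 3-vectors."""
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def unit(v):
    """Normalize a 3-vector to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def mechanical_axes(zv, xm):
    """Equation (83): ym = (zv x xm)/|zv x xm|;
    Equation (84): zm = (xm x ym)/|xm x ym|."""
    ym = unit(cross(zv, xm))
    zm = unit(cross(xm, ym))
    return ym, zm

# Assumed example: optical axis slightly tilted from the rotation axis A-A.
zv = unit((0.10, 0.02, 0.99))
xm = (1.0, 0.0, 0.0)
ym, zm = mechanical_axes(zv, xm)
# zv has no component along ym, so it lies in the (zm, xm) plane.
print(sum(a * b for a, b in zip(zv, ym)))
```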

For the purposes of this discussion, all of these coordinate systems will be assumed to have origins at the same point in space, although they are drawn separated in Figure 46 for clarity.

What one needs is an expression for d in the visual coordinate system. This expression will depend on the rotation of the borescope about the mechanical axis A-A. The parameter for this rotation is taken to be the angle φx.

As mentioned previously with regard to the general perspective measurement process, in order to discuss rotations in three dimensions, one must carefully define what procedure is being used for a series of sub-rotations. I define the specific procedure for rotating the mechanical coordinate system to align it with the translation coordinate system as follows:

(a) Rotate the m coordinate system about xm until ym lies in the (xt, yt) plane.

(b) Rotate the m coordinate system about ym until zm coincides with zt.

(c) Rotate the m coordinate system about zm until xm coincides with xt (and ym coincides with yt).

Mathematically, this procedure can be expressed as

    vt = Rz(φz) Ry(φy) Rx(φx) vm = R vm    (85)

where the 3 x 3 matrices R have been defined in Equations (33-35). In Equation (85), vt and vm are 3 x 1 matrices which contain the components of any arbitrary vector as expressed in the translation and mechanical coordinate systems, respectively. The angles φx, φy, and φz are the angles by which the coordinate system is rotated in each step of the procedure. At each step, the angle is measured in the coordinate system that is being rotated. The positive direction of rotation is defined by the right hand rule.
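Equation (85) is a product of elementary rotation matrices. The sketch below (pure Python, illustrative names) composes them in the stated order and checks the right-hand-rule sign convention on a simple case.

```python
import math

def Rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def to_translation(vm, phi_x, phi_y, phi_z):
    """Equation (85): vt = Rz(phi_z) Ry(phi_y) Rx(phi_x) vm."""
    R = matmul(Rz(phi_z), matmul(Ry(phi_y), Rx(phi_x)))
    return matvec(R, vm)

# Right-hand rule check: a +90 degree rotation about z carries x into y.
vt = to_translation([1.0, 0.0, 0.0], 0.0, 0.0, math.pi / 2)
```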

Step (a) of the procedure implicitly states that φx = 0 when ym lies in the (xt, yt) plane. In the embodiment shown in Figure 45, the orientation of the perspective measurement assembly for which φx = 0 is defined by the scale on protractor 670. Together, these two facts mean that it is the location of the zero point on the scale of protractor 670 which defines the orientation of the yt and zt axes. The orientation of these axes can no longer be considered arbitrary.

The inverse transformation to Equation (85), that is, the procedure for rotating the translation coordinate system to align it with the mechanical system, can be expressed as

    vm = Rx^-1(φx) Ry^-1(φy) Rz^-1(φz) vt    (86)
Recall that the mechanical coordinate system was defined so that the visual z axis is confined to the mechanical (x, z) plane. The relationship of the visual and mechanical coordinate systems is depicted in Figure 47. Because of the way the relationship between these two coordinate systems was defined, there are only two rotation angles necessary to align one with the other. The specific procedure for rotating the mechanical coordinate system so that it is aligned with the visual coordinate system is simply:

(a) Rotate about the mechanical y axis by angle θy.

(b) Rotate about the mechanical z axis by angle θz.

In mathematical terms this is

    rv = Rz(θz) Ry(θy) rm    (87)

Angle θy represents a rotation of the optical axis with respect to the mechanical z axis; this rotation is confined to the mechanical (x, z) plane. Angle θz represents a rotation of the visual coordinate system about the optical axis.

Combining Equations (86) and (87), one can express the relationship between a vector as expressed in the translation and visual coordinate systems as

    rv = Rz(θz) Ry(θy) Rx^-1(φx) Ry^-1(φy) Rz^-1(φz) rt    (88)

Since the displacement vector, as expressed in translation coordinates, is simply dt = d x̂t, one has

    dv = Rz(θz) Ry(θy) Rx^-1(φx) Ry^-1(φy) Rz^-1(φz) d x̂t    (89)

Therefore, to determine the three-dimensional position of a point of interest with this second modification, one determines the visual location vectors av1 and av2 as usual. One also records the reading of protractor 670 as indicated by pointer 672; this is the angle φx. One then uses the four angles (θz, θy, φy, φz), as determined in an alignment calibration, in Equation (89) to determine the displacement vector as expressed in visual coordinates. This alignment calibration is discussed below. Finally, one uses Equation (19) to determine the position of the point.
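The computation in Equation (89) can be sketched directly (pure Python, illustrative names). Since the inverse of a rotation matrix is its transpose, the inverse factors Rx^-1, Ry^-1, Rz^-1 are implemented as transposes; the four calibration angles plus the protractor reading φx yield d expressed in visual coordinates.

```python
import math

def Rx(a):
    c, s = math.cos(a), math.sin(a)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def Ry(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def Rz(a):
    c, s = math.cos(a), math.sin(a)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def T(A):  # transpose = inverse, for a rotation matrix
    return [[A[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(A, v):
    return [sum(A[i][k] * v[k] for k in range(3)) for i in range(3)]

def displacement_in_visual(d, phi_x, theta_z, theta_y, phi_y, phi_z):
    """Equation (89): dv = Rz(tz) Ry(ty) Rx^-1(px) Ry^-1(py) Rz^-1(pz) (d,0,0)."""
    R = matmul(Rz(theta_z),
        matmul(Ry(theta_y),
        matmul(T(Rx(phi_x)),
        matmul(T(Ry(phi_y)), T(Rz(phi_z))))))
    return matvec(R, [d, 0.0, 0.0])

# With all angles zero, dv is simply d along the visual x axis.
dv = displacement_in_visual(10.0, 0.0, 0.0, 0.0, 0.0, 0.0)
```

Note that rotations preserve length, so |dv| always equals d; only the direction of the displacement changes with the angles.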

There are many other ways that the rotation of the perspective measurement assembly with respect to the BPA could be determined. For instance, the rotation could be sensed with an optical or an electrical transducer, and the user would then avoid having to read a scale manually. It is also possible to attach the protractor to the BPA and the pointer to the perspective measurement assembly to achieve the same result as does the preferred embodiment shown in Figure 45. In addition, the angle scale could be read more precisely, when necessary, by using a conventional vernier scale index instead of the simple pointer 672.

It is important to consider how accurately one must determine φx in order to achieve the accuracy desired in the perspective measurement. Assume that the misalignment of the mechanical x axis with respect to the translation x axis is small enough that the sines of the angles φy and φz can be replaced with the angles themselves. Then it can be calculated, by differentiation of Equation (89), that the worst case error component in dv/d is √(φy^2 + φz^2) times the error in φx. If we take φy and φz to have equal magnitudes, and call that magnitude φ̄, then the worst case error component in dv is

    |Δdv| / d = √2 φ̄ Δφx

Thus, any combination of misalignment, φ̄, and rotational measurement error, Δφx, that forms the same product will create the same level of systematic error in the perspective measurement.
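This error budget can be checked numerically with a small helper (illustrative, not from the patent), using the stated 10 milliradian misalignment and a 71 milliradian rotation-reading error:

```python
import math

def worst_case_dv_error_fraction(phi_bar, delta_phi_x):
    """Worst-case fractional error in dv: sqrt(2) * phi_bar * delta_phi_x,
    assuming equal misalignment magnitudes phi_y = phi_z = phi_bar and
    small angles throughout."""
    return math.sqrt(2.0) * phi_bar * delta_phi_x

# 10 mrad misalignment with a 71 mrad error in reading the rotation gives
# roughly 1 part in 1000 of the range -- the Class 1 limit.
print(worst_case_dv_error_fraction(0.010, 0.071))
```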

As an example, assume that the misalignment of the mechanical x axis with respect to the translation is 10 milliradians (0.57 degrees), a value easily achieved with non-precision fabrication techniques. In this case, to achieve a perspective measurement to Class 1 accuracy (1 part in 1000 of the range), the allowable error in the rotation of the perspective measurement assembly is 71 milliradians, or 4.1 degrees. For Class 2 accuracy under the same conditions, the measurement of the rotation must be ten times more accurate.

7. Calibration of a System using the Second Modification

In the discussion of calibration above, it was shown how to calibrate both the optical parameters of the borescope and the relative alignment of the visual and translation coordinates. The assumption there was that the mechanical x axis was directed exactly along the translation direction, or that there would be no rotation of the borescope between calibration and measurement. The alignment calibration determines the two alignment angles θz and θy of the translation with respect to the visual coordinate system.

If the borescope is translated from one viewing position to a second viewing position, and if the location of the nodal point is determined in the same calibration coordinate system at both positions, then the alignment of the displacement vector in the visual coordinate system can be determined from Equation (67) as

    dv = Rc [ rc(η2) - rc(η1) ]    (90)

where η1 and η2 are parameters denoting the translation position at the first and second viewing positions.

Equation (90) expresses the standard alignment calibration process. The result is specific to the particular orientation, φx, that the perspective measurement assembly has during the alignment calibration, if the mechanical axis of rotation of the perspective measurement assembly is not aligned with the perspective displacement. To perform the alignment calibration for a system using the second modification, this standard process is performed twice, with the perspective measurement assembly being rotated in the clamp of the BPA between these two alignment calibrations. The preferred rotation between the two alignment calibrations is approximately 180 degrees. In other words, a standard alignment calibration is performed with, for instance, the calibration target array serving as the object of interest in Figure 4. Then, the perspective measurement assembly is rotated 180 degrees inside the clamp of the BPA, the calibration target array is moved to the other side of the BPA so that the targets can again be viewed, and a second alignment calibration is performed. In terms of the rotation angles defined in Equations (86) and (87), one can write the directions of the perspective displacements in the visual coordinates for these two alignment calibrations as

    dvA = Rz(θz) Ry(θy) Rx^-1(φx1) Ry^-1(φy) Rz^-1(φz) x̂t    (91)
    dvB = Rz(θz) Ry(θy) Rx^-1(φx2) Ry^-1(φy) Rz^-1(φz) x̂t

In Equations (91) the known quantities are the rotation angles of the perspective measurement assembly, φx1 and φx2, and the direction vectors dvA and dvB (which are known from use of Equation (90) as a result of the two individual alignment calibrations). The unknowns are the four alignment angles θz, θy, φz, and φy. Since the length of both direction vectors is fixed at unity, there are four independent equations in four unknowns. Equations (91) can be rewritten as

    dvA = Q Rx^-1(φx1) s
    dvB = Q Rx^-1(φx2) s    (92)

where matrix Q is a function of θz and θy and where vector s is a function of φz and φy. The first equation can be solved for s to give

    s = Rx(φx1) Q^-1 dvA    (93)

and this can be substituted in the second equation to give

    dvB = Q Rx^-1(φx2) Rx(φx1) Q^-1 dvA    (94)

Equation (94) represents two non-linear equations in two unknowns. It can be solved for θz and θy by an iterative numerical procedure, such as Newton's method. In fact, (94) can be solved by a non-linear optimization process similar to that described above in the discussion of optical calibration. Once these two angles are known, they can be substituted into Equation (93) to solve for φz and φy. This latter solution is straightforward. The vector s can be written explicitly as

    s = ( cos φy cos φz, -sin φz, sin φy cos φz )^T    (95)

so that the z component of s will give φy easily.

I note for completeness that one can also calibrate such a system with a combination of mechanical and optical techniques. One can use the test rig of Figure 43 to directly measure the angles φy and φz that the mechanical rotation axis makes with respect to the translation axis. When one does this, one is also inherently defining the specific orientation of the translation y and z axes, so that one must then set the zero point on protractor 670 to correspond with this specific orientation and also to define a plane which contains the optical axis of the borescope. Once these conditions have been satisfied, one can then use Equation (89) to determine the alignment angles θy and θz of the translation with respect to the visual coordinate system using standard alignment calibration data, in the same manner as was discussed above.

8. Additional Applications and Embodiments

The improved system I have described is also applicable to any single camera, linear motion embodiment of the perspective measurement system, if the camera is given a similar freedom to rotate about an axis which is not aligned with the linear motion. Figures 40, 46, and 47 apply just as well to this case as to the borescope/BPA embodiment discussed in detail. The same measurements, the same equations, and the same expanded alignment calibration as I have disclosed can be used to perform an accurate perspective measurement with such an embodiment.

Although the improved system has been described with reference to particular embodiments, it will be understood by those skilled in the art that it is capable of a variety of alternative embodiments.

For example, one may use a pair of spherical bodies attached to and arranged so as to surround the borescope, disposed with some separation along its length, instead of the cylindrical calibration sleeve of my preferred embodiments. This structure would allow the borescope the required two degrees of motional freedom (when located in the V groove, but not clamped in position) and yet would provide the required orientation control when used in conjunction with the BPA.

It has been mentioned that there are any number of alternative groove shapes that can be used instead of my preferred V groove for the BPA reference surface. One could also use two separate short V grooves to locate the calibration sleeve, unlike the single long V groove of my preferred embodiments. In this case, the two V grooves would have to be accurately aligned with respect to each other, but this construction could save weight.

Another alternative would be to use a cylindrical reference surface on the BPA and a V groove mounted on the borescope. This would work just as well as the preferred embodiments in terms of the accuracy of the measurement. The disadvantage is that the centerline of the borescope would move with respect to the BPA as the borescope was rotated, thus making it more difficult to perform the measurement through a small inspection port as shown in Figure 4.

The reference surface on the borescope does not have to be mounted over the lens tube, as it is in my preferred embodiments. Depending on the detailed construction of the individual borescope and on the need for a translational degree of freedom in the application, it is possible to provide the reference surface somewhere on the body of the borescope. The advantage is that there is then less of the length of the borescope lens tube dedicated to the support of the borescope, and thus more of the length is usable for reaching deep into an enclosure.

It is also possible to provide systems which have only the rotational degree of freedom, for those applications in which the depth of the object of interest is fixed. One simple example is that a specific region of the lens tube envelope could be marked as the region to be clamped into the BPA. If the borescope is always clamped at this same position, then there will be no change in alignment because of curvature of the lens tube envelope. This simple system is still subject to lack of repeatability in the alignment because of non-circularity of the lens tube, but it may be adequate for certain applications.

The system of using complementary reference surfaces to provide a repeatable relative alignment between a borescope and a borescope positioner could also be used with other, less complete, measurement systems which were known prior to my perspective measurement system, to allow more flexibility in aligning the view to objects of interest.

Conclusion, Ramifications, and Scope

Accordingly, the reader will see that the dimensional measurement system of this invention has many advantages over the prior art. My system provides more accurate measurements than hitherto available, because I show how to arrange the measurement to minimize the inherent random errors, and also because I show how to determine and take into account the actual geometry of, and any systematic errors in, the hardware. My system provides measurements at lower cost than previously available because I correctly teach how to add the measurement capability to current, widely available, visual inspection hardware. In addition, my system provides a more flexible measurement technique than previously known, in that I teach how to make measurements that are simply impossible with the prior art. Using my invention, it is possible to build special purpose measurement systems to meet any number of specific measurement requirements that are currently not being adequately addressed.

Although the invention has been described with reference to particular embodiments, it will be understood by those skilled in the art that the invention is capable of a variety of alternative embodiments within the spirit and scope of the appended claims.

Claims

1. A method of perspective measurement of the three-dimensional size of a remote object using a camera having a field of view, said camera being translated along a substantially straight line from a first viewing position to a second viewing position, characterized by the use of a fully three-dimensional least squares estimation procedure to determine the measurement result.
2. A method as claimed in claim 1, wherein said camera also has an internal coordinate system, and wherein the orientation of said camera internal coordinate system with respect to said substantially straight line is determined in a calibration process and wherein said orientation is taken into account in the measurement result, and wherein, optionally, errors in the motion of the camera are also determined in a calibration process and these errors are then also taken into account in the measurement result.
3. A method as claimed in either of claim 1 or claim 2, wherein the first and second viewing positions are selected so that a single point on the object is viewed at an apparent angular position near one edge of the field of view at the first viewing position, and at substantially the same apparent angle on the other side of the field of view at the second viewing position, thereby minimizing the random error in the measurement.
4. A method of determining a set of three-dimensional coordinates for at least one point on an inaccessible object, thereby determining a location vector for each of said at least one point, characterized by the steps of:
(a) providing one or more cameras, each of which has an internal coordinate system and an effective focal length, and providing motion means for moving said one or more cameras with respect to the inaccessible object, and further providing a plurality of predetermined relative camera positions for each of said cameras, wherein each of said cameras has a spatial orientation at each of said relative positions, and wherein said relative positions and said spatial orientations are determined in an external coordinate system;
(b) acquiring a set of first images of said at least one point with one of said one or more cameras located at a first viewing position, wherein said first viewing position also corresponds to one of said predetermined relative camera positions, said camera having a first spatial orientation at said first viewing position, thereby defining a first measurement coordinate system which is coincident with the internal coordinate system of said camera at said first viewing position;
(c) measuring the coordinates of each of said first images of said at least one point in said first measurement coordinate system;
(d) acquiring a set of second images of said at least one point with one of said one or more cameras located at a second viewing position, wherein said second camera viewing position also corresponds to one of said predetermined relative camera positions, said camera having a second spatial orientation at said second viewing position, thereby defining a second measurement coordinate system which is coincident with the internal coordinate system of said camera at said second viewing position;
(e) measuring the coordinates of each of said second images of said at least one point in said second measurement coordinate system;
(f) correcting the measured coordinates of each of said first images of said at least one point to adjust for any distortion of the camera located at the first viewing position, and correcting the measured coordinates of each of said second images of said at least one point to adjust for any distortion of the camera located at the second viewing position, thereby producing sets of first and second final point image coordinates for said first and second viewing positions in said first and second measurement coordinate systems;
(g) multiplying the first final point image coordinates by the mathematical inverse of the effective focal length of the camera located at the first viewing position and multiplying the second final point image coordinates by the mathematical inverse of the effective focal length of the camera located at the second viewing position, to determine the mathematical tangents of the angles at which each of said at least one point is viewed in said first and second measurement coordinate systems; and
(h) forming least squares estimates of the three-dimensional coordinates for each of said at least one point in a third measurement coordinate system using said mathematical tangents of the viewing angles for each of said at least one point in said first and second measurement coordinate systems and the relationships between said first and second camera viewing positions and said first and second camera spatial orientations determined in said external coordinate system, thereby forming a least squares estimate of the location vector for each of said at least one point in said third measurement coordinate system.
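Steps (g) and (h) can be sketched numerically. The following Python sketch is illustrative only (the function names and the two-ray midpoint formulation are my own assumptions, not the claimed implementation): image coordinates are divided by the effective focal length to obtain viewing-angle tangents, and the point is estimated in a least-squares sense from the two resulting viewing rays.

```python
import numpy as np

def tangents_from_image(image_xy, focal_length):
    """Step (g): multiply image coordinates by the inverse of the
    effective focal length to obtain the tangents of the viewing angles."""
    return np.asarray(image_xy, dtype=float) / focal_length

def triangulate_least_squares(p1, R1, tan1, p2, R2, tan2):
    """Step (h): least-squares estimate of the 3-D point from two
    viewing rays.  p1, p2 are the camera positions and R1, R2 the
    rotation matrices taking camera coordinates to the external frame;
    tan1, tan2 are (tan ax, tan ay) viewing-angle tangents."""
    d1 = R1 @ np.array([tan1[0], tan1[1], 1.0])   # ray direction, camera 1
    d2 = R2 @ np.array([tan2[0], tan2[1], 1.0])   # ray direction, camera 2
    # Choose ray parameters t1, t2 minimizing |(p1 + t1*d1) - (p2 + t2*d2)|
    A = np.column_stack([d1, -d2])
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    q1 = p1 + t[0] * d1       # closest point on ray 1
    q2 = p2 + t[1] * d2       # closest point on ray 2
    return 0.5 * (q1 + q2)    # midpoint as the least-squares estimate
```

For example, a point at (2, 3, 50) viewed by a forward-looking camera (focal length 8) from positions (0, 0, 0) and (10, 0, 0) produces image coordinates (0.32, 0.48) and (-1.28, 0.48), from which the two rays recover the point exactly.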
5. A method as claimed in claim 4, wherein the third measurement coordinate system is the same as the first measurement coordinate system, and/or the predetermined relative camera positions all lie along a substantially straight line or along a substantially circular arc.
6. A method as claimed in claim 5, wherein the spatial orientation of the camera at the first viewing position is substantially the same as the spatial orientation of the camera at the second viewing position, or wherein at least one camera is free to rotate about a single axis perpendicular to said straight line or said circular arc.
7. A method as claimed in claim 5, wherein said relative camera positions lie along said circular arc, said circular arc having a center of curvature, and wherein each of said one or more cameras has an optical axis, and wherein the orientation of each of said one or more cameras is coupled to its position along the arc so that said optical axis is always substantially aligned with said center of curvature of the arc, or wherein the orientation of each of said one or more cameras is such that said optical axis is aligned substantially perpendicular to the plane containing the arc.
8. The method of claim 4 for determining the three-dimensional distances between the points of each pair of any set of pairs of points in a plurality of points on an inaccessible object, characterized by the further steps of:
(i) performing steps (a) through (h) of claim 4 for said plurality of points;
(j) determining a difference vector between the location vectors of a first pair of said set of pairs of points by subtracting the location vector of a first point of said pair from the location vector of the second point of said pair;
(k) determining the length of the difference vector by calculating the square root of the sum of the squares of the components of the difference vector; and
(l) repeating steps (j) and (k) as necessary to determine the distances between the points of all remaining pairs in said set of pairs of points.
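Steps (j) through (l) amount to a vector subtraction followed by a Euclidean norm, repeated over the set of pairs. A minimal Python sketch (the helper name and use of NumPy are illustrative assumptions, not part of the claim):

```python
import numpy as np

def pairwise_distances(locations, pairs):
    """Steps (j)-(l): for each pair of point indices, form the difference
    vector and return the square root of the sum of squared components."""
    distances = []
    for i, j in pairs:
        diff = np.asarray(locations[j], float) - np.asarray(locations[i], float)  # step (j)
        distances.append(float(np.sqrt(np.sum(diff ** 2))))                       # step (k)
    return distances  # step (l): all requested pairs
```

For instance, points at (0, 0, 0) and (3, 4, 0) yield a distance of 5.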
9. The method of claim 4 for determining the three-dimensional distances between a pair of points on an object, characterized by the further steps of:
(i) performing steps (a) through (h) of claim 4 for a first point of said pair of points, wherein said first and second viewing positions define respectively first and second camera location vectors in said external coordinate system, and identifying said third measurement coordinate system as a first temporary coordinate system, wherein said first temporary coordinate system has an origin and wherein said origin has a vector location in said external coordinate system;
(j) calculating the vector location of said first point in said external coordinate system by adjusting the vector location of said first point in said first temporary coordinate system according to the first and second camera spatial orientations used in step (i);
(k) calculating the vector location of the origin of the first temporary coordinate system by forming the average of said first and second camera location vectors;
(l) performing steps (b) through (h) of claim 4 for a second point of said pair of points, wherein at least one of said first and second viewing positions are different from either of the first and second viewing positions used in step (i), and wherein said first and second viewing positions now define respectively third and fourth camera location vectors in said external coordinate system, and identifying said third measurement coordinate system as a second temporary coordinate system, wherein said second temporary coordinate system has an origin and wherein said origin has a vector location in said external coordinate system;
(m) calculating the vector location of said second point in said external coordinate system by adjusting the vector location of said second point in said second temporary coordinate system according to the first and second camera spatial orientations used in step (l);
(n) calculating the vector location of the origin of the second temporary coordinate system by forming the average of said third and fourth camera location vectors;
(o) calculating a vector from the origin of the second temporary coordinate system to the origin of the first temporary coordinate system by subtracting the vector location of the origin of the second temporary coordinate system from the vector location of the origin of the first temporary coordinate system;
(p) calculating the vector from the second point of said pair of points to the first point of said pair of points with the equation
r = dAB + rAG - rBG

wherein dAB is the vector from the origin of the second temporary coordinate system to the origin of the first temporary coordinate system, rAG is the vector location of said first point in said external coordinate system, and rBG is the vector location of said second point in said external coordinate system; and
(q) calculating the distance between said pair of points by calculating the length of the vector r.
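Steps (k)/(n) and (o) through (q) can be sketched directly: each temporary origin is the average of the two camera location vectors used for that point, and the distance follows from the equation r = dAB + rAG - rBG. The Python below is an illustrative sketch only (function names are my own assumptions):

```python
import numpy as np

def temp_origin(cam_a, cam_b):
    """Steps (k)/(n): the origin of a temporary coordinate system is the
    average of the two camera location vectors used for that point."""
    return 0.5 * (np.asarray(cam_a, float) + np.asarray(cam_b, float))

def distance_between_points(origin_A, r_AG, origin_B, r_BG):
    """Steps (o)-(q): combine the two temporary-origin locations with the
    point locations expressed in the external coordinate system."""
    d_AB = np.asarray(origin_A, float) - np.asarray(origin_B, float)  # step (o)
    r = d_AB + np.asarray(r_AG, float) - np.asarray(r_BG, float)      # step (p)
    return float(np.linalg.norm(r))                                   # step (q)
```

For example, with the first temporary origin at (1, 1, 1), rAG = (1, 0, 0), the second temporary origin at (0, 0, 0), and rBG = (0, 1, 0), the points sit at (2, 1, 1) and (0, 1, 0), giving a distance of sqrt(5).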
10. In an apparatus for measuring three-dimensional distances between selected points on an inaccessible object, wherein the apparatus includes a rigid borescope which is fastened to a linear motion means, said linear motion means having a range of travel and which also constrains the borescope to move along a substantially straight line, said apparatus further comprising a driving means which controls the position of the linear motion means within its range of travel and also a position measurement means for indicating the position of said linear motion means, the improvement characterized by the use of a linear motion means selected from the group consisting of crossed roller slides and ball slides and air bearing slides and dovetail slides, wherein the linear motion means is preferably a linear translation stage, the driving means is preferably an actuator, and the position measurement means is preferably a linear position transducer attached to said translation stage.
11. In an apparatus for measuring three-dimensional distances between selected points on an inaccessible object, wherein the apparatus includes a rigid borescope which is fastened to a linear motion means, said linear motion means having a range of travel and which also constrains the borescope to move along a substantially straight line, said apparatus further comprising a driving means which controls the position of the linear motion means within its range of travel and also a position measurement means for indicating the position of said linear motion means, the improvement characterized by the use of a lead screw and nut as a driving means and, optionally, wherein both the driving means and the position measurement means are embodied in a micrometer.
12. An apparatus as claimed in either of claim 10 or claim 11, wherein said borescope has a field of view, and wherein said borescope includes a video imaging means, and wherein said video imaging means is comprised of a video sensor optically coupled to said borescope, and wherein said video sensor has different spatial resolutions along its two sensing axes, further wherein said video sensor is rotationally oriented with respect to said borescope such that its higher spatial resolution axis is aligned substantially parallel to the projection of the linear motion of the borescope as observed in the field of view, thereby obtaining the highest precision in the distance measurement.

13. An electronic measurement borescope apparatus for measuring three-dimensional distances between selected points on an inaccessible object, characterized by:
(a) a video camera, including an imaging lens and a solid state imager, for producing video images of the object, and a video monitor, for displaying said video images;
(b) a linear translation means, for moving the video camera with a substantially constant orientation along a substantially straight line, said linear translation means and camera being disposed at the distal end of a rigid probe, and said linear motion means also having a range of travel;
(c) an actuating means, for moving the linear translation means to any position within its range of travel;
(d) a position measurement means, for determining the position of the linear translation means within said range of travel, whereby the position of the video camera is also determined, said position measurement means also producing position measurement data, said position measurement means also having a first data transfer means for supplying the camera position data to a computing means;
(e) a video cursor means, for displaying variable position cursors on said video image, said video cursor means having a second data transfer means for supplying the spatial positions of said variable position cursors to the computing means; and
(f) said computing means having a user interface, said user interface being in communication with said video cursor means and said second data transfer means such that a user can manipulate said video cursor means until said variable position cursors are aligned with the images of said selected points on said inaccessible object, and further such that said spatial positions of said variable position cursors are supplied to the computing means at user command, and further such that said computing means receives the camera position data through said first data transfer means, and further such that said computing means calculates and displays the three-dimensional distances between the selected points on said inaccessible object.
14. An apparatus as claimed in claim 13, wherein the actuating means is a motorized micrometer driving a positioning cable, said cable being looped around a pair of idler pulleys and being attached to the linear translation means, or wherein the actuating means is a motorized micrometer located at the distal end of said rigid probe, said motorized micrometer being attached to the linear translation means.
15. An electronic measurement endoscope apparatus for measuring three-dimensional distances between selected points on an inaccessible object, characterized by:
(a) a video camera, including an imaging lens and a solid state imager, for producing video images of the object, and a video monitor, for displaying said video images;
(b) a linear translation means, for moving the video camera with a substantially constant orientation along a substantially straight line, said linear translation means also having a range of travel, and said linear translation means and camera being disposed internally in a rigid housing, said rigid housing being disposed at the distal end of a flexible endoscope housing;
(c) an actuating means, for moving the linear translation means to any position within its range of travel;
(d) a position measurement means, for determining the position of the linear translation means within said range of travel, whereby the position of the video camera is also determined, said position measurement means also producing position measurement data, said position measurement means also having a first data transfer means for supplying the position measurement data to a computing means;
(e) a video cursor means, for displaying variable position cursors on said video image, said video cursor means having a second data transfer means for supplying the spatial positions of said variable position cursors to the computing means; and
(f) said computing means having a user interface, said user interface being in communication with said video cursor means and said second data transfer means such that a user can manipulate said video cursor means until said variable position cursors are aligned with the images of said selected points on said inaccessible object, and further such that said spatial positions of said variable position cursors are supplied to the computing means at user command, and further such that said computing means receives the camera position data through said first data transfer means, and further such that said computing means calculates and displays the three-dimensional distances between the selected points on said inaccessible object.
16. An apparatus as claimed in claim 15, wherein the actuating means is a positioning wire encased in a sheath, which is driven by a motorized micrometer, or wherein the actuating means is a motorized micrometer located at the distal end of the apparatus, said motorized micrometer being attached to the linear translation means.
17. An apparatus as claimed in any one of claims 13 to 16, wherein said video camera has a field of view, and wherein an illumination means for illuminating said field of view is carried by the linear translation means, such that the illumination of said field of view remains substantially constant as said camera is moved.

18. An apparatus for making measurements of the three-dimensional distances between selected points on an object, said apparatus including a camera, and a support means, whereby said camera can be moved along a substantially straight translational axis from a first viewing position to a second viewing position, and whereby said camera can also be rotated about a rotational axis for alignment with objects of interest prior to a measurement, said rotational axis being at an arbitrary alignment with said translational axis, characterized by:
(a) a means for measurement of an angle of rotation of said camera about said axis of rotation, and
(b) a means for incorporating said measurement of said angle of rotation into said measurements of three-dimensional distances.
19. An apparatus as claimed in claim 18, wherein said means for measurement of an angle of rotation has a first portion which rotates with said camera and also has a second portion which is fixed to said support means, wherein said camera is preferably a substantially side-looking rigid borescope, said borescope having a lens tube envelope and said lens tube envelope having an outer surface, and wherein said support means preferably comprises a borescope positioning assembly and wherein said rotational axis is preferably defined by the engagement of a first reference surface attached to said borescope with a second reference surface attached to said borescope positioning assembly, whereby said first reference surface is preferably a cylinder and said second reference surface is preferably a V groove, and said cylindrical first reference surface is said outer surface of said lens tube envelope or is a calibration sleeve attached to said borescope.
20. An apparatus for making measurements of the three-dimensional distances between selected points on an object, said apparatus including a substantially side-looking rigid borescope which can be moved along a substantially straight translational axis from a first viewing position to a second viewing position, said borescope having a lens tube envelope and said lens tube envelope having an outer surface, and wherein said borescope can also be rotated about a rotational axis for alignment with objects of interest prior to a measurement, characterized by the arrangement of said rotational axis to be accurately aligned with said translational axis.
21. An apparatus as claimed in claim 20, wherein said borescope is preferably moved along said translational axis by a borescope positioning assembly and said rotational axis is preferably defined by the engagement of a first reference surface attached to said borescope with a second reference surface attached to said borescope positioning assembly, whereby said first reference surface is preferably a cylinder and said second reference surface is preferably a V groove, and said cylindrical first reference surface is said outer surface of said lens tube envelope or is a calibration sleeve attached to said borescope.



