AU8636298A - Scanning apparatus and methods - Google Patents

Scanning apparatus and methods

Info

Publication number
AU8636298A
AU8636298A
Authority
AU
Australia
Prior art keywords
arrangement
images
image
region
offset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
AU86362/98A
Inventor
Christopher Peter Flockhart
Guy Richard John Fowler
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tricorder Technology PLC
Original Assignee
Tricorder Technology PLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tricorder Technology PLC filed Critical Tricorder Technology PLC
Publication of AU8636298A


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/245 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures using a plurality of fixed, simultaneously operating transducers
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object
    • G01B11/2545 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object with one projection direction and several detection directions, e.g. stereo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/50 Depth or shape recovery
    • G06T7/55 Depth or shape recovery from multiple images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/97 Determining parameters from multiple pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/12 Details of acquisition arrangements; Constructional details thereof
    • G06V10/14 Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147 Details of sensors, e.g. sensor lenses
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/12 Acquisition of 3D measurements of objects

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Multimedia (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

WO 99/06950 PCT/GB98/02307

Scanning apparatus and methods

The present invention relates to a scanning apparatus and method for acquiring the three-dimensional shape, size or other three-dimensional surface features of an object, such as colour for example. The invention relates particularly but not exclusively to hand-held or other freely movable 3D scanners.

As far as we are aware only two hand-held scanners are known, namely that described in US 4,627,734 (Schulz) and its equivalent EP-A-553,226, and that described in our own PCT/GB95/01994 and equivalent granted patent GB 2,292,605B. The Schulz scanner arrangement utilises a fixed optical system to determine the instantaneous position and orientation of the scanner, and one embodiment of the scanner in our PCT application utilises inertial sensors on the scanner to determine its position and orientation. Other embodiments utilise software techniques to combine separate, individually acquired 3D surface portions into an overall 3D surface description. Both the Schulz patents and our own PCT application disclose scanners whose optical arrangements acquire depth information directly by means of an inclined photodetector array which detects an optical pattern projected onto the scanned object.

One scanner which does not require a projected optical pattern and which is freely movable with respect to the scanned object is disclosed in US 4,993,836 (Furuhashi et al), which uses a photogrammetric arrangement comprising two spaced-apart cameras with parallel optical axes and having fields of view which overlap in the region of the scanned object. An arbitrary line of points on the scanned object is projected as two lines on the respective image planes of the cameras, and the geometry of the camera arrangement then enables the 3D shape of the line to be determined. Overlapping lines obtained in a similar manner following rotation of the scanned object are combined by correlating their overlapping regions to derive a closed loop which defines one cross-section of the object's surface, and a multiplicity of such cross-sectional loops are derived and combined to form a wire-frame description of the object's surface. In the preferred embodiment the object is a large forging which is rotated, and the cameras are fixed and spaced one metre apart. There is no reference to a hand-held scanner.

The principle of the geometrical calculation of Furuhashi et al is used in some embodiments of the present invention and is illustrated in Figures 1 and 2 below.

Referring to Figures 1 and 2, which show the two cameras 1 and 2 in the XZ (horizontal) plane and the ZY (vertical) plane respectively, each camera comprises a focussing lens L of focal length a and a photosensitive imaging plane I, and the optical axis of each camera is spaced apart from the Z axis by a distance S. An arbitrary point (X, Y, Z) on the object in the field of view of both cameras is imaged onto the image plane I of each camera as illustrated by rays 10, 10" and 20, 20". Camera 1 images the point (X, Y, Z) at a point (x1, -y) in local coordinates of the camera system and camera 2 images the point at a point (-x2, -y) in the local coordinates.

Considering the similar triangles formed by the undeviated rays 10, 10' and 10":

(S - X)/Z = x1/a   (i)
(X + S)/Z = -x2/a   (ii)
Y/Z = -y/a   (iii)

Hence:

Z = 2Sa/(x1 - x2)   (iv)
X = Z(x1 + x2)/2a   (v)
Y = -yZ/a   (vi)

Hence the coordinates X, Y and Z can be determined from x1, x2 and y.
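By way of illustration only, the triangulation of equations (i) to (vi) can be written as a few lines of Python. This is a minimal sketch, assuming consistent units throughout; the function name is illustrative and not taken from the patent.

```python
# Minimal sketch of the triangulation of equations (i)-(vi): two cameras
# with parallel optical axes, each spaced a distance S from the Z axis,
# each with focal length a.
def triangulate(x1, x2, y, S, a):
    """Recover (X, Y, Z) from the conjugate image points (x1, -y) in
    camera 1 and (-x2, -y) in camera 2, in the local image coordinates
    used in Figures 1 and 2."""
    Z = 2 * S * a / (x1 - x2)    # equation (iv)
    X = Z * (x1 + x2) / (2 * a)  # equation (v)
    Y = -y * Z / a               # equation (vi)
    return X, Y, Z
```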
In Furuhashi et al it is assumed that the point (X, Y, Z) is relatively distant in comparison with the focal length, but in fact the above analysis is applicable irrespective of whether the point (X, Y, Z) is distant from the cameras, provided that it is in focus, eg as a result of stopping down the lenses L. No correlation of the images is disclosed in Furuhashi.

A problem which arises in the case of objects of complex shape is that it is difficult to determine whether a particular point on one camera's image plane corresponds to a particular point on the other camera's image plane (ie whether the points are both conjugate points of the same point on the object's surface). For example, overhanging regions of an object might obscure an underlying region so that it only appears in one camera's field of view. This problem is particularly likely to arise when the spacing of the cameras is relatively large compared to their distance from the object. However, if the spacing is reduced relative to the distance from the object, the geometrical accuracy is compromised.

WO 91/15732 (Gordon) discloses an arrangement in which a laser scanner projects a series of stripes onto the scanned object and left and right cameras detect the distorted stripes from the object. It is recognised that a given bright point in the image plane of one camera cannot be simply correlated with an illuminated point on the surface of the object because it is not known which stripe illuminates that point. Accordingly, an arbitrary pixel in the stripe in one camera's image plane is selected and a line drawn through the centre of the camera lens, projecting this line out into space. This line is then projected onto the image plane of the other camera, and the resulting epi-polar line in the other camera's image plane cuts a number of stripes also imaged on its image plane. Any one of these points of intersection could in principle correspond to the arbitrary pixel mentioned above. The particular point which corresponds is found by projecting all the points of intersection back into space and determining which of the resulting lines intersects a laser stripe from the laser projector. The above arrangement has the disadvantage that a projected pattern is required and that some uncertainty in the correlation of points may arise if any of the last-mentioned projected points cut or nearly cut more than one laser stripe. This arrangement is also susceptible to the problem of overhang/impaired accuracy mentioned above.

One other type of scanner of potentially high accuracy which may be mentioned in passing is based on the detection of Moiré fringe patterns. One such scanner is disclosed in our co-pending patent application PCT/GB95/02431 (Moore).

One object of the present invention is to provide scanner arrangements which do not require projected patterns. Another object of certain embodiments is to provide scanner arrangements in which movement of the scanner relative to the scanned object is determined by processing the image of the object and does not require hardware such as inertial sensors (although the output of inertial sensors can be used to supplement such processing).

In one aspect the invention provides an arrangement as claimed in claim 1 for acquiring the 3D shape of an object.

In another aspect the invention provides an arrangement as claimed in claim 8 for acquiring the 3D shape of an object.
In another aspect the invention provides a method of processing overlapping images as claimed in claim 11.

In another aspect the invention provides an image processing arrangement as claimed in claim 17.

In another aspect the invention provides an arrangement for acquiring the 3D shape of an object as claimed in claim 24.

The "viewpoints" of the cameras can be characterised by differences in position and/or in orientation of the camera relative to the object.

Further independent aspects of the invention are claimed in claims 26, 27, 28 and 32.

Preferred embodiments of the invention are described below by way of example only with reference to Figures 1 to 19 of the accompanying drawings, wherein:

Figure 1 is a ray diagram showing the camera arrangement of one embodiment in plan view;

Figure 2 is a ray diagram showing the camera arrangement of Figure 1 in elevation;

Figure 3 is a sketch perspective ray diagram of the embodiment of Figures 1 and 2 showing the generation of epi-polar lines of the corners of one image region ABCD and their intersection at a common point adjacent the corresponding image region in the other camera's image plane;

Figure 4 is an end elevation of the above embodiment looking through the image planes towards the object region and showing the geometrical construction of the above epi-polar lines;

Figure 5 is a ray diagram in plan view showing the above embodiment and illustrating the uncertainty in the object position without correlation of image points in the left and right cameras' image planes;

Figure 6 is a front elevation of the above ray diagram showing how epi-polar lines can be used to assist in the correlation of image points;

Figure 7 is a flow diagram showing the process of correlating the image points in the above embodiment;

Figure 8 is a sketch perspective view of a variant of the above embodiment utilising inertial sensors;

Figure 9 is a sketch perspective view of another embodiment utilising only a single camera;

Figure 10 is a sketch perspective view showing a ray diagram of the embodiment of Figure 9 being used to track its position relative to an object;

Figure 11 is an elevation of a ray diagram showing the correlation of image points between the images of the respective cameras in the first embodiment, or of the camera in different known positions in the second embodiment;

Figure 12 is a ray diagram which is a section taken on XII-XII of Figure 11 and illustrates the derivation of the object position relative to the camera from the correlated images;

Figure 13 is a flow diagram showing the process of correlating the image points in the embodiment of Figures 9 and 10;

Figure 14 is a flow diagram showing a process for obtaining a complete 3D surface description using the embodiment of Figures 10 and 11 and image correlation software;

Figure 15A is an illustration of a projected fractal pattern for use in an embodiment of the invention;

Figure 15B is an illustration of the distortion of the fractal pattern by the inclination of the camera relative to the surface;

Figure 16 is a sketch perspective view of a further embodiment of the invention;

Figure 17 is a diagrammatic transverse cross-section of an endoscope head in accordance with another aspect of the invention;

Figure 18 is a diagrammatic side view of the endoscope head of Figure 17; and

Figure 19 is a diagrammatic representation of a further endoscope in accordance with the present invention.
Figures 1 and 2 have already been referred to and are applicable to the optics of the first embodiment. Provided that a given point (x1, y) in one camera's image plane I can be correlated with an image point (x2, y) in the other camera's image plane (ie both points are conjugate points of a common point (X, Y, Z) on the object's surface) then the coordinates of the point (X, Y, Z) can be found. In general, however, this correlation cannot be performed without some processing of the two images in the respective image planes of the two cameras 1 and 2.

Figure 3 illustrates a preliminary step in searching for points A2, B2, C2 and D2 in the image plane I of camera 2 which correlate with given points A1, B1, C1 and D1 in the image plane of camera 1. Points A, B, C and D on the surface of the object define a surface region S and are imaged by both cameras as the above sets of points. Undeviated ray lines a, b, c and d connect A to A1, B to B1, C to C1 and D to D1 and, when imaged onto image plane I of camera 2, define four epi-polar lines EPa to EPd respectively which meet at a point X2'. This is the conjugate point of the centre X1 of lens L of camera 1. The position of these epi-polar lines is independent of the position and orientation of the object, and the vertices of the image region A2, B2, C2, D2 of camera 2 must lie on them.

Figure 4 illustrates the construction of the epi-polar lines. A construction line CONST is constructed so as to join the optical centres X1 and X2 of the lenses L. It can then be seen that line a projects onto epi-polar line EPa whose points A2 and X2' define with point X2 a triangle which is geometrically similar to triangle A, X1, X2. The other epi-polar lines can be constructed similarly, and it will be noted that they intersect at a point X2' which is a distance 2S (the spacing between X1 and X2) from X2.

The uncertainty in the position of the object O is illustrated in Figures 5 and 6. If the object is in position O then points 3 and 4 in the image of camera 1 correlate with points 3' and 4' respectively in the image of camera 2. If, however, the object is in a position O' then they correlate with points 5 and 6 respectively.

Referring now to Figure 5, if it is assumed for the sake of simplicity that the region S of the object O which defines the above points is rectangular (ie there are two points 3 and two points 4 vertically displaced from each other) then in position O the image region 3, 3, 4, 4 of camera 1 correlates with image region 8 of camera 2, whereas in position O' the image region 3, 3, 4, 4 of camera 1 correlates with image region 7 of camera 2. However, the vertices of both image regions 7 and 8 lie on epi-polar lines EP of points 3, 3, 4, 4. In general there will be a range of possible positions of the object corresponding to a range of correlated regions (eg 7 and 8), which however will all lie on the set of epi-polar lines EP whose positions are determined entirely by the camera geometry and are independent of the position of the object. The true position of the object can be found by an algorithm which selects various rectangular image regions whose vertices lie on the respective epi-polar lines EP and compares them with image region 3, 3, 4, 4.
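For the parallel-axis geometry of Figures 1 and 2 the epi-polar constraint takes a particularly simple algebraic form: equation (iv) gives x1 - x2 = 2Sa/Z with the y coordinate unchanged, so the candidate conjugate points for a given image point sweep out a line as the unknown depth Z varies. The following is a minimal Python sketch of that sweep (the names and the sample values are illustrative only):

```python
# Sketch: candidate conjugate points in camera 2 for a point (x1, y) in
# camera 1, obtained by sweeping the unknown depth Z through the depth
# of field.  Follows from equation (iv): x1 - x2 = 2*S*a/Z, with the
# y coordinate unchanged in this parallel-axis geometry.
def epipolar_candidates(x1, y, S, a, depths):
    return [(x1 - 2 * S * a / Z, y) for Z in depths]

# eg sweep depths from 100 to 500 units in unit steps:
candidates = epipolar_candidates(x1=1.5, y=0.8, S=40.0, a=8.0,
                                 depths=range(100, 501))
```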
The image region of camera 2 lying on these epi-polar lines which gives the best match is taken to be that which truly correlates with the image region 3, 3, 4, 4 of camera 1, and thereby defines the position of the object with reference to the camera arrangement and also enables the 3D coordinates of each point on image region S to be determined by the geometrical procedure of Figures 1 and 2.

Suitable algorithms for correlating much less closely constrained image regions of much less closely corresponding images (eg photographs taken during airborne surveys) are already known, eg Gruen's algorithm (see Gruen, A W, "Adaptive least squares correlation: a powerful image matching technique", S Afr J of Photogrammetry, Remote Sensing and Cartography, Vol 14 No 3 (1985), and Gruen, A W and Baltsavias, E P, "High precision image matching for digital terrain model generation", Int Arch Photogrammetry, Vol 25 No 3 (1986), p254) and particularly the "region-growing" modification thereto which is described in Otto and Chau, "Region growing algorithm for matching terrain images", Image and Vision Computing, Vol 7 No 2, May 1989, p83, all of which are incorporated herein by reference.

Essentially, Gruen's algorithm is an adaptive least squares correlation algorithm in which two image patches of typically 15 x 15 to 30 x 30 pixels are correlated (ie selected from larger left and right images in such a manner as to give the most consistent match between patches) by allowing an affine geometric distortion between coordinates in the images (ie stretching or compression in which originally adjacent points remain adjacent in the transformation) and allowing an additive radiometric distortion between the grey levels of the pixels in the image patches, generating an over-constrained set of linear equations representing the discrepancies between the correlated pixels and finding a least squares solution which minimises the discrepancies.

The Gruen algorithm is essentially an iterative algorithm and requires a reasonable approximation for the correlation to be fed in before it will converge to the correct solution. The Otto and Chau region-growing algorithm begins with an approximate match between a point in one image and a point in the other, utilises Gruen's algorithm to produce a more accurate match and to generate the geometric and radiometric distortion parameters, and uses the distortion parameters to predict approximate matches for points in the region of the neighbourhood of the initial matching point. The neighbouring points are selected by choosing the four adjacent points on a grid having a grid spacing of eg 5 or 10 pixels in order to avoid running Gruen's algorithm for every pixel. The first pair of points used to generate the initial approximate match can be found eg by searching for patterns or features in each image which match approximately and choosing pairs of clearly defined points within the pairs of matching patterns or features. This can be done by appropriate software or firmware without the need for human intervention.

By constraining the matching image region to lie on epi-polar lines as described herein, the algorithm can be made to converge much more quickly and with less uncertainty. This is an important advantage.

The procedure for deriving the complete 3D surface description of a surface region S in the present embodiment is summarised in Figure 7.
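Before turning to the individual steps of Figure 7, the adaptive least squares idea described above can be illustrated with a deliberately simplified Python sketch. It solves only for a translation and an additive radiometric shift by Gauss-Newton iteration, whereas Gruen's full algorithm also estimates the affine distortion parameters; NumPy, greyscale images as 2-D float arrays, and simple edge clamping are assumed.

```python
import numpy as np

def bilinear(img, x, y):
    """Bilinearly sample img at fractional coordinates (x, y), with
    simple clamping at the image border."""
    x0, y0 = np.floor(x).astype(int), np.floor(y).astype(int)
    fx, fy = x - x0, y - y0
    x0 = np.clip(x0, 0, img.shape[1] - 2)
    y0 = np.clip(y0, 0, img.shape[0] - 2)
    return ((1 - fy) * ((1 - fx) * img[y0, x0] + fx * img[y0, x0 + 1])
            + fy * ((1 - fx) * img[y0 + 1, x0] + fx * img[y0 + 1, x0 + 1]))

def alsc(left, right, cx, cy, guess, half=7, iters=20):
    """Refine an approximate match `guess` = (tx, ty) of the patch of
    `left` centred on integer pixel (cx, cy) within `right`, solving for
    a translation plus an additive radiometric shift r by Gauss-Newton
    least squares (a simplification of Gruen's affine model)."""
    ys, xs = np.mgrid[cy - half:cy + half + 1, cx - half:cx + half + 1]
    xs, ys = xs.astype(float).ravel(), ys.astype(float).ravel()
    template = bilinear(left, xs, ys)
    tx, ty, r = float(guess[0]), float(guess[1]), 0.0
    gy, gx = np.gradient(right.astype(float))   # image gradients for the Jacobian
    for _ in range(iters):
        sx, sy = xs + tx, ys + ty
        e = bilinear(right, sx, sy) + r - template   # residuals
        J = np.stack([bilinear(gx, sx, sy),          # d e / d tx
                      bilinear(gy, sx, sy),          # d e / d ty
                      np.ones_like(e)], axis=1)      # d e / d r
        step = np.linalg.lstsq(J, -e, rcond=None)[0]
        tx, ty, r = tx + step[0], ty + step[1], r + step[2]
        if np.abs(step[:2]).max() < 1e-3:            # converged
            break
    return tx, ty, r
```

In practice the coarse estimate fed in as `guess` would come from an epi-polar constrained search such as the one sketched after the Figure 7 steps below.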
In step 100, a pattern, eg 3, 3, 4, 4 (Figure 6), is selected in one camera's image plane and its vertices located and identified. The shape of the pattern boundary is preferably but not necessarily a rectangle, a parallelogram, an equilateral triangle, a regular hexagon or some other figure which can be repeated to cover substantially the entire area of the photodetector array.

In step 110, the epi-polar lines of these vertices (eg EP in Figure 6) are projected onto the other camera's image plane. These epi-polar lines converge to a point (which is not necessarily within the photosensitive detector of the other camera but whose coordinates in the other camera's image plane can be found by extrapolation if necessary) and define a range of pattern boundaries having a similar shape to the selected pattern boundary of step 100.

In step 120, sets of possible corresponding pattern vertices in the other camera's image plane are determined by selecting a pattern shape corresponding to the shape selected in step 100 (eg rectangular in the embodiment described above) and fitting its vertices to the above epi-polar lines. A range of eg rectangular patterns of different elongation and size results.

In step 130, each of these patterns is compared with the pattern selected in step 100 and the pattern with the closest match (in eg image density distribution, colour distribution, contrast distribution (ie distribution of rate of change of image density) or any weighted average of the above parameters) is selected (using eg Gruen's algorithm or a similar algorithm) as the pattern which best correlates with the pattern of step 100. The individual pixels within the matching patterns are then correlated with each other and the 3D coordinates of these pixels relative to the scanner can then be determined, as will be shown below with reference to Figures 12 and 13.

In step 140, the above steps 100 to 130 are repeated for all other patterns in the first camera's image plane and hence all the pixels in each camera's photodetector array are correlated.

It should be noted that in some cases none of the patterns selected in step 120 will correspond to the pattern selected in step 100, eg as a result of overhang of a region of the object which prevents a region of the object surface from being viewed by both cameras. In such a case the processor will report that no correlation is possible and will select a new pattern (step 100).

The size of the pattern selected in step 100 is not critical but should preferably be smaller than most surface features of the object in order to minimise the possibility of some pixels but not others in the selected pattern correlating with a pattern in the other camera's field of view, possibly leading to incorrect matching of the two patterns. The size could be selected by the user.
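A hedged sketch of steps 110 to 130 for the parallel-axis case, where the epi-polar lines reduce to horizontal scanlines: a patch around the selected pattern is compared, by normalised cross-correlation, at each candidate disparity d = 2Sa/Z along the scanline, and the best-scoring candidate is kept (and can then be refined, eg by the least squares sketch above). Normalised cross-correlation here stands in for the weighted density/colour/contrast comparison the text describes; the helper names and the no-border assumption are illustrative.

```python
import numpy as np

def ncc(a, b):
    """Normalised cross-correlation of two equally-sized patches."""
    a, b = a - a.mean(), b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_match(left, right, cx, cy, half, max_disp):
    """Slide along the horizontal epi-polar line in the right image and
    return the disparity whose patch best matches the left patch centred
    on (cx, cy).  Assumes cx - max_disp - half >= 0 (no border handling)."""
    t = left[cy - half:cy + half + 1, cx - half:cx + half + 1].astype(float)
    scores = []
    for d in range(max_disp + 1):          # candidate disparities d = 2*S*a/Z
        p = right[cy - half:cy + half + 1,
                  cx - d - half:cx - d + half + 1].astype(float)
        scores.append(ncc(t, p))
    d = int(np.argmax(scores))
    return d, scores[d]
```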
A variant of the above embodiment is shown in more detail in Figure 8. Hand-held scanner 13 is provided with an inertial sensor arrangement comprising a 3-axis vibratory gyroscope arrangement 11 (which detects rate of rotation about mutually perpendicular axes Ox, Oy and Oz) and an accelerometer 12 (which detects acceleration along the corresponding axes X, Y and Z). The output signals from these sensors are processed by a microprocessor arrangement µP which is similar to that shown in Figure 2 of our granted patent GB 2,292,605B (whose entire disclosure is hereby incorporated by reference) and the resulting position and orientation information is stored in a memory (such as a miniature hard disc M, which is optionally removable). The position and orientation information can also be output from the processor to a computer PC by a wire, radio or optical link via a bidirectional output port. Computer PC is provided with a conventional RAM, hard disk and microprocessor and is arranged to perform the processing already referred to in connection with Figures 3 to 7 as well as the processing described subsequently in connection with Figures 10 to 15. The arrangement is powered from a rechargeable battery B, and the acquisition of data from the cameras, accelerometer and gyroscope arrangement is controlled by a hand-operated trigger button TR.

The position and orientation data from the microprocessor can be used to guide the software which carries out the processing of Figure 7, particularly in the search for a matching pattern in step 130. However, it should be noted that once a match has been found and the images in the two cameras correlated, the position and orientation of the object O with reference to the scanner 13 is defined. This position and orientation is likely to be more accurate than that obtained from the gyroscopes and accelerometers because the latter are subject to drift. Hence it is preferred to use the position and orientation information from the gyroscopes and accelerometers only for guidance of the image correlation software.

In principle the entire surface of the object could be scanned in one scan, but in some cases it may be necessary to combine the surface portions derived from overlapping scans, and the processes disclosed in our co-pending PCT/GB95/01994 can be used for this purpose.

Scanner 13 also carries a light source LS (eg a stroboscopic light) for illuminating the object O and optionally a supplementary laser arrangement LA which can be used to derive additional depth information or other surface coordinate information by projecting an appropriate optical pattern onto the surface of object O; for example it could be a triangulation arrangement similar to that disclosed in Figure 1 or Figure 3 of our PCT/GB95/01994 or it could be a Moiré arrangement.

In a further major variant of the above embodiment, wherein laser arrangement LA is a triangulation arrangement, the optical axes of the cameras are arranged to cross at the centre of such a projected pattern, at a distance corresponding to the centre of the depth of field of triangulation, and the cameras are used to acquire a true monochrome or colour representation of the object either simultaneously with triangulation or alternately. This can be superimposed on the 3D image acquired by triangulation to allow the profile obtained by triangulation to be rendered using the image data from the cameras. In this manner surface features other than profile, eg surface printing and colours, can be captured.

The light source LS can be used to provide consistent incident light for the object O in the majority of ambient lighting conditions. It can be synchronised to image capture by the cameras in order to prevent interference with the triangulation process.
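The inertial guidance just described can be sketched as a simple dead-reckoning integrator. This is illustrative only: it assumes NumPy, small time steps, and, for brevity, omits the gravity compensation a real implementation would need. Its output is used only to seed the correlation search, since the correlated images themselves give the more accurate pose.

```python
import numpy as np

def integrate_imu(R, t, vel, omega, accel, dt):
    """One dead-reckoning step.  R, t: current orientation matrix and
    position; vel: current velocity; omega: rates of rotation about
    Ox, Oy, Oz (rad/s) from gyroscope arrangement 11; accel: accelerations
    along the body axes X, Y, Z from accelerometer 12; dt: time step (s).
    Gravity compensation is omitted for brevity."""
    wx, wy, wz = omega[0] * dt, omega[1] * dt, omega[2] * dt
    skew = np.array([[0.0, -wz,  wy],
                     [ wz, 0.0, -wx],
                     [-wy,  wx, 0.0]])
    R = R @ (np.eye(3) + skew)      # small-angle rotation update
    vel = vel + R @ accel * dt      # body acceleration rotated to world frame
    t = t + vel * dt
    return R, t, vel
```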
The position and attitude data obtained from the gyroscopes and accelerometers can be tagged to the image data from the cameras and triangulation arrangement to enable the data from the cameras and triangulation arrangement to be processed either in real time or off line, allowing areas which are poorly described by the triangulation signals to be corrected by the image data from the cameras, or vice versa.

In addition, surface features detected by the cameras (ie any distinct pattern of pixels) or groups of such surface features can be tracked as the scanner 13 is moved relative to the object O, and the trend of movement and/or distortion of such features and/or the trend in the movement or spacing of such surface features can be used to predict the next position and orientation of the scanner and hence to guide the software in correlating the images from the two cameras. This information can also be used to correct acceleration and rate of rotation data from the accelerometers and gyroscopes.

It should be noted that the above combination of stereoscopic imaging using cameras 1 and 2 and triangulation provides a system that is capable of producing range data for a wide variety of surface topologies in a wide range of lighting conditions. Errors introduced by one method can be largely corrected by data from the other, since the two methods are largely complementary in their applicability in different lighting conditions.

The post-processing methods disclosed in our PCT/GB95/01994 can be used to combine separately acquired surface regions obtained from the outputs of either a triangulation arrangement or a Moiré arrangement or the cameras 1 and 2. Such methods can also be used to provide position and orientation data for use in any of the processes described above requiring such methods.

In one embodiment a common processor is used a) to process stereoscopic data (eg by the process of Figure 7 of the present application) from the two cameras 1 and 2, b) to predict the position and/or orientation of the scanner relative to the object by tracking groups of features (which could be any distinct groups of pixels) between frames, and c) to process triangulation data (eg as obtained by the optical arrangement of Figure 1 or Figure 3 of our PCT/GB95/01994). These processes, designated S, F and T respectively, can be carried out sequentially in ratios varying with the frame rate and the processing power of the processor. For example at 60 frames/second the sequence could be:

STTFTTFTT, giving a 1:2:6 ratio

and at 30 frames/second the sequence could be:

STFTFT, giving a 1:2:3 ratio.
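This interleaving can be expressed as a simple per-frame dispatch table. The sketch below uses the two sequences given above; the mapping from frame rate to sequence is an assumption for illustration only.

```python
# S = stereoscopic correlation, F = feature tracking, T = triangulation.
SCHEDULES = {60: "STTFTTFTT",   # S:F:T = 1:2:6
             30: "STFTFT"}      # S:F:T = 1:2:3

def task_for_frame(frame_rate, frame_index):
    seq = SCHEDULES[frame_rate]
    return seq[frame_index % len(seq)]

# eg at 60 frames/second, frames 0..8 run S,T,T,F,T,T,F,T,T in turn.
```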
Figure 9 shows a further embodiment 13' which utilises only a single camera and a microprocessor µP to track an object O and to determine its shape. This embodiment utilises stereoscopic image processing analogous to that outlined in Figures 1 and 2 and utilised by the embodiment of Figures 1 to 8, but stereoscopically combines the images acquired at different positions and orientations of the scanner 13' relative to the object, using information on the position and orientation of the scanner acquired by tracking the image in its image plane I. This processing is carried out by a computer PC which receives the output data from the scanner.

This is illustrated in Figure 10. The object O (assumed to be polyhedral for ease of illustration) moves relative to the scanner from position O to position O' and the face ABC is projected onto image plane I first as image abc (position O) and then as image a'b'c' (position O'). The undeviated ray lines Aa', Bb' and Cc' can be constructed merely from the position of the image a'b'c' and a knowledge of the camera geometry. The triangle ABC of defined size and shape can be fitted to these ray lines in only one position and orientation beyond the optical centre of the lens L. Hence the position and orientation of the object can be tracked.

It should be noted that the above analysis assumes that the size and orientation of the face ABC of the object is initially known. However, this is not essential in order to track the movement of the scanner relative to the object. Thus the ray lines Aa, Bb and Cc are consistent with a smaller object having a face A'B'C' (as shown) located nearer the camera. However (assuming its initial orientation is as shown) the difference in detected orientation of the object between positions O' and O will be the same, and the distance of movement of the scanner will be scaled by the assumed object size. The shape of the object will not be affected.

Of course the initial position O of the object is not the only position consistent with ray lines Aa, Bb and Cc. For example these ray lines would also be consistent with a face A'BC. However, since the size and shape of the face would be invariant as the object and scanner move relative to each other, the orientation and position of hypothetical face A'BC in relation to (equally hypothetical) face ABC would also remain unchanged, and the relative movement of the object between positions O and O' as determined by tracking movement of the image abc/a'b'c' would not be affected.

In the above example the points abc are derived from corners of the object. However, in principle any set of three or more clearly defined points or groups of points on the object O can be tracked to track its movement and rotation relative to the scanner 13'. In practice the scanner would normally move and the object would be stationary.
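Fitting the triangle ABC of known side lengths to the three ray lines, as described for Figure 10, amounts to solving for the three depths along the rays. The sketch below (assuming NumPy) is one standard way to pose the problem, not necessarily the patent's own computation: writing the vertices as d_i * r_i for unit ray directions r_i, the three side-length constraints are solved for the depths by Newton's method.

```python
import numpy as np

def fit_triangle_to_rays(rays, lengths, d0=(1.0, 1.0, 1.0), iters=50):
    """rays: three direction vectors through the optical centre;
    lengths: the known side lengths (L12, L23, L31).  Solves
    |d_i*r_i - d_j*r_j|^2 = L_ij^2 for the depths d_i by Newton's method
    (a reasonable initial guess d0 selects the intended solution branch)."""
    r = [np.asarray(v, float) / np.linalg.norm(v) for v in rays]
    pairs = [(0, 1), (1, 2), (2, 0)]
    c = {p: float(r[p[0]] @ r[p[1]]) for p in pairs}   # cosines between rays
    d = np.array(d0, float)
    for _ in range(iters):
        f = np.array([d[i]**2 + d[j]**2 - 2*d[i]*d[j]*c[(i, j)] - L**2
                      for (i, j), L in zip(pairs, lengths)])
        J = np.zeros((3, 3))
        for k, (i, j) in enumerate(pairs):
            J[k, i] = 2*d[i] - 2*d[j]*c[(i, j)]
            J[k, j] = 2*d[j] - 2*d[i]*c[(i, j)]
        step = np.linalg.solve(J, -f)
        d = d + step
        if np.abs(step).max() < 1e-9:     # converged
            break
    return [d[k] * r[k] for k in range(3)]   # the 3-D vertex positions
```

As the text notes, scaling the assumed side lengths scales the recovered depths without changing the detected orientation or the shape of the object.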
Since the tracked points will gradually move off the edge of the photosensitive array of the camera, it is desirable to track at least four points in order to track the movement of the object relative to the scanner over distances which are large in comparison with the size of the photodetector array.

The above analysis illustrates the tracking of the object but does not enable its shape to be determined. The shape determination is illustrated in Figures 11 and 12.

Two patterns P are shown which are distorted versions of each other, as illustrated by the correlations 25 between their corresponding pixels. It is assumed that these patterns are each formed on the image plane of the camera of scanner 13' as it moves from a first to a second position relative to the object and (since they correspond) are derived from a common surface portion S of the object. The undeviated ray lines shown in Figure 12 passing through the optical centre of the lens L determine the position and orientation of surface region S, given that the position and orientation of the scanner relative to the object are known, from the analysis of Figure 10.

The movement M shown in Figure 12 is a rectilinear movement in the direction of the image plane of the camera, and illustrates the similarity in the analysis of a stationary stereoscopic camera arrangement and a moving monocular camera arrangement. In principle, however, the position and orientation of the surface could be determined just as easily from a known, non-rectilinear movement of the scanner.

The overall process of determining the shape of the object is illustrated in the flow diagram of Figure 13.

In step 200, four or more points in the image plane are selected. Alternatively, in some cases it may not be necessary to select more than three points.

In step 210, adjacent ones of the four or more points are connected to form a network of at least two triangles. (Alternatively it may be possible to rely on only three such points to form one triangle in certain cases.) One such triangle abc has already been referred to in connection with Figure 10. The second triangle enables the network to be tracked as one point moves off the edge of the photosensitive array, but this will not be necessary in certain cases.

The points are tracked (step 220). The undeviated ray lines are then projected from the tracked points (step 230).

In step 240 the network of four or more points (in some cases three or more points) is then fitted to the projected ray lines to define the new position of the object (eg O' in Figure 10).

In step 250 the point(s) which have moved out of the field of view are discarded and a new network is constructed (ie steps 200 and 210 are repeated). This step will not be necessary in certain cases. At the same time, the calculated position and orientation data are output.

In step 260, successive views are compared (cf Figures 1 and 2 and Figure 12) and, using the position and orientation output of step 250, used to obtain the 3D coordinates of the surface portion (eg S in Figure 12) of the object which is common to the two views.

Finally, in step 270, the surface portions of the object are combined to obtain the entire 3D coordinates of the object surface, using either the position and orientation data output from step 250 or using a post-processing method as disclosed in our co-pending PCT/GB95/01994.
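Step 260 can be sketched as classical two-view triangulation: with the relative motion known from step 250, each pair of correlated pixels defines two rays in a common frame, and the surface point is taken as the midpoint of their closest approach. In the minimal NumPy sketch below, R and t (mapping the second viewpoint into the first viewpoint's frame) would come from step 250; the names are illustrative.

```python
import numpy as np

def triangulate_pair(ray1, ray2_cam2, R, t):
    """ray1: ray direction in the first viewpoint's frame (from its
    optical centre); ray2_cam2: direction in the second viewpoint's frame;
    R, t: second-viewpoint pose in first-viewpoint coordinates.
    Returns the midpoint of closest approach of the two rays."""
    d1 = np.asarray(ray1, float)
    d1 = d1 / np.linalg.norm(d1)
    d2 = R @ (np.asarray(ray2_cam2, float) / np.linalg.norm(ray2_cam2))
    o1, o2 = np.zeros(3), np.asarray(t, float)
    # Solve [d1 -d2] [s u]^T ~= (o2 - o1) in least squares for the ray
    # parameters s and u of the closest points.
    A = np.stack([d1, -d2], axis=1)
    s, u = np.linalg.lstsq(A, o2 - o1, rcond=None)[0]
    return 0.5 * ((o1 + s * d1) + (o2 + u * d2))
```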
Step 260 (which is illustrated in Figures 11 and 12) is elaborated in the flow diagram of Figure 14. This flow diagram is equally applicable to the stereoscopic and monocular camera arrangements.

In step 300 a pattern (eg the left-hand pattern P in Figure 11) is selected.

In step 310 a corresponding pattern (eg the right-hand pattern P in Figure 11) is searched for, using variable scaling factors eg to compress/expand the image in the X and Y directions and possibly also to apply a linear correction to the image density in order to take into account oblique illumination of the object.

In step 320 the above step is repeated for other patterns and the points in the corresponding patterns are correlated (cf the mapping illustrated by lines 25 in Figure 11).

In step 330 the correlated points are used to construct the 3D surface region of the object region common to both cameras' fields of view (cf Figure 12) using the position and/or orientation data of step 250 of Figure 13. Additionally gyro data and/or acceleration data and/or geometrical data defining the geometry of the stereoscopic camera arrangement of Figures 1 to 8 (if used) can be utilised in this step.

In step 340 the process is repeated for other surface regions and the process then continues with step 270 (Figure 13).

In some cases it may be difficult to correlate the patterns P in step 310, eg as a result of lack of contrast in the images.

The aspect of the invention in which a fractal pattern (in which the small-scale structure and the large-scale structure share a common element) is provided (eg by optical projection) on the scanned object and the pattern is viewed from different angles to derive different images which are correlated to derive the 3-D surface coordinates of the region of the object on which the pattern is formed is illustrated by way of example only in Figures 15A and 15B.

A fractal pattern 500 (obtained by forming a cross and then superimposing a cross of half the size on each tip, and repeating this process in respect of each previous cross) is shown in Figure 15A after three iterations of the above process (in practice more iterations may be used to increase the detail and the density of coverage) and a pattern region P is selected. For the sake of simplicity it is assumed that the pattern is projected orthogonally onto a flat region of the object so that the image shown in Figure 15A is an undistorted representation of the original pattern.

If the pattern is viewed from another angle a distorted view (in this case vertically compressed and horizontally elongated) is obtained, as shown in Figure 15B. However, there is only one pattern region P' having the same features as pattern region P (in this case, having its top and left-hand boundaries on a line, having its right-hand boundary touching the tip of a line and its bottom boundary crossing two lines and touching the tip of another), and this pattern region can therefore be unambiguously located by searching for the above topological features by means of a suitable algorithm. Hence the entire pattern images (either left and right images as viewed by a stereoscopic camera arrangement, or successive images as viewed by a moving monocular camera) can be correlated, and hence the entire 3-D surface coordinates of the region of the object on which the pattern 500 is projected can be found.
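For illustration, the fractal pattern 500 can be generated exactly as the text describes: start with a cross, superimpose a cross of half the size on each of the four tips, and repeat for each newly added cross. A small Python sketch follows; the representation of the pattern as a list of line segments is an assumption for illustration.

```python
def fractal_crosses(centre=(0.0, 0.0), half=1.0, iterations=3):
    """Generate the cross fractal: each iteration places half-size
    crosses on the four tips of every cross added in the previous
    iteration.  Returns a list of ((x1, y1), (x2, y2)) line segments."""
    crosses = [(centre, half)]
    frontier = [(centre, half)]
    for _ in range(iterations):
        new = []
        for (cx, cy), h in frontier:
            for dx, dy in ((h, 0), (-h, 0), (0, h), (0, -h)):  # the four tips
                new.append(((cx + dx, cy + dy), h / 2))
        crosses += new
        frontier = new
    # Expand each cross into its horizontal and vertical segments.
    segments = []
    for (cx, cy), h in crosses:
        segments.append(((cx - h, cy), (cx + h, cy)))
        segments.append(((cx, cy - h), (cx, cy + h)))
    return segments
```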
A completely different arrangement for determining depth information, at least approximately, is disclosed in Figure 16. An array of point illumination sources LS projects light spots R onto the surface S of the object and a lens L images the illumination pattern formed by the spots on a photosensitive image plane 700. The size and/or shape of each imaged spot is measured by appropriate image-processing software. The size of a spot R is directly proportional to the distance of the corresponding illuminated surface region from the corresponding light source LS, and the shape (in this case the deviation from circularity) gives information about the local inclination and curvature of the surface. Hence at least an approximation to the surface coordinates can be derived from the sizes and/or shapes of the spots R, given the angle of each cone of illumination and other basic geometrical information. The light sources could suitably be optical fibres, and the arrangement is suitable for miniaturisation in eg an endoscope. The light sources need not be point sources but could for example be line sources, whereby the width of each illumination stripe on the surface S is related to the distance from the light source. Alternatively, dark region(s) could be projected onto the surface S using appropriate mask(s).

Background information on the geometry of an alternative range sensor using a mask is given by Lorange et al in Procs. Intl. Conference on Recent Advances in 3-D Digital Imaging and Modelling, May 12th-15th 1997, pp 51-58 at p 52 (pub. IEEE Computer Society Press, Los Alamitos, California, USA). Background information on constructing a 3-D representation of a surface from silhouettes taken at different angles (these being combined in the correct manner by reconstructing a circular reference pattern initially provided beneath the object) is disclosed in the above Conference Proceedings by Niem et al at pp 173-180. Both these articles are incorporated by reference and provide further techniques for obtaining at least an initial approximation to the surface coordinates which can be used prior to or in combination with any of the novel techniques disclosed herein, eg to assist in the process of Figure 14. Any other initial approximation could alternatively be used in the process of eg Figure 14.
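The Figure 16 geometry gives a direct, if approximate, depth cue. For a source emitting a cone of half-angle theta, the spot diameter on a perpendicular surface grows linearly with distance, and the foreshortening ratio of an elliptical spot indicates the local tilt. A minimal sketch under small-spot and perpendicular-incidence approximations (angles in radians; the function names are illustrative):

```python
import math

def distance_from_spot(diameter, cone_half_angle):
    """Distance from the source to the surface for a given spot diameter:
    diameter = 2 * z * tan(theta)  =>  z = diameter / (2 * tan(theta))."""
    return diameter / (2.0 * math.tan(cone_half_angle))

def inclination_from_ellipse(minor, major):
    """Tilt of the surface, in the small-spot limit: a circular spot
    elongates along the tilt direction so that minor/major = cos(tilt)."""
    return math.acos(min(1.0, minor / major))
```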
Referring to Figure 17, an endoscope arranged to provide a 3-D representation of a body canal or cavity is disclosed, having a head about 10 mm in diameter with a central region (of about 6 mm diameter) in which two CCD photosensitive arrays 701 and 702 are disposed. The endoscope is also provided with four regularly circumferentially distributed channels 704 for accommodating surgical instruments and the like, as is standard. Cables 703 are disposed regularly about the peripheral region of the endoscope and are used for bending the endoscope to guide it through a body canal, as is standard.

Referring to Figure 18, a fibre-optic bundle 710 terminates at the front face of the head 705 and carries a light beam from an illumination source at the proximal end of the endoscope (not shown). The beam is projected onto the surface S of the region to be viewed and appears as an array of overlapping circular areas, only one of which is shown. The size and distortion from circularity of these illuminated areas can be detected by the detector arrays 701 and 702 to provide an initial estimate of the 3-D coordinates of the surface region S (indeed in some cases satisfactory information could be provided by this technique alone, so that only one photosensitive array would be needed). However, in the stereoscopic arrangement illustrated, the illuminated region S is imaged separately by the two photosensitive arrays and the images are correlated as described above with reference to Figure 14 to find the accurate 3-D coordinates of the interior of the body canal. The image is focussed by a graded index (GRIN) rod or fibre 711.

In a variant, the illumination source is coupled to one or more selected fibres and at least these fibres in bundle 710 are provided with lenses 708 (or GRIN portions) which focus an optical beam as a spot on the interior surface of the body canal as shown. The focussed spot can be scanned, eg in raster fashion, either by coupling the illumination source to adjacent fibres in succession to displace the beam relative to the exit face of the fibre bundle or, less preferably, by moving the optic fibre bundle by any suitable means, eg piezoelectrically. The resulting spot can then be tracked and imaged by the photosensitive arrays and the coordinates on the respective arrays used to derive the 3-D coordinates of the surface S, as explained with reference to Figures 1 and 2. Alternatively a single photodetector could be employed, as in our PCT/GB95/01994, preferably satisfying the Scheimpflug condition.

Referring to Figure 19, which is based on Figure 1 of WO 95/14952, a conventional monocular rigid endoscope 401 having an objective lens 402 at its distal tip and an ocular 403 at its proximal end is combined with a video camera 404 which focusses light exiting from ocular 403 onto the photosensitive image plane I of a photosensitive detector (such as a CCD for example) by means of a focussing lens L. In practice lens L will normally be a multi-element lens and the exposure will normally be controlled by an iris (not shown). As described thus far the arrangement is conventional. Alternatively the camera 404 may be a cine camera, in which case the light from lens L is focussed onto the photosensitive image plane of cine film.

In another embodiment of the present invention the distal head of the endoscope is provided with miniature inertial sensors 11 and 12 similar to those shown in Figure 8 which measure angular rotation about three orthogonal axes and linear acceleration along the three orthogonal axes respectively. The signals from these sensors are integrated as in the embodiment of Figure 8 to derive the instantaneous position and orientation of the endoscope head.

The lens L is used to obtain stereoscopic views by alternately occluding the light exiting from the left and right regions of the ocular 403 with a shutter means 405, preferably at a rapid rate such as 50 times per second (for video), under the control of a signal from processing circuitry 408. The shutter means 405 may be provided in front of lens L as shown, between different lens elements of a multi-element lens (not illustrated), or may be located between the lens L and photosensitive image plane I, for example. In particular the shutter may be an LCD shutter printed on a surface of lens L.
The rays blocked by shutter means 405 are preferably parallel as shown but may alternatively converge or diverge. Particularly if the rays are converging, the shutter should preferably be located close to the lens.
The stereoscopic image pairs can be tagged with position information and orientation information as in the embodiment of Figure 8 in order to obtain a 3-D representation of a body canal in which the endoscope is inserted, as previously described in connection with Figure 8.

In a variant, the shutter 405 is omitted and the endoscope head is manipulated to change its position and/or orientation, the resulting views being tagged with position and/or orientation information in order to enable the required 3-D representation to be obtained.

In a further variant the inertial sensors are omitted and the 3-D representation is built up by appropriate processing of monocular images as in the embodiment of Figure 9.

Preferably a hood (not shown) is provided at the interface of camera 404 and endoscope 401 to prevent stray light entering the video camera, or the camera and endoscope are integral. The endoscope may be a laparoscope, a borescope, a cystoscope or an arthroscope for example. The user may pull focus or zoom (assuming the lens has this facility) without affecting the stereoscopic imaging.

A switching output (synchronised with the shutter 405) and a video output may be fed to a standard stereoscopic viewing arrangement as shown in Figure 1 of WO 95/14952.

The endoscope aspect of the invention is also applicable to the arrangements shown in GB-A-2,268,283, US 5,222,477 and US 5,12,650 for example. It is not limited to any of the image processing methods disclosed herein and can be used in conjunction with eg the scanned pattern and depth detection arrangements of our co-pending PCT/GB95/01994, for example using eg an optic fibre array (not shown) in the endoscope head to project the pattern.

It will be apparent that a number of inventions have been disclosed. The inventive features disclosed herein are all considered to be independent but to be optionally combinable where indicated. Preferred features of the invention are defined in the dependent claims.

Claims (37)

1. An arrangement for acquiring the 3D shape of an object (O), the arrangement comprising:
a) an image acquisition arrangement (13) for acquiring groups of images of overlapping surface regions of the object, the image acquisition arrangement being freely movable relative to the object and comprising two or more mutually offset cameras (L, I) with overlapping fields of view;
b) inertial sensing means (11, 12) arranged to detect movement of said image acquisition arrangement relative to the object and to generate a motion output signal representative of such movement;
c) image processing means (PC) arranged to correlate features (A1B1C1D1; A2B2C2D2) of the respective images of each group derived from a feature (ABCD) of the object common to the group and to derive in respect of each group a set of output data representing the 3D shape of a surface region of the object; and
d) combining means (PC) arranged to combine the sets of output data into a common set of output data in dependence upon both said output data derived by said image processing means and said motion output signal generated by said inertial sensing means.
2. An arrangement as claimed in claim 1 wherein said inertial sensing means (11, 12) is arranged to detect rotation of the image acquisition arrangement (13) relative to the object (O) and to generate a rotation output signal and said combining means (PC) is arranged to combine said sets of output data in dependence upon both said rotation output signal and translation information derived from the image acquisition arrangement.
3. An arrangement as claimed in claim 1 or claim 2 wherein said inertial sensing means (11, 12) is arranged to generate a translation output signal and said combining means (PC) is arranged to combine said sets of output data in dependence upon both said translation output signal and translation information derived from the output data of said processing means (PC).
4. An arrangement as claimed in any preceding claim which comprises an endoscope (401) arranged to acquire the 3D shape of a body cavity, the inertial sensing means (11, 12) being mounted in the head of the endoscope.
5. An arrangement as claimed in any preceding claim wherein the mutually offset cameras (405, L, I) have a common objective lens (402) and their offset is provided by a shutter means (405) which selectively occludes offset ray bundles from the object.
6. An arrangement as claimed in any of claims 1 to 4 wherein the respective optical axes of the cameras (L, I) are coplanar.
7. An arrangement as claimed in claim 6 wherein said optical axes are parallel.
8. An arrangement for acquiring the 3D shape of an object (O), the arrangement comprising:
a) a camera (13') which is freely movable relative to the object so as to acquire overlapping images of the object from different viewpoints;
b) inertial sensing means (11, 12) arranged to detect movement of said camera relative to the object and to generate a motion output signal representative of such movement;
c) image processing means (PC) arranged to correlate features (A1B1C1D1; A2B2C2D2) of the respective images derived from a common feature (ABCD) of the object which are acquired by the camera from different viewpoints and to derive output data representing the 3D shape of a surface region of the object; and
d) combining means (PC) arranged to combine sets of such output data into a common set of output data in dependence upon both said output data derived by said processing means and said motion output signal generated by said inertial sensing means.
9. An arrangement as claimed in claim 8 wherein said inertial sensing means (11, 12) is arranged to detect rotation of the camera (L, I) relative to the object (O) and to generate a rotation output signal and said combining means (PC) is arranged to combine said sets of output data in dependence upon both said rotation output signal and translation information derived from the overlapping images.
10. An arrangement as claimed in claim 8 or claim 9 wherein said inertial sensing means (11, 12) is arranged to generate a translation output signal and said combining means (PC) is arranged to combine said sets of output data in dependence upon both said translation output signal and translation information derived from the output data of said image processing means (PC).
11. A method of processing overlapping 2D images of an object (O) acquired from different spaced-apart viewpoints relative to the object, the 2D images being associated with pairs of corresponding regions (A1B1C1D1; A2B2C2D2), each such pair of corresponding regions comprising a region of one 2D image and a region of another 2D image each derived from a region of the object (ABCD) common to the images, there being a mutual offset between the corresponding regions of each pair which is determined by the relative rotation (if any) and translation between the viewpoints, the method comprising the step of digitally processing the 2D images to correlate the corresponding regions by utilising a plurality of epipolar lines (EPa-EPd) corresponding to one such region in one such image to constrain a search for the corresponding region in the other image.
12. A method as claimed in claim 11 wherein a 3D reconstruction of the common region of the object (O) is generated by virtually projecting the images in simulated 3D space with an offset between the projected images corresponding to said mutual offset.
13. A method as claimed in claim 11 or claim 12 wherein said mutual offset is derived by tracking (220) a group of points (a, b, c; a', b', c') in the field of view of a camera (L, I) arranged to acquire said overlapping 2D images, virtually projecting (230) into simulated 3D space ray lines from the tracked points as they appear in one of the images and connecting (240) an arbitrary network to the ray lines, virtually projecting (230) into simulated 3D space further ray lines from the tracked points as they appear in the other of the images and offsetting said network to connect it to said further ray lines, whereby the offset of the network corresponds to said mutual offset.
14. A method as claimed in any of claims 11 to 13 wherein said mutual offset is determined by an inertial sensing means (11, 12) arranged to detect the relative motion between the object and the viewpoints.
15. A method as claimed in claim 11 or claim 12 wherein said mutual offset is predetermined.
16. A method as claimed in claim 15 wherein said images are acquired by an assembly (13) of spaced apart cameras, the assembly being freely movable relative to the object (O).
17. An image processing arrangement for acquiring the 3D shape of an object (O) from overlapping images of the object acquired from mutually offset viewpoints, the arrangement comprising:

a) tracking means (PC) arranged to determine the offset between said viewpoints by optically tracking movement of features between the images;

b) correlation means (PC) arranged to correlate features (P/A1B1C1D1; P/A2B2C2D2) of the respective images derived from a common feature (ABCD) of the object which are acquired from said offset viewpoints, said correlation means being arranged to derive a group of two or more epipolar lines (EPa-EPd) in one image corresponding to a region in another image and to correlate that region with the corresponding region of said one image by utilising said group of epipolar lines to constrain a search for said corresponding region; and

c) 3D reconstruction means (PC) responsive to said tracking means and correlation means and arranged to derive said 3D shape from said offset and correlated features.
18. An arrangement as claimed in claim 17 wherein said 3D reconstruction means (PC) comprises means for virtually projecting the images with an offset corresponding to said mutual offset and with the correlated features (P/A1B1C1D1; P/A2B2C2D2) intersecting in simulated 3D space.
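Virtually projected rays rarely intersect exactly, so a common stand-in for the intersection of claim 18 is the midpoint of the shortest segment between the two rays through a correlated feature pair. A minimal sketch of that midpoint triangulation, under the assumption that camera centres and unit ray directions are already in a common frame:

```python
import numpy as np

def midpoint_triangulate(o1, d1, o2, d2):
    """3D point where two virtually projected rays 'intersect'.

    o1, o2 : offset camera centres.
    d1, d2 : unit ray directions through a pair of correlated features.
    Returns the midpoint of the shortest segment between the rays.
    """
    w = o2 - o1
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    p, q = d1 @ w, d2 @ w
    denom = a * c - b * b            # zero only for parallel rays
    s = (c * p - b * q) / denom      # parameter along ray 1
    t = (b * p - a * q) / denom      # parameter along ray 2
    return ((o1 + s * d1) + (o2 + t * d2)) / 2.0
```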
19. An arrangement as claimed in claim 17 or claim 18 wherein said tracking means (PC) is arranged to virtually project into simulated 3D space ray lines from tracked points (a, b, c) as they appear in one of the images and to connect an arbitrary network to the ray lines, to virtually project into simulated 3D space further ray lines from the tracked points (a', b', c') as they appear in the other of the images and to offset said network to connect it to said further ray lines whereby the offset of the network corresponds to said mutual offset.
20. An arrangement as claimed in any of claims 17 to 19 which further comprises an image acquisition arrangement (13/13'/401) for acquiring said overlapping images, said image acquisition arrangement being freely movable with respect to said object (O).
21. An arrangement as claimed in any of claims 17 to 20 which comprises a camera (1, 1) arranged to acquire said images and wherein a computer (PC) arranged to receive said images is provided with a program which implements said tracking means, correlation means and 3D reconstruction means.
22. An arrangement as claimed in claim 21 wherein said camera (1, 1) is a digital camera.
23. A method as claimed in any of claims 11 to 16 or an arrangement as claimed in any of claims 17 to 22 wherein said regions (P/A1B1C1D1; P/A2B2C2D2) are areas.
24. An arrangement for acquiring the 3D shape of an object, the arrangement comprising:

a) an image acquisition arrangement (13/13'/401) for acquiring images of overlapping surface regions of the object (O);

b) means for providing a fractal pattern (500/500A) on said surface regions; and

c) image processing means (PC) arranged to correlate features (P, P') of the fractal pattern between said images and to derive the 3D surface coordinates of the region of the object on which the pattern is formed.
25. An arrangement as claimed in claim 24 wherein said image processing means (PC) is arranged to correlate said features by identifying pattern features (P, P') of common topology in the overlapping images.
26. A method of acquiring the 3D shape of an object, the method comprising:

a) providing a fractal pattern (500, 500A) on a surface region of the object;

b) acquiring overlapping images of said surface region; and

c) correlating features (P, P') of the fractal pattern between said images and deriving the 3D surface coordinates of said surface region.
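One hypothetical way to realise the common-topology matching of claims 25 and 26 is to give each detected pattern feature a rotation- and scale-invariant signature built from its nearest neighbours, then pair features with the most similar signatures across the overlapping images. The signature design below is an assumption for illustration, not taken from the patent.

```python
import numpy as np

def topology_signature(points, i, k=5):
    """Rotation- and scale-invariant signature of pattern feature i,
    built from its k nearest neighbours: normalised neighbour distances
    plus the sorted angular gaps between neighbour bearings."""
    d = np.linalg.norm(points - points[i], axis=1)
    nbrs = np.argsort(d)[1:k + 1]
    rel = points[nbrs] - points[i]
    radii = np.sort(d[nbrs] / d[nbrs].max())
    ang = np.sort(np.arctan2(rel[:, 1], rel[:, 0]))
    gaps = np.sort(np.diff(np.concatenate([ang, [ang[0] + 2 * np.pi]])))
    return np.concatenate([radii, gaps])

def match_features(pts1, pts2, k=5):
    """Pair each feature in image 1 with the image-2 feature whose
    neighbourhood topology is most similar."""
    sig2 = [topology_signature(pts2, j, k) for j in range(len(pts2))]
    pairs = []
    for i in range(len(pts1)):
        s = topology_signature(pts1, i, k)
        j = int(np.argmin([np.linalg.norm(s - t) for t in sig2]))
        pairs.append((i, j))
    return pairs
```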
27. An arrangement for determining depth information, comprising beam-forming means arranged to illuminate a surface with an optical pattern, the size of each of the features of the pattern on the illuminated surface being dependent on the distance of that feature from the beam-forming means, means for detecting the size distribution of said features and processing means arranged to derive the distance distribution from said size distribution and the optical characteristics of the beam-forming means.
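For claim 27, if each pattern feature is assumed to be cast by a cone of light diverging at a known half-angle from the beam-forming means, its diameter on the surface grows linearly with range, giving z ≈ d / (2 tan θ). A minimal sketch under that illustrative optical model (not the patent's own optics):

```python
import numpy as np

def distance_from_spot_size(diameter, half_angle_rad):
    """Range of one pattern feature from the beam-forming means, for a
    cone of light with known divergence half-angle."""
    return diameter / (2.0 * np.tan(half_angle_rad))

def distance_distribution(diameters, half_angle_rad):
    """Map the detected size distribution to a distance distribution."""
    return [distance_from_spot_size(d, half_angle_rad) for d in diameters]
```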
28. An arrangement for determining shape information, comprising beam-forming means (LS/708) arranged to illuminate a surface (S) with an optical pattern (R), the shape of each of the features of the pattern on the illuminated surface being dependent on the gradient of the corresponding region of the surface relative to the beam-forming means, means (PC) for detecting the gradient distribution of said features and processing means (PC) arranged to derive the shape information from said gradient distribution and the optical characteristics of the beam-forming means.
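For claim 28, a small circular spot thrown obliquely onto a plane tilted at angle t is stretched into an ellipse whose minor-to-major axis ratio is cos t, so the local surface gradient can be read from the measured feature shape. Again an illustrative small-spot model, not necessarily the patent's:

```python
import numpy as np

def tilt_from_ellipse(major_axis, minor_axis):
    """Local surface tilt (radians) under one pattern feature: a small
    circular spot on a plane tilted at angle t appears as an ellipse
    with minor/major axis ratio cos(t)."""
    return np.arccos(np.clip(minor_axis / major_axis, 0.0, 1.0))
```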
29. An arrangement as claimed in claim 27 or claim 28 wherein the beam-forming means (LS/708) comprises an array of optic fibres (710, 711).
30. An endoscope (401) comprising an arrangement as claimed in any of claims 27 to 29 and carrying said beam-forming means (708) in its head (705).
31. An endoscope having an optical arrangement for sensing the interior surface (S) of a body canal or cavity, the arrangement comprising an array (710) of optic fibres arranged to form a pattern of illumination on said surface, means (711) for imaging said pattern on a photosensitive image plane (701), and means (PC) for processing the image to derive depth information in respect of the illuminated surface.
32. An endoscope as claimed in claim 31 wherein the pattern of illumination is a scanned pattern formed by coupling illumination to adjacent optical fibres (706) in succession.
33. An endoscope as claimed in any of claims 30 to 32 having inertial sensing means (11, 12) arranged to determine the position and/or orientation of the endoscope head.
34. An image processing arrangement substantially as described hereinabove with reference to Figures 3 to 7 or Figures 10, 13 and 14 or Figures 15 and 15A of the accompanying drawings.
35. An arrangement for acquiring the 3D shape of an object, the arrangement being substantially as described hereinabove with reference to Figure 8 or Figure 9 in conjunction with Figures 3 to 7 or in conjunction with Figures 10, 13 and 14 of the accompanying drawings.
36. An arrangement for acquiring the 3D shape of an object, substantially as described hereinabove with reference to Figure 16 of the accompanying drawings.
37. An endoscope substantially as described hereinabove with reference to Figures 17 and 18 or with reference to Figure 19 of the accompanying drawings.
AU86362/98A 1997-07-31 1998-07-31 Scanning apparatus and methods Abandoned AU8636298A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
GBGB9716240.8A GB9716240D0 (en) 1997-07-31 1997-07-31 Scanning apparatus and methods
GB9716240 1997-07-31
PCT/GB1998/002307 WO1999006950A2 (en) 1997-07-31 1998-07-31 Scanning apparatus and methods

Publications (1)

Publication Number Publication Date
AU8636298A true AU8636298A (en) 1999-02-22

Family

ID=10816794

Family Applications (1)

Application Number Title Priority Date Filing Date
AU86362/98A Abandoned AU8636298A (en) 1997-07-31 1998-07-31 Scanning apparatus and methods

Country Status (6)

Country Link
EP (1) EP1000318A2 (en)
JP (1) JP2001512241A (en)
AU (1) AU8636298A (en)
CA (1) CA2299426A1 (en)
GB (2) GB9716240D0 (en)
WO (1) WO1999006950A2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112254674A (en) * 2020-10-15 2021-01-22 天目爱视(北京)科技有限公司 Close-range intelligent visual 3D information acquisition equipment

Families Citing this family (35)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1190211B1 (en) 1999-04-30 2005-04-06 Christoph Dr. Wagner Method for optically detecting the shape of objects
GB2368117A (en) * 2000-05-08 2002-04-24 Neutral Ltd Optical viewing arrangement
DE10025741C2 (en) * 2000-05-19 2002-06-13 Fraunhofer Ges Forschung Method for determining the spatial coordinates of objects and / or their change over time
US7359748B1 (en) 2000-07-26 2008-04-15 Rhett Drugge Apparatus for total immersion photography
US8625854B2 (en) 2005-09-09 2014-01-07 Industrial Research Limited 3D scene scanner and a position and orientation system
US20080195316A1 (en) * 2007-02-12 2008-08-14 Honeywell International Inc. System and method for motion estimation using vision sensors
DE102007022361A1 (en) * 2007-05-04 2008-11-06 Friedrich-Schiller-Universität Jena Device and method for the contactless detection of spatial coordinates of a surface
WO2009027585A1 (en) * 2007-08-31 2009-03-05 Institut De Recherche Sur Les Cancers De L'appareil Digestif Ircad Flexible device for determining a three-dimensional shape
FR2922640B1 (en) * 2007-10-19 2010-01-08 Ct Tech Cuir Chaussure Maroqui METHOD AND DEVICE FOR THREE-DIMENSIONAL RECONSTRUCTION OF THE INTERNAL SURFACE OF A SHOE
AT506110B1 (en) 2007-12-12 2011-08-15 Nextsense Mess Und Pruefsysteme Gmbh DEVICE AND METHOD FOR DETECTING BODY MEASURE DATA AND CONTOUR DATA
US8213706B2 (en) 2008-04-22 2012-07-03 Honeywell International Inc. Method and system for real-time visual odometry
US8238612B2 (en) 2008-05-06 2012-08-07 Honeywell International Inc. Method and apparatus for vision based motion determination
JP5206344B2 (en) * 2008-11-14 2013-06-12 オムロン株式会社 Optical measuring device
KR101166719B1 (en) 2008-12-22 2012-07-19 한국전자통신연구원 Method for calculating a limitless homography and method for reconstructing architecture of building using the same
FR2950138B1 (en) * 2009-09-15 2011-11-18 Noomeo QUICK-RELEASE THREE-DIMENSIONAL SCANNING METHOD
US8760447B2 (en) 2010-02-26 2014-06-24 Ge Inspection Technologies, Lp Method of determining the profile of a surface of an object
EP3403568B1 (en) * 2010-03-30 2023-11-01 3Shape A/S Scanning of cavities with restricted accessibility
DE102010064320B4 (en) * 2010-12-29 2019-05-23 Siemens Healthcare Gmbh Optical pointer for a surgical assistance system
US9013469B2 (en) 2011-03-04 2015-04-21 General Electric Company Method and device for displaying a three-dimensional view of the surface of a viewed object
US9984474B2 (en) 2011-03-04 2018-05-29 General Electric Company Method and device for measuring features on or near an object
US9875574B2 (en) 2013-12-17 2018-01-23 General Electric Company Method and device for automatically identifying the deepest point on the surface of an anomaly
US10019812B2 (en) 2011-03-04 2018-07-10 General Electric Company Graphic overlay for measuring dimensions of features using a video inspection device
US10586341B2 (en) 2011-03-04 2020-03-10 General Electric Company Method and device for measuring features on or near an object
US10157495B2 (en) 2011-03-04 2018-12-18 General Electric Company Method and device for displaying a two-dimensional image of a viewed object simultaneously with an image depicting the three-dimensional geometry of the viewed object
ITTO20130202A1 (en) * 2013-03-15 2014-09-16 Torino Politecnico DEVICE AND THREE-DIMENSIONAL SCANNING SYSTEM, AND RELATIVE METHOD.
US9818039B2 (en) 2013-12-17 2017-11-14 General Electric Company Method and device for automatically identifying a point of interest in a depth measurement on a viewed object
US9600928B2 (en) 2013-12-17 2017-03-21 General Electric Company Method and device for automatically identifying a point of interest on the surface of an anomaly
US9842430B2 (en) 2013-12-17 2017-12-12 General Electric Company Method and device for automatically identifying a point of interest on a viewed object
JP2015185947A (en) * 2014-03-20 2015-10-22 株式会社東芝 imaging system
US9903950B2 (en) * 2014-08-27 2018-02-27 Leica Geosystems Ag Multi-camera laser scanner
GB2572755B (en) * 2018-04-05 2020-06-10 Imagination Tech Ltd Matching local image feature descriptors
JP7316762B2 (en) 2018-04-27 2023-07-28 川崎重工業株式会社 Surgical system and method of controlling surgical system
EP3803459A4 (en) * 2018-05-30 2022-03-16 VI3D Labs Inc. Three-dimensional surface scanning
US11676293B2 (en) * 2020-11-25 2023-06-13 Meta Platforms Technologies, Llc Methods for depth sensing using candidate images selected based on an epipolar line
CN114485479B (en) * 2022-01-17 2022-12-30 吉林大学 Structured light scanning and measuring method and system based on binocular camera and inertial navigation

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4645348A (en) * 1983-09-01 1987-02-24 Perceptron, Inc. Sensor-illumination system for use in three-dimensional measurement of objects and assemblies of objects
JPS62501881A (en) * 1985-02-13 1987-07-23 ユニバ−シテイ− オブ クイ−ンスランド Digital image analysis device
JPH0749937B2 (en) * 1988-03-22 1995-05-31 工業技術院長 Shape measurement method
GB2230118B (en) * 1989-04-05 1992-12-23 Intel Corp Microprocessor providing selectable alignment checking on memory references
US5054907A (en) * 1989-12-22 1991-10-08 Phoenix Laser Systems, Inc. Ophthalmic diagnostic apparatus and method
IT1245014B (en) * 1991-01-29 1994-09-13 Dea Spa SYSTEM FOR THE THREE-DIMENSIONAL MEASUREMENT OF SCULPTED SURFACES TO MATHEMATIZE
US5309222A (en) * 1991-07-16 1994-05-03 Mitsubishi Denki Kabushiki Kaisha Surface undulation inspection apparatus
US5383013A (en) * 1992-09-18 1995-01-17 Nec Research Institute, Inc. Stereoscopic computer vision system
GB2292605B (en) * 1994-08-24 1998-04-08 Guy Richard John Fowler Scanning arrangement and method
US5559334A (en) * 1995-05-22 1996-09-24 General Electric Company Epipolar reconstruction of 3D structures
US5818959A (en) * 1995-10-04 1998-10-06 Visual Interface, Inc. Method of producing a three-dimensional image from two-dimensional images
JPH09187038A (en) * 1995-12-27 1997-07-15 Canon Inc Three-dimensional shape extract device


Also Published As

Publication number Publication date
GB2328280B (en) 2002-03-13
WO1999006950A3 (en) 1999-04-22
GB9816756D0 (en) 1998-09-30
JP2001512241A (en) 2001-08-21
GB2328280A (en) 1999-02-17
CA2299426A1 (en) 1999-02-11
WO1999006950A2 (en) 1999-02-11
EP1000318A2 (en) 2000-05-17
GB9716240D0 (en) 1997-10-08

Similar Documents

Publication Publication Date Title
AU8636298A (en) Scanning apparatus and methods
US11629955B2 (en) Dual-resolution 3D scanner and method of using
EP3650807B1 (en) Handheld large-scale three-dimensional measurement scanner system simultaneously having photography measurement and three-dimensional scanning functions
US10088296B2 (en) Method for optically measuring three-dimensional coordinates and calibration of a three-dimensional measuring device
CN104335005B (en) 3D is scanned and alignment system
US7170677B1 (en) Stereo-measurement borescope with 3-D viewing
US8032327B2 (en) Auto-referenced sensing method for three-dimensional scanning
US20050128196A1 (en) System and method for three dimensional modeling
KR101395234B1 (en) Method for acquiring three-dimensional images
US20080101688A1 (en) 3D photogrammetry using projected patterns
US20060119848A1 (en) Methods and apparatus for making images including depth information
JP3409873B2 (en) Object input device
JP2000222585A (en) Method and device for detecting and recognizing motion, and recording medium
D'Apuzzo Automated photogrammetric measurement of human faces
Urquhart The active stereo probe: the design and implementation of an active videometrics system
US20240175677A1 (en) Measuring system providing shape from shading
Mao et al. Improved area-based stereo matching using an image segmentation approach for 3-D facial imaging
US20230355319A1 (en) Methods and systems for calibrating instruments within an imaging system, such as a surgical imaging system
Shokouhi Automatic digitisation and analysis of facial topography by using a biostereometric structured light system

Legal Events

Date Code Title Description
MK5 Application lapsed section 142(2)(e) - patent request and compl. specification not accepted