GB2253052A - Measurement of three-dimensional surfaces


Info

Publication number
GB2253052A
GB2253052A (application GB9126856A)
Authority
GB
United Kingdom
Prior art keywords: points, fact, image, block, measurement
Legal status: Granted
Application number
GB9126856A
Other versions: GB2253052B (en), GB9126856D0 (en)
Inventor
Roberto Maiocco
Caterina Cassolino
Guglielmo Raho
Current Assignee
DEA Digital Electronic Automation SpA
Original Assignee
DEA Digital Electronic Automation SpA
Priority date
Application filed by DEA Digital Electronic Automation SpA
Publication of GB9126856D0
Publication of GB2253052A
Application granted
Publication of GB2253052B
Current status: Expired - Fee Related

Classifications

    • G: Physics
    • G01: Measuring; Testing
    • G01B: Measuring length, thickness or similar linear dimensions; measuring angles; measuring areas; measuring irregularities of surfaces or contours
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B11/24: Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/002: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G01B11/005: Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates; coordinate measuring machines
    • G06: Computing; Calculating or Counting
    • G06T: Image data processing or generation, in general
    • G06T7/00: Image analysis
    • G06T7/50: Depth or shape recovery
    • G06T7/55: Depth or shape recovery from multiple images
    • G06T7/593: Depth or shape recovery from multiple images; from stereo images

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • A Measuring Device Byusing Mechanical Method (AREA)
  • Length Measuring Devices With Unspecified Measuring Means (AREA)

Abstract

A television camera (10) is mounted in a measurement machine (5) so as to have complete freedom of positioning within the measurement volume of the machine, thereby replacing the manual taking of points and the definition of trajectories of traditional feeler-based measurement systems with a simple imaging of the surface portion to be measured. The system comprises an image processor (16) which controls the capture of two different images of the surface to locate the lines of strong contrast. A stereo processor (17) then provides for correlation between the points of the two images, calculation of the three-dimensional co-ordinates of these points and of the equations of the curves and the surfaces passing through these points.

Description

A SYSTEM FOR THE MEASUREMENT OF THREE-DIMENSIONAL SURFACES TO BE REPRESENTED MATHEMATICALLY

The present invention relates to a system for the measurement of three-dimensional surfaces to be represented mathematically.
As is known, in order to represent three-dimensional surfaces mathematically, that is to say, to represent surfaces which vary in space by means of mathematical equations, there are currently available systems operating according to two completely different principles: on the one hand there are systems which detect the co-ordinates of a certain number of points on the surface by means of a measuring machine having a feeler, and on the other hand there are systems which are based on optical principles.
In particular, systems of the first type, which are more common, provide for detection of the co-ordinates of several points on the surface and then reconstruct the equations of the surface starting from these points by utilising appropriate calculation techniques. Known systems operating according to this traditional principle must measure a large number of points to obtain sufficient precision in the subsequent mathematical representation; consequently they operate at low speed and low precision and are inadequate to meet current requirements. In a known, more sophisticated, system described in a previous Patent by the same applicant, filed on 6.10.1987 under No 67848A/87, the problems of slowness and precision are resolved by the fact that the operator defines only the principal points on the surface to be measured and the system itself then provides for the acquisition of the number of points necessary for the representation of the model by an iterative method utilising successive approximations, in a manner such as to achieve the desired precision and compensate for the dimensions of the feeler. This known system offers a brilliant solution to the tasks set, but requires considerable involvement on the part of the operator when mathematically representing surfaces of large dimensions, because the initial phase in which the detection of the principal points takes place is controlled manually by the operator; moreover, it is not applicable in those cases in which the type of surface to be measured does not allow the use of feelers.
Optical systems, on the other hand, perform the measurement of the surface by utilising fixed television cameras which view the three-dimensional surface. A known system of this type utilises, for example, two television cameras in fixed positions which view the surface from two different angles, and a processor part which reconstructs the mathematical equations on the basis of the two images. These optical systems have little flexibility in that they present a limited field of action and require the fixed and precise positioning of the surface to be measured with respect to the television cameras; they therefore cannot be utilised with surfaces of large dimensions, nor are they adaptable to surfaces of very different dimensions, which would require a different positioning of the surface with respect to the television cameras. For the rest, other known optical systems only allow the recognition of predetermined shapes and figures, and are not usable for the recognition of three-dimensional shapes which are not known in advance.
The object of the present invention is to provide a measurement system which overcomes the disadvantages and limitations of known systems; in particular, it allows the recognition of any three-dimensional surface, without any limitation on the dimensions and position of the surface to be measured.
According to the present invention there is provided a system for measurement of three-dimensional surfaces to be represented mathematically, characterised by the fact that it comprises a measurement machine defining a three-dimensional space for the measurement of a three-dimensional surface; optical means adapted to generate an image of the three-dimensional surface to be measured, the said optical means being carried by the said measurement machine in a manner such as to be displaceable and orientatable within the said three-dimensional space; and processor means connected to the said optical means and adapted to determine the co-ordinates of points of the surface to be measured.
The system of the invention is therefore based on the combination of a television camera (or other optical system for acquisition of images) and a measurement machine, which allows complete freedom in the positioning of the television camera within the available measurement volume, such as to allow adaptation of the characteristics of the television camera to the surface to be measured simply by suitably choosing the position of the television camera and possibly substituting the optics (lens system) of the television camera, in such a way as to obtain the maximum possible precision.
The system according to the invention is developed on four processing levels relating to increasing degrees of sophistication in the processing of the data acquired by means of the television camera and increasing degrees of automation of the system, namely:
first level: two-dimensional detection of the co-ordinates of the three-dimensional surface to be measured on the basis of an image of the surface itself acquired by means of the television camera, with detection of the third co-ordinate by means of a feeler and manual displacement of the television camera;
second level: three-dimensional detection of the co-ordinates of the three-dimensional surface on the basis of two images of the surface acquired by means of the television camera in two different positions of the television camera itself;
third level: automatic detection and composition of the images (automatic movement of the television camera with recognition of the working volume);
fourth level: reconstruction of the interior of the surface utilising only the vision system.
In particular, the measurement system which will be described hereinbelow operates on a three-dimensional surface provided with separation lines (preferably thin ones) having a strong contrast with respect to the rest of the surface to be measured, so as to divide the surface itself into a plurality of "patches". In particular, these lines, which are generally black if the surface is white or vice versa, can be drawn on the surface, or else may be constituted by adhesive strips preliminarily applied to the surface itself, or even by a grid (of shadow or light according to the colour of the surface to be measured) projected onto the surface itself. The system thus extracts the three-dimensional outlines of the patches by determining the median line defined by the line of separation, identifies closed loops for the definition of the curves defining the edge of each patch considered, derives the mathematical form of each patch, possibly connects several adjacent patches to form a unique surface, measures with a feeler of traditional type the surface according to the instructions deducible from the mathematical model obtained, and transfers the intermediate and final data to external CAD systems or to interactive graphics systems described in the applicant's previously cited Patent (in this case, in particular, the traditional feeler measurement, which may be required at different stages in the processing of the image, depending on the data available, is controlled and performed by this interactive graphics system, as will be explained in more detail below). This configuration corresponds to the implementation of the above-indicated level 2, without, however, the invention being limited to this.
For a better understanding of the present invention a preferred embodiment of it will now be described, purely by way of non-limitative example, with reference to the attached drawings, in which:
Figure 1 is a schematic representation of the elements forming the present system;
Figure 2 shows a block diagram of the principal processing unit of the system of Figure 1;
Figures 3 to 6 represent flow diagrams relating to the measurement operations of the system according to the present invention;
Figures 7 to 12 are exemplary illustrations relating to the measurement of a three-dimensional or sculptured surface; and
Figure 13 shows a reference structure used for calibrating the system.

With reference to Figure 1, the reference numeral 1 indicates a central unit comprising processor means and conventional means for communication with an operator, such as a keyboard 2, a video screen 3, and a mouse 4 usable for the selection of graphic images and instructions on the screen 3. The unit 1 comprises a dedicated image processor which cooperates with a measurement machine 5 and with a unit 6 for presentation and management of the results and for initialisation of the working environment. Preferably the unit 6 uses the components 2-4 of the unit 1 for communications with the user.
In detail, the measurement machine 5 is fundamentally of known type, provided with a measurement head 8 movable by motorised carriages along three orthogonal Cartesian axes and carrying a tactile feeler or stylus 7 of the point-to-point type. To the measurement machine is fixed a television camera 10 of standard type provided with one or more standard lenses. The television camera 10 is fixed to the head 8 of the machine 5 by means of a fixing element 11 which is orientatable in a repeatable manner, which allows the television camera itself to be positioned over the whole of the volume of the measurement machine and therefore renders the system completely flexible in relation to the dimension and position of the measurable zones. The measurement machine 5 defines a working bed 9 to which is fixed a three-dimensional surface (not illustrated) to be measured, and is completed by illumination means 12 for adequately illuminating the surface to be measured and increasing the contrast between the separation lines and the patches.
The unit 6, which in the illustrated example is constituted by the interactive graphics system described in the above-mentioned Patent application No 67848, but which could be replaced by other CAD systems, serves for the acquisition of information necessary for initialisation of the system; that is to say it determines, and provides to the system 1, the information relating to the enabled instruments (various dispositions of the feeler), the current reference system, the parameters required for processing and the obstructed planes, that is to say the occupied volume outside which the television camera can move without encountering obstacles. Moreover the unit 6 serves to control the movement of the feeler 7 for the tactile measurement of the surface, for testing the results as will be explained hereinbelow, as well as for graphic presentation of the results themselves both on its own monitor and by means of printers and plotters (not shown). The unit 6 is also able to control a machine tool (not shown) if required.
In Figure 2 there is shown a detailed block diagram of the central unit 1 and of its connections to the measurement machine 5 and the unit 6. The unit 1 is constituted by four fundamental parts, namely: a test and measurement unit 15, an image processor 16, a stereo processor 17 and a calibration unit 18.

The four parts 15-18 communicate with one another and with the user via a "mail box" 19 and a co-ordinator unit 20 which co-ordinates the activity of the central unit 1, and are further connected to a plurality of memory areas, including: a common data area 21, a data area 22 shared with the unit 6, an intrinsic parameter memory 23 (that is to say, the parameters relating to the lens system used for the television camera, such as the projection of the optical axis centre of the television camera onto the image plane, where the image taken by the television camera is collected, and the focal length), and a mode memory 24 (including the extrinsic parameters related to the position of the television camera with respect to the measurement machine and the parameters necessary for processing, which will be listed below). The memory 24 can also contain parameters relating to the errors introduced by the lenses (aberrations and geometric distortions) and further parameters which take into account errors in the assembly of the television camera. The input of the co-ordinator unit is unidirectionally connected to the keyboard 2 and the mouse 4, and the output of the co-ordinator unit is unidirectionally connected to the screen 3. Preferably, the input of the same screen 3 is connected to the stereo processor 17 and the unit 6, although for reasons of simplicity and legibility of the drawings this has been shown in Figure 2 by the representation of two screens 3. Finally, the image processor 16 is bi-directionally connected to an acquisition card 26, the input of which is connected to the television camera 10 and the output of which is connected to its own screen 27, and which functions to convert the signals provided by the television camera into digital form, memorise them and manage the graphics of the screen 27.
In detail, the test and measurement unit 15 is activated by the operator whenever it is desired to utilise the television camera for the measurement of a surface, and is interposed between the operator on the one hand and the measurement machine 5, the image processor 16 and the stereo processor 17 on the other. Moreover, this unit 15 provides for testing of the position of the television camera 10 and for the reading/modification of the overall modality of the system. In particular, the testing of the position of the television camera makes it possible to know all the information necessary for utilising the measurements derived by means of the television camera, and more particularly provides the rotation/translation matrix which relates the position of the television camera to the reference system of the axes of the measurement machine 5 and possibly to the instrument used (for selection of the position of the feeler 7 which will be used by the measurement processes effected with this latter). This information is derived and memorised independently of the measurement process, during which the data necessary for the measurement itself are simply read from the memory 24. In turn, the reading/modification of the system modality allows the memorisation and modification of the parameters necessary for processing by the various units of the system, that is to say: the calibration and test parameters used in these two phases, and the measurement parameters, as will be explained in more detail hereinbelow in relation to these phases.
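The rotation/translation matrix mentioned above, relating the camera position to the reference system of the machine axes, is conventionally handled as a 4x4 homogeneous transform. The patent does not give its form, so the following is a generic sketch; the function name and numeric values are illustrative only.

```python
import numpy as np

def make_pose(R, t):
    """Compose a 4x4 homogeneous rotation/translation matrix mapping
    camera-frame coordinates into the machine's axis frame."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Illustrative values: a 90-degree rotation about Z plus an offset.
R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
T = make_pose(R, np.array([100.0, 50.0, 200.0]))
# A point 10 units along the camera's X axis, expressed in machine axes.
p_machine = T @ np.array([10.0, 0.0, 0.0, 1.0])
```

Storing the pose this way lets camera measurements and feeler measurements be compared in the one machine reference system.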
For the performance of these functions the unit 15 comprises the following blocks:
a test block 57, the input of which is connected unidirectionally with the co-ordinator block 20, with the measurement machine 5 and with the memory 23, and bi-directionally with the memory 24 as well as with the mail box 19. The block 57 is therefore activated by the co-ordinator block 20, receives the data necessary for position testing of the television camera (current position) from the measurement machine 5 and the intrinsic parameters from the memory 23, starts the procedure for taking the images through the mail box 19, reads the parameters relating to the processing from the memory 24, forms the rotation/translation matrix and possibly the parameters which take into account the lens system errors and television camera assembly errors, and memorises the calculated parameters in the memory 24;
a modification block 58 for modifying the parameters relating to the processing, the input of which is unidirectionally connected to the co-ordinator block 20 and bi-directionally connected to the memory 24. The block 58, when activated by the co-ordinator block 20, therefore provides for modification of the parameters, memorised in the memory 24, which are necessary for processing the image;
a measurement block 59, the input of which is unidirectionally connected to the co-ordinator block 20, to the measurement machine 5 and to the memories 23 and 24, as well as being bi-directionally connected to the mail box 19. The block 59, when activated by the co-ordinator block 20, therefore activates the detection of the images and the associated high and low level processing performed by the processors 16 and 17, providing them, through the mail box 19, with the commands and information necessary for processing, such as the current position of the machine, the intrinsic parameters and the modes.

The image processor 16, activated by the block 59 or by the calibration unit 18, as will be explained further hereinbelow, is concerned with all the low level services for the use of the vision and comprises a first group of blocks 28-30, a local data area 31, a second group of blocks 32-35 and a communications block 36.
In its turn, the first group of blocks 28-30 is concerned with the acquisition of the image and its transmission, together with commands and parameters relating to the low level processing, to the second group of blocks, and comprises:
a management block 28 for management of the image acquisition card, connected bi-directionally to the card 26 and serving to initialise the card, manage the passage from continuous image acquisition to the end of imaging, and read the acquired image and transfer it to the block 30;
a communication management block 29, bi-directionally connected to the communication block 36 and the block 28, and unidirectionally connected to the block 30; the block 29 serves for communication, according to a specified protocol, with the block 36 when this controls activation of the imaging utilisation, and to signal to the block 36 the completion of the processing of an image, as well as sending synchronisation commands between the block 36 and the television camera, and sending processing commands and the necessary parameters, received from the block 36, to the block 31 via the block 30;
a processor management block 30, the input of which is unidirectionally connected to the block 29 and to the block 28, and the output of which is connected to the blocks 31-35; the block 30 serves to associate each image received from the block 28 with the associated parameters as well as the processing commands received from the block 29, and to transfer them to the local data area 31; finally it communicates the availability of the data to the blocks 32-35.
The second group of blocks serves to perform the low level processing itself. This processing is slightly different according to whether the surface portion viewed by the television camera is the surface to be measured itself, or a surface relating to a known object (gauge or "pattern") the image of which is acquired for calibration of the system, that is to say for measurement of the intrinsic parameters, as will be seen hereinbelow.
Specifically, in the first case, the strongly contrasting separation lines between the patches are seen as strips of a certain thickness (see Figure 7, representing the image of a surface portion in which the patches are indicated 38 and the separation lines or strips, not to scale with respect to the patches, are indicated 39). The edges of the strips thus have a steep illumination gradient with respect to the adjacent points belonging to the patches or to the strips themselves, and the maximum value of the gradient is perpendicular to the edge or outline of the strips themselves. Moreover, the pairs of points on opposite edges of each strip present the maximum opposite gradient, with specular orientation with respect to the axis of symmetry of the strip itself. In this case the low level processing consists of determining the median points between pairs of points having maximum gradient of opposite value (see Figure in which the median lines are drawn as chain lines and are indicated 40), and in re-grouping the median points into lists or chains of points belonging to the same side of the edge or outline of each patch.
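The median-point computation described above can be sketched for a single intensity profile sampled perpendicular to a strip: the two strip edges give the strongest negative and positive gradients, and the median point lies halfway between them. This is a minimal illustration; `strip_median_point` is a hypothetical helper, not named in the patent.

```python
import numpy as np

def strip_median_point(profile):
    """Return the position of the centre (median) line of a dark strip
    in a 1-D intensity profile taken perpendicular to the strip.

    The light-to-dark edge is the strongest negative gradient, the
    dark-to-light edge the strongest positive one; the median point is
    midway between the two edge positions."""
    d = np.diff(profile.astype(float))
    falling = np.argmin(d) + 0.5   # light-to-dark edge position
    rising = np.argmax(d) + 0.5    # dark-to-light edge position
    return (falling + rising) / 2.0

# A light surface (value 200) crossed by a dark strip (value 50)
# occupying samples 3..8, so the median line lies at 5.5.
profile = np.array([200, 200, 200, 50, 50, 50, 50, 50, 50, 200, 200, 200])
median = strip_median_point(profile)
```

Repeating this along the strip, and chaining the resulting median points, yields the chain lists that the low level processing hands on.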
In the second case, in a preferred embodiment of the invention, the gauge, shown in Figure 13 and indicated by the reference numeral 44, is constituted by a set of squares 41 and 42, the crossing points of the diagonals of which are unvarying in perspective transformation, and two of which (indicated with the reference numeral 42) are of greater dimensions than the others for recognition of the orientation of the gauge 44. In this case the low level processing consists in re-grouping adjacent points of maximum gradient into lists, each relating to the perimeter of a square, and in recognition, from among these lists, of those which are the projections of squares, for the reconstruction of the squares themselves and the determination of the intersections of the diagonals. The direction of the straight line on which the two squares 42 lie is also determined, so as to determine the transformation which converts the current reference system (of the measuring machine) to a reference system associated with the gauge 44.
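The diagonal crossing points exploited here can be computed as plain segment intersections: incidence of lines is preserved under perspective projection, so the intersection found in the image corresponds to the centre of the physical square. A minimal sketch (the function name is illustrative):

```python
def diagonal_intersection(p1, p2, p3, p4):
    """Crossing point of the diagonals p1-p3 and p2-p4 of a
    quadrilateral whose corners are listed in order round the
    perimeter, found by solving the two parametric line equations."""
    (ax, ay), (bx, by) = p1, p3          # diagonal A: p1 -> p3
    (cx, cy), (dx, dy) = p2, p4          # diagonal B: p2 -> p4
    denom = (bx - ax) * (dy - cy) - (by - ay) * (dx - cx)
    t = ((cx - ax) * (dy - cy) - (cy - ay) * (dx - cx)) / denom
    return (ax + t * (bx - ax), ay + t * (by - ay))

# For an undistorted unit square the diagonals meet at its centre;
# for a perspective-skewed quadrilateral the same formula still applies.
centre = diagonal_intersection((0, 0), (1, 0), (1, 1), (0, 1))
```

Because the crossing point needs no metric assumptions, it makes a robust calibration feature even in strongly foreshortened views.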
In both cases the points of maximum gradient are first processed in such a way as to determine the real co-ordinates of the points of the edge of the strips 39 or of the squares 41, 42 with greater precision than the resolution of the television camera (sub-pixel precision).
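The patent does not specify how sub-pixel precision is obtained; one standard approach is to fit a parabola through the gradient magnitude at the peak pixel and its two neighbours and take the vertex as the refined edge position. A sketch under that assumption:

```python
def subpixel_offset(g_left, g_peak, g_right):
    """Refine an edge position to sub-pixel precision.

    Fits a parabola through the gradient magnitudes at the integer
    peak pixel and its two neighbours and returns the vertex offset
    in (-0.5, 0.5), to be added to the integer peak position."""
    denom = g_left - 2.0 * g_peak + g_right
    if denom == 0.0:
        return 0.0          # flat neighbourhood: keep the integer peak
    return 0.5 * (g_left - g_right) / denom
```

A symmetric neighbourhood gives offset 0; an asymmetric one shifts the edge toward the stronger neighbour, recovering a fraction of a pixel beyond the camera's native resolution.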
For the performance of these functions the second group of blocks comprises:
a block 32 for extraction of the edge points with sub-pixel precision, the output of which is connected unidirectionally to the blocks 33 and 34, to which it supplies the derived co-ordinates;
a block 33 for extraction of the skeleton (or skeletisation block), the output of which is connected unidirectionally to a block 34 to provide this with the co-ordinates of the median line 40; this block is activated as described above, depending on the commands associated with the acquired image, when low level processing of an image relating to a surface to be measured is required;
a block 34 for linking the edges, the output of which is connected unidirectionally to the block 35 and to the mail box 19, receiving alternatively the co-ordinates of the points derived by the block 32 or the co-ordinates of the median points derived by the block 33. After grouping these into chains, in the case of linked points coming directly from the block 32 (edge points of maximum gradient) the lists are supplied to the block 35, whilst in the opposite case (median points) the lists are supplied to the mail box 19 for memorisation in the common data area 21 and subsequent processing;
a block 35 for extraction of notable points, the output of which is unidirectionally connected to the mail box 19. The block 35 seeks to reconstruct squares starting from the edge lines deformed by perspective, determines the intersections of the diagonals thereof, and determines the direction of the straight line on which the two elements 42 of greater dimensions lie for determination of the transformation from the current reference system to the reference system associated with the gauge. The co-ordinates of the points obtained and the transformation determined are supplied to the mail box 19 for memorisation in the common data area 21 and subsequent processing.
The stereo processor 17 in turn performs the high level image processing, starting from the data processed by the image processor 16 and memorised in the common data area 21. Specifically, the stereo processor acts only in the case of image processing of a surface to be measured and does not take part in calibration. In particular, for determination of the three-dimensional co-ordinates of the surface to be measured, two images thereof are taken, each relating to a different position of the television camera, the two images being acquired in two positions of the television camera obtained by a displacement thereof along its optical axis. Each image, pre-processed in the processor 16, is further processed in such a way as to reconstruct closed loops starting from the chains of points provided by the processor 16, and to couple each side of the loop of one image to the side of the loop of the second image which has been generated by the same separation line of the surface to be measured, for determination of the three-dimensional co-ordinates of the separation lines.
Specifically, the chains of points received by the stereo processor 17 (hereinafter also called outlines) are disposed in a regular manner so as to form a grid which (because of the skeletisation) is devoid of crossing points (see Figure 10). Consequently, for each end of an outline there are sought the neighbouring ends of other outlines within a certain search zone, discarding the outlines which do not have neighbours; then the outlines and their neighbours are extended up to the common intersection point. This operation is repeated for all the outlines, then a two-dimensional graph (matrix) is constructed in which the co-ordinates of all the extended outlines and the intersection points are directly or indirectly memorised, and closed loops are sought. Then the closed loops of the two images are associated by searching, for each point of one image taken as reference, for the corresponding point on the other image according to the perspective model; the three-dimensional co-ordinates of the points of the surface to be measured are calculated, corresponding to the mutually associated points of the closed loops obtained, and the mathematical equations of the curves of the edges of the patches (sides of the loops) are reconstructed starting from the three-dimensional co-ordinates just derived.
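For the axial camera displacement described above, the depth of each matched point can be recovered with a simple pinhole model: a point at depth Z projects at u = f*X/Z, and after the camera advances a known distance d along its optical axis the same point projects further from the principal point. This is a schematic reconstruction, not the patent's actual equations (which it defers to the cited calibration articles):

```python
def triangulate_axial(u1, u2, f, d):
    """Depth from two images taken with the camera displaced by d along
    its own optical axis (pinhole model, u measured from the principal
    point). u1 is the image coordinate in the first view, u2 in the
    view moved d closer to the surface; returns (X, Z) in the first
    camera's frame, from u1 = f*X/Z and u2 = f*X/(Z - d)."""
    Z = u2 * d / (u2 - u1)
    X = u1 * Z / f
    return X, Z

# Illustrative values: a point at X = 50, Z = 1000 seen with focal
# length f = 1000 before and after a 100-unit axial advance.
u1 = 1000.0 * 50.0 / 1000.0   # projection at depth 1000
u2 = 1000.0 * 50.0 / 900.0    # projection at depth 900
X, Z = triangulate_axial(u1, u2, 1000.0, 100.0)
```

Points on the optical axis (u1 = u2 = 0) carry no axial disparity, which is one reason the matching is restricted to the contrasting separation lines rather than arbitrary surface points.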
Finally, the equations of the surfaces extending between these curves are derived by utilising known calculation techniques (for example by means of Bezier functions).

To perform these functions the stereo processor 17 comprises:
a block 45 for seeking the proximity relationships, the input of which is connected unidirectionally to the memory 23 and to the mail box 19;
a block 46 for selection and extension of the outlines, the input of which is connected unidirectionally to the block 45 and the output of which is connected to the screen 3;
a block 47 for construction of the two-dimensional graph, the input of which is connected unidirectionally to the block 46;
a recognition block 48 for recognition and association of the closed loops, the input of which is connected unidirectionally to the block 47 and the output of which is connected to the screen 3;
a block 49 for calculation of the three-dimensional co-ordinates of the points of the loops and for calculation of the three-dimensional curves of the edges of the patches, the input of which is unidirectionally connected to the block 48 and the output of which is connected to the screen 3 and to the shared data area 22; and
a block 50 for calculation of the three-dimensional surface of the patch, the output of which is connected unidirectionally to the screen 3 and to the shared data area 22.
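A Bezier surface of the kind mentioned is evaluated with Bernstein polynomial weights over a grid of control points. The following is a generic sketch of the technique, not the patent's implementation:

```python
from math import comb

def bezier_patch(ctrl, u, v):
    """Evaluate a tensor-product Bezier patch at (u, v) in [0,1]x[0,1].

    ctrl is an (n+1) x (m+1) grid of 3-D control points; each control
    point is weighted by the product of two Bernstein polynomials."""
    n, m = len(ctrl) - 1, len(ctrl[0]) - 1
    def bern(k, deg, t):
        return comb(deg, k) * t ** k * (1.0 - t) ** (deg - k)
    point = [0.0, 0.0, 0.0]
    for i in range(n + 1):
        for j in range(m + 1):
            w = bern(i, n, u) * bern(j, m, v)
            for c in range(3):
                point[c] += w * ctrl[i][j][c]
    return tuple(point)

# A degree-(1,1) patch over four corner points is the bilinear surface
# through them; the centre parameter maps to the corner centroid.
corners = [[(0.0, 0.0, 0.0), (0.0, 1.0, 0.0)],
           [(1.0, 0.0, 0.0), (1.0, 1.0, 2.0)]]
centre = bezier_patch(corners, 0.5, 0.5)
```

In a patch-fitting pipeline the control points would be chosen so that the patch boundary follows the three-dimensional edge curves derived by the block 49.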
The calibration unit 18 operates when calibration of the intrinsic parameters is necessary (following start-up of the system, upon assembly of a new lens system, upon modification of the focus, etc). In practice, the unit 18 measures the parameters of the lens system by linking a certain number of external known points with their projections into the image upon variation of the position and orientation of the lens system. For this purpose the gauge 44, shown in Figure 13 and already described hereinabove, is used; this is first measured, by manual control of the measurement machine, to determine the location of the fixed support of the gauge 44 (typically a sheet of plexiglass fixed to the bed 9 of the measurement machine). Then different images of the gauge 44 are taken from different positions of the television camera (preferably by displacement of the television camera along straight lines) and all the points thus obtained, relating to the intersections of the diagonals of the squares, are used for calculating a matrix of intrinsic parameters necessary for processing the image of the surface to be measured.
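Linking known 3-D points to their image projections, as the calibration unit does, is classically solved by the direct linear transformation of the era's calibration literature (Faugeras and Toscani, Ganapathy, cited below). A schematic least-squares version, with purely illustrative camera values:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Least-squares estimate of the 3x4 perspective projection matrix
    from N >= 6 known 3-D points and their 2-D image projections
    (homogeneous direct linear transformation, solved via SVD)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The solution (up to scale) is the right singular vector of the
    # smallest singular value, i.e. the null-space direction of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 4)

# Synthetic check: project non-coplanar points with an assumed camera
# (f = 800, principal point (320, 240)), then recover the matrix.
world = [(0, 0, 5), (1, 0, 6), (0, 1, 7), (1, 1, 5),
         (2, 1, 6), (1, 2, 7), (2, 2, 5)]
image = [((800 * X + 320 * Z) / Z, (800 * Y + 240 * Z) / Z)
         for X, Y, Z in world]
P = dlt_projection_matrix(world, image)
```

The recovered matrix can then be decomposed into intrinsic parameters (focal length, principal point) and the extrinsic rotation/translation, following the decomposition methods in the cited articles.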
Consequently the calibration unit 18 comprises: a block 52 for activating measurement of the gauge, the input of which is connected unidirectionally to the coordinator block 20, and bi-directionally to the measurement machine 6; a block 53 for activating the acquisition of images, the input of which is connected unidirectionally to the block 52 and to the memory 24 and bi-directionally to the mail box 19 for the reception, from the memory 24, of the image processing parameters, the sending of commands and parameters necessary for acquiring the images of the gauge 44 and the reception of known points obtained from the low level processing of the images; and a block 54 for calculation of the intrinsic parameters, the input of which is connected unidirectionally to the block 53 and the output of which is connected to the memory 24, for determination and memorisation of the parameters.
The measurement of a three-dimensional surface by means of the system of the present invention is achieved by the following steps, which will be described with reference to the operating blocks of Figures 3 to 6 and with reference to Figures 7 to 13.
In particular, in the following description, the equations used for the determination of the intrinsic and extrinsic parameters will not be set out in detail, nor will the calculations used for the perspective transformation; for this purpose reference is made to what is described in the following articles: O. D. Faugeras and G. Toscani: The calibration problem for stereo, in Proc. Computer Vision and Pattern Recognition, pages 15-20, Miami Beach, Florida, USA, 1986; O. D. Faugeras and G. Toscani: Camera Calibration for 3D Computer Vision, in Proc. of Int. Workshop on Machine Vision and Machine Intelligence, Tokyo, Japan, February 1987; S. Ganapathy: Decomposition of transformation matrices for robot vision, Pattern Recognition Letters, 2:401-412, December 1984; E. Pervin and J. A. Webb: Quaternions in computer vision and robotics, Technical Report CS-82-150, Carnegie-Mellon University, 1982; Y. C. Shiu and S. Ahmad: Finding the Mounting Position of a Sensor by Solving a Homogeneous Transform Equation of the Form AX=BX, in IEEE Conference on Robotics and Automation, pages 1666-1671, Raleigh, North Carolina, USA, April 1987; T. M. Strat: Recovering the Camera Parameters from a Transformation Matrix, pages 93-100, Morgan Kaufmann Publishers Inc. 1987; R. Y. Tsai: A Versatile Camera Calibration Technique for High-Accuracy 3D Machine Vision Metrology Using Off-the-Shelf TV Cameras and Lenses, IEEE Journal of Robotics and Automation, RA-3(4):323-344, August 1987; R. Y. Tsai: Synopsis of Recent Progress on Camera Calibration for 3D Machine Vision, in Oussama Khatib, John Craig, and Tomás Lozano-Pérez, editors, The Robotics Review, pages 147-159, The MIT Press, 1989.
With reference to Figure 3, the operator initially selects, by means of the keyboard 2, the operating mode (block 70), starting the calibration procedure or the test/measurement procedure or terminating the operations (block 71). This corresponds respectively to the activation, by the co-ordinator block 20, of the calibration unit 18 or of the test and measurement unit 15, or to deactivation of the system.
If calibration is started, from the block 70 the process passes to a block 72 for activation of the measurement of the gauge, commanded by the unit 52. In this phase manual detection, assisted by the operator, first takes place, and then automatic detection of some points of the edge of the gauge 44 by the feeler 7, to identify the location of the gauge itself with respect to the reference system of the measurement machine 5 and therefore to determine the co-ordinates of significant points in the reference system. From the block 72 the process then passes to a block 73 for activation of the acquisition of an image. In this phase the television camera 10 is orientated, manually or automatically, to view the gauge 44 (preferably, to obtain the best precision possible, compatible with the resolution of the television camera and the intrinsic limits of the system, the viewing position is chosen in such a way as to view the portion of the surface to be measured, in this case the gauge 44, from as close as possible), after which the block 53 sends, via the mail box 19, the command for acquisition of an image, by specifying the parameters for the determination of the real co-ordinates of the edges and by requesting detection of the significant points of the gauge 44. There follows a block 74 for taking a picture and low level processing of the image according to the specification provided, by the image processor 16. The
results of the processing (which will be described in more detail with reference to Figure 4) are then memorised in the common data area 21. The operating phases corresponding to the blocks 73 and 74 are preferably repeated several times in such a way as to obtain several images of the gauge 44 and therefore several detections of the significant points in different angular positions of the television camera, in order to obtain greater precision. Subsequently the process passes to a block 75 for calculation of the intrinsic parameters, in which the significant points determined for each image are related to the points of the gauge by utilising a least squares algorithm. The assessments of the intrinsic parameters thus determined for each acquired image are averaged so that it may be possible to determine, for each attitude of the television camera, the best rotation and the best translation. The parameters thus determined are memorised in the memory 23 and are thus available for the television camera attitude-testing operations and for the measurement operations to be performed on an unknown three-dimensional surface.
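The relation between the known gauge points and their image projections can be estimated by least squares. The patent relies on the Faugeras-Toscani formulation cited above, whose details differ; as an illustrative stand-in, the following sketch uses the classic direct linear transformation, recovering the 3x4 projection matrix by a singular value decomposition (the function name is ours):

```python
import numpy as np

def projection_matrix(world_pts, image_pts):
    # Build two linear equations per correspondence between a known
    # 3D gauge point (X, Y, Z) and its image projection (u, v).
    rows = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        rows.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        rows.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    A = np.asarray(rows, dtype=float)
    # The least squares solution is the right singular vector of A
    # associated with the smallest singular value, reshaped to 3x4.
    _, _, vt = np.linalg.svd(A)
    return vt[-1].reshape(3, 4)
```

The intrinsic and extrinsic parameters can then be decomposed from this matrix, for instance by the Ganapathy method cited above.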
If instead the operator requests the test and measurement procedure, the process passes from block 70 to a block 77 which involves the activation of the test and measurement unit 15. In particular, the block 77 relates to the choice of the type of processing to be performed, so that the operator is presented substantially with the choice between testing the attitude of the television camera, memorisation/modification of the parameters relating to the image processing, and measurement of a surface.
If the operator requests memorisation/modification of the parameters, the process passes from block 77 to block 78. This block relates to the modification of parameters which are shared in various ways by all the components of the system, and which can be sub-divided into two classes: calibration/testing parameters and measurement parameters.
In particular, the first (parameters used by the units 15 and 18) comprise the parameters relating to the filter for extraction of the edge points (including the operator dimension and the minimum and maximum gradient thresholds), the parameters for connecting the edges (including the minimum threshold on the length of a chain), and the parameters for automatic evaluation of the quality of a group of images (including a maximum error threshold).
The second, the measurement parameters, utilised by the processors 16 and 17, comprise the distance between the two images, the parameters of the edge extraction filter (as indicated above), the edge linking parameters (as above), the skeletisation parameters (including an indicator specifying if this operation is required), the maximum acceptable distance between two parallel edges in order that these be merged in the skeleton and the type of profile (patch separation lines) to be skeletised (that is to say black on a white background or vice versa), the parameters for control of the stereo processor (relating to the graphic output on the processor screen) and the parameters relating to the construction of the mathematical equations (including the number of extracted points of each outline and the number of terms of the polynomial of the curves of the edges and the surfaces to be calculated).
The parameter modification phase then takes place under the control of the block 58 which allows the user to memorise or modify one or more of the above-listed parameters by means of the keyboard 2 and/or the mouse 4 and the co-ordinator block 20, with display on the monitor 3.
If instead the operator requests checking of the camera attitude, the process passes from the block 77 to the block 79 relating to the initialisation operations. In this phase, among other things, the block 57 reads the intrinsic parameters from the memory 23, the modes (that is to say the parameters relating to the image processing and to the already assessed camera attitudes) from the memory 24, as well as the current reference system from the measurement machine 5 and the instructions relating to the already defined instruments. Moreover, the block 57 asks the operator for a label identifying the position to be defined (such as the angular position of the television camera support), verifies its existence, possibly requests a label identifying the instrument to associate with the position and checks its existence. From the block 79 the process then passes to a block 80 by which the block 57 requests the manual positioning of the television camera by the operator, and subsequently to a block 81 by which the block 57 reads the current position of the machine. Then it passes to the block 82 relating to the activation of the operations for capture of the image of the gauge, previously fixed by the operator to the bed 9 of the measurement machine 5. This phase causes data relating to the operations to be performed, and the associated parameters for control of the operations, to be sent from the block 57 to the image processor 16 which therefore acquires the image. The block 83, relating to the low level processing to obtain the significant points of the gauge, then follows, as will be explained in more detail below. At the end of this processing the control returns to the block 57 which, when advised by the image processor 16, proceeds to read the results memorised in the common data area 21. The sequence described in relation to the blocks 80-83 is preferably repeated several times, for example three times.
From block 83 processing then passes to block 84 (which otherwise could precede, rather than follow, the block 83), which relates to the activation of the measurement of the gauge 44 by the feeler 7 as described hereinabove for block 72.
Finally, the block 57, by utilising the values of the intrinsic parameters of the television camera, proceeds to calculate the best rotation/translation matrix (block 85). The results obtained can then be sent to the co-ordinator 20 for display on the screen 3.
If instead the operator requests measurement of an unknown three-dimensional surface, from block 77 the process passes to the block 87 relating to the initialisation operations.
In this phase, among other things, the block 59 reads the intrinsic parameters from the memory 23, the modes from memory 24, as well as the current reference system from the measurement machine 5 and the instructions relating to the instruments already defined. Further, the block 59 requests from the operator a label identifying the position to utilise and checks its existence. From the block 87 the process then passes to the block 88 in which the manual positioning of the television camera 10 by the operator is requested (with choice of the view as explained with reference to block 73) and subsequently to the block 89 in which the current position of the machine is measured. Thereafter it passes to the block 90 relating to the activation of the operations for capture of the image of the surface, which has been previously fixed by the operator to the bed 9 of the measurement machine 5. After memorisation of the first image the unit 59 commands automatic positioning of the television camera 10, which is displaced, by the effect of the displacement of the head 8, along its optical axis, away from the surface for a distance equal to the value selected by the operator in the system parameters (block 91). Processing then passes to the block 92 for reading of the current position of the machine, and then to the block 93 for capture of the second image of the surface.
Thereafter low level processing of the first image (block 94) and of the second image (block 95) is performed. In particular the two processing stages may be performed at a time not immediately subsequent to the respective capture stage, in that the measurement process can be interrupted immediately after capture of the second image and the subsequent processing performed at any other time, without the intervention of the operator, by recalling the images captured and suitably memorised. This therefore allows a complete release of the image acquisition operations from the associated processing (as also the calibration operations, the television camera assessment operations and the processing parameter modification operations), which considerably increases the flexibility of the system.
After the low level processing performed according to blocks 94 and 95, leading to the attainment of two series of lists of outline points, as will be explained in greater detail with reference to Figure 4, the process passes to block 96 relating to the stereo processing, as will be described with reference to Figures 5 and 6, so that the control of the operations is passed to the stereo processor 17 which calculates the three-dimensional co-ordinates of the points of the outline of each patch, and determines the equations of the boundary curves and the surfaces lying between them. The data thus obtained is then memorised in the shared data area 22. At the end of these operations (or even before them if the results of the intermediate processes do not allow continuation of the processing on data derived from the images, or in the case of a lower precision than that required by the operator), the process arrives at block 97 relating to the transfer of control to the unit 6 for presentation and management of the results, which measures the surface by utilising the data memorised in the shared data area 22. The unit 6 further corrects the mathematical model on the basis of the measurements taken with the feelers 7 as well as displaying and drawing up the final model as described for example in the above-mentioned Patent application.
With reference to Figure 4, relating to the low level processing performed by the second group of blocks 32-35 of the image processor 16, the captured image, memorised in the local data area 31 as a pixel matrix including a plurality of rows (X direction) and columns (Y direction), is initially handled in such a way as to calculate the luminance gradient of each pixel in the X and Y directions by utilising a Gaussian filter of specified dimensions (block 100). Thereafter (block 101) the matrix is scanned to determine the pixels having a gradient modulus greater than the predetermined threshold and (block 102) for each of these are determined two points positioned respectively ahead of and behind the pixel in question, along the direction of the previously calculated gradient. In particular, these points on either side of the pixel in question are obtained from the two intersections between the direction of the gradient and the four straight lines (at X = constant and Y = constant) passing through the eight pixels adjacent to the pixel under examination (in reality only two of these straight lines intersect the direction of the gradient). Then block 103 calculates the modulus of the gradient of each of these ahead and behind points by linearly interpolating the modulus of the gradient of the two pixels adjacent to the ahead or behind point (see for example Figure 8, in which the reference numeral 200 indicates a pixel having a gradient modulus greater than the predetermined threshold, the gradient of which is represented by the arrow 201. The reference numerals 202a and 202b indicate the ahead and behind points respectively, resulting from the intersection between the direction of the gradient 201 and the straight lines which connect the points 203a, 203b and 203c, 203d. Finally, reference numeral 204 indicates the straight line defined by the direction of the gradient of the pixel 200).
Then, according to block 104, the two moduli just calculated, relating to the points ahead of and behind the point in question, are compared with the value of the gradient of the pixel considered. If this latter is less than one of the two calculated values the system passes from block 104 to block 105, according to which the pixel under consideration is discarded in that it does not represent a point of maximum gradient (an edge point of the patches or of the squares), after which processing passes to the block 106 which checks if all the pixels determined in block 101 have been examined. If not it passes again to block 102 for a subsequent pixel.
If, on the other hand, the block 104 detects that the pixel considered is a maximum gradient point, processing passes to block 107 relating to the calculation of the parabola passing through three points of a two-dimensional space which represents the section, through the direction of the gradient, of the three-dimensional function which represents (co-ordinate Z) the value of the modulus of the gradient of the pixel of the image having co-ordinates X and Y. In particular, the equation of the parabola passing through the Z co-ordinates (modulus of the gradient) of the pixel under consideration and the points just identified ahead of and behind it is determined, and its maximum point determined.
The co-ordinates X1 and Y1 of this maximum are then considered as the co-ordinates of the edge point with sub-pixel precision and are then memorised as an offset with respect to the integer co-ordinates (in practice the displacement necessary to reach the point with the co-ordinates X1 and Y1 just determined from the point of the pixel considered represents the necessary displacement, along the direction of the gradient, to locate the true maximum of the gradient with respect to the pixel considered). This operation is shown in Figure 9, representing a three-dimensional space, in which the pixel 200, the points 202a, 202b and the corresponding points 208-210 are illustrated (the Z co-ordinate of each represents the respective modulus of the gradient), as well as the parabola 211 passing through the three said points. The reference numeral 212 indicates the maximum of the parabola, having co-ordinates X1 and Y1 corresponding to the point 213. The point 213 thus constitutes the edge point determined with sub-pixel precision.
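The sub-pixel refinement of blocks 104 to 107 reduces, along the gradient direction, to fitting a parabola through three modulus samples and taking its vertex. A minimal sketch (the function name and the unit spacing of the samples are our assumptions):

```python
def subpixel_offset(m_behind, m_centre, m_ahead):
    # Discard the pixel if it is not a local maximum of the gradient
    # modulus along the gradient direction (blocks 104-105).
    if m_centre < m_behind or m_centre < m_ahead:
        return None
    # Vertex of the parabola through (-1, m_behind), (0, m_centre),
    # (+1, m_ahead): the signed offset of the true edge point (block 107).
    denom = m_behind - 2.0 * m_centre + m_ahead
    if denom == 0.0:
        return 0.0          # flat: keep the pixel position itself
    return 0.5 * (m_behind - m_ahead) / denom
```

The offset, scaled by the spacing of the ahead/behind points along the gradient, is the displacement memorised with respect to the integer pixel co-ordinates.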
The block 107 is followed by block 106 and then by block 102 until all the pixels determined in block 101 have been examined. Subsequently there is a decision block 110 relating to the type of processing to be performed, and this tests whether or not it is necessary to perform the skeletisation of the profile on the basis of commands provided at the beginning to the image processor 16. In the positive case (YES output from block 110) the block 32 provides the results just acquired to the block 33 of Figure 2 so that processing passes to the operative block 111 of Figure 4. In this phase scanning of all the acquired image is effected and, for each previously determined edge point, there is determined the straight line leading from the point in the direction parallel to the direction of the gradient and in the sense determined by the colour of the strip (that is to say from the edge points of the strip towards the other edge). Then (block 112) another edge point belonging to the straight line just identified, and positioned within a predetermined distance, is sought. If the other edge point is not identified the processing passes to block 113 according to which the first edge point is eliminated (that is to say it does not truly represent an edge point of a strip, but noise, for example a spot) and then to the block 114 in which it is checked if all the edge points determined at the end of the operations 102-106 have been tested. If all the points have not been tested it returns to block 111.
On the other hand, if the opposite edge point is identified (YES output from block 112), processing passes to block 115 in which it is tested if the two identified edge points have anti-parallel gradients (that is to say the same direction and opposite sense). In the positive case processing passes to block 116 in which the two identified edge points are cancelled and replaced with the respective barycentre point. This case (flow according to blocks 111, 112, 115 and 116) is shown by way of example in Figure 10, in which the reference numeral 216 indicates an edge point determined with sub-pixel precision by block 32 for which the determined straight line 217 meets another edge point 218 having an anti-parallel gradient, so that the two points 216 and 218, belonging to two opposite edges of a strip 39, are replaced by the point 219 which belongs to the median line 40. Then processing passes to block 114. If, on the other hand, the two points just identified do not have an anti-parallel gradient, from block 115 processing passes to block 113 for the elimination of the point under examination.
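The pairing of opposite edge points into a median skeleton (blocks 111 to 116) can be sketched as below; the brute-force search over point pairs, the tolerance parameters and the function name are illustrative simplifications of the image scan described in the text:

```python
import numpy as np

def skeletonise(points, grads, max_dist=6.0, tol=0.95):
    # Replace each pair of opposite edge points having anti-parallel
    # gradients with its barycentre (block 116); points that find no
    # partner are eliminated as noise (block 113).
    pts = np.asarray(points, dtype=float)
    gs = np.asarray(grads, dtype=float)
    gs = gs / np.linalg.norm(gs, axis=1, keepdims=True)
    skeleton, used = [], set()
    for i in range(len(pts)):
        if i in used:
            continue
        for j in range(i + 1, len(pts)):
            if j in used:
                continue
            d = pts[j] - pts[i]
            dist = np.linalg.norm(d)
            if dist == 0.0 or dist > max_dist:
                continue
            # the partner must lie along the gradient direction (block 112)
            if abs(np.dot(d / dist, gs[i])) < tol:
                continue
            # and its gradient must be anti-parallel (block 115)
            if np.dot(gs[i], gs[j]) < -tol:
                skeleton.append((pts[i] + pts[j]) / 2.0)
                used.update((i, j))
                break
    return skeleton
```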
After having examined all the points provided by the block 32, the skeleton represented by the broken outline 40 in Figure 10 is obtained, which is devoid of intersection points. Then processing passes to block 118, at which it also arrives in the case of a NO output from block 110. This corresponds to the transfer of information and control from block 33 to block 34 if skeletisation is effected, or from block 32 to block 34 if not. In block 118 the linking of either skeleton points or edge points is effected. This operation is effected starting from any point and moving in such a way as to group together all the adjacent points. If the number of adjacent points identified is less than a predetermined threshold the chain is discarded. At the end of this operation there are available chains of points, each relating to a segment of the skeleton or to the perimeter of the squares of the gauge.
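The linking step of block 118 groups adjacent points into chains and drops chains below the length threshold. A minimal flood-fill sketch over integer co-ordinates (the 8-adjacency and the function name are our assumptions):

```python
def link_chains(points, min_len=3):
    # Group mutually adjacent points into chains (block 118); chains
    # shorter than the threshold are discarded as noise.
    remaining = set(map(tuple, points))
    chains = []
    while remaining:
        seed = remaining.pop()
        chain, frontier = [seed], [seed]
        while frontier:
            x, y = frontier.pop()
            # visit the eight neighbours of the current point
            for dx in (-1, 0, 1):
                for dy in (-1, 0, 1):
                    n = (x + dx, y + dy)
                    if n in remaining:
                        remaining.discard(n)
                        chain.append(n)
                        frontier.append(n)
        if len(chain) >= min_len:
            chains.append(chain)
    return chains
```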
Then processing passes to block 119 which tests if the processing involves extraction of significant points. If not (measurement of an unknown surface the skeleton of which has already been obtained), the low level processing effected by the image processor 16 is terminated and processing passes to the end block 120, in which the linking block 34 memorises the results in the common data area 21 by means of the mail box 19 and correspondingly advises the measurement block 59 which has initiated the operations (blocks 94 and 95 of Figure 3).
In the opposite case (measurement of the gauge, for which the skeletisation stage has been skipped), processing passes from block 119 to block 121 in which the linking block 34 passes the processed data and control to the block 35 of Figure 2. This latter acts to select, from the chains of points received, the outlines which can constitute the perimeter of squares, discarding the chains which are too short, by checking for closure of the outline. Then processing passes to the block 122, which for each selected outline performs an iterative process to identify the four straight lines which belong to the sides of each square. If this process does not converge the test block 123 is followed by block 124 which causes the chain under consideration to be discarded and then leads again to block 122 for the examination of a new chain; otherwise from block 123 processing passes to block 125 relating to the identification of the intersections between the four straight lines (that is to say the corners between the sides of each square), to the determination of the equations of the two diagonals, passing through pairs of opposite intersection points just identified, and to the determination of the lengths of the diagonals and their intersections giving the centres of the squares (significant points). The block 126 then checks if all the outlines have been examined and, if not, processing returns to block 122. At the end (YES output of block 126) processing passes to block 127 for ordering of the extracted points in which, on the basis of the length of the diagonals, which is greater in the case of the two squares 42 of Figure 13, the direction of the straight line on which both the squares 42 lie is determined; there is then applied a transformation of axes in such a way as to bring the axis X on to this straight line and to order the points.
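The centre of each square (block 125) follows from intersecting the two diagonals through opposite corners. A sketch, with the corners assumed to be given in order around the perimeter:

```python
def diagonal_intersection(c1, c2, c3, c4):
    # Intersect the diagonal c1-c3 with the diagonal c2-c4 to obtain
    # the centre of the square (a significant point of the gauge).
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = c1, c2, c3, c4
    d1x, d1y = x3 - x1, y3 - y1          # direction of diagonal 1
    d2x, d2y = x4 - x2, y4 - y2          # direction of diagonal 2
    det = d2x * d1y - d1x * d2y
    # solve c1 + t*d1 = c2 + s*d2 for t by Cramer's rule
    t = ((x2 - x1) * (-d2y) + d2x * (y2 - y1)) / det
    return (x1 + t * d1x, y1 + t * d1y)
```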
At the end of these operations the low level processing is terminated and processing passes from block 127 to block 120 corresponding to memorisation of the data by the block 35 in the common data area 21 and to transfer of control back to block 53 or 57 of Figure 2 (that is to say to passage from block 74 or 83 to block 75 or 84 of Figure 3).
With reference to Figure 5, relating to the high level processing performed by the stereo processor 17 and to the subsequent measurement with the feeler 7 (blocks 96 and 97 of Figure 3), initially, in block 141, the intrinsic parameters and other information necessary for processing (for example relating to the graphic output on the screen 3) are read and then, in block 142, a processing of the first image takes place, which will be described in more detail hereinbelow with reference to Figure 6, to identify the closed loops existing in the image acquired from the television camera and already subjected to low level processing. Thereafter, in block 143, the second image is processed exactly as the first in block 142.
Specifically, as in Figure 6, the high level processing of each image (blocks 142 and 143) comprises the initial reading of the image to be processed, constituted by the chains of points of an outline (typically of the points of the skeleton), and initialisation of a backing matrix, that is to say the construction of a matrix of dimensions equal to those of the image, containing the address of a chain in all the positions corresponding to the integer co-ordinates of the points which belong to this chain (block 170). In the backing matrix there are therefore memorised the addresses (of the chains) relating to the sides of the skeleton, with the exclusion of the intersections, which are absent in the skeleton and must be determined. For this purpose the output direction of all the outlines (inclination of the two ends of each chain) is calculated in block 171 and, for each outline, the process searches for the existence of other neighbouring ends relating to other outlines (block 172). If one of the two ends of the chain under examination has no neighbours (NO output) the processing passes to block 173 in which the outline under examination is eliminated (its points are cancelled from the backing matrix); then processing passes to block 174 in which checks are made to establish if all the outlines have been examined. In the negative case processing returns from block 174 to block 172.
If, on the other hand, the ends of the outline examined have neighbours (YES output from block 172) the outline examined is extended to the common intersection with the neighbouring outlines, as described hereinbelow with reference to an example illustrated in Figure 11, in which there are shown lines relating to several outlines to be extended. In particular, 230 indicates an outline having an end 231 for which, within the square 245 of predetermined dimensions, there have been found three neighbouring ends, namely the ends 232, 233 and 234 belonging to the outlines 235, 236 and 237 respectively. Then, to extend the curves according to block 175, the pairs of outlines the ends of which have the same inclination but opposite directions are identified (in the illustrated example the pair of outlines 230 and 236 having ends 231 and 233 and the pair of outlines 235 and 237 having ends 232 and 234). Then, in block 176, for each pair there is determined the equation of the curve passing through several points of the two outlines (in the example, the equations of the portions of curve 238 and 239 respectively are determined). For the outlines with ends which are not paired the curve is simply extended. Then, block 177 searches for the common intersection, if it exists (in Figure 11 the point 240 is thus determined), or the median point of the prolongation of the curve is utilised in the absence of an intersection. Finally, block 178 memorises several points on each portion of the connecting curve thus obtained up to the intersection point.
Then via block 174 the process checks if all the outlines have been examined, and if not returns to block 172. At the end of the iterations there is available a structure constituted by intersecting lines (in practice the intersection points of the skeleton have been reconstructed and the isolated sides have been eliminated), which can be displayed on the screen 3 thanks to the connection between block 46 and this latter, shown in Figure 2. Then processing passes to block 179 relating to the construction of a graph representing the said structure. For this purpose there is used a matrix in which there are memorised the nodes of the graph (the intersection points just reconstructed) and the branches or "links" which connect these nodes (outlines and associated extensions). By utilising this matrix, block 180 searches for the closed paths (loops) by cutting across the arms of the graph and searching for the minimum path which joins the two thus separated nodes. At the end of this procedure closed loops formed by the outlines are available for the processed image. At this point the separate processing of the two images is terminated.
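The loop search of block 180 (cut an arm of the graph, then find the minimum path re-joining its two nodes) can be sketched with a breadth-first search; the graph representation as node-id pairs, and the names, are ours:

```python
from collections import deque

def closed_loops(edges):
    # Build an undirected adjacency structure from the links of the graph.
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    loops = set()
    for a, b in edges:
        # temporarily cut the link a-b (block 180)
        adj[a].discard(b); adj[b].discard(a)
        # BFS for the shortest remaining path from a back to b
        prev, queue = {a: None}, deque([a])
        while queue:
            n = queue.popleft()
            if n == b:
                break
            for m in adj[n]:
                if m not in prev:
                    prev[m] = n
                    queue.append(m)
        if b in prev:
            path, n = [], b
            while n is not None:
                path.append(n); n = prev[n]
            loops.add(frozenset(path))     # the cycle through a-b
        adj[a].add(b); adj[b].add(a)       # restore the link
    return loops
```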
Subsequently (and with reference again to Figure 5), for each closed loop thus identified in the second image (reference image), which in a preferred embodiment is that acquired with the television camera further from the surface to be measured, there is sought the corresponding loop on the first image, by connecting the points of the two images two-by-two (block 144). This coupling operation is performed in a manner which will now be explained with the aid of Figure 12, in which two corresponding loops 250 and 251 are represented, and in which the loop indicated 250, of greater dimensions, is that relating to the first image (television camera closest). In the same Figure the reference 252 also indicates the projection from the optical centre onto the image plane (image centre). For each point of the reference image there is calculated the straight line starting from the image centre 252 and passing through the point in question (for example the point 253 in Figure 12, determining the half-line 254). Then the existence of boundary points belonging to the other image which lie on the same half-line is sought within a predetermined distance from the point in question (in the illustrated case the point 255 belonging to the loop 250 is determined). The search for the second point may possibly be refined by deriving the equation of the local variation of the curve passing through the points of the second image, and searching for the point closest to the half-line of the search (half-line 254). Finally the ratio between the distances from the two coupled points thus identified to the image centre is calculated (R/r, with r relating to the point of the reference image and R relating to the point on the first image). At the end of the loop coupling operations performed by block 48 of Figure 2 the results can be displayed on screen 3.
Subsequently (block 145), starting from the ratio determined above, and possibly after having subjected it to filtering with a Gaussian filter, the three-dimensional co-ordinates of the points of the profile (outlines of the patches) are calculated by applying a model of the perspective projection. Specifically, the co-ordinates Xt, Yt and Zt of the points of the profile are calculated in the television camera reference system, utilising the equations:

Xt = (Xi - X0) Zt/a
Yt = (Yi - Y0) Zt/b
Zt = (-Dz) R/r

in which (X0, Y0) are the co-ordinates of the image centre, a and b are the focal values in pixels along X and Y, Dz is the variation in height along the optical axis between the two images, (Xi, Yi) are the co-ordinates, derived above, of the point on the reference image, and R/r is the ratio just determined.
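The perspective relations above translate directly into code; this helper simply restates the text's three equations (only the argument names are ours):

```python
def profile_point_3d(xi, yi, x0, y0, a, b, dz, ratio):
    # Camera-frame co-ordinates of a profile point (block 145):
    # Zt = (-Dz) R/r, Xt = (Xi - X0) Zt/a, Yt = (Yi - Y0) Zt/b,
    # with (x0, y0) the image centre, a and b the focal values in
    # pixels, dz the axial displacement and ratio the coupled R/r.
    zt = -dz * ratio
    xt = (xi - x0) * zt / a
    yt = (yi - y0) * zt / b
    return xt, yt, zt
```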
Then the thus-calculated co-ordinates are transformed into the reference system of the measurement machine 5, and then the equations of the edge curves passing through the three-dimensional points just determined are calculated. The results of this operation, performed by block 49 in Figure 2, can also be displayed on screen 3. Furthermore, this result can be memorised in the shared data area 22 for utilisation by the unit 6.
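The change of reference system is a rigid transformation; a minimal sketch, assuming the rotation and translation of the camera with respect to the machine are already known (for example from the assessment and measurement unit — names here are illustrative):

```python
import numpy as np

def camera_to_machine(points_cam, rotation, translation):
    """Transform camera-frame points into the measurement machine
    reference system: p_machine = R @ p_camera + t."""
    R = np.asarray(rotation, dtype=float)      # 3x3 rotation matrix
    t = np.asarray(translation, dtype=float)   # 3-vector translation
    pts = np.asarray(points_cam, dtype=float)  # shape (N, 3)
    return pts @ R.T + t
```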
Subsequently, for the loops having four sides, the three-dimensional surface subtended by the curves obtained is reconstructed by block 146, utilising known numerical calculation techniques. Possibly, if this operation has taken place for more than one loop, providing several three-dimensional surfaces, these can be linked together. The result of these operations, performed by block 50 of Figure 2, is also displayed on screen 3 and memorised in the shared data area 22.
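The patent does not name the numerical technique; one classical way of filling a four-sided loop from its boundary curves is a bilinearly blended Coons patch, sketched here purely for illustration (the four curves must agree at the corners):

```python
import numpy as np

def coons_patch(c0, c1, d0, d1, u, v):
    """Bilinearly blended Coons patch over four boundary curves:
    c0(u), c1(u) are the bottom and top curves, d0(v), d1(v) the left
    and right ones, with u, v in [0, 1]."""
    Lc = (1 - v) * c0(u) + v * c1(u)                 # blend of opposite curves
    Ld = (1 - u) * d0(v) + u * d1(v)
    B = ((1 - u) * (1 - v) * c0(0) + u * (1 - v) * c0(1)
         + (1 - u) * v * c1(0) + u * v * c1(1))      # bilinear corner correction
    return Lc + Ld - B
```

For the straight-line boundaries of a planar unit square the patch reproduces the plane, e.g. the centre parameter (0.5, 0.5) maps to the centre of the square.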
Thereafter, in block 147, control is passed to the unit 6 which physically measures, by means of the feeler 7, the co-ordinates of several points of the surface, starting from the results of the preceding processing, to increase the precision of the results themselves and to complete the processing when the intermediate image processing data does not allow the reconstruction of the curves and/or of the surface. The measurement procedure using the feeler can be performed with the system described in the above-mentioned Patent, performing it after the processing described with reference to Figure 8 of this Patent (the testing relating to block 64), or else, if the determination of the equations starting from the single image is not possible, by performing it in advance of the processing relating to Figure 5 of this earlier Patent (the operative phase relating to block 61).
Finally, in block 148 the process checks if all the square loops in the images have been measured; if not, the processing performed in blocks 144-147 is repeated for all the loops, possibly the data obtained is reprocessed to obtain continuity between adjacent loops, and then the process is terminated (block 150).
The advantages which can be obtained with the system of the present invention are as follows.
Thanks to the use of a movable television camera together with the measurement machine, movable with respect to the surface to be measured without restrictions on the mutual position, it is possible to measure surfaces of different dimensions, from very small to very large ones, varying the width of the space encompassed and adapting the vision system to the object to be measured (for example by varying the optics utilised). Furthermore, thanks to this flexibility in the choice of position of the television camera with respect to the surface, and to its adaptability to the object to be measured, the system is able to measure three-dimensional surfaces without limitations on the form and without stringent impositions as far as the position of this surface with respect to the device for capture of the images (television camera) is concerned.
The described system allows measurement of surfaces of unknown form without being limited to recognition of predetermined shapes as in current optical systems, and at the same time considerably eases the operator's work by replacing the manual point-by-point measurement on the surface and the definition of the paths with a simple framing of the portion to be viewed. In fact, with the described system, the possible measurement with the feeler for testing the surface takes place in an automatic manner.
With the described system it is possible to organise and programme the operations relating to the measurement of the surface in such a way as to adapt it to the contingent necessities, for example relating to the available time or to the measurement of different surfaces: in particular it is possible to capture the image (or the two images) of the surface to be measured at one time, and to effect all or part of the subsequent processing at different times, possibly even without the direct control of the operator.
The system is particularly flexible thanks to its organisation on four levels, in such a way as to adapt itself to requirements. In particular, if the system does not have to perform frequent and numerous processing operations, so that the intervention of the operator is not particularly onerous, it is possible to modify the described system in such a way that it works in accordance with the first level, with detection of the two-dimensional co-ordinates by means of the television camera (which in this case can also have a telecentric lens fitted), and detection of the third spatial co-ordinate by means of the feeler. The implementation of the second level, as described above, requires the use of a television camera with a non-telecentric lens (to maintain the perspective deformations used for the reconstruction of the third co-ordinate). The implementation of the third level, with automatic movement of the television camera, reduces further the involvement of the operator in that the operation of framing the surface by the television camera takes place automatically, whilst in the case of the fourth level, with reconstruction of the interior of the patches utilising a single television camera, it is possible to do away with the feeler and the associated checking phases, making the measurement faster and reducing its costs.
Finally, it is clear that the system described and illustrated here can have modifications and variations introduced thereto without thereby departing from the protective ambit of the present invention. In particular, as well as the implementation of the principle underlying the present invention being possible at the different levels already explained above, the system according to the invention can be modified in such a way as to allow a direct intervention by the operator in the different phases which require a decision, in such a way as to take account of particular requirements or to simplify the processing by the system itself.
The central unit 1 and the unit 6, instead of being implemented separately, each with separate control and processing units, can be incorporated in a single system with a single central control unit which manages a mathematical representation of the surface in all phases.
Finally, all the low and high level image processing for obtaining, from the image or images captured by the television camera, three-dimensional co-ordinates of the points of the surface, can be varied and improved by utilising appropriate calculation techniques, possibly according to future developments in methodologies and in available hardware. In particular, the television camera may possibly be replaced by any electronic image detection and memorisation system able to acquire an image of the surface viewed and to memorise it as an assembly of points characterised by the level of illumination of the corresponding points on the surface. Moreover, the flow charts of the operations and processing performed by the central unit 1 can be varied in such a way as to allow the intervention of the operator when necessary, or to eliminate some operating stages (for example because they have been effected at another time), or even to provide a different transfer of the control of the operations between different blocks of Figure 2.
It is finally to be underlined that the gauge used can be made in a different manner from that described and illustrated; in particular, the gauge elements could have different forms (for example circles) or only the elements used for the orientation of the gauge could have a different form from the others. Moreover, the gauge could be of three-dimensional type with figures or elements positioned on different planes or with three-dimensional structures (for example a sphere).

Claims (28)

1. A system for the measurement of three-dimensional surfaces to be represented mathematically, characterised by the fact that it comprises a measurement machine (5) defining a three-dimensional space for the measurement of a three-dimensional surface; optical means (10) adapted to generate an image of the three-dimensional surface to be measured, the said optical means (10) being carried by the said measurement machine (5) in a displaceable and orientatable manner within the said three-dimensional space; and processing means (1, 6) connected to the said optical means and adapted to determine the co-ordinates of points of the surface to be measured.
2. A system according to Claim 1, characterised by the fact that the said processing means (1, 6) comprise means adapted to locate points of strong contrast (39, 41, 42) with respect to adjacent zones present on the said surface.
3. A system according to Claim 2, characterised by the fact that the said points of strong contrast (39) are arranged in such a way as to define closed boundary lines of portions of the surface.
4. A system according to any of Claims from 1 to 3, characterised by the fact that the said optical means comprise a television camera (10).
5. A system according to any of Claims from 2 to 4, characterised by the fact that it further includes a probe (7) for tactile detection of at least some of the said points of strong contrast (39) on the said surface, the said processing means including first processing means (1) and second processing means (6), the said first processing means (1) being connected to the said optical means (10) for receiving images of the surface to be measured and including means (16) adapted to determine the two-dimensional co-ordinates of the said points of strong contrast (39), and the said second processing means (6) being connected to the said probe (7) and including means adapted to determine the third co-ordinate of the points detected by the said probe.
6. A system according to any of Claims from 2 to 4, characterised by the fact that the said optical means (10) comprise means adapted to generate two different images of the said surface to be measured, each said image being constituted by a plurality of points (200, 203a-203d) correlated to the luminance of points of the surface to be measured, and the said processing means (1) comprising means (48) adapted to correlate pairs of points in the two images corresponding to the same point of the surface to be measured, and means (49) adapted to determine the three-dimensional co-ordinates of the points of the surface starting from the said pairs of points by means of a perspective transformation.
7. A system according to any of Claims from 1 to 6, characterised by the fact that it includes means (8, 11) for automatic movement of the said optical means (10).
8. A system according to claim 6 or Claim 7, characterised by the fact that it further includes a probe (7) for tactile detection of at least some of the points of the surface the co-ordinates of which have been determined, and means (6) adapted to correct the determined co-ordinates on the basis of the tactile detection.
9. A system according to any preceding claim, characterised by the fact that it includes means (50, 6) adapted to derive a physical model of the surface to be measured.
10. A system according to any of Claims from 2 to 9, characterised by the fact that the said processing means (1) include a calibration unit (18) adapted to generate commands relating to the measurement of a known gauge surface (44) and to determine intrinsic parameters of the said optical means (10), an assessment and measurement unit (15) adapted to evaluate the attitude of the said optical means and to generate commands relating to the measurement of a surface, and an image processor (16) adapted to capture the image generated by the said optical means, to detect the said points of strong contrast (39, 41, 42) and to arrange them in chains of points.
11. A system according to Claim 10, characterised by the fact that the said image processor (16) is adapted to capture two images of the surface to be measured in two different positions of the said optical means (10), and by the fact that the said processing means (1) further include a stereo processor (17) receiving from the said image processor (16) the chains of points relating to the said two images, the said stereo processor including means (48) adapted to link the points of the chains belonging to the two images and corresponding to the same point on the surface to be measured, and means (49) adapted to determine the three-dimensional co-ordinates of the said points of the surface starting from the linked points, by means of a perspective transformation.
12. A system according to Claim 10, characterised by the fact that the said gauge surface (44) comprises figures (41, 42) having geometric characteristics which are not varied by perspective transformation, and by the fact that the said calibration unit (18) comprises means (54) adapted to correlate the points of the said gauge surface (44) to points of the image of the gauge surface itself.
13. A system according to Claim 12, characterised by the fact that the said gauge surface (44) is constituted by a plurality of squares (41, 42) separated by surface portions having high contrast with respect to the said squares, the said invariable characteristics being constituted by the intersections of the diagonals of the said squares.
14. A system according to Claim 12 or Claim 13, characterised by the fact that the said calibration unit (18) includes further means (52, 72) adapted to detect the position and orientation of the said gauge surface with respect to the measurement machine (5) and to calculate the said geometric characteristics on the said gauge surface (44), means (73) adapted to determine the said geometric characteristics on the image of the said gauge surface and means (54, 75) adapted to calculate projections of the optical centre on the image plane, focal length and dimensions of the image pixels starting from the said geometric characteristics on the said gauge surface and on the said image.
15. A system according to any of Claims from 12 to 14, characterised by the fact that the said assessment and measurement unit includes means adapted to detect the position and orientation of the said gauge surface with respect to the said measurement machine, means adapted to calculate the said geometric characteristics on the said gauge surface, means adapted to determine the said geometric characteristics on the image of the said gauge surface and means adapted to calculate the rotation and the translation of the said optical means with respect to the said measurement machine starting from the said geometric characteristics on the said gauge surface and on the said image.
16. A system according to Claim 10 and any of Claims from 12 to 15, characterised by the fact that the said assessment and measurement unit (15) includes means (88) adapted to test the position of the said optical means (10), means (90) adapted to enable the said image processor (16) for the determination of the said chains of points, means adapted to determine the two-dimensional co-ordinates of the said points of the said chains and means (97) adapted to command acquisition of the third co-ordinate of the points of the said chains by means of a tactile detection probe (7).
17. A system according to any of Claims from 10 to 15, characterised by the fact that the said assessment and measurement unit (15) includes means (88, 91) adapted to test the positioning of the said optical means (10), means (90, 93) adapted to control the said image processor (16) for the determination of a first and a second plurality of the said chains of points in a first and a second image respectively relating to different distances between the said optical means (10) and the surface to be measured, and means (96) adapted to control the said stereo processor (17) for determination of the said three-dimensional co-ordinates.
18. A system according to any of Claims from 10 to 17, characterised by the fact that the said image processor (16) includes means (32, 100) adapted to calculate the luminance gradient of the image points, means (32, 101-107) adapted to determine points with the maximum gradient with respect to the neighbouring points and linking means (34, 118) adapted to link together the said maximum gradient points in chains of adjacent points within a predetermined distance.
19. A system according to Claim 18, characterised by the fact that the said image processor (16) further includes first means (33, 111-116) adapted to calculate the barycentre between pairs of points of maximum gradient, second means (35, 121-127) adapted to calculate significant points with respect to the said points of maximum gradient, and third means (110, 119) adapted to enable the said first or the said second means selectively, the said first means (33, 111-116) being interposed between the said means (32, 100) adapted to calculate the gradient and the said linking means (34, 118), and the said second means (35, 121-127) being interposed downstream of the said linking means (34, 118).
20. A system according to Claim 18 or Claim 19, characterised by the fact that the said means (32, 101-107) adapted to determine points of maximum gradient include means (101) adapted to select the points of the image having a gradient greater than a predetermined threshold, means (102) adapted to determine the direction of the gradient, means (103) adapted to determine the gradient of points adjacent to the selected image points in the determined direction, and means (107) adapted to calculate the maximum of a parabola passing through the three-dimensional points corresponding to the selected image point and to the determined adjacent points and having a third co-ordinate equal to the respective gradient values, and to determine the co-ordinates of the maximum point of the said parabola.
21. A system according to Claim 19, characterised by the fact that the said first means (33, 111-116) include means (111) adapted to determine a half-line extending from a maximum gradient point, means (112) adapted to search along the said half-line for another maximum gradient point within a predetermined distance and means (115, 116) adapted to substitute the said point and the said further point of maximum gradient with their barycentre if these have a gradient in the same direction and opposite sense.
22. A system according to Claim 19, characterised by the fact that the said second means (35, 121-127) include means (121) adapted to select closed chains of points having a length greater than a predetermined value and defining the perimeter of geometric forms, means (122-124) adapted to determine the equations of four straight lines of the outline identified by points of each chain of points, means (125) adapted to determine the mutual intersection of two diagonal straight lines each connecting two opposite points of intersection of the said outline straight lines and means (126) adapted to orientate the said significant points.
23. A system according to any of Claims from 11 to 15 and 17 to 22, characterised by the fact that the said stereo processor (17) includes means (45-48) for the identification of closed loops in each of the images processed by the said image processor (16), the said means for identification of the closed loops comprising means (45, 171) adapted to determine the local inclination of the ends of the said chains of points; means (45, 172) adapted to group the ends of the chains of points disposed within a predetermined distance from one another; means (46, 175) adapted to identify pairs of outlines the grouped ends of which have a local inclination in the same direction and opposite sense; means (46, 176) adapted to extend the said pairs of outlines to a common point; means (47, 177) adapted to determine the intersection of the said pairs of extended outlines; and means (48, 180) adapted to determine closed paths including the said extended outlines and the said intersections.
24. A system according to Claim 23, characterised by the fact that the said stereo processor (17) includes further means (48, 144) for coupling pairs of corresponding points belonging to the said closed loops of the said two images, means (49, 145) adapted to calculate the three-dimensional co-ordinates of the said pairs of coupled points and to calculate the equations of curves passing through the said three-dimensional co-ordinates and means (50, 146) adapted to calculate the equations of the surface extending between the said curves.
25. A method for the measurement of three-dimensional surfaces, characterised by the fact that it generates lines of strong contrast (39) on a surface to be measured, translates and orientates (80, 88, 91) optical means (10) for the capture of images so as to frame the surface to be measured in a three-dimensional space, captures (82, 90, 93) images of the said surface, locates (83, 94, 95) points belonging to the said lines of strong contrast and determines (96) the co-ordinates of the said points.
26. A gauge (44) for the calibration of the measurement system according to any of Claims from 1 to 24, characterised by the fact that it comprises a flat surface defining a plurality of squares (41, 42) disposed regularly on the said flat surface and having strong contrast with respect to the rest of the said flat surface.
27. A structure according to Claim 26, characterised by the fact that two (42) of the said squares have dimensions greater than the other squares (41).
28. A system for the measurement of three-dimensional surfaces to be represented mathematically, as described with reference to the attached drawings.
GB9126856A 1991-01-29 1991-12-18 A system for the measurement of three-dimensional surfaces to be represented mathematically Expired - Fee Related GB2253052B (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
ITTO910052A IT1245014B (en) 1991-01-29 1991-01-29 SYSTEM FOR THE THREE-DIMENSIONAL MEASUREMENT OF SCULPTED SURFACES TO MATHEMATIZE

Publications (3)

Publication Number Publication Date
GB9126856D0 GB9126856D0 (en) 1992-02-19
GB2253052A true GB2253052A (en) 1992-08-26
GB2253052B GB2253052B (en) 1994-11-30

Family

ID=11408835

Family Applications (1)

Application Number Title Priority Date Filing Date
GB9126856A Expired - Fee Related GB2253052B (en) 1991-01-29 1991-12-18 A system for the measurement of three-dimensional surfaces to be represented mathematically

Country Status (5)

Country Link
JP (1) JPH0626825A (en)
DE (1) DE4143193A1 (en)
FR (1) FR2672119B3 (en)
GB (1) GB2253052B (en)
IT (1) IT1245014B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2328280A (en) * 1997-07-31 1999-02-17 Tricorder Technology Plc Scanning to obtain size, shape or other 3D surface features
US6516099B1 (en) 1997-08-05 2003-02-04 Canon Kabushiki Kaisha Image processing apparatus
US6647146B1 (en) 1997-08-05 2003-11-11 Canon Kabushiki Kaisha Image processing apparatus
US6668082B1 (en) 1997-08-05 2003-12-23 Canon Kabushiki Kaisha Image processing apparatus

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE4327250C5 (en) * 1992-09-25 2008-11-20 Carl Zeiss Industrielle Messtechnik Gmbh Method for measuring coordinates on workpieces
DE4335121A1 (en) * 1993-10-17 1995-05-04 Robert Prof Dr Ing Massen Automatic area feedback in optical 3D digitisers
DE4440573A1 (en) * 1994-11-14 1996-05-15 Matallana Kielmann Michael Determining curvature, contour, and absolute coordinates of reflecting surface
GB2371964A (en) * 2001-01-31 2002-08-07 Tct Internat Plc Surface imaging for patient positioning in radiotherapy
JP4639135B2 (en) * 2005-10-19 2011-02-23 株式会社ミツトヨ Probe observation device, surface texture measurement device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1990008938A1 (en) * 1989-01-24 1990-08-09 Jacques Chazal Instrument for measuring angles and plotting with direct angle display and/or corresponding trigonometric values


Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
GB2328280A (en) * 1997-07-31 1999-02-17 Tricorder Technology Plc Scanning to obtain size, shape or other 3D surface features
GB2328280B (en) * 1997-07-31 2002-03-13 Tricorder Technology Plc Scanning apparatus and methods
US6516099B1 (en) 1997-08-05 2003-02-04 Canon Kabushiki Kaisha Image processing apparatus
US6647146B1 (en) 1997-08-05 2003-11-11 Canon Kabushiki Kaisha Image processing apparatus
US6668082B1 (en) 1997-08-05 2003-12-23 Canon Kabushiki Kaisha Image processing apparatus

Also Published As

Publication number Publication date
GB2253052B (en) 1994-11-30
JPH0626825A (en) 1994-02-04
ITTO910052A1 (en) 1992-07-29
FR2672119A1 (en) 1992-07-31
GB9126856D0 (en) 1992-02-19
ITTO910052A0 (en) 1991-01-29
DE4143193A1 (en) 1992-07-30
IT1245014B (en) 1994-09-13
FR2672119B3 (en) 1993-05-14

Similar Documents

Publication Publication Date Title
US5513276A (en) Apparatus and method for three-dimensional perspective imaging of objects
DE112014001459B4 (en) Method for determining three-dimensional coordinates on a surface of an object
US5638461A (en) Stereoscopic electro-optical system for automated inspection and/or alignment of imaging devices on a production assembly line
US7693325B2 (en) Transprojection of geometry data
US6858826B2 (en) Method and apparatus for scanning three-dimensional objects
EP2104365A1 (en) Method and apparatus for rapid three-dimensional restoration
WO2012053521A1 (en) Optical information processing device, optical information processing method, optical information processing system, and optical information processing program
US4776692A (en) Testing light transmitting articles
US10779793B1 (en) X-ray detector pose estimation in medical imaging
GB2253052A (en) Measurement of three-dimensional surfaces
US6730926B2 (en) Sensing head and apparatus for determining the position and orientation of a target object
US20240175677A1 (en) Measuring system providing shape from shading
Pajor et al. Intelligent machine tool–vision based 3D scanning system for positioning of the workpiece
El-Hakim A hierarchical approach to stereo vision
JP2961140B2 (en) Image processing method
Weckesser et al. Photogrammetric calibration methods for an active stereo vision system
Uyanik et al. A method for determining 3D surface points of objects by a single camera and rotary stage
Xu et al. 3D face image acquisition and reconstruction system
Singh et al. Digital photogrammetry for automatic close range measurement of textureless and featureless objects
Chen et al. A new robotic hand/eye calibration method by active viewing of a checkerboard pattern
CA2356618C (en) Sensing head and apparatus for determining the position and orientation of a target object
JPH0534117A (en) Image processing method
Cardillo et al. 3-D position sensing using a single camera approach
Robson et al. Surface characterisation by tracking discrete targets
JPH10228542A (en) Method and instrument for three-dimensional measurement

Legal Events

Date Code Title Description
PCNP Patent ceased through non-payment of renewal fee

Effective date: 19961218