WO1996030718A1 - System identification - Google Patents

System identification

Info

Publication number
WO1996030718A1
Authority
WO
WIPO (PCT)
Application number
PCT/DK1996/000136
Other languages
French (fr)
Inventor
Bent Herrmann
Preben HJØRNET
Original Assignee
Pipetech Aps
Application filed by Pipetech Aps
Priority to AU52705/96A
Publication of WO1996030718A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G01B11/25 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures by projecting a pattern, e.g. one or more lines, moiré fringes on the object

Definitions

  • the shape of an object may be defined in any way by data that for at least one specific value of the set of geometrical parameters defining the shape of the object unambiguously define points on the surface of the object, such as by mathematical functions, by a set of geometrical parameters, by coordinates defining a set of points on the surface of the object, etc, the coordinates being generated from e.g. a mathematical model, determinations by a measurement system of positions of points on the surface of an object with the given shape, e.g. by a measurement system according to the present invention, a CAD system, scanning of drawings, etc.
  • the object to be measured is observed by the measuring system by measurement of positions of arbitrary points on the surface of the object in relation to a sensor of the system.
  • the position measurements, whereby a position of a point of a surface of an object is determined may be done by moving a mechanical device into contact with the point and reading the position of the mechanical device, e.g. using a coordinate measuring machine having scales on moving parts to enable read-out of positions of these parts.
  • positions of points of the surface of an object are determined by transmitting one or more beams of radiated energy towards the object and detecting radiated energy that has interacted with the surface of the object.
  • the radiated energy may be of any form, such as ultrasound radiation, sound radiation, electromagnetic radiation of any frequency, such as radiation of X-rays, gamma rays, etc, or particle radiation, such as radiation of electrons, neutrons, alpha-particles, etc.
  • the object to be measured may interact with the radiated energy by reflecting, refracting, diffracting or absorbing energy or by any combination hereof.
  • a detector of X-rays and a source of X-rays are provided.
  • a laser emits a linear light beam towards the object under measurement
  • a video camera with a CCD chip detects light diffusely reflected from the surface of the object.
  • the positions of the points of the surface of the object reflecting the light beam are determined by triangulation methods.
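  • A minimal sketch of such a triangulation, assuming a two-dimensional geometry in the plane of the light sheet and purely illustrative names (triangulate, baseline, theta_laser, theta_camera), could look as follows:

    import numpy as np

    def triangulate(baseline, theta_laser, theta_camera):
        # Intersect the laser ray, leaving the origin at angle theta_laser, with
        # the camera viewing ray, leaving (baseline, 0) at angle theta_camera;
        # both angles are measured from the baseline (x-axis), in radians.
        d_laser = np.array([np.cos(theta_laser), np.sin(theta_laser)])
        d_camera = np.array([np.cos(theta_camera), np.sin(theta_camera)])
        # solve origin + t1 * d_laser = (baseline, 0) + t2 * d_camera for t1, t2
        A = np.column_stack((d_laser, -d_camera))
        t1, _ = np.linalg.solve(A, np.array([baseline, 0.0]))
        return t1 * d_laser

    # example: laser pointing straight up, camera 0.5 m away looking back at 120 degrees
    print(triangulate(0.5, np.deg2rad(90.0), np.deg2rad(120.0)))   # approx. (0.0, 0.87)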
  • the beam of radiated energy is swept across the surface of the object, e.g. by moving the sources or the detectors, or the sources and the detectors in fixed positions in relation to each other, in relation to the object, or by deflecting the beam of radiated energy by a movable deflecting means, e.g. a movable mirror, etc.
  • the radiated energy beams need not be of known shapes.
  • a scene with objects may be illuminated by a set of incoherent light sources emitting substantially white light in all directions, such as light bulbs.
  • Two cameras with known positions in relation to each other may be used to determine positions of points of the surfaces of the objects by stereo techniques.
  • Such a system may also be used to track objects as positions of an object may be determined as a function of time.
  • the determined positions may be used to control a zooming system of another system or the position of another system, such as a robot, a camera, a system according to the present invention, e.g for detailed measurements of the object, for classification of the object, for recognition of the object, etc, etc.
  • a robot could be equipped with a camera positioned close to the tool center point of the robot.
  • the present system may then be used to guide the tool center point to the object with a rough accuracy and then use the robot's camera to accurately determine the position of parts of the object to be operated upon by the robot.
  • more than one object may be tracked and the distance between objects may be monitored for surveillance and control purposes.
  • coherent sets of radiated energy that has interacted with arbitrary parts of the object, and directions of the one or more energy beams or movements of the sensors and/or detectors in relation to the object, may be detected, dependent upon the sweeping method utilized. If more than one detector of radiated energy is used to detect energy from the same points of the surface of an object, the position of the source of the energy or the direction of the beam of radiated energy need not be known, as the positions of points of the object interacting with the beam may be determined by stereo techniques based solely on data from two detectors.
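  • A minimal sketch of such a stereo determination, assuming that each detector supplies a camera centre and a unit viewing direction towards the point in question (the names c1, d1, c2, d2 are illustrative), takes the midpoint of the shortest segment between the two back-projected rays:

    import numpy as np

    def stereo_point(c1, d1, c2, d2):
        # midpoint of the shortest segment between the two viewing rays
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        b = c2 - c1
        # ray parameters t1, t2 minimising |c1 + t1*d1 - (c2 + t2*d2)|
        A = np.array([[d1 @ d1, -d1 @ d2],
                      [d1 @ d2, -d2 @ d2]])
        t1, t2 = np.linalg.solve(A, np.array([d1 @ b, d2 @ b]))
        return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

    # example: cameras at (+/-0.5, 0, 0), both viewing the point (0, 0, 1)
    print(stereo_point(np.array([-0.5, 0.0, 0.0]), np.array([0.5, 0.0, 1.0]),
                       np.array([0.5, 0.0, 0.0]), np.array([-0.5, 0.0, 1.0])))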
  • the movements of detectors or sources of radiated energy, objects to be measured, or components deflecting beams of radiated energy may be performed by e.g. mechanical actuators.
  • the measurement system may comprise adjustable parts.
  • an embodiment of the invention described in more detail below comprises a laser source and a CCD camera in a housing that can be rotated about an axis. The distance between the axis of rotation and the housing is adjustable to accommodate measurement of objects of different sizes. Upon adjustment, the system is re-calibrated by a system identification.
  • a system according to the present invention comprises one or more first sensors, the signal outputs of which contain information about positions (x_sm, y_sm, z_sm) of one or more points of the surface of an object to be measured. Positions (x_sm, y_sm, z_sm) measured by a specific first sensor are defined in relation to a coordinate system SM of that first sensor.
  • a system according to the present invention may comprise manipulators for moving one or more of the first sensors. The position of the first sensors moved by the manipulators will then be measured by a set of second sensors.
  • the set of variable parameters T_sm comprises the position of the first sensor and other variable states of the first sensor influencing the determination of the position of the surface of an object, while the set of static parameters P_sm comprises fixed parameters of the first sensor influencing determination of the position of the surface of an object.
  • the one or more sensor signals are denoted I_sm.
  • a mathematical model for determination of the position (x_sm, y_sm, z_sm) of a point on the surface of an object based on the first sensor signals I_sm and the parameters of the first sensor P_sm and T_sm can be described by the following equations:
  • x_sm = F_smx(I_sm, T_sm, P_sm) (1)
  • y_sm = F_smy(I_sm, T_sm, P_sm) (2)
  • z_sm = F_smz(I_sm, T_sm, P_sm) (3)
  • Coordinates of a position (x_sm, y_sm, z_sm) determined in relation to a coordinate system of a sensor may be transformed into coordinates (x_b, y_b, z_b) of a common coordinate system B of the measurement system:
  • x_b = F_bx(x_sm, y_sm, z_sm, T_b, P_b) (4)
  • y_b = F_by(x_sm, y_sm, z_sm, T_b, P_b) (5)
  • z_b = F_bz(x_sm, y_sm, z_sm, T_b, P_b) (6)
  • the system comprises an object of a known shape to be measured.
  • the position of the object in relation to the coordinate system B may be described by a mathematical model comprising fixed parameters P_q and variable parameters T_q, T_q being determined by third sensors:
  • y_q = F_qy(x_b, y_b, z_b, T_q, P_q) (9)
  • z_q = F_qz(x_b, y_b, z_b, T_q, P_q) (10)
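  • Assuming, for illustration only, that the functions of equations (4) - (10) are rigid transformations (a rotation matrix and a translation vector each), the chain from sensor coordinates to object coordinates can be sketched as a single matrix product:

    import numpy as np

    def to_homogeneous(R, t):
        # 4x4 homogeneous matrix of the rigid transformation p -> R @ p + t
        T = np.eye(4)
        T[:3, :3] = R
        T[:3, 3] = t
        return T

    def sm_to_q(p_sm, R_b, t_b, R_q, t_q):
        # carry a sensor coordinate (x_sm, y_sm, z_sm) into the object coordinate
        # system Q via the measurement coordinate system B
        T = to_homogeneous(R_q, t_q) @ to_homogeneous(R_b, t_b)
        return (T @ np.append(p_sm, 1.0))[:3]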
  • the term total system identification denotes determination of all fixed parameters of a system, i.e. the set of parameters P_sm, P_b, and P_q.
  • a total system identification comprises the steps of mounting an object with a known shape in an adequate, but not precisely known, position in relation to the measurement system, measuring the surface of the object, e.g. by moving manipulators of the system along suitable paths, while recording coherent sets of I_sm, T_sm, T_b and T_q, followed by mathematical calculations as described below.
  • the error value defined by (14) is utilized in formation of a so-called error function which expresses the quality of the system identification, i.e. it gives a quantitative measure for how close the selected set of values of parameters P_sm, P_b and P_q of the system are to their true values. A good determination of the parameters will result in a value of the error function close to zero.
  • the value of the error function is non-negative. Any non-negative function of e may be used as the error function C, such as
  • any known parameter estimation algorithm, such as Downhill Simplex, Powell's Method, Conjugate Gradient Methods, Variable Metric Methods, the Levenberg-Marquardt Method, etc, may be used to determine the parameters resulting in the minimum value of the error function. It is an important aspect of the present invention that any point on the surface may be measured during system identification and that the position of the object need not be known to be able to determine parameters of the system.
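  • A minimal sketch of such a parameter determination, assuming that the measured data have already been reduced to a residual function giving the deviation e of each observation from the model surface (the names identify, residual, p_initial and observations are illustrative), using the Downhill Simplex algorithm named above as implemented in scipy:

    from scipy.optimize import minimize

    def identify(residual, p_initial, observations):
        # residual(p, obs) returns the deviation e between one observed surface
        # point and the model surface for the parameter vector p; the error
        # function C is here chosen as the sum of squared deviations, one of the
        # possible non-negative functions of e.
        def C(p):
            return sum(residual(p, obs) ** 2 for obs in observations)
        result = minimize(C, p_initial, method="Nelder-Mead")   # Downhill Simplex
        return result.x, result.fun   # estimated parameters and remaining error value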
  • the system identification described above relates to the coordinate system Q of the object.
  • the system identification described above may as well relate to any coordinate system, such as the B coordinate system, the SM coordinate system, etc.
  • the methods relating to different coordinate systems are similar. It is of course required that transformations of coordinates and functions can be defined for the coordinate system in question.
  • the accuracy of the determination of parameters by a system identification may be further improved by performing the system identification for two or more different unknown values of one or more of the fixed parameters to be determined.
  • the system identification could be carried out for a plurality of positions of the energy source.
  • the precision of the determination of the remaining fixed parameters not relating to the position of the energy source is increased as these parameters have to fulfil a larger set of requirements.
  • an object to be used for calibration of the measurement system is formed so that the error function utilized for the calibration has one global minimum and no local minima. In this way, the risk that the estimation algorithm finds a local minimum and interprets it as a global minimum is eliminated.
  • the optimum shape resulting in an error function with one single minimum may differ significantly from the shape of the objects to be measured, contrary to known methods and systems wherein objects with a shape that is similar to the shape of the objects to be measured are used during calibration.
  • the object used to calibrate the laser line scanner for determination of geometrical parameters of pipes, shown in Fig. 7 and described below, is an example hereof.
  • parameters describing distortion in a CCD camera may be included in the mathematical model of the system.
  • Ideally, a straight line on the object under measurement is imaged onto the CCD chip in the camera as a straight line.
  • However, distortion in the optics of the system and the camera may cause the straight line on the object to be imaged onto a curve on the CCD chip.
  • the deviation of the curve from the desired straight line is determined by parameters describing the distortion of the system.
  • r is the distance of the point (x_ku, y_ku) from the optical axis (x_k0, y_k0) of the optical system:
  • g(r) = A_1·r + A_2·r² (20)
  • the transformation from (x_ku, y_ku) to (x_k, y_k) is defined by four parameters P_ku: x_k0, y_k0, A_1 and A_2 that are determined by system identification.
  • the parameters P ku described above may be determined independently of the other parts of the measurement system by a system identification performed exclusively on the optical system with the CCD camera. After determination of the parameters P ku , these parameters and the corresponding mathematical model, i.e. equations (17) - (20), can be transferred to any system in which the optical system with the CCD camera is going to be utilized.
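  • A sketch of the corresponding correction, assuming that g(r) of equation (20) gives the corrected radial distance from the optical axis; equations (17) - (19) are not reproduced here, so the form below is only one plausible reading:

    import numpy as np

    def undistort(x_ku, y_ku, x_k0, y_k0, A1, A2):
        # map a detected CCD coordinate (x_ku, y_ku) to a corrected coordinate
        # (x_k, y_k) by rescaling its radial distance from the optical axis
        dx, dy = x_ku - x_k0, y_ku - y_k0
        r = np.hypot(dx, dy)
        if r == 0.0:
            return x_k0, y_k0
        g = A1 * r + A2 * r ** 2      # equation (20)
        return x_k0 + (g / r) * dx, y_k0 + (g / r) * dy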
  • the accuracy of the method and system can be improved by estimating the position of the center of the image of the laser line on the detector with a resolution that is higher than the resolution of the detector.
  • the intensity in a laser light beam is given by I(r) = I_0·exp(-2·r²/w_0²), wherein
  • I_0 is the peak intensity at the center of the laser beam,
  • r is the distance from the center of the laser beam, and
  • w_0 is the gaussian beam radius.
  • the center axis of the line is determined with a resolution that is higher than the resolution of the detector.
  • the center of the line of diffuse reflection of light from the surface of the object can be estimated in many other different ways, such as by calculating a weighted average of the detected intensities across the line image, calculating and determining the zero-crossing of the Hilbert transform of the detected intensities, etc.
  • Since the width of the image of the line has to be larger than the resolution of the detector to be able to estimate the center of the line, a width in the range of 5 to 15 times, preferably 8 to 12 times, such as 10 times, the resolution of the detector is presently preferred.
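  • The weighted-average estimate mentioned above can be sketched as follows for a single detector row; the function name and the gaussian test profile are illustrative and only serve to show the sub-pixel resolution:

    import numpy as np

    def line_center(intensities, pixel_positions=None):
        # intensity-weighted average (centroid) across the line image
        intensities = np.asarray(intensities, dtype=float)
        if pixel_positions is None:
            pixel_positions = np.arange(intensities.size)
        return np.sum(pixel_positions * intensities) / np.sum(intensities)

    # example: a gaussian line profile centred at pixel 12.3 with w_0 = 4 pixels
    pixels = np.arange(25)
    profile = np.exp(-2.0 * (pixels - 12.3) ** 2 / 4.0 ** 2)
    print(line_center(profile))   # close to 12.3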
  • the accuracy of the method and system can be further improved when the radiated beam of energy, such as a laser line sheet, intersects the object under measurement along a straight line, such as when a laser light sheet intersects a plane surface of an object.
  • In another embodiment, a system based on contact measurements of the surface of an object is provided.
  • the system comprises a cylindrical measurement sensor with a linearly displaceable actuator which is brought into contact with measurement points on the surface of the object under measurement.
  • a linear sensor such as a linear potentiometer, provides an electronic signal, such as a voltage, a current, a digital value, etc, which is a function of the position of the linear actuator.
  • each linear actuator is sensed by a linear sensor as described above for the measurement sensor.
  • the mathematical model comprises two variable parameters T_b: V_1 and V_2, and 14 fixed parameters P_b: a_b1x, a_b1y, a_b1z, b_b1x, b_b1y, b_b1z, I_1, a_bx, a_by, a_bz, b_bx, b_by, b_bz, and I_2.
  • the transformation between the coordinate systems B and Q comprises six fixed parameters P_q: a_x, a_y, a_z, b_x, b_y and b_z.
  • a_x, a_y, and a_z are angles of rotation of the B coordinate system in relation to the Q coordinate system.
  • b_x, b_y and b_z define the position of the origin of the Q coordinate system in the B coordinate system.
  • the system is calibrated, i.e. the set of fixed parameters P_sm and P_b is determined, with the object shown in Fig. 7 and described in more detail below.
  • Methods and systems according to the present invention can be utilized in a large range of systems for a large variety of applications, which systems are not intended for determination of geometrical parameters of objects, but wherein system identification according to the present invention is applied to calibrate the systems.
  • a number of examples are mentioned below.
  • a robot, such as a robot for welding, assembling parts, etc, is provided according to the present invention with a laser line scanner, and system identification is utilized to determine the positions of the parts to be operated upon.
  • the width of the gap between parts to be welded together may also be determined by system identification and the welding process may be controlled in response to the determined width.
  • a system for determination of the shapes of the optical boundary surfaces of the eye of a living being is provided according to the present invention, comprising a laser line scanner linearly displaceable in a direction substantially perpendicular to the optical axis of the eye under measurement.
  • a CCD camera detects light diffusely reflected or light specularly reflected from the various optical boundary surfaces of the eye.
  • the shapes of the boundary surfaces are determined by system identification independent of the position of the eye as described previously. Different kinds of eye diseases may be classified according to relative positions of the optical boundary surfaces. The previously described classification method may then be applied to determine presence or absence of a specific eye disease and if an eye disease is present, the seriousness of the disease may be quantified, e.g. represented by the error value.
  • An X-ray scanner, e.g. for scanning of tissue of living beings, scanning of mechanical parts, scanning of weldings, etc, is provided according to the present invention.
  • the surface of an object, e.g. the surface of a bone of a living being, is determined by measurement of absorption of the X-rays, as different materials, e.g. different kinds of tissue of living beings, absorb X-rays differently.
  • the exact positions of the source and detector of the system are determined by a system identification according to the present invention, measuring X-ray absorption of an object of known shape and with a known absorption coefficient of X-rays, its position being unknown.
  • An ultrasonic scanner is provided according to the present invention, comprising an ultrasonic transducer for transmission and reception of ultrasound pulses.
  • the distance to the object under measurement is determined by the time interval between transmission and reception of the ultrasound pulse and the velocity of the sound waves in the medium through which the waves propagate. Calibration of and measurements by the ultrasonic scanner are performed according to the methods described previously.
  • a laser scanner for scanning of teeth, e.g. scanning of drilled holes in teeth to make a filling which matches the hole perfectly, is provided according to the present invention.
  • the surface of the tooth reflects the light beams and a pattern of light dots on the tooth is generated.
  • the laser scanner further comprises a CCD camera for detection of diffusely reflected light from the tooth and the geometrical parameters of the hole in the tooth are determined according to the method previously described.
  • the method of classification may be applied to classify different kinds of defects of a tooth optionally followed by determination of characteristic geometrical parameters for the kind of defect in question.
  • the resulting parameters may be transferred to another system for
  • a laser scanner for scanning of surfaces of living beings, e.g. the face of a human being, is provided according to the present invention, utilized to evaluate and plan plastic surgery.
  • the system provides visualization of the outcome of different proposals for surgery and the patient may
  • a method according to the present invention is provided for determination of the accuracy of a measurement system as a function of the position of the measurement in relation to the measurement system, wherein an object with an accurately known shape is positioned at different positions in relation to the measurement system and a system identification is performed at each position.
  • the error value in each position of the object determined by the system identification represents the measurement accuracy of the measurement system at that position.
  • a system for monitoring changes in a process, such as during surface treatment of mechanical parts, machining of mechanical parts, etc, is provided according to the present invention.
  • a system for measurement of the suspension of wheels on an automobile comprising four laser line scanners, each laser line scanner positioned in a specific corner of a rectangle and determining the position of the rim closest to the scanner in question by the method already described.
  • the positions of the scanners in relation to each other are determined so that the positions of the rims in relation to each other can be determined. Based on the determinations of the positions, the rims are aligned.
  • a system for quality control of patterns and colours comprising a source of white light, a colour CCD camera, a frame grabber and a computer.
  • the system is calibrated with an object having a known pattern of known colours for determination of parameters describing the system geometrically and optically, interpretation of colours included.
  • the description of the calibration object includes values for the colour and the intensity of light reflected from each point on the surface of the object.
  • the calibration is performed by comparing these values for a large number of arbitrary points on the surface of the object with the known values and adjusting the fixed parameters of the system for the best match of values, as described previously.
  • the error value can be defined in terms of κ(x_q, y_q, z_q), the known colour distribution of the surface, κ_obs, the colour measured, I(x_q, y_q, z_q), the known intensity distribution of the surface, and I_obs, the intensity measured.
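  • Since the expression itself is not reproduced here, the following is only one plausible non-negative error value built from the quantities named above; the symbol names are illustrative:

    def colour_error(points_q, kappa_model, intensity_model, kappa_obs, intensity_obs):
        # sum of squared deviations between known and measured colour and
        # intensity, evaluated at the measured surface points (x_q, y_q, z_q)
        e = 0.0
        for (x, y, z), k_obs, i_obs in zip(points_q, kappa_obs, intensity_obs):
            e += (kappa_model(x, y, z) - k_obs) ** 2 + (intensity_model(x, y, z) - i_obs) ** 2
        return e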
  • a system for thermographic applications is provided according to the present invention, operating according to principles similar to those of the system for quality control of patterns and colours described above.
  • the system comprises an infrared sensor for sensing of infrared radiation from a body, such as a human being, and processing means adapted to determine the temperatures at specific parts of the body based on the signal values received from the infrared sensor and
  • the system may be applied in the medical field for diagnostic purposes.
  • an object used for calibration of a measurement system may be mounted in and dismounted from the system automatically in order to provide automatic calibration of the system, e.g. to provide automatic calibration at regular time intervals.
  • This is particularly useful in systems comprising actuators for automatic positioning of sensors of the system. Upon a repositioning of one or more sensors, the system can be re-calibrated automatically.
  • an object used for calibration of a measurement system may be positioned in the measurement system by manipulator means and removed from the system again after calibration has been terminated.
  • Hereby, automatic calibration of the system is provided, e.g. to be executed at regular time intervals, upon adjustments of parameters of the system, etc.
  • Fig. 1 illustrates schematically the operating principles of a laser line scanner for determination of geometrical parameters of a joint surface of a pipe
  • Fig. 2 shows front and side views of a laser line scanner
  • Fig. 3 shows details of the laser line scanner sensor head
  • Fig. 4 shows schematically the sensor head of a laser line scanner with corresponding coordinate systems and system parameters
  • Fig. 5 shows a double laser line scanner
  • Fig. 6 shows a laser line scanner for measurements on a pallet
  • Fig. 7 shows an object utilized for calibration of a laser line scanner.
  • Fig. 1 shows the principles of operation of a laser line scanner (1).
  • a laser (2) is mounted in a sensor head (3) together with a CCD camera (4).
  • An optical system (5), e.g. a cylindrical lens, transforms the light beam from the laser (2) into a thin sheet of laser light (6) which intersects a pipe (7) under measurement along a thin curve (8).
  • Light diffusely reflected (9) from the points on the curve (8) of intersection between the laser light sheet (6) and the pipe (7) is detected by a CCD chip in the CCD camera (4).
  • the position of points on the curve (8) in relation to the CCD camera (4) is calculated from the position of points on the image of the curve on the CCD chip.
  • a light filter that mainly transmits light originally emitted from the laser is positioned in front of the CCD camera (4).
  • the diffusely reflected light (9) from the pipe (7) is transmitted to the CCD camera (4) via a precision mirror (10) in order to provide a compact sensor head (3).
  • the sensor head (3) is positioned on an arm (12) and the arm (12) is rotatably mounted on a shaft (13).
  • the pipe (7) to be measured is mounted in relation to the laser line scanner (1) so that the sensor head (3) can be rotated 360° around the joint surface of the pipe (7).
  • An encoder (11) is mounted on the shaft (13) to provide a signal containing information about the angular position of the arm (12).
  • the laser light sheet (6) sweeps the entire surface of the pipe joint.
  • the angle between the arm (12) of the laser line scanner (1) and the sensor head (3) is between 30° and 45°.
  • the laser line scanner may be positioned outside the transportation path of the objects so that special handling equipment to position the objects in the laser line scanner (1) and remove them again after measurement is not needed and thirdly, even when scanning inner surfaces of an object, it is not
  • the coordinates of the recorded points of the joint surface of the pipe (7) are transformed into coordinates of a coordinate system of the pipe aligned with the center axis of the joint surface of the pipe (7).
  • the surface contour of the pipe (7) is now easily calculated in relation to the center axis and is compared to the design specifications of the pipe for quality control purposes.
  • the measured geometry and the reference geometry may be displayed on a monitor and/or a printer and/or be transferred to another system.
  • the geometrical parameters of the reference object may be provided in any suitable way, such as by a CAD system.
  • Fig. 4 shows schematically the sensor head of a laser line scanner with corresponding coordinate systems and system parameters.
  • a mathematical model for the laser line scanner (1) corresponding to the previously described general model (equations (1) - (16)) is described below:
  • the sensor signals I_sm as recorded by the frame grabber represent coordinates (x_k, y_k) on the CCD chip of points on the image of a light curve (8) on the joint surface of the pipe (7). From the position (x_k, y_k) of an image point, the position (x_sm, y_sm, z_sm) of the corresponding point on the curve (8) is determined.
  • the model of the sensor comprises five fixed parameters P_sm: G_x, G_y, L, α_1 and α_2.
  • G_x and G_y are gains of the CCD camera (4) and the optical system (5) along the x- and y-axis, respectively.
  • L is the distance along the X_sm-axis between the virtual laser (22) and the virtual CCD camera (23).
  • α_1 is the angle between the laser light and the Y_sm-axis (not shown).
  • α_2 is the angle between the laser light and the X_sm-axis.
  • the coordinate system B of the measurement system, i.e. the laser line scanner, is aligned with the axis of rotation of the arm (12) and the longitudinal axis of the arm (12).
  • the angular position of the arm (12), measured by the encoder (11), is the only state variable of the variable parameters T_b of the system.
  • Coordinates of the coordinate system SM are transformed into coordinates of the coordinate system B by the following equations:
  • this model comprises four fixed parameters P_b: x_0, z_0, η and φ.
  • z_0 and x_0 are the distances from the axis of the coordinate system SM of the camera (4) to the center of the base coordinate system B.
  • η and φ are the angles of rotation about the z-axis and the x-axis, respectively, of the base coordinate system B.
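  • The corresponding equations are not reproduced here, so the composition below is only one plausible arrangement of this transformation: x_0 and z_0 are treated as a fixed offset of the sensor head on the arm, η and φ as fixed mounting rotations, and the encoder angle as a rotation of the arm about the z-axis of B:

    import numpy as np

    def rot_x(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

    def rot_z(a):
        c, s = np.cos(a), np.sin(a)
        return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

    def sm_to_b(p_sm, theta, x0, z0, eta, phi):
        # fixed mounting of the sensor head on the arm, followed by the measured
        # rotation theta of the arm about the z-axis of the base coordinate system B
        p_arm = rot_z(eta) @ rot_x(phi) @ np.asarray(p_sm, dtype=float) + np.array([x0, 0.0, z0])
        return rot_z(theta) @ p_arm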
  • the transformation between the coordinate systems B and Q comprises six fixed parameters P_q: a_x, a_y, a_z, b_x, b_y and b_z.
  • a_x, a_y, and a_z are angles of rotation of the B coordinate system in relation to the Q coordinate system.
  • b_x, b_y and b_z define the position of the origin of the Q coordinate system in the B coordinate system.
  • an alternative system to the laser line scanner for determination of geometrical parameters of a joint surface is provided, comprising one or more non-contact distance sensors, such as laser distance sensors based on triangulation. Each of the sensors generates a signal representing the distance to the surface of the object.
  • each of the transformations of coordinates from the coordinate systems of the one or more sensors to the coordinate system (B) of the measurement system may be derived analogously to the derivations for the laser line scanner described above, and each transformation comprises four fixed parameters η, φ, x_0 and z_0, i.e. 5 sensors lead to 20 parameters.
  • the transformation from the measurement coordinate system B to the object coordinate system Q is identical to the corresponding transformation of the laser line scanner.
  • the system is calibrated with the same object as the laser line scanner.
  • the laser line scanner described previously may comprise two or more CCD cameras and two or more laser line sources, i.e. lasers with optical systems emitting a laser light sheet.
  • two cameras are mounted at a fixed distance and at an angle in relation to each other so that they receive light from the same volume in space.
  • a laser line source is rotatably positioned between the two cameras.
  • a precision angular encoder for determination of the angular position of the laser line source is provided.
  • the positions of points of the object interacting with the laser light sheet are determined by stereo techniques based on data from the two cameras.
  • the system may be calibrated with the same object as the laser line scanner.
  • the system may be used to track objects as positions of an object may be determined as a function of time.
  • the determined positions may be used to control the position of another system, such as a robot, a camera, a system according to the present invention, e.g. for detailed measurements of the object, for classification of the object, for recognition of the object, etc.
  • a robot could be equipped with a camera positioned close to the tool center point of the robot.
  • the present system may then be used to guide the tool center point to the object with a rough accuracy and then, use the robot's camera to accurately determine the position of parts of the object to be operated upon by the robot.
  • more than one object may be tracked and the distance between objects may be monitored for surveillance and control purposes.
  • a mobile version such as a hand-held version of the laser line scanner is provided according to the invention.
  • the mobile scanner comprises a laser line source, a CCD camera, and a sound source, such as a loudspeaker, an ultrasound source, etc, positioned in a fixed position in relation to the laser line source and the CCD camera.
  • a number of sound detectors, such as microphones, ultrasound detectors, etc, preferably three sound detectors, are positioned on a frame in fixed positions in relation to each other.
  • the sound source transmits sound pulses that are received by the sound detectors on the frame. From the travel times of the pulses, the position of the mobile scanner in relation to the frame (and the B coordinate system) can be calculated.
  • the signals from the sensors are transmitted to a mobile computer.
  • the transformation from the B coordinate system to the object coordinate system Q and calibration of the system may be done as already described above for the laser line scanner.
  • the velocity of sound may be determined during calibration as a part of the system identification, or, a sound transmitter and a sound receiver positioned at a fixed mutual distance for measurement of the sound velocity may be provided either on the frame or on the scanner head.
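  • A sketch of this position determination, assuming three or more sound detectors at known frame positions and illustrative names throughout; with only three detectors the mirror solution on the other side of the detector plane has to be excluded by the geometry of the set-up:

    import numpy as np
    from scipy.optimize import least_squares

    def locate_scanner(detector_positions, travel_times, sound_velocity, p_start=(0.0, 0.0, 0.0)):
        # estimate the position of the sound source on the scanner head from the
        # pulse travel times to detectors at known positions on the frame
        detector_positions = np.asarray(detector_positions, dtype=float)
        distances = sound_velocity * np.asarray(travel_times, dtype=float)

        def residual(p):
            return np.linalg.norm(detector_positions - p, axis=1) - distances

        return least_squares(residual, np.asarray(p_start, dtype=float)).x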
  • Fig. 5 shows a double laser line scanner which is applied when the positions of the joint surfaces of the pipe in relation to each other are critical. For example, when pipes are pressed through soil, very accurate joint surfaces of the pipes are required to ensure that the pipes stay on the desired track. A skewness of an end surface may prove fatal to the construction of a pipe line pressed through the ground.
  • the laser line scanner (14) to the left measures the outer surface of the spigot (15) of the pipe and the laser line scanner (16) to the right measures the inner surface of the socket (17) of the pipe.
  • Each laser line scanner (14, 16) is calibrated individually as described above, and the position of the spigot (15) relative to the laser line scanner (14) and the position of the socket (17) relative to the laser line scanner (16) are determined as described above.
  • a specific calibration object is used for system identification of the entire system comprising both laser line scanners (14, 16) so that the positions of the two scanners (14, 16) in relation to each other are determined. Then, the position of the spigot (15) relative to the socket (17) is determined, and the length of the pipe and the straightness of the pipe may be determined.
  • Fig. 6 shows a laser line scanner (18) for measurements on a pallet (19).
  • the sensor head (20) is
  • is the angular displacement of the object under
  • the laser line scanner shown in Fig. 6 is especially useful for measurements on pallets, tyres, rims, rolled rings, etc.
  • the object used during calibration of the laser line scanners described above is shown in Fig. 7. It is seen that the surface of the object comprises a set of plane surfaces for which the lines of intersection between the plane surfaces are parallel. The main purpose of the calibration is to determine the fixed parameters P_sm and P_b of the measurement system.
  • the parameters of the calibration object, P_q, are also determined by the calibration.
  • the position of the calibration object need not be known in order to be able to calibrate the measurement system.
  • the shape of the calibration object is designed in such a way that the error function utilized during the calibration has only one minimum as a function of the fixed system parameters to be determined.
  • equations (36) - (44) above show that the transformation from the sensor signals I_sm to coordinates (x_q, y_q, z_q) in the coordinate system Q of the object is non-linear.
  • a number of planes between 5 and 7 is presently preferred.
  • the functions g_i(y_q) are linear functions of y_q.
  • the laser light sheet (6) will intersect the corresponding i'th surface along a straight line and, if the quality of the optics (5) of the detection system is sufficiently high to avoid distortion of the image, this line will be imaged as a straight line on the CCD chip of the CCD camera (4).
  • the image on the CCD chip can be pre-processed to compensate for distortion in the optical system (5) to generate data for a straight line.
  • the knowledge that the image is a straight line can be exploited to improve the accuracy of the system. Further, the processing speed can be improved and the number of data entering the algorithms can be reduced.
  • the laser light sheet (6) will intersect the object along a curve consisting of 5 line segments defining four angles between them.
  • the parameters of each line segment are now estimated and the parameters are used as input to the parameter estimation algorithm.
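  • A sketch of this pre-processing step, assuming that the image points have already been grouped into segments by the supplied breakpoints (the grouping itself, e.g. by corner detection, is not shown and all names are illustrative):

    import numpy as np

    def fit_segments(points, breakpoints):
        # fit a straight line y = a*x + b to each group of image points between
        # consecutive breakpoints; the (a, b) pairs are the segment parameters
        points = np.asarray(points, dtype=float)
        params = []
        for start, stop in zip(breakpoints[:-1], breakpoints[1:]):
            x, y = points[start:stop, 0], points[start:stop, 1]
            A = np.column_stack((x, np.ones_like(x)))
            (a, b), *_ = np.linalg.lstsq(A, y, rcond=None)
            params.append((a, b))
        return params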
  • the calibration object shown in Fig. 7 is presently preferred for calibration of laser line scanners.

Abstract

The present invention relates to a method for identification of a system, comprising measurement of the shape of an object considered to be a part of the system, and systems operating according to this method. The invention is applicable in quality control of manufactured objects, alignment of objects during assembly, recognition of objects, classification of objects, determination of positions of objects, calibration of measurement systems, calibration of actuator systems, such as robots, etc., etc. It is an important aspect of the present invention that accurate measurements of geometrical parameters of an object can be performed without the need of precisely aligning the object to be measured in relation to the measuring system. Further, a calibration object neither needs to be accurately aligned in relation to the measuring system nor needs to comprise specific points of the surface to be measured during the calibration.

Description

System identification
FIELD OF THE INVENTION
The present invention relates to a method for identification of a system, comprising measurement of the shape of an object considered to be a part of the system, and systems operating according to this method.
Further, the present invention relates to methods and systems for accurate determination of parameters of a system, such as a robot, so conveniently that subassemblies of the system can be produced with relaxed tolerances and, thus, at low cost.
Still further, the present invention relates to methods and systems for determination of geometrical parameters of objects in a large variety of applications, such as in quality control of manufactured objects, alignment of objects during assembly, recognition of objects, classification of objects, determination of positions of objects, calibration of measurement systems, calibration of manipulator systems, such as robots, etc, etc.
BACKGROUND OF THE INVENTION Several technologies and systems are known in the art of determining geometrical dimensions of objects. They mainly fall into two classes of technologies: contact measurements and non-contact measurements.
In contact measurements, a position of a specific point of a surface of an object is determined by moving a mechanical device into contact with the point and reading the position of the mechanical device. For example coordinate measuring machines have linear or angular scales on moving parts to enable read-out of positions of these parts. One class of non-contact measurements is based on optical measurement principles, according to which an object, the geometry of which is to be determined, is illuminated by a light beam, and reflected light from the surface of the object is detected by a light detector, such as a CCD camera. The position of the points of the surface reflecting light is calculated by triangulation.
For example, EP 0 452 422 discloses a "shape acquisition system" capable of delivering coordinates of a three-dimensional object. The system is calibrated using a
calibration plate of known dimensions having one or more geometrical reliefs. The precise position of the calibration plate in the system need not be known. However, specific parts, i.e. the geometrical reliefs, with a known geometry of the surface of the calibration plate have to be measured during calibration.
EP 0 317 768 discloses a "contour measuring apparatus" for measurement of three-dimensional contours of a surface by directing a plurality of individual light beams from point light sources onto the surface of the object to be measured, detecting the reflected beams of light from the surface of the object, and calculating the local radius of curvature of the measured surface at each point of incidence of the individual light beams. The calculation is based on the position, on the light detector (e.g. CCD-chip), of the image of the corresponding point light source.
A sphere of precisely known geometrical dimensions is used to calibrate the apparatus. Both the object for calibration and objects to be measured have to be mounted in accurately known positions to ensure successful functioning of the apparatus.
It is a major disadvantage of the systems of the above-mentioned kind that in order to be able to determine
geometrical dimensions of an object with a high precision, it is crucial to align the object accurately in the measurement system. This requirement of alignment is very hard to meet, and it is time consuming and expensive.
It is another disadvantage of known systems that for
calibration of the systems, a calibration object either has to be mounted in a precisely known position relative to the measurement system, or has to comprise reliefs of precisely known dimensions in a precisely known pattern on its surface.
It is yet another disadvantage of known systems that
calibration of the systems requires the effort of persons with expert knowledge of the systems.
It is still another disadvantage of known systems that the systems, if adjusted, e.g. for measurement of objects of different sizes, have to be re-calibrated.
SUMMARY OF THE INVENTION Thus, there is a need for a method and a system comprising measurement of the shape of an object for a large variety of applications, such as in quality control of manufactured objects, alignment of objects during assembly, recognition of objects, classification of objects, determination of
positions of objects, calibration of measurement systems, calibration of actuator systems, such as robots, etc, etc, that overcomes the above-mentioned disadvantages of known methods and systems.
Therefore, it is a purpose of the present invention to provide a method and a system for accurately measuring geometrical parameters of an object without the need of precisely aligning the object to be measured in relation to the measuring system.
It is another purpose of the present invention to provide a calibration method for systems for accurately measuring geometrical parameters of an object, in which method the calibration object neither needs to be accurately aligned in relation to the measuring system nor needs to comprise specific points of the surface to be measured during the calibration. It is yet another purpose of the present invention to provide a method and a system for accurately measuring geometrical parameters of an object that need not be operated or calibrated by persons with expert knowledge of such systems.
It is still another purpose of the present invention to provide a method and a system for accurately measuring geometrical parameters of an object that may be adjusted, e.g. for measurement of objects of different dimensions, without the need of a total re-calibration of the system.
It is yet still another purpose of the present invention to provide a method and a system for determination of positions of objects.
It is a further purpose of the present invention to provide a method and a system for alignment of objects during assembly by determination of the positions of the objects in relation to each other and repositioning of the objects into correct assembly positions based on the positions determined.
It is yet a further purpose of the present invention to provide a method and a system for recognition of objects based on a match of specific geometrical parameters of the objects.
It is still another purpose of the present invention to provide a method and a system for classification of objects into classes of specific ranges of values of specific
geometrical parameters of the objects. According to the invention these and other purposes are fulfilled by a method for identification of a system, comprising measurement of the shape of an object which is a part of the system by observing, by means of the system, arbitrary parts of the object, the position of the object being unknown, comparing data representing the surface of the object with a predefined mathematical model of the system including the shape of the object, and based on the comparison, describing, by means of a
mathematical error function, the deviation between the data representing the surface of the object and the corresponding expected data derived from the model, and determining a set of parameters identifying a selected part of the system by minimizing the error function.
It is an essential aspect of the method according to the present invention that it is not required to know the position of the object to be measured nor is it required to use specific parts with known geometries of the surface of the object during measurements. This is contrary to known methods and systems for accurate determinations of
characteristic geometrical parameters of an object, e.g. the outer diameter of a pipe. The known methods and systems require very accurate positioning of the object to be measured in the measuring apparatus or, for calibration purposes, knowledge of the geometrical dimensions of specific parts of the object that are measured during calibration.
In order to give a better understanding of the present invention, a simple example explaining essential aspects of the invention is described below. In one embodiment of the invention, the measuring system determines positions of points on the surface of the object along a contour, such as around a circumferential curve on the outer surface of a cylindrical pipe. If the pipe is aligned properly in the measurement apparatus, the
circumferential curve will be a circle, the diameter of which is easily determined. However, if the pipe is misaligned, the circumferential curve will change into an ellipse, whereby uncertainty in the determination of the outer diameter of the pipe is introduced.
The requirement of alignment of the object in the measurement system is avoided according to the present invention by exploiting a priori knowledge of the shape of the object to be measured. For example, the outer surface of a cylindrical pipe may be described by two geometrical parameters, the length of the pipe and the outer diameter of the pipe. When the pipe has been mounted in the measuring apparatus, its position is specified by coordinates of the axis of the pipe in the coordinate system of the measurement system. A
mathematical function is now established with coordinates of the axis of the pipe and the outer diameter of the pipe as parameters that defines points lying on the outer surface of the pipe. For a set of theoretical positions of the pipe in the measurement system, the expected circumferential curve around the pipe is calculated and compared to the measured curve. An error value of a mathematical error function, for example the mean square of the distances between points on the measured curve and corresponding points on the
theoretical curve, is calculated, and the specific values of the parameters of the pipe (position and outer diameter) that result in the minimum error value are used as the resulting measurement values provided by the measurement system. Thus, the outer diameter of the pipe has been determined with a high precision without knowing the position of the pipe in the measurement system and without knowing which points on the surface of the pipe are measured. The example above is an extremely simple example of determination of geometrical parameters. Typically, the geometry of the object is much more complicated, comprising for example a joint surface of the spigot or the socket of a pipe.
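As an illustration of the pipe example above, the following sketch estimates the outer diameter of a misaligned cylindrical pipe from points measured anywhere on its outer surface by minimizing the mean square distance to the model surface; the parametrization of the axis, the function names and the use of the Nelder-Mead (Downhill Simplex) minimizer are illustrative choices rather than a prescribed implementation.

    import numpy as np
    from scipy.optimize import minimize

    def fit_pipe(points):
        points = np.asarray(points, dtype=float)

        def error(p):
            cx, cy, ax, ay, r = p
            axis = np.array([np.tan(ax), np.tan(ay), 1.0])
            axis /= np.linalg.norm(axis)                # unit vector along the pipe axis
            rel = points - np.array([cx, cy, 0.0])
            radial = rel - np.outer(rel @ axis, axis)   # components perpendicular to the axis
            e = np.linalg.norm(radial, axis=1) - r      # distances to the model surface
            return np.mean(e ** 2)                      # mean square error function

        cx0, cy0 = points[:, 0].mean(), points[:, 1].mean()
        r0 = np.mean(np.hypot(points[:, 0] - cx0, points[:, 1] - cy0))
        start = np.array([cx0, cy0, 0.0, 0.0, r0])
        best = minimize(error, start, method="Nelder-Mead",
                        options={"maxiter": 20000, "xatol": 1e-9, "fatol": 1e-12})
        return 2.0 * best.x[-1]                         # estimated outer diameter

    # example: points measured on a tilted pipe of radius 1 (outer diameter 2)
    rng = np.random.default_rng(1)
    t = rng.uniform(0.0, 2.0 * np.pi, 300)
    z = rng.uniform(0.0, 3.0, 300)
    upright = np.column_stack((np.cos(t), np.sin(t), z))   # radius 1, axis along z
    tilt = 0.1
    R = np.array([[np.cos(tilt), 0.0, np.sin(tilt)],
                  [0.0, 1.0, 0.0],
                  [-np.sin(tilt), 0.0, np.cos(tilt)]])
    print(fit_pipe(upright @ R.T))                          # approximately 2.0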
In the present context, the term system identification is defined as determination of a set of parameters describing a system. The system comprises both the measurement system and objects to be measured. In the example above, the set of parameters consisted of parameters of an object to be
measured, the parameters of the measurement system itself being known. However, any set of parameters of the system may be determined. For example, the set of parameters of the system to be determined during calibration consists of parameters of the measurement system while the geometrical dimensions of the object measured during calibration are known. Further, a simplified calibration of the measurement system after adjustment of the system may comprise
determination of the parameters changed due to the
adjustment, the remaining parameters being unchanged.
An object may comprise several bodies and, correspondingly, the mathematical model may include a set of shapes of bodies forming the object. The bodies forming the object may be interconnected by mechanical structures or may form parts of a larger body, or, the bodies may not be interconnected and, thus, may be moved around independently of each other.
According to an aspect of the invention, the method is applied to systems for positioning of objects. For example, a method and a system are provided for alignment of objects during assembly by determination of the positions of the objects in relation to each other and repositioning, either automatically or manually, the objects into optimized assembly positions based on the positions determined. The objects to be aligned may be different parts of the same object, such as two surfaces of a part for a car body that have to be aligned in relation to each other, two surfaces of a bearing that have to be precisely aligned, etc.
Likewise, robots comprising a vision system with mathematical models of specific geometrical structures may calculate the position of such a structure to be able to position another object, such as a tool, or an assembly, etc, accurately in relation to the structure. A non-contact sensor may be inserted into the tool center point of a robot for regular calibration of the robot. According to another aspect of the invention, the method is applied to systems for classification of objects by defining each class by specific values of a set of geometrical
parameters. As described above, the measurements (e.g. a measured contour along the surface of the object) are
compared to expected values of the measurements (e.g.
theoretical contours) for each set of parameter values corresponding to a specific class and the minimum value of the error function is calculated for each class. Then, the measured object is classified in the class with the minimum of the minimum values of the error functions. It is a major advantage of this classification technique that the
classification is not dependent on the positioning of the object to be classified. One application of this method is within the meat industry, wherein lumps of meat are
classified according to shape and size to select the
appropriate succeeding process for the meat in question.
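The classification scheme may be sketched as follows (a minimal sketch in Python; the interface minimize_error, returning the minimised error value and the corresponding parameters for one class model, is a hypothetical helper, not part of the disclosure):

    def classify(measured_points, class_models):
        # class_models: mapping from class name to a model of the shape that
        # defines that class; each model is assumed to expose minimize_error,
        # which fits its parameters to the measurements and returns the
        # minimised value of the error function together with the parameters.
        best_class, best_value = None, float('inf')
        for name, model in class_models.items():
            min_value, _params = model.minimize_error(measured_points)
            if min_value < best_value:
                best_class, best_value = name, min_value
        # The object is classified into the class with the minimum of the
        # minimum error values, independently of its position.
        return best_class, best_value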
According to yet another aspect of the invention, the method is applied to systems for recognition of objects (A). The recognition method resembles the method of classification described in the previous section apart from an additional final step, wherein the determined minimum of the minimum values is compared to a prescribed threshold value and the object (A) is recognized as the object (B) corresponding to the determined minimum value if this value is less than the prescribed threshold value. Contrary to known methods of recognition, wherein accurate positioning of the object under measurement is crucial to the success of the method applied, the method of recognition according to the present invention described above does not require that the position of the object is known.
The independence of position of the object under measurement is a considerable advantage when the method of recognition according to the present invention is applied to recognition of fingerprints. Typically, only a fraction of a fingerprint is available for recognition, it is unknown which part of the fingerprint the fraction represents, and the orientation of the fingerprint is unknown. If a perfect match is not found, a list of a number of the best matches found may be made available.
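Recognition differs from classification only by the final threshold test; a corresponding sketch, reusing the hypothetical minimize_error interface from the classification sketch above and returning a sorted list of the best matches when no candidate falls below the threshold, could read:

    def recognize(measured_points, object_models, threshold):
        # Minimise the error function for every candidate object (B) and sort
        # the candidates by their minimised error values.
        ranked = sorted((model.minimize_error(measured_points)[0], name)
                        for name, model in object_models.items())
        best_value, best_name = ranked[0]
        if best_value < threshold:
            return best_name, ranked   # object (A) recognised as object (B)
        return None, ranked            # no match below the threshold; best matches only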
According to still another aspect of the invention, the method is applied to systems for positioning of objects, such as robots, etc, to identify (i.e. calibrate) the system.
After system identification, the equipment used for
determination of the shape of a surface may be dismantled from the system so that the system, afterwards, operates solely based on the parameters determined during system identification without being able to determine the shape of a surface of an object. The term shape of an object refers to fundamental geometrical characteristics of the object described by a set of
geometrical parameters. For example, the shape of a pipe may be defined by its inner and outer diameter. Objects of identical shape may be scaled, i.e. the ratios between corresponding parameters of objects of the same shape are identical. Typically, the surface of an object with a
specific set of parameters is identical to the theoretical surface of a theoretical object with the same parameters apart from minor perturbations, caused, e.g., by the
manufacturing process. The shape of an object may be defined in any way by data that for at least one specific value of the set of geometrical parameters defining the shape of the object unambiguously define points on the surface of the object, such as by mathematical functions, by a set of geometrical parameters, by coordinates defining a set of points on the surface of the object, etc, the coordinates being generated from e.g. a mathematical model, determinations by a measurement system of positions of points on the surface of an object with the given shape, e.g. by a measurement system according to the present invention, a CAD system, scanning of drawings, etc.
The object to be measured is observed by the measuring system by measurement of positions of arbitrary points on the surface of the object in relation to a sensor of the system. The position measurements, whereby a position of a point of a surface of an object is determined, may be done by moving a mechanical device into contact with the point and reading the position of the mechanical device, e.g. using a coordinate measuring machine having scales on moving parts to enable read-out of positions of these parts.
In non-contact measurements, positions of points of the surface of an object are determined by transmitting one or more beams of radiated energy towards the object and
detecting radiated energy that has interacted with arbitrary parts of the object.
The radiated energy may be of any form, such as ultrasound radiation, sound radiation, electromagnetic radiation of any frequency, such as radiation of X-rays, gamma rays,
ultraviolet light, visible light, infrared light, far
infrared radiation, UHF radiation, HF radiation, etc,
particle radiation, such as radiation of electrons, neutrons, alpha particles, etc, etc. The object to be measured may interact with the radiated energy by reflecting, refracting, diffracting or absorbing energy or by any combination hereof.
For example, in one embodiment of the invention utilizing X-rays, a detector of X-rays and a source of X-rays are positioned with the object to be measured in between them. The energy of the X-rays is detected after transmission through the object under measurement and the amount of X-ray energy absorbed by the object is determined. By moving the source and the detector in fixed positions relative to each other around the object under measurement, borders between different materials of the object can be measured in three dimensions.
In another embodiment of the present invention utilizing visible light as explained in detail below, a laser emits a linear light beam towards the object under measurement, and a video camera with a CCD chip detects light diffusely
reflected from the surface of the object. The positions of the points of the surface of the object reflecting the light beam are determined by triangulation methods. In order to determine the positions of points on a plurality of curves on the surface formed by intersection of the beam of radiated energy, e.g. a light beam, with the surface of the object, the beam of radiated energy is swept across the surface of the object, e.g. by moving the sources, the detectors, or the sources and the detectors in fixed positions in relation to each other, in relation to the object, or by deflecting the beam of radiated energy by a movable deflecting means, e.g. a movable mirror, etc, etc.
It is within the scope of the present invention to utilize information inherent in the detected radiated energy after interaction with the object to determine parameters of the object that are not related to the geometrical properties of the surface of the object, for example by utilizing the measured absorption of X-rays to estimate the thickness of an object, by utilizing the polarization of light to estimate the roughness of a surface, etc.
The radiated energy beams need not be of known shapes. For example, a scene with objects may be illuminated by a set of incoherent light sources emitting substantially white light in all directions, such as light bulbs. Two cameras with known positions in relation to each other may be used to determine positions of points of the surfaces of the objects by stereo techniques. Such a system may also be used to track objects as positions of an object may be determined as a function of time.
Optionally, the determined positions may be used to control a zooming system of another system or the position of another system, such as a robot, a camera, a system according to the present invention, e.g. for detailed measurements of the object, for classification of the object, for recognition of the object, etc, etc. For example, a robot could be equipped with a camera positioned close to the tool center point of the robot. The present system may then be used to guide the tool center point to the object with a rough accuracy and then use the robot's camera to accurately determine the position of parts of the object to be operated upon by the robot.
Further, more than one object may be tracked and the distance between objects may be monitored for surveillance and control purposes.
During sweeping of a part of the surface of the object under measurement, coherent sets of radiated energy that has interacted with arbitrary parts of the object and directions of the one or more energy beams or movements of the sensors and/or detectors in relation to the object may be detected dependent upon the sweeping method utilized. If more than one detector of radiated energy is used to detect energy from the same points of the surface of an object, the position of the source of the energy or the direction of the beam of radiated energy need not be known as the positions of points of the object interacting with the beam may be determined by stereo techniques based solely on data from two detectors. The movements of detectors or sources of radiated energy, objects to be measured, or components deflecting beams of radiated energy may be performed by e.g. mechanical
manipulators, electronic means, such as a phased array of sources of radiated energy, a living being, etc. The measurement system may comprise adjustable parts. For example an embodiment of the invention described in more detail below comprises a laser source and a CCD camera in a housing that can be rotated about an axis. The distance between the axis of rotation and the housing is adjustable to accommodate measurement of objects of different sizes. Upon adjustment, the system is recalibrated by a system
identification determining only the system parameters
changed. It is convenient to let more than one system
parameter change, as requirements to the mechanical parts of the system are then relaxed, keeping the cost of the system down. If only one parameter, i.e. the distance between the axis of rotation and the housing, was allowed to change, high precision mechanical parts would be required to ensure a one-dimensional translational movement of the housing. As may be understood from the above, a mathematical model describing the system is utilized according to the present invention. The system is described by specific parameters included in the formulas of the mathematical model. For specific values of each parameter to be determined, expected positions of points on the surface of the object are
calculated based on the mathematical model, and the
calculated positions are compared with measured positions. The set of specific values of the parameters that results in the closest fit of the calculated positions to the measured positions is the result of the system identification. The mathematical model comprises fixed parameters, e.g. a fixed length of an arm in a measurement system, and variable parameters, such as the angular position of a rotatable arm in a measurement system. A system according to the present invention comprises one or more first sensors, the signal outputs of which contain information about positions (xsm, ysm, zsm) of one or more points of the surface of an object to be measured. Positions (xsm, ysm, zsm) measured by a specific first sensor are defined in relation to a coordinate system SM of that first sensor.
Further, a system according to the present invention may comprise manipulators for moving one or more of the first sensors. The positions of the first sensors moved by the manipulators will then be measured by a set of second
sensors. For a specific first sensor, the set of variable parameters Tsm comprises the position of the first sensor and other variable states of the first sensor influencing
determination of the position of the surface of an object while the set of static parameters Psm comprises fixed parameters of the first sensor influencing determination of the position of the surface of an object. The one or more sensor signals are denoted Ism. Thus, for each first sensor, a mathematical model for determination of the position (xsm, ysm, zsm) of a point on the surface of an object based on the first sensor signals Ism and the parameters of the first sensor Psm and Tsm can be described by the following
equations:
xsm = Fsmx(Ism, Tsm, Psm) (1)
ysm = Fsmy(Ism, Tsm, Psm) (2)
zsm = Fsmz(Ism, Tsm, Psm) (3)
Analogously, the position of each first sensor in relation to a coordinate system B for the measurement system is
determined by a set of fixed parameters Pb and a set of variable parameters Tb determined by second sensors.
Coordinates of a position (xsm, ysm, zsm) determined in relation to a coordinate system of a sensor may be
transformed into coordinates of the coordinate system B by the following equations:
xb = Fbx(xsm, ysm, zsm, Tb, Pb) (4)
yb = Fby(xsm, ysm, zsm, Tb, Pb) (5)
zb = Fbz(xsm, ysm, zsm, Tb, Pb) (6)
The system comprises an object of a known shape to be
measured. The shape of the object is defined in relation to a coordinate system Q linked to the object. It may be defined by a mathematical function m for which points (xq, yq, zq) (coordinates in relation to the coordinate system Q) fulfil the equation:
m(xq, yq, zq) = 0 (7)
The position of the object in relation to the coordinate system B may be described by a mathematical model comprising fixed parameters Pq and variable parameters Tq, Tq being determined by third sensors. Coordinates (xb, yb, zb) of the coordinate system B may be transformed into coordinates of the coordinate system Q of the object by:
xq = Fqx(xb, yb, zb, Tq, Pq) (8)
yq = Fqy(xb, yb, zb, Tq, Pq) (9)
zq = Fqz(xb, yb, zb, Tq, Pq) (10)
The term total system identification denotes determination of all fixed parameters of a system, i.e. the set of parameters Psm, Pb, and Pq.
A total system identification according to the present invention comprises the steps of mounting an object with a known shape in an adequate, but not precisely known, position in relation to the measurement system, measuring the surface of the object, e.g. by moving manipulators of the system along suitable paths, while recording coherent sets of Ism, Tsm, Tb and Tq followed by mathematical calculations as described below.
Based on the equations above, a mathematical model for calculation of positions of points on the surface of the object measured in relation to the coordinate system Q of the object is formulated:
xq = Fqx(Fbx(xsm, ysm, zsm, Tb, Pb), Fby(xsm, ysm, zsm, Tb, Pb), Fbz(xsm, ysm, zsm, Tb, Pb), Tq, Pq) (11)
yq = Fqy(Fbx(xsm, ysm, zsm, Tb, Pb), Fby(xsm, ysm, zsm, Tb, Pb), Fbz(xsm, ysm, zsm, Tb, Pb), Tq, Pq) (12)
zq = Fqz(Fbx(xsm, ysm, zsm, Tb, Pb), Fby(xsm, ysm, zsm, Tb, Pb), Fbz(xsm, ysm, zsm, Tb, Pb), Tq, Pq) (13)
with (xsm, ysm, zsm) given by equations (1)-(3), i.e. the object coordinates are expressed directly in terms of the sensor signals Ism and the parameters Tsm, Psm, Tb, Pb, Tq and Pq.
Based on the knowledge of the shape of the object given by equation (7), an error value is calculated by insertion of equations (11)-(13):
e = m(xq, yq, zq) (14)
If the coherent data set measured by first, second and third sensors relates to a point on the theoretical surface of the object defined by equation (7) and the fixed parameters of the system are accurately determined, the deviation of e from zero only depends on uncertainties of determinations of Ism, Tsm, Tb and Tq and on deviations of the shape of the object from its theoretical shape.
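The chain of equations (1)-(14) may be expressed as a composition of functions; the following minimal sketch in Python assumes that the model functions Fsm, Fb and Fq and the shape function m are supplied by the user for the system at hand:

    def point_in_object_coordinates(I_sm, T_sm, T_b, T_q, P_sm, P_b, P_q,
                                    F_sm, F_b, F_q):
        # Equations (1)-(3): sensor signals to sensor coordinates.
        x_sm, y_sm, z_sm = F_sm(I_sm, T_sm, P_sm)
        # Equations (4)-(6): sensor coordinates to coordinates of the system B.
        x_b, y_b, z_b = F_b(x_sm, y_sm, z_sm, T_b, P_b)
        # Equations (8)-(10): B coordinates to object coordinates Q.
        return F_q(x_b, y_b, z_b, T_q, P_q)

    def error_value(data_set, P_sm, P_b, P_q, F_sm, F_b, F_q, m):
        # Equation (14): the shape function m of equation (7) evaluated at the
        # point predicted from one coherent data set (Ism, Tsm, Tb, Tq).
        I_sm, T_sm, T_b, T_q = data_set
        x_q, y_q, z_q = point_in_object_coordinates(I_sm, T_sm, T_b, T_q,
                                                    P_sm, P_b, P_q,
                                                    F_sm, F_b, F_q)
        return m(x_q, y_q, z_q)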
The error value defined by (14) is utilized in formation of a so-called error function which expresses the quality of the system identification, i.e. it gives a quantitative measure for how close the selected set of values of parameters Psm, Pb and Pq of the system are to their true values. A good determination of the parameters will result in a value of the error function close to zero. The value of the error function is non-negative. Any non-negative function of e may be used as the error function C, such as
C(Psm, Pb, Pq) = Sum(e²)/n (15)
wherein Sum(e²) expresses a summation of specific values of e² calculated based on n coherent sets of sensor measurements Ism, Tsm, Tb and Tq, or,
C(Psm, Pb, Pq) = Sum(abs(e))/n (16)
wherein Sum(abs(e)) expresses a summation of specific absolute values of e calculated based on n coherent sets of sensor measurements Ism, Tsm, Tb and Tq, etc.
According to the present invention any known parameter estimation algorithm, such as Downhill Simplex, Powell's Methods, Conjugate Gradient Methods, Variable Metric Methods, Levenberg-Marquardt Method, etc, may be used to determine the parameters resulting in the minimum value of the error function. It is an important aspect of the present invention that any point on the surface may be measured during system
identification.
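By way of illustration, the error function of equation (15) and its minimisation may be sketched as follows in Python, reusing the error_value helper from the sketch above; the flat parameter vector and the split helper that unpacks it into (Psm, Pb, Pq) are assumptions about the bookkeeping, and the Downhill Simplex (Nelder-Mead) method is used as one of the algorithms listed above:

    import numpy as np
    from scipy.optimize import minimize

    def error_function(fixed_params, data_sets, split, F_sm, F_b, F_q, m):
        # Equation (15): C(Psm, Pb, Pq) = Sum(e²)/n over n coherent data sets.
        P_sm, P_b, P_q = split(fixed_params)
        e = np.array([error_value(d, P_sm, P_b, P_q, F_sm, F_b, F_q, m)
                      for d in data_sets])
        return np.mean(e ** 2)

    def identify_system(initial_params, data_sets, split, F_sm, F_b, F_q, m):
        # The fixed parameters that minimise C are the result of the
        # system identification.
        result = minimize(error_function, initial_params,
                          args=(data_sets, split, F_sm, F_b, F_q, m),
                          method='Nelder-Mead')
        return result.x, result.fun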
It is another important aspect of the present invention that the position of the object need not be known to be able to determine parameters of the system.
The system identification described above relates to the coordinate system Q of the object. However, according to the present invention, the system identification described above may as well relate to any coordinate system, such as the B coordinate system, the SM coordinate system, etc. The methods relating to different coordinate systems are similar. It is of course required that transformations of coordinates and functions can be defined for the coordinate system in
question.
According to still another important aspect of the present invention, the accuracy of the determination of parameters by a system identification may be further improved by performing the system identification for two or more different unknown values for one or more of the fixed parameters to be
determined. For example, the system identification could be carried out for a plurality of positions of the energy source. The precision of the determination of the remaining fixed parameters not relating to the position of the energy source is increased as these parameters have to fulfil a larger set of requirements.
According to a further important aspect of the present invention, an object to be used for calibration of the measurement system of the system is formed so that the error function utilized for the calibration has one global minimum and no local minima. In this way, the risk that the
estimation algorithm finds a local minimum and interprets it as a global minimum is eliminated. The optimum shape resulting in an error function with one single minimum may differ significantly from the shape of the objects to be measured, contrary to known methods and systems wherein objects with a shape that is similar to the shape of the objects to be measured are used during calibration. For example, the object used to calibrate the laser line scanner for determination of geometrical parameters of pipes
described below has a substantially flat surface comprising a set of plane surfaces wherein the lines of intersections between the plane surfaces are parallel. This object is described in more detail below.
According to still another important aspect of the present invention any parameters describing characteristic features of parts of the system may be incorporated in the
mathematical model describing the system and, thus, in the system identification. For example, parameters describing distortion in a CCD camera may be included in the
mathematical model of the system. Ideally, a straight line on the object under measurement is imaged on the CCD chip in the camera as a straight line. However, distortion in the optics of the system and the camera may cause the straight line on the object to be imaged onto a curve on the CCD chip. The deviation of the curve from the desired straight line is determined by parameters describing the distortion of the system. An example of a mathematical model of the distortion of an optical system that has proven sufficiently accurate in many applications is given in the following equations and may be included in the sensor model:
xk = xku(1 + g(r)) (17)
yk = yku(1 + g(r)) (18)
xku, yku are the coordinates of points on the curve on the CCD chip and xk, yk are the corrected coordinates corresponding to an undistorted image on the CCD chip. r is the distance of the point xku, yku from the optical axis xk0, yk0 of the optical system:
r = sqrt((xku - xk0)² + (yku - yk0)²) (19)
Typically, it is sufficient to define g(r) as:
g(r) = A1r + A2r² (20)
Thus, the transformation from xku, yku to xk, yk is defined by four parameters Pku: xk0, yk0, A1 and A2 that are determined by system identification.
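Equations (17)-(20) translate directly into the following sketch in Python (the correction is applied to the image coordinates exactly as written above; no additional assumptions are made):

    import numpy as np

    def undistort(xku, yku, xk0, yk0, A1, A2):
        # Equation (19): radial distance from the optical axis (xk0, yk0).
        r = np.sqrt((xku - xk0) ** 2 + (yku - yk0) ** 2)
        # Equation (20): radial distortion polynomial.
        g = A1 * r + A2 * r ** 2
        # Equations (17) and (18): corrected (undistorted) coordinates.
        return xku * (1.0 + g), yku * (1.0 + g)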
According to the present invention, the parameters Pku described above may be determined independently of the other parts of the measurement system by a system identification performed exclusively on the optical system with the CCD camera. After determination of the parameters Pku, these parameters and the corresponding mathematical model, i.e. equations (17) - (20), can be transferred to any system in which the optical system with the CCD camera is going to be utilized.
According to a further aspect of the present invention, the accuracy of the method and system can be improved by
increasing the resolution of the measurements by utilization of knowledge of the energy distribution in the radiated beam of energy. For example, the intensity in a laser light beam is given by
I(r) = I0 exp(-2r²/w0²) (21)
I0 is the peak intensity at the center of the laser beam, r is the distance from the center of the laser beam and w0 is the Gaussian beam radius. Based on equation (21), the center of the line of diffuse reflection of light from the surface of the object can be calculated from measured
intensities from points across the line. In this way, the center axis of the line is determined with a resolution that is higher than the resolution of the detector. The center of the line of diffuse reflection of light from the surface of the object can be estimated in many other ways, such as by calculating a weighted average of the detected intensities across the line image, by determining the zero-crossing of the Hilbert transform of the detected intensities across the line image, etc.
It should be noted that, preferably, the width of the image of the line has to be larger than the resolution of the detector to be able to estimate the center of the line; a width in the range of 5 to 15 times, preferably 8 to 12 times, such as 10 times, the resolution of the detector is presently preferred.
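Two of the mentioned estimates of the line center may be sketched as follows in Python; both operate on the detected intensities across one row of the line image, and the Gaussian variant rests on the profile of equation (21) (with ln I quadratic in position, the vertex of a fitted parabola gives the center):

    import numpy as np

    def line_center_weighted(intensities, positions):
        # Sub-pixel center as the intensity-weighted average of the detector
        # positions across the image of the line.
        intensities = np.asarray(intensities, dtype=float)
        positions = np.asarray(positions, dtype=float)
        return np.sum(intensities * positions) / np.sum(intensities)

    def line_center_gaussian(intensities, positions):
        # With the profile of equation (21), ln(I) is quadratic in position,
        # so the vertex of a parabola fitted to ln(I) is the line center.
        log_i = np.log(np.asarray(intensities, dtype=float))
        a, b, _c = np.polyfit(np.asarray(positions, dtype=float), log_i, 2)
        return -b / (2.0 * a)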
According to yet another aspect of the present invention, the accuracy of the method and system can be further improved when the radiated beam of energy, such as a laser line sheet, intersects the object under measurement along a straight line, such as when a laser light sheet intersects a plane surface of an object. Points (xk, yk) along a line fulfil the equation:
xk = k1 yk + k2 (22)
When the parameters k1 and k2 are estimated based on N points along the line and k1 and k2 are used in the system
identification described above, the uncertainty of the determination of positions of the points (xk, yk) is reduced by a factor of the square root of N. According to the invention, a system is provided based on contact measurements of the surface of an object. The system comprises a cylindrical measurement sensor with a linearly displaceable actuator which is brought into contact with measurement points on the surface of the object under
measurement. A linear sensor, such as a linear potentiometer, provides an electronic signal, such as a voltage, a current, a digital value, etc, which is a function of the position of the linear actuator. The coordinate system SM of the
measurement sensor is aligned so that the linear actuator is displaceable along the z-axis of SM. Then, if the signal Ism from the linear measurement sensor is proportional to the displacement of the linear actuator, the mathematical model of the sensor is defined by the following equations:
xsm = 0 (23)
ysm = 0 (24)
zsm = Ism P1 + P2 (25)
It is seen that there are two fixed parameters for the sensor Psm: P1 and P2 and that there are no variable parameters Tsm. The linear actuator for contact measurements is moved
substantially perpendicular to the z-axis of SM by means of a first and a second linear actuator in substantially
perpendicular directions, the first actuator being mounted on the second actuator. The position of each linear actuator is sensed by a linear sensor as described above for the
cylindrical measurement sensor. The coordinate transformation from the coordinate system SM of the surface position sensor to the coordinate system B1 of the first actuator is defined by the following equations:
xb1 = zsm(cos(ab1x)sin(ab1y)cos(ab1z) + sin(ab1x)sin(ab1z)) - bb1x + V1I1 (26)
yb1 = zsm(sin(ab1x)cos(ab1z) - cos(ab1x)sin(ab1y)sin(ab1z)) - bb1y (27)
zb1 = zsm cos(ab1x)cos(ab1y) - bb1z (28)
The coordinate transformation from the coordinate system B1 of the first actuator to the coordinate system B of the second actuator is defined by the following equations:
[Equations (29) and (30), defining xb and yb analogously to equation (31) below, are not reproduced here.]
zb = -xb1 sin(aby) - yb1 sin(abx)cos(aby) + zb1 cos(abx)cos(aby) - bbz (31)
It is seen that the mathematical model comprises two variable parameters Tb: V1 and V2 and 14 fixed parameters Pb: ab1x, ab1y, ab1z, bb1x, bb1y, bb1z, I1, abx, aby, abz, bbx, bby, bbz, and I2.
[The equations defining the transformation from the coordinate system B to the coordinate system Q of the object for the contact measurement system are not reproduced here.]
It is seen that the transformation between the coordinate systems B and Q comprises six fixed parameters Pq: ax, ay, az, bx, by and bz. ax, ay, and az are angles of rotation of the B coordinate system in relation to the Q coordinate system. bx, by and bz define the position of the origin of the Q coordinate system in the B coordinate system.
The system is calibrated, i.e. the set of fixed parameters Psm and Pb is determined, with the object shown in Fig. 7 and described in more detail below. During calibration a number of coherent data sets Ism, Tb are recorded and inserted into the
mathematical model described above followed by a system identification as described previously.
Methods and systems according to the present invention can be utilized in a large range of systems for a large variety of applications which systems are not intended for determination of geometrical parameters of objects but wherein system identification according to the present invention is applied to calibrate the systems. A number of examples are mentioned below. A robot, such as a robot for welding, assembling parts, etc, is provided according to the present invention with a laser line scanner and system identification is utilized to
determine positions of, for example, joints to be welded. The width of the gap between parts to be welded together may also be determined by system identification and the welding process may be controlled in response to the determined width.
A scanner for measurement of positions of the optical
boundary surfaces of the eye of a living being is provided according to the present invention comprising a laser line scanner linearly displaceable in a direction substantially perpendicular to the optical axis of the eye under
measurement. A CCD camera detects light diffusely reflected or light specularly reflected from the various optical boundary surfaces of the eye. The shapes of the boundary surfaces are determined by system identification independent of the position of the eye as described previously. Different kinds of eye diseases may be classified according to relative positions of the optical boundary surfaces. The previously described classification method may then be applied to determine presence or absence of a specific eye disease and if an eye disease is present, the seriousness of the disease may be quantified, e.g. represented by the error value. An X-ray scanner, e.g. for scanning of tissue of living beings, scanning of mechanical parts, scanning of weldings, etc, is provided according to the present invention
comprising an X-ray source and an X-ray detector movably positioned in a fixed relation to each other and at opposite sides of the object under measurement. The surface of an object, e.g. the surface of a bone of a living being, is determined by measurement of absorption of the X-rays as different materials, e.g. different kinds of tissue of living beings, absorb X-rays differently. The exact positions of the source and detector of the system are determined by a system identification according to the present invention measuring X-ray absorption of an object of known shape and with a known absorption coefficient of X-rays, its position being unknown.
An ultrasonic scanner is provided according to the present invention comprising an ultrasonic transducer for radiation and reception of ultrasound pulses. The distance to the object under measurement is determined by the time interval between transmission and reception of the ultrasound pulse and the velocity of the sound waves in the medium through which the waves propagate. Calibration of and measurements by the ultrasonic scanner are performed according to the methods described previously.
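A minimal sketch of the distance determination, assuming pulse-echo operation in which the same transducer transmits and receives so that the pulse traverses the distance twice, could read:

    def ultrasonic_distance(time_of_flight, sound_velocity):
        # Pulse-echo assumption: the measured time interval covers the path to
        # the object and back, so the distance is half the propagation path.
        return 0.5 * sound_velocity * time_of_flight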
A laser scanner for scanning of teeth, e.g. scanning of drilled holes in teeth to make a filling which matches the hole perfectly, is provided according to the present invention comprising a laser and optics to split the laser light beam into a plurality of beams transmitted towards the hole of the tooth. The surface of the tooth reflects the light beams and a pattern of light dots on the tooth is generated. The laser scanner further comprises a CCD camera for detection of diffusely reflected light from the tooth and the geometrical parameters of the hole in the tooth are determined according to the method previously described.
Further, the method of classification may be applied to classify different kinds of defects of a tooth optionally followed by determination of characteristic geometrical parameters for the kind of defect in question. The resulting parameters may be transferred to another system for
production of the filling. A laser scanner for scanning of surfaces of living beings, e.g. the face of a human being, is provided according to the present invention, utilized to evaluate and plan plastic surgery. The system provides visualization of the outcome of different proposals for surgery and the patient may
participate in evaluation of the options and selection of a specific plastic surgery.
A method according to the present invention is provided for determination of the accuracy of a measurement system as a function of the position of the measurement in relation to the measurement system, wherein an object with an
accurately known shape is positioned at different positions in relation to the measurement system and a system
identification is carried out in each position. The error value in each position of the object determined by the system identification represents the measurement accuracy of the measurement system at that position. The accuracies
determined may be used to calculate correction factors for the measurements. A system for monitoring changes in a process, such as during surface treatment of mechanical parts, machining of
mechanical parts, growth of living tissue, such as plants, colonies of bacteria, bacteria, yeast, etc, etc, is provided according to the present invention, wherein the system determines characteristic geometrical parameters of the objects under measurement as a function of time.
A system for measurement of the suspension of wheels on an automobile is provided according to the invention, comprising four laser line scanners, each laser line scanner positioned in a specific corner of a rectangle and determining the position of the rim closest to the scanner in question by the method already described. By a system identification of the entire system comprising the four scanners, the positions of the scanners in relation to each other are determined so that the positions of the rims in relation to each other can be determined. Based on the determinations of the positions, the rims are aligned.
A system for quality control of patterns and colours, e.g. applied in the graphical industry, is provided according to the present invention, comprising a source of white light, a colour CCD camera, a frame grabber and a computer. The system is calibrated with an object having a known pattern of known colours for determination of parameters describing the system geometrically and optically, interpretation of colours included. The description of the calibration object includes values for the colour and the intensity of light reflected from each point on the surface of the object. The calibration is performed by comparing measured values at a large number of arbitrary points on the surface of the object with the known values and adjusting the fixed parameters of the system for the best match of values as described previously. The error value can be defined by
[The equation defining the error value in terms of λ(xq, yq, zq), λobs, I(xq, yq, zq) and Iobs is not reproduced here.]
wherein λ(xq, yq, zq) is the known colour distribution of the surface, λobs is the colour measured, I(xq, yq, zq) is the known intensity distribution of the surface, and Iobs is the intensity measured. A system for thermographical applications is provided according to the present invention, operating according to principles similar to those of the system for quality control of patterns and colours described above. The system comprises an infrared sensor for sensing of infrared radiation from a body, such as a human being, and processing means adapted to determine the temperatures at specific parts of the body based on the signal values received from the infrared sensor and optionally adapted to calculate temperature profiles of the body, and display means for display of the temperature profiles. The system may be applied in the medical field for diagnostic purposes.
According to an aspect of the present invention, an object used for calibration of a measurement system may be mounted in and dismounted from the system automatically in order to provide automatic calibration of the system, e.g. to provide automatic calibration at regular time intervals. This is particularly useful in systems comprising actuators for automatic positioning of sensors of the system. Upon a repositioning of one or more sensors, the system can be re-calibrated automatically.
According to another aspect of the present invention, an object used for calibration of a measurement system may be positioned in the measurement system by manipulator means and removed from the system again after calibration has been terminated. Thus, automatic calibration of the system is provided, e.g. to be executed at regular time intervals, upon adjustments of parameters of the system, etc.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 illustrates schematically the operating principles of a laser line scanner for determination of geometrical parameters of a joint surface of a pipe, Fig. 2 shows front and side views of a laser line scanner,
Fig. 3 shows details of the laser line scanner sensor head,
Fig. 4 shows schematically the sensor head of a laser line scanner with corresponding coordinate systems and system parameters, Fig. 5 shows a double laser line scanner,
Fig. 6 shows a laser line scanner for measurements of
pallets, and
Fig. 7 shows an object utilized for calibration of a laser line scanner.
DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS
Fig. 1 shows the principles of operation of a laser line scanner (1). A laser (2) is mounted in a sensor head (3) together with a CCD camera (4). An optical system (5), e.g. a cylindrical lens, transforms the light beam from the laser (2) into a thin sheet of laser light (6) which intersects a pipe (7) under measurement along a thin curve (8). Light diffusely reflected (9) from the points on the curve (8) of intersection between the laser light sheet (6) and the pipe (7) is detected by a CCD chip in the CCD camera (4). The position of points on the curve (8) in relation to the CCD camera (4) is calculated from the positions of points on the image of the curve on the CCD chip. A light filter that mainly transmits light originally emitted from the laser is
positioned in front of the CCD camera (4) to suppress influence of background light and improve the signal to noise ratio of the measurements. The diffusely reflected light (9) from the pipe (7) is transmitted to the CCD camera (4) via a precision mirror (10) in order to provide a compact sensor head (3).
As shown in Fig. 2, the sensor head (3) is positioned on an arm (12) and the arm (12) is rotatably mounted on a
horizontal shaft (13) with a precision bearing providing rotation of the arm (12) without any play. The pipe (7) to be measured is mounted in relation to the laser line scanner (1) so that the sensor head (3) can be rotated 360° around the joint surface of the pipe (7). An encoder (11) is mounted on the shaft (13) to provide a signal containing information about the angular position of the arm (12). During a 360° rotation of the sensor head (3) around the joint surface of the pipe (7), the laser light sheet (6) sweeps the entire surface of the pipe joint. At regular intervals of angular displacement of the arm (12) coherent data sets of the relative angular position of the arm (12) and positions of images of the points on the curve (8) of intersection of laser light (6) with the pipe (7) are
recorded by a frame grabber connected to the CCD camera (4) and a computer (not shown). Typically, 1000 to 1500 points are acquired from each curve (8) and from 10 to 100 lines are acquired during a 360° rotation of the sensor head (3).
Preferably, the angle (η) between the arm (12) of the laser line scanner (1) and the sensor head (3) is between 30° and 45°. A number of advantageous features are provided when the laser light sheet (6) is incident on the surface of the object at an angle. Firstly, the end surface of the object, or any other surfaces substantially perpendicular to the axis of the object, may also be determined; secondly, the laser line scanner may be positioned outside the transportation path of the objects, so that special handling equipment to position the objects in the laser line scanner (1) and remove them again after measurement is not needed; and thirdly, even when scanning inner surfaces of an object, it is not necessary to move the sensor head (3) inside the object.
According to the mathematical model described below the coordinates of the recorded points of the joint surface of the pipe (7) are transformed into coordinates of a coordinate system of the pipe aligned with the center axis of the joint surface of the pipe (7). The surface contour of the pipe (7) is now easily calculated in relation to the center axis and is compared to the design specifications of the pipe for quality control purposes.
The measured geometry and the reference geometry may be displayed on a monitor and/or a printer and/or be transferred to another system. The geometrical parameters of the reference object may be provided in any suitable way, such as by a CAD system.
Fig. 4 shows schematically the sensor head of a laser line scanner with corresponding coordinate systems and system parameters. A mathematical model for the laser line scanner (1) corresponding to the previously described general model (equations (1) - (16)) is described below:
The sensor signals Ism as recorded by the frame grabber represent coordinates (xk, yk) on the CCD chip of points on the image of a light curve (8) on the joint surface of the pipe (7). From the position (xk, yk) of an image, the
position (xsm, ysm, zsm) of the corresponding point on the joint surface is calculated according to the following equations:
[Equations (36)-(38), expressing xsm, ysm and zsm in terms of the image coordinates (xk, yk) and the parameters Gx, Gy, L, ψ1 and ψ2, are not reproduced here.]
Thus, the model of the sensor comprises five fixed parameters Psm: Gx, Gy, L, ψ1 and ψ2. Gx and Gy are gains of the CCD camera (4) and the optical system (5) along the x- and y-axis, respectively. L is the distance along the Xsm-axis between the virtual laser (22) and the virtual CCD camera (23). ψ1 is the angle between the laser light and the Ysm-axis (not shown). ψ2 is the angle between the laser light and the Xsm-axis.
The coordinate system B of the measurement system (i.e. the laser line scanner) is aligned with the axis of rotation of the arm (12) and the longitudinal axis of the arm (12). As the sensor head (3) is rotated around the rotational axis, its angular position θ is recorded by the precision angular position encoder. θ is the only variable parameter Tb of the system. Coordinates of the coordinate system SM are transformed into coordinates of the coordinate system B by the following equations:
[Equations (39)-(41), the transformation from the coordinate system SM to the coordinate system B in terms of the angular position θ and the parameters x0, z0, φ and η, are not reproduced here.]
It is seen that this model comprises four fixed parameters Pb: x0, z0, φ and η. x0 and z0 are distances from the axis of the coordinate system SM of the camera (4) to the center of the base coordinate system B. φ and η are the angles of rotation of the z-axis and the x-axis, respectively, of the base coordinate system. The pipe under measurement is mounted in a fixed position in relation to the laser line scanner. Thus, no state variables are included in the transformation of coordinates between the coordinate system B of the laser line scanner and the
coordinate system Q of the pipe under measurement. The transformation is given by the following equations:
[Equations (42)-(44), the transformation from the coordinate system B to the coordinate system Q of the pipe under measurement, are not reproduced here.]
It is seen that the transformation between the coordinate systems B and Q comprises six fixed parameters Pq: ax, ay, az, bx, by and bz. ax, ay, and az are angles of rotation of the B coordinate system in relation to the Q coordinate system. bx, by and bz define the position of the origin of the Q coordinate system in the B coordinate system. According to the invention, an alternative system to the laser line scanner for determination of geometrical parameters of joint surfaces is provided, comprising one or more non-contact distance sensors, such as laser distance sensors based on triangulation. Each of the sensors generates a signal representing the distance in a specific direction to a point on the surface of an object. The coordinates of the point in the coordinate system (SM) of the corresponding sensor are given as (0, 0, zsm). Each of the transformations of coordinates from the coordinate systems of the one or more sensors to the coordinate system (B) of the measurement system may be derived analogously to the derivations for the laser line scanner described above and each transformation comprises four fixed parameters η, φ, x0 and z0, i.e. 5 sensors lead to 20 parameters. The transformation from the measurement coordinate system B to the object coordinate system Q is identical to the corresponding transformation of the laser line scanner. The system is calibrated with the same object as the laser line scanner.
The laser line scanner described previously may comprise two or more CCD cameras and two or more laser line sources, i.e. lasers with optical systems emitting a laser light sheet. Preferably, two cameras are mounted at a fixed distance and at an angle in relation to each other so that they receive light from the same volume in space. A laser line source is rotatably positioned between the two cameras. Optionally, a precision angular encoder for determination of the angular position of the laser line source is provided. The positions of points of the object interacting with the laser light sheet are determined by stereo techniques based on data from the two cameras. The system may be calibrated with the same object as the laser line scanner.
The system may be used to track objects as positions of an object may be determined as a function of time. Optionally, the determined positions may be used to control the position of another system, such as a robot, a camera, a system according to the present invention, e.g. for detailed measurements of the object, for classification of the object, for recognition of the object, etc, etc. For example, a robot could be equipped with a camera positioned close to the tool center point of the robot. The present system may then be used to guide the tool center point to the object with a rough accuracy and then use the robot's camera to accurately determine the position of parts of the object to be operated upon by the robot. Further, more than one object may be tracked and the distance between objects may be monitored for surveillance and control purposes.
A mobile version, such as a hand-held version of the laser line scanner is provided according to the invention,
comprising a laser line source, a CCD camera, and a sound source, such as a loudspeaker, an ultrasound source, etc, positioned in a fixed position in relation to the laser line source and the CCD camera. A number of sound detectors, such as microphones, ultrasound detectors, etc, preferably three sound detectors, are positioned on a frame in a fixed
relation to the object under measurement. During operation of the mobile scanner, the sound source transmits sound pulses that are received by the sound detectors on the frame. As the mutual positions of the sound detectors are known and the velocity of sound is known, the position of the mobile scanner in relation to the frame (and the B coordinate system) can be calculated. The signals from the sensors are transmitted to a mobile computer. The transformation from the B coordinate system to the object coordinate system Q and calibration of the system may be done as already described above for the laser line scanner. The velocity of sound may be determined during calibration as a part of the system identification, or, a sound transmitter and a sound receiver positioned at a fixed mutual distance for measurement of the sound velocity may be provided either on the frame or on the scanner head.
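The position of the mobile scanner relative to the frame may be computed by nonlinear least squares from the measured propagation times; a sketch under the assumption that the three detector positions are known in the frame coordinate system (with three detectors the solution is mirror-ambiguous about their plane, so a reasonable initial guess or a fourth detector is assumed to resolve this):

    import numpy as np
    from scipy.optimize import least_squares

    def locate_scanner(detector_positions, propagation_times, sound_velocity,
                       initial_guess):
        # detector_positions: 3x3 array of known detector positions on the frame.
        # propagation_times: time from emission of the pulse at the scanner to
        # reception at each detector.
        distances = sound_velocity * np.asarray(propagation_times, dtype=float)

        def residuals(p):
            # Difference between modelled and measured source-detector distances.
            return np.linalg.norm(detector_positions - p, axis=1) - distances

        return least_squares(residuals, initial_guess).x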
Fig. 5 shows a double laser line scanner which is applied when the positions of the joint surfaces of the pipe in relation to each other are critical. For example, when pipes are pressed through soil, very accurate joint surfaces of the pipes are required to ensure that the pipes stay on the desired track. A skewness of an end surface may prove fatal to the construction of a pipe line pressed through ground. In Fig. 5 the laser line scanner (14) to the left measures the outer surface of the spigot (15) of the pipe and the laser line scanner (16) to the right measures the inner surface of the socket (17) of the pipe. Each laser line scanner (14, 16) is calibrated individually as described above, as is the position of the spigot (15) relative to laser line scanner (14) and the position of the socket (17) relative to laser line scanner (16). A specific calibration object is used for system identification of the entire system comprising both laser line scanners (14, 16) so that the positions of the two scanners (14, 16) in relation to each other are determined. Then, the position of the spigot (15) relative to the socket (17) is determined, and the length of the pipe and the straightness of the pipe may be determined.
Fig. 6 shows a laser line scanner (18) for measurements on a pallet (19). In this system, the sensor head (20) is
stationary and the pallet (19) is positioned on a rotatable support plate (21). During measurement, the support plate (21) is rotated 360° in relation to the sensor head (20) and its angular displacement is measured by a precision angular encoder (not shown). The transformation of coordinates of points on the surface of the pallet from the sensor coordinate system SM to the coordinate system B of the measurement system is defined by the following equations:
[The equations defining the transformation from the sensor coordinate system SM to the coordinate system B of the measurement system for this configuration are not reproduced here.]
It is seen that four fixed parameters Pb (x0, y0, φ, η) describe the position of the sensor in relation to the B coordinate system. The transformation of coordinates of points on the surface of the pallet from the coordinate system B of the measurement system to the coordinate system Q of the object under
measurement is defined by the following equations:
[The equations defining the transformation from the coordinate system B of the measurement system to the coordinate system Q of the object under measurement, in terms of the angular displacement θ and the parameters ax, az, bx, by and bz, are not reproduced here.]
θ is the angular displacement of the object under
measurement. It is seen that the transformation between the coordinate systems B and Q comprises five fixed parameters Pq: ax, az, bx, by and bz.
The laser line scanner shown in Fig. 6 is especially useful for measurements on pallets, tyres, rims, rolled rings, etc. The object used during calibration of the laser line scanners described above is shown in Fig. 7. It is seen that the surface of the object comprises a set of plane surfaces for which the lines of intersections between the plane surfaces are parallel. The main purpose of the calibration is to determine
accurately the set of parameters Psm and Pb. The parameters of the calibration object Pq are also determined by the calibration. Thus, the position of the calibration object need not be known in order to be able to calibrate the measurement system. The shape of the calibration object is designed in such a way that the error function utilized during the calibration has only one minimum as a function of the fixed system parameters to be determined. The equations (36)-(44) above show that the transformation from the sensor signals Ism to coordinates (xq, yq, zq) in the coordinate system Q of the object is non-linear. As these equations have to be inserted into the equation (14) defining the shape of the calibration object, the function m(xq, yq, zq) has to be designed carefully in such a way that the function to be minimized by the parameter estimation algorithm has only one single minimum. It is a severe problem in the art of parameter estimation, especially when non-linear relations are included, to minimize a
function with a plurality of local minima as a parameter estimation algorithm will not be able to leave a local minimum and thus will create an erroneous result.
For the laser line scanner, the error function corresponding to an object comprising a number (i) of surfaces that can be described, by specific points or by a mathematical function (continuous or discontinuous), independently of one of the coordinates of the coordinate system (Q) of the object, for example by the following equation
m(xq, yq, zq) = gi(yq) - zq = 0 (51)
where gi is an arbitrary function of yq, has one single minimum. For the presently preferred embodiment of the laser line scanner, a number of planes between 5 and 7 is
preferred.
It is presently preferred that the functions gi(yq) are linear functions of yq. When gi is linear, the laser light sheet (6) will intersect the corresponding i'th surface along a straight line and, if the quality of the optics (5) of the detection system is sufficiently high to avoid distortion of the image, this line will be imaged as a straight line on the CCD chip of the CCD camera (4). Alternatively, as described previously, the image on the CCD chip can be pre-processed to compensate for distortion in the optical system (5) to generate data for a straight line. As mentioned earlier, the knowledge that the image is a straight line can be exploited to improve the accuracy of the system. Further, the
processing speed can be improved and the number of data entering the algorithms can be reduced.
For example, if the calibration object has 5 plane surfaces, the laser light sheet (6) will intersect the object along a curve consisting of 5 line segments defining four angles between them. The parameters of each line segment are now estimated and the parameters are used as input to the remaining process of system identification. Thus, the number of data used in the remaining system identification is significantly reduced and so is the processing time. Further, as the parameters of each line segment are estimated from a large number of data points, they are very accurately
determined, resulting in a significant noise reduction.
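The estimation of the line segment parameters may be sketched as follows in Python, assuming that the image points have already been corrected for distortion and grouped by segment (the grouping itself is not shown):

    import numpy as np

    def fit_segments(segment_points):
        # segment_points: list of arrays of (yk, xk) points, one per line
        # segment of the curve of intersection on the calibration object.
        # Returns (k1, k2) of xk = k1*yk + k2 (equation (22)) for each segment;
        # only these parameters enter the remaining system identification.
        parameters = []
        for points in segment_points:
            points = np.asarray(points, dtype=float)
            k1, k2 = np.polyfit(points[:, 0], points[:, 1], 1)
            parameters.append((k1, k2))
        return parameters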
The calibration object shown in Fig. 7 is presently preferred for calibration of laser line scanners. The surface of the calibration object is defined by the following equations:
m(xq, yq, zq) = ai yq + bi - zq (52)
when
[the condition on yq selecting the index i is not reproduced here]
and ai , bi are defined by
[The values of ai and bi are not reproduced here.]
With reference to Fig. 7, the coordinates (yq, zq) of the intersections of the line segments shown in the upper part of Fig. 7 are listed below in cm:
[The table of coordinates is not reproduced here.]

Claims

1. A method for identification of a system, comprising measurement of the shape of an object which is a part of the system by observing, by means of the system, arbitrary parts of the object, the position of the object being unknown, comparing data representing the surface of the object with a predefined mathematical model of the system including the shape of the object, and based on the comparison, describing, by means of a
mathematical error function, the deviation between the data representing the surface of the object and the corresponding expected data derived from the model, and determining a set of parameters identifying a selected part of the system by minimizing the error function.
2. A method according to claim 1, wherein the object is a three-dimensional object.
3. A method according to claim 1 or 2, wherein arbitrary parts of the object are observed by transmitting one or more beams of radiated energy towards the object and detecting radiated energy that has interacted with arbitrary parts of the object.
4. A method according to any of claims 1-3, wherein the one or more energy beams are of known shapes.
5. A method according to claim 1 or 2, wherein arbitrary parts of the object are observed by mechanically determining positions of arbitrary parts of the surface of the object.
6. A method according to any of claims 1-4, wherein the one or more energy beams are swept across at least a part of the surface of the object while radiated energy that has interacted with arbitrary parts of the object is detected.
7. A method according to claim 6, wherein coherent sets of radiated energy that has interacted with arbitrary parts of the object and directions of the one or more energy beams are detected.
8. A method according to any of claims 1-7, wherein the object and a sensor of the system are moved relative to each other by using a manipulator subsystem of the system while coherent sets of data representing the surface of the object and movements of the sensor in relation to the object are detected.
9. A method according to any of claims 1-8, wherein the set of parameters identifying a selected part of the system consists of parameters identifying the system without the object.
10. A method according to any of claims 1-8, wherein the set of parameters identifying a selected part of the system consists of parameters identifying the object.
11. A method according to any of claims 1-8, wherein the set of parameters identifying a selected part of the system consists of adjustable parameters identifying the system.
12. A method according to any of claims 1-11, wherein the method is repeated one or more times after adjustment of one or more parameters of the set of parameters identifying a selected part of the system, the error function comprising all the values of the parameters so that the parameters that have not been changed are determined with improved accuracy.
13. A method according to any of claims 1-12, wherein the set of parameters identifying a selected part of the system comprises parameters describing distortion, by the system, of the data representing the surface of the object.
14. A method according to any of claims 1-13, wherein the object is shaped in such a way that the mathematical error function comprises a single minimum for the set of parameters to be determined.
15. A method according to claim 14, wherein the surface of the object comprises a set of surfaces each of which is generated by parallel displacement of a curve along a
straight line.
16. A method according to claim 14 or 15, wherein the surface of the object comprises a set of plane surfaces.
17. A method according to claim 16, wherein the lines of intersections between the plane surfaces are parallel.
18. A method according to claim 16 or 17, wherein the
position of the center axis of a straight line or line segment of energy detected by the system is calculated from the energy profile across the line.
19. A method according to any of claims 16-18, wherein parameters of a line or line segment are estimated from detected positions of points on the line or the line segment or from calculated positions of its center axis, and the parameters estimated are used through the remaining steps of the method while the detected positions of points on the line or the line segment are discarded.
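As an illustration of claims 18 and 19, the sketch below computes, for each image column, the intensity-weighted centroid of a detected laser stripe (its centre axis) and then fits a straight line to these centroids, after which only the line parameters are retained; the thresholding and the column-wise treatment are assumptions made for the example.

    import numpy as np

    def stripe_centroids(image, threshold=10):
        # sub-pixel row position of the stripe in each column, NaN where absent
        rows = np.arange(image.shape[0])[:, None].astype(float)
        w = np.where(image > threshold, image.astype(float), 0.0)
        mass = w.sum(axis=0)
        centroids = np.full(image.shape[1], np.nan)
        ok = mass > 0
        centroids[ok] = (w * rows).sum(axis=0)[ok] / mass[ok]
        return centroids

    def line_parameters(centroids):
        cols = np.arange(centroids.size)
        ok = ~np.isnan(centroids)
        slope, intercept = np.polyfit(cols[ok], centroids[ok], 1)
        return slope, intercept   # the raw point positions may now be discarded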
20. A method according to any of claims 1-19, wherein the energy beams comprise light beams.
21. A method according to claim 20, wherein the light beams comprise laser light beams.
22. A method according to any of claims 1-21, wherein the object is a pipe.
23. A method according to claim 22, wherein the pipe is of concrete.
24. A method according to any of claims 1-23, wherein the system is a laser line scanning system.
25. A method according to claim 24, wherein the laser line scanning system is rotatable about an axis.
26. A method according to any of claims 1-23, wherein the system is a robot.
27. A method according to any of claims 1-26, further
comprising classification of the object (A) into one of a plurality of classes of objects (B) by comparing data representing the surface of the object (A) with each of a plurality of predefined mathematical models of the system, each model including the shape of a specific object (B) that defines a specific class of objects, and for each model based on the comparison, describing, by means of a
mathematical error function, the deviation between the data representing the surface of the object (A) and the
corresponding expected data derived from the model in
question, and determining a set of parameters identifying the object (A) by minimizing the error function, and determining the minimum value of the values of the minimized error functions, and classifying the object (A) into the class of objects defined by the object (B) for which the minimum value has been attained.
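A compact illustration of the classification step of claim 27: the measured data are fitted against the model of every candidate class and the class with the smallest minimised error value is selected. The helper fit_model, returning a pair (parameters, minimised error value), is a hypothetical stand-in for the identification of claim 1.

    def classify(points, class_models, fit_model):
        # class_models: mapping from class name to its defining object model (B)
        results = {name: fit_model(points, model)
                   for name, model in class_models.items()}
        best = min(results, key=lambda name: results[name][1])
        return best, results[best]   # chosen class and its (parameters, error)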
28. A method according to any of claims 1-26, further
comprising recognition of the object (A) as one of a set of objects (B) by comparing data representing the surface of the object (A) with each of a plurality of predefined mathematical models of the system, each model including the shape of a specific object (B) of a set of objects (B), and for each model based on the comparison, describing, by means of a
mathematical error function, the deviation between the data representing the surface of the object (A) and the
corresponding expected data derived from the model in
question, and determining a set of parameters identifying the object (A) by minimizing the error function, and determining the minimum value of the values of the minimized error functions, and recognizing the object (A) as identical to the object (B) for which the minimum value has been attained provided that the minimum value is less than a prescribed threshold value.
29. A method according to any of claims 1-28, for determination of the accuracy of a measurement system as a function of the position of the measurement in relation to the measurement system, wherein an object with an
accurately known shape is successively positioned at a plurality of specific positions in relation to the
measurement system and the measurement accuracy of the measurement system at each of the specific positions is calculated based on the error value determined by system identification at the corresponding position.
30. A method according to claim 29, wherein the accuracies determined are used to calculate correction factors for the system.
31. A method according to any of claims 1-30, further
comprising the steps of scanning surfaces of a living being, e.g. the face of a human being, and visualizing the outcome of different proposals for surgery to evaluate and plan plastic surgery.
32. A method according to any of claims 1-30 for monitoring changes in a process, wherein characteristic geometrical parameters of the objects under measurement are determined as a function of time.
33. A system identifiable by a method according to any of claims 1-19, comprising one or more sensors for sensing positions, relative to the sensors, of arbitrary parts of the surface of an object of a known shape, the position of the object being unknown, receiving means for receiving data from the sensors of positions of arbitrary parts of the surface of the object, processing means adapted to calculate expected data of positions of parts of the surface of the object based on a predefined mathematical model of the system including the shape of the object, and to calculate the deviation, defined by means of a mathematical error function, between the received data and the corresponding expected data calculated from the model, and to determine a set of parameters
identifying a selected part of the system including the object by minimizing the error function, and communication means for communicating the determined
parameters to an operator of the system and/or to other systems.
34. A system according to claim 33 for classification of the object (A) into one of a plurality of classes of objects (B), further comprising processing means adapted to calculate, for each of a plurality of predefined mathematical models of the system, each model including the shape of a specific object (B) that defines a specific class of objects, expected data of positions of parts of the surface of the object (A) based on a predefined mathematical model of the system including the shape of the object (B), and to calculate the deviation, defined by means of a mathematical error function, between the received data and the corresponding expected data
calculated from the model, and to determine a set of
parameters identifying a selected part of the system
including the object (A) by minimizing the error function, and to determine the minimum value of the values of the minimized error functions corresponding to the classes of objects (B), and to classify the object (A) into the class of objects defined by the object (B) for which the minimum value has been attained, the system further comprising communication means for
communicating the class into which the object (A) has been classified to an operator of the system and/or to other systems.
35. A system according to claim 33 for recognition of the object (A) as one of a set of objects (B), further comprising processing means adapted to calculate, for each of a
plurality of predefined mathematical models of the system, each model including the shape of a specific object (B) from the set of objects (B), expected data of positions of parts of the surface of the object (A) based on a predefined mathematical model of the system including the shape of the object (B), and to calculate the deviation, defined by means of a mathematical error function, between the received data and the corresponding expected data calculated from the model, and to determine a set of parameters identifying a selected part of the system including the object (A) by minimizing the error function, and to determine the minimum value of the values of the minimized error functions
corresponding to the objects (B) of the set, and to recognize the object (A) as identical to the object (B) for which the minimum value has been attained provided that the minimum value is less than a prescribed threshold value, the system further comprising communication means for
communicating the object (B) identical to the object (A) to an operator of the system and/or to other systems.
36. A system according to claim 34 or 35, wherein the object (B) is a fingerprint and the object (A) is a partial or a full fingerprint.
37. A system according to claim 34 or 35, wherein the objects (A, B) are bodies of meat.
38. A system according to any of claims 33-35 for alignment of objects, comprising communication means for communicating the determined positions of the objects to an operator or another system for repositioning of the objects into
optimized positions for subsequent processing of the objects.
39. A system (B) according to any of claims 33-35 for
calibration of another system (A), wherein the system (B) is adapted to be removably mounted on the system (A) during calibration of the system (A).
40. A system according to any of claims 33-35, comprising mechanical sensor means for determination of positions of arbitrary parts of the surface of the object.
41. A system according to any of claims 33-35, comprising one or more non-contact distance sensors, such as laser distance sensors, each of the sensors generating, based on
triangulation, a signal representing the distance in a specific direction to a point on the surface of an object.
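For illustration, the triangulation mentioned in claim 41 can be reduced to the law of sines in the triangle formed by emitter, detector and illuminated point; the sketch below assumes that the baseline length and the two sight-line angles are known.

    import math

    def triangulation_distance(baseline, alpha, beta):
        # alpha, beta: angles (radians) between the baseline and the lines of
        # sight of the emitter and the detector respectively
        gamma = math.pi - alpha - beta                        # angle at the measured point
        sight = baseline * math.sin(beta) / math.sin(gamma)   # emitter-to-point range
        return sight * math.sin(alpha)                        # perpendicular distance to the baseline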
42. A system according to any of claims 33-35, further comprising one or more sensor heads, each of which has a laser for emission of a laser light beam, optical means for transformation of the laser light beam into a laser light sheet, and a detector for detection of light, each sensor head being adapted to transmit a laser light sheet towards the object and receive and detect light diffusely reflected from the surface of the object.
43. A system according to claim 42, wherein the detector comprises one or more CCD cameras.
44. A system according to claims 42 or 43, wherein the optical means comprises a cylindrical lens.
45. A system according to any of claims 42-44, further comprising a precision mirror to transmit the diffusely reflected light from the object to the detector in order to provide a compact sensor head.
46. A system according to any of claims 42-45, wherein the sensor head is mounted on an arm that is rotatable about a substantially horizontal axis and has an angular encoder for detection of the angular displacement of the arm.
47. A system according to claim 46, wherein the angle (η) between the sensor head and the arm is between 30° and 45°.
48. A system (B) comprising a plurality of systems (A) according to claim 46 and processing means adapted to
determine the positions of the systems (A) relative to each other by system identification, each of the systems (A) determining geometrical parameters of specific parts of the surface of an object, and the system (B) determining
positions of parts of the surface in relation to each other.
49. A system (B) according to claim 48, comprising two systems (A).
50. A system (B) according to claim 49, wherein the object is a pipe.
51. A system (B) for measurement of the suspension of wheels on an automobile, comprising four systems (A) according to claim 46, each system (A) being positioned in a specific corner of a rectangle, and processing means adapted to determine the positions of the four systems (A) relative to each other by system identification, each of the systems (A) determining the position of the rim closest to the system (A) in question, and the system (B) determining positions of the rims in relation to each other.
52. A system according to any of claims 33-35, comprising two cameras mounted at a fixed distance and at a specific angle in relation to each other for receiving and detecting light from the same volume in space, one or more laser sources with optical systems rotatably positioned between the two cameras for emission of laser light sheets, and processing means adapted to determine the positions of points of objects interacting with the laser light sheets by stereo techniques based on data from the two cameras.
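An illustrative sketch of the stereo computation of claim 52: each camera contributes a ray (origin and direction) towards an observed point, and the three-dimensional position is taken as the midpoint of the shortest segment between the two rays; the conversion from pixel coordinates to rays (camera calibration) is assumed to be available.

    import numpy as np

    def triangulate(o1, d1, o2, d2):
        # o1, o2: camera centres; d1, d2: ray directions towards the same point
        d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
        w0 = o1 - o2
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        d, e = d1 @ w0, d2 @ w0
        denom = a * c - b * b                     # close to zero for parallel rays
        s = (b * e - c * d) / denom
        t = (a * e - b * d) / denom
        p1, p2 = o1 + s * d1, o2 + t * d2         # closest points on the two rays
        return (p1 + p2) / 2.0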
53. A system according to claim 52, comprising one or more precision angular encoders each of which senses the angular position of a specific laser source.
54. A system according to claim 52 or 53, comprising one or more light sources for emission of white light for
illumination of objects, the processing means being adapted to determine positions of objects based on reflected white light from the objects received and detected by the cameras.
55. A system according to any of claims 52-54 for tracking objects, comprising processing means adapted to determine positions of objects as a function of time.
56. A system according to any of claims 52-55, comprising communication means for communicating determined positions of an object to a measurement system of the system and
manipulator means for positioning the measurement system in a position close to the object.
57. A system according to any of claims 52-55, comprising communication means for communicating determined positions of an object to a measurement system of the system, the
measurement system comprising a camera having a zoom lens that is adjusted in response to the determined positions.
58. A system according to any of claims 52-57 for
surveillance and control purposes, comprising processing means adapted to determine distances between objects.
59. A system according to claim 42, comprising a single sensor head, a sound source for emission of sound pulses positioned in a fixed position in relation to the laser line source and the CCD camera, a number of sound detectors for receiving and detecting the sound pulses and positioned on a frame in a fixed relation to an object, and timing means adapted to determine the time periods from transmission of sound pulses from the sound source to receptions of the sound pulses by the sound detectors on the frame, processing means adapted to determine the position of the scanner head in relation to the frame based on the known positions of the sound detectors in relation to each other and the velocity of sound, and communication means for communicating signals from sensors of the system to a computer.
60. A system according to claim 59, wherein the velocity of sound is determined during calibration as a part of the system identification.
61. A system according to claim 59, further comprising means for determination of the velocity of sound, comprising a sound source and a sound detector positioned at a fixed distance in relation to each other either on the frame or on the scanner head.
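To illustrate claims 59-61, each measured propagation time gives, multiplied by the velocity of sound, a distance from the scanner head to a detector of known position; the head position then follows from least-squares multilateration. The sketch below assumes the velocity of sound is known (as obtained according to claim 60 or 61); the function names are chosen for the example.

    import numpy as np
    from scipy.optimize import least_squares

    def locate_scanner(detector_positions, times, v_sound=343.0, x0=(0.0, 0.0, 0.0)):
        # detector_positions: (N, 3) array of known detector coordinates on the frame
        ranges = v_sound * np.asarray(times)
        def residuals(x):
            return np.linalg.norm(detector_positions - x, axis=1) - ranges
        return least_squares(residuals, x0).x   # estimated scanner-head position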
62. A system according to any of claims 42-45, further comprising a support plate adapted to receive the object and rotatable about a substantially vertical axis and an angular encoder for detection of the angular displacement of the support plate.
63. A system according to any of claims 33-35, further comprising two CCD cameras for detection of light that are mounted at a fixed distance and at a specific angle in relation to each other so that they receive light from the same volume in space, a laser for emission of a laser light beam, optical means for transformation of the laser light beam into a laser light sheet, the laser and the optical means being rotatably positioned between the two cameras, and processing means adapted to determine the positions of points of an object reflecting light from the laser light sheet, which light is received and detected by the cameras, by stereo techniques based on data from the two cameras.
64. A system according to claim 63, further comprising a precision angular encoder for determination of the angular direction of the laser light sheet.
65. A system according to any of claims 33-35, comprising a source of X-rays for transmission of X-rays towards an object, a detector of X-rays for detection of X-rays after transmission through the object, the object being positioned in between the source of X-rays and the detector of X-rays, processing means adapted to determine the amount of X-ray energy absorbed by the object, and manipulator means for moving the source and the detector in fixed positions
relative to each other around the object.
66. A system according to claim 65, further comprising processing means adapted to calculate the thickness of the object from the determined amount of X-ray energy absorbed by the object.
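As a worked illustration of claim 66, a Beer-Lambert absorption model gives the thickness directly from the ratio of detected to emitted X-ray intensity; the known attenuation coefficient mu of the material is an assumption of the example.

    import math

    def thickness_from_absorption(i_detected, i_emitted, mu):
        # I = I0 * exp(-mu * t)  =>  t = -ln(I / I0) / mu
        # returns the thickness in millimetres when mu is given in 1/mm
        return -math.log(i_detected / i_emitted) / mu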
67. A system according to any of claims 33-35, comprising one or more light sources for emission of white light, a
plurality of cameras positioned in fixed positions in
relation to each other for detection of light reflected from objects illuminated by the one or more light sources,
processing means adapted to determine positions of points of the surfaces of the objects by stereo techniques, and
communication means for communicating the determined
positions of the objects to an operator or another system.
68. A system according to claim 67 for tracking of objects, further comprising processing means adapted to determine positions of objects as a function of time.
69. A system according to claim 68 for monitoring distance between objects, further comprising processing means adapted to determine distances between objects as a function of time.
70. A system according to any of claims 33-35 for
determination of positions of optical boundary surfaces of the eye of a living being, comprising a supporting frame for support of a head of a living being, manipulator means for linear displacement in a direction substantially perpendicular to the optical axis of the eyes of the living being, a CCD camera for detection of light diffusely
reflected or light specularly reflected from the optical boundary surfaces of the eye, and processing means for determination of the shapes of the boundary surfaces of the eyes.
71. A system according to claim 70 for detection of eye diseases, comprising storage means for storage of a plurality of sets of relative positions of the optical boundary
surfaces of an eye, each set defining a specific eye disease, processing means operating according to claim 33 for
identification of presence or absence of one or more of the specific eye diseases, and communication means for
communicating the absence or presence of specific eye
diseases to an operator or another system.
72. A system according to claim 71, further comprising communication means for communicating, provided that an eye disease is present, the seriousness of the disease
represented by a function of the corresponding error value.
73. A system according to any of claims 33-35, further comprising an ultrasonic scanner comprising an ultrasonic transducer for radiation of ultrasound pulses towards an object and reception of ultrasound pulses reflected by the object, timing means for determination of the time interval between transmission and reception of the ultrasound pulses, processing means adapted to determine the distance to the object from the time interval and the velocity of the sound waves in the medium through which the pulses propagate.
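For illustration, the distance determination of claim 73 is a pulse-echo (time-of-flight) calculation: the pulse travels to the object and back, so the one-way distance is half the product of the sound velocity and the measured time interval. The default velocity below is an assumption for the example.

    def ultrasonic_distance(time_interval, velocity=1540.0):
        # velocity in m/s (1540 m/s is a typical value for soft tissue)
        return velocity * time_interval / 2.0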
74. A system according to any of claims 33-35 for scanning of teeth, e.g. scanning of drilled holes in teeth to make a filling which matches the hole perfectly, comprising a laser for emission of a laser light beam, optical means for
splitting the laser light beam into a plurality of beams transmitted towards the hole of the tooth, generating a pattern of light dots diffusely reflected by the tooth, a CCD camera for detection of the diffusely reflected light from the tooth, and processing means adapted to determine the
geometrical parameters of the hole in the tooth.
75. A system according to claim 74, comprising communication means for communicating the geometrical parameters of a hole in a tooth to another system for production of the filling.
76. A system according to any of claims 33-35 for quality control of patterns and colours, e.g. applied in the
graphical industry, comprising one or more light sources for emission of white light, a colour CCD camera for detection of light reflected from a surface of an object, storage means for storage of expected values for the colour and the
intensity of light reflected from each point on the surface of an object with a known pattern of colours on its surface, and processing means adapted to operate according to the method of any of claims 1-19 for identification of
parameters describing the system geometrically and optically.
77. A system according to any of claims 33-35 for thermographic applications, comprising one or more infrared sensors for sensing of infrared radiation from a body, receiving means for receiving data from the sensors of infrared radiation from the body, processing means adapted to determine the temperatures at specific parts of the body based on the data received from the infrared sensors and optionally adapted to calculate temperature profiles of the body, and communication means for communicating the
determined temperatures and/or temperature profiles to an operator of the system and/or to another system.
78. A system according to claim 77, further comprising manipulator means for positioning of the one or more infrared sensors in specific positions in relation to the body.
79. A system according to any of claims 33-78, further comprising a robot.
80. A system according to claim 79, wherein the robot
comprises a vision system having storage means for storing a set of mathematical models of specific geometrical
structures, processing means adapted to calculate the
position of such a structure, and manipulator means for positioning of an object accurately in relation to the structure in response to the determined position.
81. A system according to claim 79 or 80, further comprising a measurement system for insertion into the tool center point of the robot for regular calibration of the robot.
82. A system according to any of claims 79 to 81, further comprising a measurement system for accurate determination of the position of parts of the object to be operated upon by the robot.
83. A system according to any of claims 33-82, comprising an object being shaped in such a way that the mathematical error function comprises a single minimum for the set of parameters to be determined.
84. A system according to claim 83, comprising an object having a surface comprising a set of surfaces each of which is generated by parallel displacement of a curve along a straight line.
85. A system according to claim 84, comprising an object having a surface comprising a set of plane surfaces.
86. A system according to claim 85, wherein the lines of intersection between the plane surfaces are parallel.
87. A system according to any of claims 33-86, further comprising manipulator means for positioning of an object in the system.
PCT/DK1996/000136 1995-03-30 1996-03-29 System identification WO1996030718A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU52705/96A AU5270596A (en) 1995-03-30 1996-03-29 System identification

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
DK34795 1995-03-30

Publications (1)

Publication Number Publication Date
WO1996030718A1 true WO1996030718A1 (en) 1996-10-03

Family

ID=8092500

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/DK1996/000136 WO1996030718A1 (en) 1995-03-30 1996-03-29 System identification

Country Status (2)

Country Link
AU (1) AU5270596A (en)
WO (1) WO1996030718A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0222498A2 (en) * 1985-10-04 1987-05-20 Loughborough Consultants Limited Making measurements on a body
GB2204397A (en) * 1987-04-30 1988-11-09 Eastman Kodak Co Digital moire profilometry
WO1992008103A1 (en) * 1990-10-24 1992-05-14 Böhler Gesellschaft M.B.H. Process and device for the opto-electronic measurement of objects

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
K. KEMMOTSU, T. KANADE: "Uncertainty in Object Pose Determination with Three Light-Stripe Range Measurements", PROC. IEEE INT. CONF. ON ROBOTICS AND AUTOMATION, vol. 3, 5 May 1993 (1993-05-05), LOS ALAMITOS, CA, USA, pages 128-134, XP000409800 *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7582377B2 (en) 2001-11-15 2009-09-01 Toyota Jidosha Kabushiki Kaisha Fuel cell and method of assembling the same
US7164968B2 (en) 2002-04-05 2007-01-16 The Trustees Of Columbia University In The City Of New York Robotic scrub nurse
WO2011110144A1 (en) * 2010-03-11 2011-09-15 Salzgitter Mannesmann Line Pipe Gmbh Method and apparatus for measurement of the profile geometry of cylindrical bodies
EP2381213A1 (en) * 2010-04-21 2011-10-26 Aktiebolaget SKF Method and device for measuring a bearing component
RU2556310C2 (en) * 2013-03-29 2015-07-10 Борис Владимирович Скворцов Device for remote measurement of geometric parameters of profiled objects
EP3184958A1 (en) * 2015-12-23 2017-06-28 Liebherr-Verzahntechnik GmbH Sensor arrangement for workpiece identification and/or workpiece placement detection of a plurality of workpieces within a transport holder
EP3392607A1 (en) * 2017-04-18 2018-10-24 United Technologies Corporation Precision optical height gauge
US10274308B2 (en) 2017-04-18 2019-04-30 United Technologies Corporation Precision optical height gauge

Also Published As

Publication number Publication date
AU5270596A (en) 1996-10-16

Similar Documents

Publication Publication Date Title
Feng et al. Analysis of digitizing errors of a laser scanning system
US5673082A (en) Light-directed ranging system implementing single camera system for telerobotics applications
US9858682B2 (en) Device for optically scanning and measuring an environment
US4791482A (en) Object locating system
US20170160077A1 (en) Method of inspecting an object with a vision probe
CN106338244A (en) 3d measuring machine
US7502504B2 (en) Three-dimensional visual sensor
JP2002090113A (en) Position and attiude recognizing device
US6825937B1 (en) Device for the contactless three-dimensional measurement of bodies and method for determining a co-ordinate system for measuring point co-ordinates
JP4658318B2 (en) Method for determining the position and rotational position of an object
CN112334760A (en) Method and device for locating points on complex surfaces in space
US5363185A (en) Method and apparatus for identifying three-dimensional coordinates and orientation to a robot
US11276198B2 (en) Apparatus for determining dimensional and geometric properties of a measurement object
EP1985968B1 (en) Noncontact measuring apparatus for interior surfaces of cylindrical objects based on using the autofocus function that comprises means for directing the probing light beam towards the inspected surface
EP0340632A2 (en) Position locating apparatus for an underwater moving body
WO1996030718A1 (en) System identification
EP3975116A1 (en) Detecting displacements and/or defects in a point cloud using cluster-based cloud-to-cloud comparison
WO2007001327A2 (en) Apparatus and methods for scanning conoscopic holography measurements
KR100501397B1 (en) Three-dimensional image measuring apparatus
US20230186437A1 (en) Denoising point clouds
Ahlers et al. Stereoscopic vision-an application oriented overview
JP2859946B2 (en) Non-contact measuring device
JP3638569B2 (en) 3D measuring device
CN115112049A (en) Three-dimensional shape line structured light precision rotation measurement method, system and device
EP1379833B1 (en) Method for indicating a point in a measurement space

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AL AM AT AT AU BB BG BR BY CA CH CN CZ CZ DE DE DK DK EE EE ES FI FI GB GE HU IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SK TJ TM TR TT UA UG US UZ VN

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN ML MR NE

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
WPC Withdrawal of priority claims after completion of the technical preparations for international publication

Free format text: DK

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

NENP Non-entry into the national phase

Ref country code: CA

122 Ep: pct application non-entry in european phase