WO2003042924A1 - Connection of point clouds measured by a computer vision system - Google Patents

Connection of point clouds measured by a computer vision system Download PDF

Info

Publication number
WO2003042924A1
Authority
WO
WIPO (PCT)
Prior art keywords
measured
points
reference points
movable support
coordinates
Prior art date
Application number
PCT/FI2002/000888
Other languages
French (fr)
Inventor
Esa Leikas
Original Assignee
Mapvision Oy Ltd
Priority date
Filing date
Publication date
Application filed by Mapvision Oy Ltd filed Critical Mapvision Oy Ltd
Publication of WO2003042924A1 publication Critical patent/WO2003042924A1/en


Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques


Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention relates to the connection of point clouds measured by a computer vision system. The object to be measured is placed on a rotating table (20), which has reference points (22) previously placed on it. First, the reference points (22) are measured, whereupon the object is illuminated by a laser (LASER) and the points thus produced are measured by means of a camera system (CAM1, CAM2). To cover any blind areas, the table is turned, whereupon the reference points (22) are measured again and the magnitudes of the rotation and movement are calculated. By utilizing this information, the points measured from the new position are transformed into the original coordinate system.

Description

CONNECTION OF POINT CLOUDS MEASURED BY A COMPUTER VISION SYSTEM
FIELD OF THE INVENTION
The present invention relates to three-dimensional camera measurement. The present invention concerns a method and a system for connecting point clouds measured by a computer vision system.
BACKGROUND OF THE INVENTION
Computer vision systems are based on information obtained from various measuring devices. Information can be measured using e.g. a laser device, a measuring head or via recognition from an image. The information obtained can be utilized e.g. in quality control systems, where, on the basis of this information, it is possible to determine e.g. the correctness of shape of an object, coloring errors or the number of knots in sawn timber.
A computer vision system is generally composed of cameras. Traditional computer vision systems comprised only one camera, which took a picture of the object. By processing the picture, various conclusions could be drawn from it. By using different algorithms, it is possible to distinguish different levels in images on the basis of their borderlines. The borderlines are identified on the basis of intensity changes. Another method of recognizing shapes in an image is to apply masks and filters to it so that only certain types of points are distinguished from the image. The patterns formed by the points in the image can be compared to models in a database and thus recognized.
In a three-dimensional computer vision system, several cameras are needed. To determine a three-dimensional coordinate, an image of the same point is needed from at least two cameras. Most three-dimensional computer vision systems therefore comprise several cameras to allow an object to be imaged from different directions without having to move the object. The points are formed on the surface of the object via illumination. The illumination is typically implemented using a laser. The point is imaged by cameras calibrated in the same coordinate system. When an image of the point can be produced by at least two cameras, it is possible to determine three-dimensional coordinates for the point. For the same position, a number of points are measured. The set of points thus formed is called a point cloud.
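The triangulation step can be illustrated with a short sketch. The following minimal example (not part of the patent) recovers one illuminated point seen by two calibrated cameras using linear (DLT) triangulation; the projection matrices P1, P2 and the pixel coordinates uv1, uv2 are illustrative assumptions.

import numpy as np

def triangulate_point(P1, P2, uv1, uv2):
    # P1, P2: 3x4 projection matrices of two cameras calibrated in the same
    # coordinate system; uv1, uv2: (u, v) pixel coordinates of the same point.
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two linear equations in the homogeneous point X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # The solution is the right singular vector of A with the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # dehomogenize to a 3D coordinate

Repeating this for every illuminated point measured from one position yields the point cloud for that position.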
The object to be measured can be placed on a movable support, e.g. a rotating table. 'Rotating table' means a support that rotates about its axis. If the object can be rotated, then the camera system need not be able to measure the entire object from one position and normally fewer cameras are needed than when measurements are carried out with the object on a fixed support. The movable support may also be a carrier moving on rails.
With a computer vision system, it is also possible to scan the object and produce from it a model that can be processed. The methods of three-dimensional scanning and discrimination are considerably more complicated than the corresponding two-dimensional methods.
To measure the overall shape of an object, the point clouds scanned in different positions have to be connected to the same coordinate system. In prior art, point clouds scanned in different positions have been combined using various mathematical methods in which overlapping areas are measured. The overlapping areas must represent the same surface and they must therefore be congruent. This method works poorly if the object has relatively flat shapes.
In one known approach, known shapes, e.g. balls, are disposed near the object and repeatedly measured from every position. The overlapping measurements and the repeated measurements of the balls in each position take measuring time. Measuring time is also spent on matching the mathematical models measured from different positions. One solution to this problem is to place the object to be measured onto a support and measure the movements of the support by means of various angle or motion detectors. However, the accuracy of these wear-prone components limits the overall measuring accuracy.
OBJECT OF THE INVENTION
The object of the invention is to eliminate the above-mentioned drawbacks or at least to significantly alleviate them. A specific object of the invention is to disclose a new type of method for connecting point clouds measured by a computer vision system. A further object of the invention is to simplify and accelerate the process of measuring the three-dimensional shape of an object.
BRIEF DESCRIPTION OF THE INVENTION
The invention describes a method for connecting point clouds measured from an object in its different positions to the same coordinate system. The system of the invention comprises a movable support, an illuminating device and a sufficient number of cameras. In addition, the system comprises means for storing the measured information and calculating coordinate transformations.
The object to be measured is fastened to a movable support. The support is provided with a mechanism which is used to move it so that the cameras can see the object from different directions. The support plate typically has a circular shape and is rotated about a central shaft, but the support plate may also have some other shape and it may be moved in several directions. The circular support plate movable about a shaft is called a rotating table. Several reference marks are fastened onto the rotating table. The marks may be individually designed to allow them to be identified by the cameras, but they may also be of identical design. The reference marks need not be three-dimensional bodies; instead, two-dimensional marking is sufficient. At the start of a measurement, the cameras measure the reference marks on the support plate. Reference marks may also be attached to the object itself, e.g. by means of a magnet, and it is even possible to place all the reference marks on the object. In this case the procedure involves the drawback that the reference marks may happen to be placed on the area to be measured, thus changing the object shape perceived by the measuring device.
The measurement is carried out by illuminating points on the surface of the object to be measured. The illumination is typically carried out by producing luminous points. Normally, a number of luminous points forming e.g. a matrix are created, thus illuminating a plurality of points simultaneously. The points are imaged by cameras, and the system can be provided with as many cameras as required. The position of each individual point can be measured accurately when at least two cameras can see it. The point matrix can be moved over the object by deflecting the beams producing the luminous points using e.g. two mirrors. The set of points measured from the same position is called a point cloud. After the object has been measured completely in one position, its orientational position must be changed to allow any blind areas to be scanned.
When the object is turned, the coordinate system comprising it will also have to be turned to allow the new measured points to be placed in the original coordinate system to fill the blind areas. After the turning, measurement is started by measuring the reference marks. The positions of the reference marks are compared to the positions measured at the beginning of the measurement. If the marks are of individual shape, they can be identified by a camera, but the marks may also be mutually identical. In this case, each mark is identified on the basis of its position in relation to the others, for these relative positions do not change even though the set of points moves as a whole from one place to another. From the change in the positions of the reference marks, it is possible to calculate a transformation of the coordinates of the movable support, and the point cloud measured from the new position can be placed mathematically in the original coordinate system on the basis of the coordinate transformation.
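The patent does not prescribe a particular algorithm for deriving the coordinate transformation from the reference-mark positions. One common choice is a least-squares rigid-body fit (the SVD-based Kabsch method), sketched below under the assumption that the reference points measured before and after the move have already been put into correspondence; the function names are illustrative only.

import numpy as np

def rigid_transform(refs_new, refs_orig):
    # Least-squares rotation R and translation t mapping the reference points
    # measured in the new position (refs_new, shape (N, 3)) onto the positions
    # measured at the start of the measurement (refs_orig, shape (N, 3)).
    c_new = refs_new.mean(axis=0)
    c_orig = refs_orig.mean(axis=0)
    H = (refs_new - c_new).T @ (refs_orig - c_orig)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:      # guard against a reflection solution
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = c_orig - R @ c_new
    return R, t

def to_original_frame(cloud_new, R, t):
    # Place a point cloud measured in the new position into the original frame.
    return cloud_new @ R.T + t

If the marks are mutually identical, the correspondence itself can first be established from the pattern of distances between the marks, since these distances do not change when the support is moved.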
By utilizing the system and method of the invention, the requirements regarding accuracy of motion of the movable support are eliminated, because the change in the coordinates is verified afterwards by measuring the reference points. The elimination of the accuracy requirement makes it possible to move the support plate with a simpler and less expensive mechanism, by means of which a sufficient number of points can be measured quickly and economically. In addition, the elimination of mechanical sensors also improves the measuring accuracy. In the methods used, the required calculation power is low, so the connection of large numbers of points is a fast operation and requires no more than normal calculation capacity.
LIST OF ILLUSTRATIONS
In the following, the invention will be described in detail with reference to drawings, wherein
Fig. 1 presents a function diagram representing the method of the invention, and
Fig. 2 presents an embodiment of the system of the invention.
DETAILED DESCRIPTION OF THE INVENTION
In the method represented by Fig. 1, measurement is started by placing the measuring object and the reference points onto the movable support 10. The reference points can also be fixedly mounted on the support, in which case they need not be placed again for each measurement but are only moved when necessary. If a sufficiently large number of reference points are mounted on the support, they will not necessarily have to be moved at all, because even if some of the points should be hidden behind the object, there will still be a sufficient number of them available.
After the placement, the positions of the reference points and object points are measured 11. The reference points are measured by cameras and their positions are stored in memory. Next, the object is illuminated by means of a laser or other radiation source to produce points on the surface of the measuring object that are visible to the cameras. The points are measured by the cameras and stored in memory. After the object has been measured completely from one position, the support is moved 12 to a new position.
After the support has been moved, measurement is started by measuring the reference points and performing a coordinate transformation 13. Based on the measured and the original reference point positions, a transformation of the coordinates of the movable support can be calculated. Next, the object is illuminated again and new coordinates are calculated for the points produced. These new coordinates are connected to the earlier ones, transforming them into the original coordinate system 14. If any blind areas still exist, the rotating table can be turned again 15. The extent of blind areas can be estimated by qualitative criteria or e.g. by using a predetermined number of movements and an approximate change. If the object is moved again, then the procedure is resumed at step 13, otherwise the measurement is ended 16.
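Once the measurements themselves are available as arrays, the bookkeeping of Fig. 1 reduces to a small loop. The sketch below is illustrative only; it reuses the rigid_transform function given earlier and assumes the reference-point and object-point measurements from each position have already been acquired. It returns all object points in the coordinate system of the first position.

import numpy as np

def combine_point_clouds(ref_measurements, cloud_measurements):
    # ref_measurements[i]:   (N, 3) reference points measured in position i
    # cloud_measurements[i]: (M_i, 3) object points measured in position i
    refs_orig = ref_measurements[0]            # reference frame of the first position
    combined = [cloud_measurements[0]]         # first cloud needs no transformation

    for refs_new, cloud in zip(ref_measurements[1:], cloud_measurements[1:]):
        R, t = rigid_transform(refs_new, refs_orig)   # steps 13: coordinate transformation
        combined.append(cloud @ R.T + t)              # step 14: into original coordinate system

    return np.vstack(combined)                 # combined point cloud of the whole object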
Fig. 2 represents a system according to the invention. The system comprises a movable support, which in the example embodiment is a rotating table 20, cameras CAM1 and CAM2, a laser pointer LASER, reference points 22 and a data system DTE for the storage and transformation of results. The rotating table 20 is provided with a mechanism that allows it to be turned and locked in place. The magnitude of the turning angle need not be accurately predetermined, so it is not necessary to provide the rotating mechanism with any special measuring devices or mechanical precision components. Reference points 22 are mounted on the rotating table. The reference points, or some of them, may also be fixedly placed on the table.
The object 21 attached to the rotating table is illuminated with laser beams. The illuminating device LASER may consist of several lasers, which are typically mounted in the form of a matrix. The laser beams illuminate points on the surface of the measuring object mounted on the rotating table, and these are measured by a camera system, which in the example embodiment comprises cameras CAM1 and CAM2. The system is provided with as many cameras as needed, usually four to eight. The measured points are stored in the data system DTE. After the object 21 has been measured completely, it can be rotated to measure blind areas. After the rotation, the change in its position is measured on the basis of the reference points. The object is rotated again until all blind areas have been measured. The invention is not limited to the embodiment examples described above; instead, many variations are possible within the scope of the inventive concept defined in the claims.

Claims

1. Method for connecting point clouds measured by a computer vision system, in which method an object is placed on a movable support and which method comprises the steps of: producing on the surface of the object to be measured a number of points by an illuminating technique; measuring the illuminated points to form a point cloud; changing the position of the object; repeating the steps of illumination and measurement of points until a desired set of point clouds has been obtained; and combining the measurement results, characterized in that the method further comprises the steps of: mounting reference points on the movable support or on the object itself; measuring the reference points; calculating the change in the coordinates of the movable support on the basis of the change in the reference points; and transforming the measurement results for each point cloud into the original coordinate system.
2. Method according to claim 1, characterized in that reference points are attached to the movable support or to the object so that they can be seen by a camera from several different directions.
3. Method according to claim 1, characterized in that an original reference coordinate system is formed in connection with the first measurement by measuring the reference points.
4. Method according to claims 1 and 3, characterized in that, after the object has been moved, the positions of the reference points are measured and a coordinate transformation is calculated by comparing the measured coordinates to the reference points in the reference coordinate system.
5. Method according to claims 1 and 4, characterized in that the points produced on the surface of the object are measured by a camera system.
6. Method according to claims 1, 3 and 4, characterized in that the measured points are transformed into the reference coordinate system by utilizing the detected change in the coordinates.
7. System for connecting point clouds, said system comprising: a movable support (20); a camera system (CAM1, CAM2); a laser illuminator (LASER); and a data system (DTE), characterized in that the system further comprises: reference points (22) arranged on the movable support (20) or on the object (21) itself.
8. System according to claim 7, characterized in that the reference points (22) are fixedly arranged on the movable support (20).
9. System according to claim 7 or 8, characterized in that the camera system (CAM1, CAM2) has been fitted to measure a change in coordinates by utilizing the reference points.
10. System according to claims 7-9, characterized in that the data system (DTE) has been fitted to transform the measured point cloud into the original coordinate system by utilizing the change in the coordinates.
PCT/FI2002/000888 2001-11-13 2002-11-13 Connection of point clouds measured by a computer vision system WO2003042924A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
FI20012204A FI20012204A0 (en) 2001-11-13 2001-11-13 Combining point clouds
FI20012204 2001-11-13

Publications (1)

Publication Number Publication Date
WO2003042924A1 true WO2003042924A1 (en) 2003-05-22

Family

Family ID=8562247

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2002/000888 WO2003042924A1 (en) 2001-11-13 2002-11-13 Connection of point clouds measured by a computer vision system

Country Status (2)

Country Link
FI (1) FI20012204A0 (en)
WO (1) WO2003042924A1 (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4590578A (en) * 1983-07-11 1986-05-20 United Technologies Corporation Off-line programmable robot
US4993836A (en) * 1988-03-22 1991-02-19 Agency Of Industrial Science & Technology Method and apparatus for measuring form of three-dimensional objects
US5285397A (en) * 1989-12-13 1994-02-08 Carl-Zeiss-Stiftung Coordinate-measuring machine for non-contact measurement of objects
US5396331A (en) * 1993-08-10 1995-03-07 Sanyo Machine Works, Ltd. Method for executing three-dimensional measurement utilizing correctively computing the absolute positions of CCD cameras when image data vary
WO1999015945A2 (en) * 1997-09-23 1999-04-01 Enroute, Inc. Generating three-dimensional models of objects defined by two-dimensional image data
US5978521A (en) * 1997-09-25 1999-11-02 Cognex Corporation Machine vision methods using feedback to determine calibration locations of multiple cameras that image a common object

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7034249B2 (en) 2003-06-12 2006-04-25 Kvaerner Masa-Yards Oy Method of controlling the welding of a three-dimensional structure
DE102008019435A1 (en) * 2008-04-17 2009-10-29 Aicon Metrology Gmbh Method for contact less measurement of three-dimensional, complex molded components, involves collecting measuring component by fixture and recording is done in alignment of component with camera or other optical image recording devices
DE102008019435B4 (en) * 2008-04-17 2012-06-06 Aicon Metrology Gmbh Method for the contactless measurement of three-dimensional, complex shaped components
CN102012217A (en) * 2010-10-19 2011-04-13 南京大学 Method for measuring three-dimensional geometrical outline of large-size appearance object based on binocular vision
WO2016040634A1 (en) * 2014-09-11 2016-03-17 Cyberoptics Corporation Point cloud merging from multiple cameras and sources in three-dimensional profilometry
EP3252458A1 (en) * 2016-06-01 2017-12-06 Hijos de Jose Sivo, S.L. System and method for digitalizing tridimensional objects
KR101865338B1 (en) * 2016-09-08 2018-06-08 에스엔유 프리시젼 주식회사 Apparatus for measuring critical dimension of Pattern and method thereof
JP6132221B1 (en) * 2016-10-12 2017-05-24 国際航業株式会社 Image acquisition method and image acquisition apparatus
JP2018063162A (en) * 2016-10-12 2018-04-19 国際航業株式会社 Image acquiring method and image acquiring device
JP2018152005A (en) * 2017-03-15 2018-09-27 オムロン株式会社 Measurement system, control device, measurement method
JP7009751B2 (en) 2017-03-15 2022-01-26 オムロン株式会社 Measurement system, control device, measurement method
CN108088390A (en) * 2017-12-13 2018-05-29 浙江工业大学 Optical losses three-dimensional coordinate acquisition methods based on double eye line structure light in a kind of welding detection
FR3082934A1 (en) * 2018-06-26 2019-12-27 Safran Nacelles LASER PROJECTION DEVICE AND METHOD FOR MANUFACTURING PARTS OF COMPOSITE MATERIAL BY DREDGING
EP3587088A1 (en) * 2018-06-26 2020-01-01 Safran Nacelles Laser projection device and method for manufacturing parts in composite material by drape moulding
US11400666B2 (en) 2018-06-26 2022-08-02 Safran Nacelles Laser projection device and method for manufacturing composite material parts by drape-molding

Also Published As

Publication number Publication date
FI20012204A0 (en) 2001-11-13

Similar Documents

Publication Publication Date Title
CN110555889B (en) CALTag and point cloud information-based depth camera hand-eye calibration method
US20210207954A1 (en) Apparatus and method for measuring a three-dimensional shape
US10706562B2 (en) Motion-measuring system of a machine and method for operating the motion-measuring system
CN109215108B (en) Panoramic three-dimensional reconstruction system and method based on laser scanning
US7627197B2 (en) Position measurement method, an apparatus, a computer program and a method for generating calibration information
US6809728B2 (en) Three dimensional modeling apparatus
US4825394A (en) Vision metrology system
US7136170B2 (en) Method and device for determining the spatial co-ordinates of an object
US6377298B1 (en) Method and device for geometric calibration of CCD cameras
CN1727983B (en) Strobe illumination
CN101213440B (en) Method for forming master data for inspecting protruding and recessed figure
US7046377B2 (en) Method for determining corresponding points in three-dimensional measurement
WO1999058930A1 (en) Structured-light, triangulation-based three-dimensional digitizer
US7860298B2 (en) Method and system for the calibration of a computer vision system
JPH11166818A (en) Calibrating method and device for three-dimensional shape measuring device
KR102632930B1 (en) Method for Photometric Characterization of the Optical Radiation Characteristics of Light Sources and Radiation Sources
EP1680689B1 (en) Device for scanning three-dimensional objects
WO2003042924A1 (en) Connection of point clouds measured by a computer vision system
WO1998005922A1 (en) Calibration method
CN112767494A (en) Precise measurement positioning method based on calibration algorithm
JP4332987B2 (en) Cross-sectional shape measuring apparatus and cross-sectional shape measuring method
JPH0814858A (en) Data acquisition device for three-dimensional object
CN113203358A (en) Method and arrangement for determining the position and/or orientation of a movable object
JPH06323820A (en) Three-dimensional profile measuring method
JP2024080670A (en) Provision of real world and image sensor correspondence points for use in calibration of imaging system for three-dimensional imaging based on light triangulation

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SC SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP