JP5375201B2 - 3D shape measuring method and 3D shape measuring apparatus - Google Patents


Info

Publication number
JP5375201B2
JP5375201B2 · JP2009048662A
Authority
JP
Japan
Prior art keywords
distance
phase
dimensional
calculated
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2009048662A
Other languages
Japanese (ja)
Other versions
JP2010203867A (en)
Inventor
Keiichi Watanabe (渡辺恵一)
Yasuhiro Nishimura (西村安弘)
Original Assignee
Toyota Central R&D Labs., Inc. (株式会社豊田中央研究所)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Toyota Central R&D Labs., Inc. (株式会社豊田中央研究所)
Priority to JP2009048662A
Publication of JP2010203867A
Application granted
Publication of JP5375201B2
Status: Expired - Fee Related
Anticipated expiration

Abstract

PROBLEM TO BE SOLVED: To precisely measure a three-dimensional shape with easy calibration.

SOLUTION: When the three-dimensional shape is determined by the phase shift method, a reference flat plate is disposed at a position whose distance from a projection unit and an imaging unit is known, and a plurality of lattice fringes differing in phase are projected onto it. The phase of each pixel is calculated from the captured image, and a phase-distance relationship is calculated from the calculated phases and the known distance. In addition, a reference grid plate, having a reference grid whose two-dimensional coordinates on a plane orthogonal to the distance direction are known, is disposed at a position whose distance is known. The two-dimensional coordinates of each pixel of the captured image are calculated from the reference grid, and a distance-two-dimensional-coordinate relationship is calculated from these coordinates and the known distance. In actual measurement, a measurement object is disposed at a prescribed distance and lattice fringes differing in phase are projected onto it. The phase of each pixel of the captured image is calculated, the distance is calculated from the phase-distance relationship, and the two-dimensional coordinates of each pixel are calculated from the distance-two-dimensional-coordinate relationship, thus determining the three-dimensional shape of the measurement object.

COPYRIGHT: (C)2010, JPO&INPIT

Description

  The present invention relates to three-dimensional shape measurement using a phase shift method.

  With the improvement in quality of automobiles and the like, high shape accuracy is required for outer panels, parts, molds, and the like used in automobiles. In addition, in order to reduce production costs and speed up development, it is necessary to compare and verify 3D CAD data against 3D shape data of products, and to provide prompt feedback to the production process. For this reason, high-precision optical three-dimensional shape measuring instruments have been introduced, and not only offline mold-shape measurement but also in-line product-shape inspection is performed.

  Conventionally, due to restrictions such as cost and size, the three-dimensional shape of a measurement object has often been obtained by scanning a displacement meter (one-dimensional) or a light-section sensor (two-dimensional).

  However, there is a need for systems that can perform measurement without mechanical scanning, and a three-dimensional coordinate measurement method using the phase shift method has been proposed as a measurement method suitable for such systems.

  In the phase shift method, lattice fringes whose phases differ from each other by π/2 are sequentially projected from the projector onto the object to be measured, and the phase at each pixel of the captured image is obtained from the four captured images. The relationship between the phase obtained by the camera and the three-dimensional coordinates can be derived from the geometric arrangement of the projector and camera and the lattice-fringe period.

  However, in order to obtain the three-dimensional coordinates accurately as described above, it is necessary to correct the distortion of the projection and imaging lenses, and Non-Patent Document 1 below proposes using the Fourier transform as the correction method.

  Hereinafter, the three-dimensional coordinate measuring method in Non-Patent Document 1 will be described.

(Z coordinate calculation)
A grid pattern is projected onto a glass plate sprayed white, an image is captured, and the phase Ψ(m, n) of each pixel of the captured image is obtained. While the glass plate is moved stepwise in the z direction, the phase Ψ(m, n) is measured at each position, and from the N (N = 11) images the relationship between z(m, n) and the phase Ψ(m, n),

z(m, n) = a(m, n) + b(m, n)Ψ(m, n) + c(m, n)Ψ²(m, n)   (i)

is obtained by the least-squares method.
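The per-pixel least-squares fit of equation (i) can be sketched as follows. The distances, the synthetic phases, and the use of NumPy's `polyfit` are illustrative assumptions, not taken from Non-Patent Document 1:

```python
import numpy as np

# phi-z calibration for one pixel (m, n): fit equation (i),
#   z = a + b*Psi + c*Psi^2,
# by least squares. All numeric values are illustrative.
z_known = np.linspace(0.0, 10.0, 11)   # N = 11 known plate positions (mm)
psi = 0.1 + 0.55 * z_known             # synthetic unwrapped phases (rad)

# np.polyfit returns the highest-order coefficient first: [c, b, a].
c, b, a = np.polyfit(psi, z_known, deg=2)

# The fitted polynomial reproduces the known calibration distances.
z_fit = a + b * psi + c * psi**2
max_err = float(np.max(np.abs(z_fit - z_known)))
```

In an actual calibration the fit would be repeated independently for every pixel (m, n), giving per-pixel coefficient maps a(m, n), b(m, n), c(m, n).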

(X, y coordinate calculation)
A glass plate on which a sinusoidal orthogonal grating is printed and pasted is imaged, and the captured image is Fourier transformed to obtain the grating period and a phase image. FIG. 23 shows the outline of the coordinate system used in Non-Patent Document 1; a CCD (imaging unit) receives light from the imaging object through an imaging lens. In the drawing, the distance direction from the CCD to the glass plate being imaged is denoted z, and the two directions orthogonal to z are denoted x and y.

While the glass plate is moved stepwise in the z direction (N = 11), the captured image is Fourier transformed at each position, and the x and y coordinate magnification with respect to the change in z is obtained by the least-squares method (N = 11, first-order formula; the d and e values of equations (ii) and (iii) below). Further, (m_c, n_c) of each pixel of the captured image is obtained from the phase of the grating. Note that (m_c, n_c) denotes the center of the m and n imaging pixels.

x = (m − m_c)[d + ez]   (ii)
y = (n − n_c)[d + ez]   (iii)
Further, the imaging distortion (imaging position shift) of the lattice fringes due to the distortion of the lens is corrected by obtaining the phase by Fourier transformation (two dimensions in the x direction and the y direction).
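Equations (ii) and (iii) can be illustrated with a small sketch; the constants `d` and `e` and the centre pixel `(m_c, n_c)` are hypothetical calibration values, not from the source:

```python
# Equations (ii) and (iii): lateral coordinates from pixel indices and z.
# d, e, and the centre pixel (m_c, n_c) are hypothetical calibration values.
d, e = 0.05, 0.001        # mm/pixel at z = 0, and its change per mm of z
m_c, n_c = 320.0, 240.0   # pixel coordinates of the optical centre

def xy_from_pixel(m, n, z):
    scale = d + e * z     # pixel pitch grows linearly with distance
    return (m - m_c) * scale, (n - n_c) * scale

x0, y0 = xy_from_pixel(420.0, 300.0, 0.0)    # at the reference plane
x1, y1 = xy_from_pixel(420.0, 300.0, 100.0)  # the same pixel, 100 mm further
```

The same pixel maps to a larger lateral coordinate at a larger z, which is what the magnification term [d + ez] expresses.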

H. O. Saldner and J. M. Huntley, "Profilometry using temporal phase unwrapping and a spatial light modulator-based fringe projector", Opt. Eng. 36(2), 610-615 (1997)

  The phase shift of the lattice fringes due to lens distortion can be corrected by the Fourier transform (two-dimensional, in the x and y directions) as shown in Non-Patent Document 1, but the correction process using the Fourier transform takes a very long time.

  The present invention provides a high-precision three-dimensional shape measurement method and apparatus that are faster and not affected by lens distortion and the like.

  The present invention is a three-dimensional shape measurement method in which a plurality of lattice fringes having different phases are projected onto an object to be measured, and a three-dimensional shape, represented by a distance-direction coordinate to the object and two-dimensional coordinates orthogonal to the distance direction, is obtained from the captured lattice-fringe images by the phase shift method. A reference plate is disposed at a position whose distance from the projection unit and the imaging unit is known, a plurality of lattice fringes having different phases are projected onto the reference plate, the phase of each of a plurality of pixels is calculated from the captured lattice-fringe images, and a phase-distance relationship is calculated from the calculated phases and the known distance. Further, a reference grid plate, having a reference grid whose two-dimensional coordinates on a plane orthogonal to the distance direction are known, is disposed at a position whose distance from the projection unit and the imaging unit is known, the two-dimensional coordinates of a plurality of pixels of the captured image are calculated based on the reference grid, and a distance-two-dimensional-coordinate relationship is calculated from the calculated two-dimensional coordinates and the known distance. At the time of actual measurement, the object to be measured is disposed at a predetermined distance from the projection unit and the imaging unit, and a plurality of lattice fringes having different phases are projected onto it. The phase of each pixel of the captured image is calculated, the distance for each pixel is calculated based on the phase-distance relationship, and the two-dimensional coordinates of the pixel are calculated from the calculated distance based on the distance-two-dimensional-coordinate relationship, thereby obtaining the three-dimensional shape of the object to be measured.

In another aspect of the present invention, in the above method, when calculating the phase-distance relationship, lattice fringes whose phase change over the projection region is 2π or less and lattice fringes whose phase change over the projection region is greater than 2π are projected onto the reference plate. Among the plurality of distance-coordinate candidates calculated from the phase of the captured image obtained when projecting the fringes with a phase change greater than 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the fringes with a phase change of 2π or less is taken as the distance calculation result.
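The candidate-selection rule of this aspect can be sketched as follows; the function name and the numeric values are illustrative assumptions:

```python
def resolve_distance(fine_candidates, coarse_z):
    """Among the ambiguous distance candidates implied by the fine fringes
    (phase change > 2*pi over the region), keep the one closest to the
    unambiguous estimate from the coarse fringes (phase change <= 2*pi)."""
    return min(fine_candidates, key=lambda z: abs(z - coarse_z))

# Fine fringes with a 10 mm unambiguous range give candidates 2, 12, 22, ... mm;
# the coarse fringes indicate roughly 23 mm, so 22 mm is selected.
z = resolve_distance([2.0, 12.0, 22.0, 32.0], 23.0)
```

This combines the precision of the fine fringes with the unambiguous (but less precise) range of the coarse fringes.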

  In another aspect of the present invention, in the above method, when calculating the phase-distance relationship, the reference plate is set at a plurality of different distances, the phase of each pixel is calculated at each distance, and the intervals between the set distances are chosen so that the difference between the correspondingly calculated phases is less than 2π.
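The distance-interval condition of this aspect can be checked with simple arithmetic; the phase sensitivity used here is a hypothetical value:

```python
import math

# Hypothetical phase sensitivity at the reference plate: d(phi)/dz in rad/mm.
phase_per_mm = 0.6

# Largest calibration step that keeps the phase difference between adjacent
# plate positions below 2*pi, as the aspect above requires.
max_step_mm = 2 * math.pi / phase_per_mm   # about 10.47 mm
```

Steps smaller than this bound let the absolute fringe order be tracked unambiguously from one plate position to the next.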

  In another aspect of the present invention, in the above method, when calculating the distance-two-dimensional-coordinate relationship, the reference grid plate is set at a plurality of different distances, the two-dimensional coordinates of each pixel are calculated at each distance, and the intervals between the set distances are chosen so that the difference between the phases corresponding to adjacent distances is less than 2π.

  In another aspect, the present invention is a three-dimensional shape measuring apparatus that projects a plurality of lattice fringes having different phases onto an object to be measured and obtains, by the phase shift method from the captured lattice-fringe images, a three-dimensional shape represented by a distance-direction coordinate to the object and two-dimensional coordinates orthogonal to the distance direction. The apparatus includes a stage on which the object to be measured is placed at a predetermined position, a projection unit that projects a plurality of lattice fringes having different phases onto the object on the stage, an imaging unit that images the object on the stage, and a measurement processing unit that obtains the three-dimensional shape of the object based on the captured images. The measurement processing unit includes a phase calculation unit, a phase-distance relationship calculation unit, a pixel two-dimensional coordinate calculation unit, a distance-two-dimensional-coordinate relationship calculation unit, and a three-dimensional coordinate calculation unit. When a plurality of lattice fringes with different phases are projected onto a reference plate placed at a position whose distance from the projection unit and the imaging unit is known, the phase calculation unit calculates the phases of a plurality of pixels from the captured lattice-fringe images, and the phase-distance relationship calculation unit calculates a phase-distance relationship from the calculated phases and the known distance. When a reference grid plate, having a reference grid whose two-dimensional coordinates on a plane orthogonal to the distance direction are known, is placed at a position whose distance from the projection unit and the imaging unit is known, the pixel two-dimensional coordinate calculation unit calculates the two-dimensional coordinates of a plurality of pixels of the captured image based on the two-dimensional coordinates of the reference grid, and the distance-two-dimensional-coordinate relationship calculation unit calculates a distance-two-dimensional-coordinate relationship from the calculated two-dimensional coordinates and the known distance. During actual measurement, the object to be measured is placed at a predetermined distance from the projection unit and the imaging unit, a plurality of lattice fringes having different phases are projected onto it, and the phase calculation unit calculates the phase of each pixel from the captured lattice-fringe images. The three-dimensional coordinate calculation unit then calculates the distance for each pixel based on the phase-distance relationship, calculates the two-dimensional coordinates of the pixel from the calculated distance based on the distance-two-dimensional-coordinate relationship, and thereby obtains the three-dimensional shape of the object to be measured.

In another aspect of the present invention, in the above apparatus, the projection unit can project lattice fringes whose phase change over the projection region is 2π or less and lattice fringes whose phase change over the projection region is greater than 2π. When calculating the phase-distance relationship, the projection unit projects both kinds of fringes onto the reference plate, and the phase-distance calculation unit selects, among the plurality of distance-coordinate candidates calculated from the phase of the captured image obtained when projecting the fringes with a phase change greater than 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the fringes with a phase change of 2π or less, and sets it as the distance corresponding to the phase of the captured image.

  In another aspect of the present invention, in the above apparatus, the reference plate for calculating the phase-distance relationship and the reference grid plate for calculating the distance-two-dimensional-coordinate relationship are provided on the stage. The stage can set the reference flat plate and the reference grid plate at a plurality of different distance positions with respect to the projection unit and the imaging unit, and the intervals between the plurality of distances set by the stage are chosen so that the difference between the correspondingly calculated phases is less than 2π.

  In another aspect of the present invention, in the above three-dimensional shape measuring method or apparatus, the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.

  As described above, in the present invention, a reference plate is arranged at a position where the distance is known, a plurality of lattice fringes having different phases are projected onto it, the phase of each of a plurality of pixels is calculated from the captured image, and the phase-distance relationship is calculated from the calculated phases and the known distance.

  Furthermore, a reference grid plate having a reference grid with a known two-dimensional coordinate on a plane orthogonal to the distance direction is arranged at a position where the distance is known, and the two-dimensional coordinates of the pixels of the captured image are calculated based on the reference grid. Then, the distance-two-dimensional coordinate relationship is calculated from the two-dimensional coordinates of the pixel and the known distance.

  The phase-distance relationship is obtained by polynomial approximation of the measured phases and the known distance coordinates, and the distance-two-dimensional-coordinate relationship is likewise obtained by polynomial approximation of the distance coordinates obtained for each pixel and the correspondingly obtained two-dimensional coordinates. By adopting polynomial approximation for both the phase-distance calibration and the distance-two-dimensional-coordinate calibration, the calibration can be executed in a short time while removing the influence of lens distortion. Since the calibration is executed using captured images, the polynomial approximation covers the same region as the measurement region at the time of actual measurement, so the accuracy is high.

  Further, when calculating the phase-distance relationship, lattice fringes with a phase change of 2π or less and lattice fringes with a phase change greater than 2π are projected onto the reference plate, so that the absolute phase can be determined easily and accurately from the captured images, and the distance can be determined with high accuracy from this phase.

  Further, when calculating the phase-distance relationship and the distance-two-dimensional-coordinate relationship, the reference plate and the reference grid plate can be set at a plurality of distance-direction positions by the stage. By setting these positions so that the difference between the phases calculated at adjacent positions is less than 2π, the absolute value of the phase can be determined easily and without the influence of measurement error.

FIG. 1 is a schematic diagram showing the overall configuration of a three-dimensional shape measuring apparatus according to an embodiment of the present invention.
FIG. 2 is a diagram showing an outline of the three-dimensional shape measurement procedure according to the embodiment.
FIG. 3 is a diagram showing an outline of the phase shift method used for the three-dimensional shape measurement.
FIG. 4 is a diagram explaining the lattice fringes projected onto the projection target and the principle of imaging.
FIG. 5 is a diagram showing lattice fringes, projected onto the projection target, whose phases differ from each other by π/2.
FIG. 6 is a diagram showing the geometric positional relationship between the phase of the projected lattice fringes and the camera image.
FIG. 7 is a diagram showing the relationship between the projection luminance setting value and the actual projected luminance value.
FIG. 8 is a diagram showing an outline of the correction of projection and imaging nonuniformity.
FIG. 9 is a diagram showing the polynomial approximation of the projection luminance setting value and the captured luminance value, and the approximation of the projection luminance setting value.
FIG. 10 is a diagram showing the luminance histogram and luminance average obtained by correcting the luminance setting value of the projector.
FIG. 11 is a diagram showing the sine-wave projection result obtained by the calibration.
FIG. 12 is a diagram showing the relationship between the phase φq of an imaging pixel q(iq, jq) and the line of sight (ii) along the z axis in the lens optical-axis direction.
FIG. 13 is a diagram explaining the phase-unwrapping method and the z-coordinate calibration method according to the embodiment.
FIG. 14 is a diagram showing lattice fringes with different periods used for more exact calculation of the z coordinate.
FIG. 15 is a diagram explaining the procedure for calculating the z coordinate using the lattice fringes of FIG. 14.
FIG. 16 is a diagram explaining the calibration method between the z and x coordinates and between the z and y coordinates.
FIG. 17 is a diagram showing the system configuration of a specific example according to the embodiment.
FIG. 18 is a diagram explaining the projected lattice fringes and the z-direction calibration results in the specific example.
FIG. 19 is a diagram showing the measurement results of the ceramic plate position in the specific example.
FIG. 20 is a diagram explaining the method of calculating the x and y coordinates of pixels of a captured image from the reference grid positions.
FIG. 21 is a diagram showing three-dimensional shape measurement results using a coin as the object to be measured.
FIG. 22 is a diagram showing the results of measuring the three-dimensional shape while changing the z-coordinate position of the coin of FIG. 21.
FIG. 23 is a diagram showing the outline of the coordinate system used in three-dimensional measurement according to the prior art.

  Hereinafter, modes for carrying out the present invention (hereinafter referred to as embodiments) will be described with reference to the drawings.

[Overview]
FIG. 1 shows a schematic configuration of a three-dimensional shape measurement method according to an embodiment of the present invention and a measurement apparatus 300 that performs this method. The three-dimensional shape measuring apparatus 300 includes a stage 12 on which an object 16 to be measured, a reference flat plate 10, or a reference grid plate 14 can be mounted, a projection unit 310 that projects lattice fringes onto the measurement target, an imaging unit 312 that images the measurement target, and a measurement processing unit 320 that performs the calibration and actual measurement processing described later. The measurement processing unit 320 includes at least a calculation unit 330 and a storage unit 390. The calculation unit 330 includes a phase calculation unit 340, a phase-distance relationship calculation unit 350, a grid-pixel coordinate calculation unit 360, a distance-two-dimensional-coordinate relationship calculation unit 370, and a three-dimensional coordinate calculation unit 380.

  In the measurement, the DUT 16 is arranged on the stage 12, a plurality of lattice fringes having different phases are projected onto the DUT 16, and the measurement processing unit 320 measures the three-dimensional shape of the DUT 16 by the phase shift method from the lattice-fringe images obtained by imaging the DUT 16.

  FIG. 2 shows a schematic procedure of the three-dimensional shape measurement (calibration and actual measurement) according to the present embodiment. As shown in FIG. 2, in the present embodiment, calibration is performed before actual measurement (s5) is performed on the DUT 16. Therefore, at the start of measurement, it is first determined whether calibration is necessary (s1). In this determination, when there is no accumulated calibration data, or when conditions such as the calibration-data update timing (elapse of a specified period) are met, calibration is judged incomplete (s1: NO). If valid calibration data already exists, the calibration is regarded as complete (s1: YES) and actual measurement (s5) is performed.

  In this embodiment, prior to actual measurement, at least the calibration for obtaining the relationship between phase (φ) and distance (z) shown in step s3 (φ-z calibration), and the calibration for obtaining the relationship between distance (z) and the two-dimensional pixel coordinates (x, y) shown in step s4 (z-x and z-y calibration), are performed in advance.

  In addition, as shown in step s2, by calibrating the sine wave of the lattice fringes projected onto the projection target (calibration target or object to be measured) in the phase shift method, more accurate three-dimensional shape measurement becomes possible. The sine-wave calibration will be described later in detail.

  In the φ-z calibration (s3), the reference plate 10, which has a flat surface without unevenness, is mounted on the stage 12 and moved stepwise to positions at known distances from the projection unit 310 and the imaging unit 312. At each position, a plurality of lattice fringes having different phases are sequentially projected. The imaging unit 312, which uses a CCD or the like as its imaging element, sequentially captures the lattice fringes projected onto the reference plate 10, and the phase calculation unit 340 calculates the phase at each pixel of the captured images.

  The phase-distance relationship calculation unit 350 obtains, for each pixel, the relationship between the calculated phase (φ) and the known position (distance) in the stage movement direction (z coordinate), for example by polynomial approximation.

In the z-x and z-y calibration (s4), a reference grid plate 14, having a reference grid pattern whose two-dimensional grid coordinates on the plane are known, is mounted on the stage 12 instead of the reference plate 10.

  The reference grid plate 14 is stepped by the stage 12 to known distance positions, and the reference grid plate is imaged at each step position. The grid-pixel coordinate calculation unit 360 obtains the coordinates of each pixel of the captured image (the two-dimensional coordinates on the plane orthogonal to the distance direction) quickly and accurately by linearly interpolating the known grid coordinates in the captured image.
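The linear interpolation of known grid coordinates might look like the following one-dimensional sketch; the detected pixel columns and the 10 mm grid spacing are illustrative assumptions:

```python
import numpy as np

# Pixel columns at which reference-grid lines were detected in the captured
# image, and the known x coordinate (mm) of each line -- illustrative values.
grid_pixel_cols = np.array([100.0, 180.0, 260.0, 340.0])
grid_x_mm = np.array([0.0, 10.0, 20.0, 30.0])

# Linear interpolation assigns an x coordinate to an arbitrary pixel column.
x_of_pixel = float(np.interp(215.0, grid_pixel_cols, grid_x_mm))
```

In practice the same interpolation would be applied in two dimensions, giving each pixel both x and y coordinates from the imaged grid.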

  The distance-two-dimensional-coordinate relationship calculation unit 370 calculates the relationship between the obtained pixel two-dimensional coordinates and the known distances by polynomial approximation. As the reference grid plate 14, a substrate (for example, a glass substrate) provided with a reference grid pattern of high positional accuracy by printing or the like can be used. Alternatively, a substrate in which openings or concave-convex features are formed in a grid shape at the reference positions may be used.

  The phase-distance relation obtained by the φ-z calibration and the distance-two-dimensional-coordinate relations obtained by the z-x and z-y calibration (the z-x and z-y relational expressions) are stored in the storage unit 390 and referred to by the phase-distance relationship calculation unit 350, the three-dimensional coordinate calculation unit 380, and so on during the actual-measurement calculation. Note that the calibration data may be stored for all pixels, or for only some pixels in order to improve processing speed and reduce the amount of stored information.

  During actual measurement (s5), the DUT 16 is mounted on the stage 12, the projection unit 310 projects a plurality of lattice fringes having different phases onto the DUT 16, and the imaging unit 312 images the lattice fringes projected onto the DUT 16 arranged at the predetermined position.

  The phase calculation unit 340 calculates the phase φ of each pixel of the captured image. The phase-distance relationship calculation unit 350 obtains the stage-direction distance (z coordinate) of the DUT 16 from the calculated phase φ based on the stored phase-distance relationship. Further, the three-dimensional coordinate calculation unit 380 calculates the remaining two-dimensional coordinates (x and y coordinates) from the obtained distance (z coordinate) based on the stored distance-two-dimensional-coordinate relationship. In this way, the three-dimensional shape of the DUT 16 is obtained as the three-dimensional coordinates (x, y, z) of each point on the DUT 16, from the z coordinate and the correspondingly determined x and y coordinates.

[Phase shift method]
Next, the phase shift method used for measurement in the present embodiment will be described. In the phase shift method, as shown in FIG. 3, lattice fringes having different phases are projected from the projection unit (projector) 310 onto the DUT 16 and captured by the imaging unit (camera) 312, and the shape is obtained from the phase shift amount. Since the measurement accuracy depends largely on the projection accuracy of the lattice fringes, they are projected using a precise grating drawn on a glass plate or film, or lattice fringes produced by optical interference are projected.

  A general-purpose data projector can be used as the projector of the projection unit. Since a data projector can easily project lattice fringes, a highly accurate measuring instrument can be realized in combination with a TV camera. In addition, ultra-compact projectors using LEDs or lasers as light sources have longer lifetimes than those using ultra-high-pressure mercury lamps, so a compact, long-life measuring instrument is easily realized.

(Measurement principle of phase shift method)
Hereinafter, the measurement principle of the phase shift method will be described in more detail. From the projector position in FIG. 4, sinusoidal lattice fringes P(u, v) whose phases differ by π/2 are projected in order. Examples of the sine-wave lattice patterns to be projected are shown in FIGS.

This sine wave lattice pattern P (u, v) is expressed by the following equation (1).
Where k = 0 to 3, T: period.
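Since the body of equation (1) did not survive extraction, the following sketch assumes a common form for four sinusoidal fringe patterns shifted by π/2; the period and resolution are illustrative:

```python
import numpy as np

T = 64             # fringe period in projector pixels (illustrative)
u = np.arange(256)

# Assumed form of equation (1): four patterns shifted by pi/2 (k = 0..3),
# scaled into the projector's 0..1 luminance range.
patterns = [0.5 + 0.5 * np.sin(2 * np.pi * u / T + k * np.pi / 2)
            for k in range(4)]
```

Patterns k and k+2 are complementary (they sum to a constant), which is a quick sanity check that the π/2 shifts were generated correctly.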

  As shown in FIG. 4, when there is an object to be measured (a flat plate) at a distance z from the projector, the brightness I ′ (x, y) on the flat plate plane can be expressed by the following equation (2). .

In equation (2), m = 0 to M−1, M is the number of lattice fringes, T′ is the lattice-fringe period at the distance z, and a(x, y) and b(x, y) are values determined by illumination, ambient light, and the reflectance of the object to be measured.

When the lattice fringes of equation (1) with different phases (k = 0 to 3) are projected and the four images I0(i, j) to I3(i, j) captured by the camera are used, the phase φ(i, j) of each pixel is obtained by equation (3).

In equation (3), the disturbance-light term a(x, y) and the reflectance term b(x, y) cancel out, so the phase φ(i, j) is obtained accurately, and objects with complicated shapes can also be handled.
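Equation (3) itself is not reproduced above, but a standard four-step phase-shift recovery consistent with the description (the a and b terms cancel) can be sketched as follows; the simulated intensities and the cosine fringe model are illustrative assumptions:

```python
import math

# Simulate four captured intensities at one pixel with true phase phi_true;
# a_bg (offset) and b_mod (modulation) play the roles of a(x, y) and b(x, y)
# in equation (2). All values are illustrative.
phi_true = 1.2
a_bg, b_mod = 10.0, 4.0
I = [a_bg + b_mod * math.cos(phi_true + k * math.pi / 2) for k in range(4)]

# Standard four-step recovery: the offset and modulation cancel in the ratio,
# so the phase is recovered regardless of a_bg and b_mod.
phi = math.atan2(I[3] - I[1], I[0] - I[2])
```

Changing a_bg or b_mod leaves phi unchanged, which is exactly the robustness to disturbance light and reflectance that the text describes.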

FIG. 6 shows the geometric positional relationship between the phase of the projected lattice pattern and the camera image. If the phase of the pixel q(iq, jq) of the lattice-fringe image obtained from equation (3) is φq, the intersection of the line of sight of the pixel q and the object to be measured is the measurement point Sq, which lies on the projection plane (i) of phase φq projected by the projector.

In the phase shift method shown here, four lattice fringes differing in phase by π/2 are used, but five, six, or more lattice fringes at correspondingly smaller phase steps may also be used. Alternatively, three lattice fringes differing in phase by π/3 may be used; the phase step and the number of lattice fringes can be chosen from the required measurement time and accuracy (the more fringes, the higher the accuracy but the longer the measurement).

That is, the coordinates Sq(xq, yq, zq) of the measurement point are the intersection of:
- the plane of phase φq projected from the projector (i), and
- the line of sight of pixel q on the camera image sensor (ii).
Therefore, for example, the phase φ is obtained for all pixels using equation (3), the intersection of (i) and (ii) is calculated for each, and the three-dimensional shape of the object to be measured is obtained.

[High-precision measurement method]
(1. Outline of high-precision measurement method)
To accurately measure the shape of the object to be measured, it is necessary to:
(1) project and image a sine wave pattern without distortion, and
(2) accurately obtain the intersection of the plane (i) and the straight line (ii).

  Therefore, in the present embodiment, calibration is executed as described above to realize high-precision measurement. Hereinafter, this calibration method will be described.

(1) The nonlinearity between the brightness setting value of the projector and the brightness value of the captured image is corrected so that a true sine wave lattice pattern is projected (calibration of the sine wave lattice pattern).

(2) The phase φq is obtained by projecting lattice fringes while moving a flat plate in the z direction on a precision stage, and for each pixel the relationship between the phase φq and the x, y, z coordinates on the line of sight (ii) of FIG. 6 is obtained (φ-z, z-x, z-y calibration: three-dimensional calibration). At the time of measurement, the z coordinate is obtained from the phase φq of each pixel, and the x and y coordinates are obtained from the z coordinate.

(2. Sinusoidal grid pattern calibration)
As described above, a general data projector can be used as the projection unit 310, but data projectors are tuned so that images look good for presentations. As a result, the relationship between the projection brightness setting value of the projector and the actually projected brightness value is nonlinear, as shown in FIG. 7. Even if the projector can be set to project linearly, output errors remain: in the video card of the computer that outputs data to the projector, in the data conversion on the projector side, and in driving the display device (liquid crystal, DLP (Digital Light Processing), etc.).

  Further, the relationship between the amount of incident light received by the camera employed in the imaging unit 312 and the imaging luminance value is also nonlinear. Therefore, in order to project an accurate sine wave lattice pattern and accurately capture an image, it is preferable to obtain and correct the relationship between the projection brightness setting value and the image capture brightness value.

In addition, the projector gathers the light emitted backward from the lamp (filament) toward the front using a concave elliptical mirror to increase the projection brightness, and reduces unevenness in the amount of light using an integrator illumination system, but it is difficult to eliminate the unevenness completely.

Therefore, in the present embodiment, the light amount unevenness and the relationship between the projection luminance setting value and the imaging luminance value are corrected by, for example, the method shown in the following steps s11 to s15 (calibration of sine wave lattice fringes).

(s11) First, as shown in FIG. 8, a reference grid whose coordinates are known is projected from the projector onto a ceramic plate and imaged, and from the grid positions on the image, the imaging pixel q(i, j) corresponding to the projection pixel P(u, v) is obtained.

(s12) Next, the projected reference grid is shifted vertically and horizontally and projected again, and the imaging pixels q(i, j) corresponding to all the projection pixels P(u, v) are obtained. By projecting and photographing a two-dimensional reference grid as shown in FIG. 8, thousands of corresponding points can be calculated at a time.

(s13) Uniform patterns with different projection luminance setting values (brightness levels) are projected onto the white ceramic plate and imaged.

(s14) The relationship between the setting value of P(u, v) and the imaging luminance value of q(i, j) is approximated by a polynomial. FIG. 9A shows a polynomial approximation (here, a fourth-order polynomial) between the projection brightness setting value and the imaging brightness value.

  (S15) Next, as shown in FIG. 9B, a necessary projection setting value corresponding to the imaging luminance value is obtained by inversely calculating the above polynomial.

(s16: Correction result) FIG. 10 shows the luminance histogram and the luminance average value when the luminance setting value of the projector is corrected so that the imaging luminance value becomes constant (150) and projected onto the white ceramic plate. From the brightness histogram of the area surrounded by the dotted line in FIG. 10A, and from the brightness average value, deviation, median, and other statistics of that area shown in FIG. 10B, it can be confirmed that the brightness is actually projected as set.
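Steps s13 to s15 can be sketched as follows. The response curve below is an invented stand-in for measured data, and the fourth-order fit with numeric inversion is one plausible realization of the polynomial approximation described above.

```python
import numpy as np

# s13: hypothetical measured response of projector+camera to setting values.
settings = np.linspace(0.0, 255.0, 32)              # projection settings tried
measured = 150.0 * (settings / 255.0) ** 2.2 + 10.0  # invented nonlinear response

# s14: fourth-order polynomial fit of imaging brightness vs. setting value.
coeffs = np.polyfit(settings, measured, 4)

def setting_for_brightness(target):
    """s15: invert the polynomial, keeping the real root in the valid range."""
    c = coeffs.copy()
    c[-1] -= target                                  # solve fit(s) == target
    roots = np.roots(c)
    real = roots[np.abs(roots.imag) < 1e-6].real
    in_range = real[(real >= 0.0) & (real <= 255.0)]
    return float(in_range[0])

s = setting_for_brightness(100.0)                    # setting for brightness 100
```

A per-pixel table of such inverses would then let an arbitrary target brightness (e.g. the sinusoid of FIG. 11) be projected accurately.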

FIG. 11 shows an example in which a sine wave is projected onto the white ceramic plate. The imaging brightness value was set to have an amplitude of 50 and an offset of 100. The imaging brightness values along the dotted line through the center of the image in FIG. 11A are shown in FIG. 11B. From the result of FIG. 11B, it can be seen that the sine wave lattice fringes are projected and imaged almost exactly according to the set values. That is, calibration for accurate projection and imaging of sinusoidal lattice fringes can in practice be realized by the method described above.

(3. Three-dimensional shape calculation method and three-dimensional calibration method)
Next, a method for calculating the three-dimensional shape of an object to be measured using the above-described lattice fringes, and the three-dimensional (φ-z, z-x, z-y) calibration performed for it, will be described.

(A. Measurement principle)
In FIG. 6, the three-dimensional shape of the object to be measured can, in principle, be calculated geometrically from parameters such as the positions and orientations (optical axes) of the projector and camera, the focal lengths of the lenses, the projection element size, and the image sensor size.

Conceptually, images of reference points arranged at different positions in three-dimensional space are captured, and the relationship between the positions of the reference points on the image and their three-dimensional coordinates in space is solved to obtain the camera parameters (camera calibration); then the lattice pattern is projected onto a plane and captured with the calibrated camera, and the projector parameters are determined from the relationship between the positions of the lattice image and the three-dimensional coordinates in space (projector calibration). Since the projection plane (i) and the line of sight (ii) in FIG. 6 are obtained from the parameters of the camera and the projector, three-dimensional coordinates can be calculated from their intersection.

Camera lens distortion (aberration) can be corrected by imaging a grid pattern or a square lattice in advance, and projector lens distortion can be corrected by projecting a predetermined pattern from the projector.

In the present embodiment, attention is paid to the line of sight (ii) shown in FIG. 6, and the relationship between the phase change on the line of sight (ii) and the three-dimensional coordinates is approximated by a function. That is, by approximating this relationship directly, lens distortion and the like are corrected automatically, without performing separate camera or projector lens distortion correction.

FIG. 12 shows the relationship between the phase φq on the line of sight (ii) of the imaging pixel q(iq, jq) and the z axis along the lens optical axis direction.

As can be understood from FIG. 12A, the phase on the line of sight (ii) repeatedly passes through 2π as the z coordinate changes over a large range, whereas the phase can be measured only in the range 0 to 2π. For example, if the phase changes by 2nπ over the measurement region in the z coordinate direction, there are n candidate points for the z coordinate. In the example shown in FIG. 12B, n = 5, so five candidate points L0 to L4 exist for the phase φq. Therefore, as shown in FIG. 12C, if the phases are connected (referred to as unwrapping processing; hereinafter the connected phase is called the absolute phase), z is obtained immediately from the absolute phase φ. For example, in FIG. 12C, n = 1, and the z candidate point corresponding to the phase φq is L1 (z = L1). The function relating the absolute phase to z is an n-th order polynomial, as shown in equation (4).

As approximation methods, Lagrangian interpolation, spline interpolation, and the like are conceivable, but a polynomial such as equation (4) is faster to compute and easier to implement in hardware capable of real-time processing.

The calibration method using the phase φ-z coordinate relationship according to the present embodiment has the following features compared with methods that obtain the three-dimensional coordinates from the geometric positions.
(A) It is possible to correct not only lens aberration but also local distortion.
(B) Since no geometric calculation is performed, processing can be performed at high speed.

Similarly to the z coordinate, the x and y coordinates can be obtained immediately by polynomial approximation of their correspondence with the z coordinate on the line of sight (ii), as in the following formulas (5) and (6).
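As an illustration of how formulas (4) to (6) are used at measurement time, the sketch below evaluates three per-pixel polynomials in sequence. The coefficients are invented placeholders; in the apparatus they would come from the calibration and be stored per pixel.

```python
import numpy as np

# Placeholder per-pixel calibration polynomials (invented coefficients):
z_of_phi = np.poly1d([0.02, -1.5, 310.0])  # eq. (4): z = f(absolute phase)
x_of_z   = np.poly1d([0.05, -14.0])        # eq. (5): x = g(z)
y_of_z   = np.poly1d([-0.05, 16.0])        # eq. (6): y = h(z)

def pixel_to_xyz(abs_phi):
    """Turn one pixel's unwrapped (absolute) phase into 3-D coordinates."""
    z = z_of_phi(abs_phi)                  # distance first, from the phase
    return x_of_z(z), y_of_z(z), z         # then x, y from the distance

x, y, z = pixel_to_xyz(0.0)
```

Because each step is just a polynomial evaluation, no geometric intersection needs to be computed per pixel, which is the speed advantage noted in (b) above.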

(B. Phase connection method)
FIG. 13 illustrates the phase connection method and the calibration method in the z coordinate direction according to the present embodiment. As shown in FIG. 13A, the absolute phase is calculated using a z-direction calibration system comprising the white ceramic plate (diffusion plate) 10 and the precision stage 12, and the correspondence between the absolute phase and the z coordinate (formula (4)) is obtained. Here, phase connection exploits the fact that the phase of the captured image increases as the ceramic plate is brought closer to the projector and camera from the farthest point in the measurement region.

  Hereinafter, this phase connection procedure (steps s21 to s25) will be described.

(s21) The lattice fringes are projected onto the ceramic plate 10 moved to the farthest point Z0, and the phase φ0(i, j) is obtained (see the ○ marks in FIG. 13B).

(s22) Next, a step movement in the z direction is performed and the phase φ at that position is obtained. In the example of FIG. 13, the ceramic plate 10 is moved closer in the z direction, from the farthest point Z0 to Z1 (Z1 = Z0 − Δz), and the phase φ1(i, j) is calculated. The phase calculation is repeated at each z coordinate while bringing the ceramic plate 10 closer.

(s23) When φk(i, j) < φk−1(i, j), phase connection is performed by adding 2π to φk(i, j). The absolute phase obtained by phase connection is indicated by □ in FIG. 13B.

(s24) The above steps (s22) and (s23) are repeated (m−1 step movements) up to the nearest point Zm−1, performing phase connection as necessary.

(s25) Using the m phase data φk(i, j) (k = 0 to m−1) and z coordinate data Zk (k = 0 to m−1) obtained by the above processing, equation (4) is obtained by the method of least squares. That is, the correspondence between the measured phase and the actual z coordinate is obtained (φ-z calibration).

Here, the smaller Δz is, the more measurement points there are and the better the approximation, but the longer the calibration takes; Δz is therefore selected according to the required accuracy and the allowable processing time. In addition, Δz is chosen so that the phase change per step does not exceed 2π.
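For a single pixel, steps s21 to s25 can be sketched as below; the "measured" wrapped phases are simulated, and the step size and phase slope are invented for illustration.

```python
import numpy as np

m, dz = 31, 1.0
z = 315.0 - dz * np.arange(m)           # Z0 (farthest) down to Z_{m-1}
true_abs_phase = 0.4 * (315.0 - z)      # phase grows as the plate approaches
wrapped = np.mod(true_abs_phase, 2.0 * np.pi)   # what the camera measures

# s23: whenever the wrapped phase decreases, add another 2*pi (phase connection).
absolute = wrapped.copy()
offset = 0.0
for k in range(1, m):
    if wrapped[k] < wrapped[k - 1]:
        offset += 2.0 * np.pi
    absolute[k] = wrapped[k] + offset

# s25: least-squares polynomial z = f(absolute phase), here degree 3.
coeffs = np.polyfit(absolute, z, 3)
```

The condition that Δz never produce a phase change over 2π is what makes the simple "add 2π on each decrease" rule sufficient here.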

(C. Calculation method of z coordinate)
Next, a method for calculating the z coordinate with higher accuracy will be described. The phase of the lattice pattern actually measured lies in the range 0 to 2π, and the absolute phase is unknown. Therefore, by using lattice fringes having different periods, (a) coarse, (b) medium, and (c) fine, as shown in FIG. 14, the z coordinate can be calculated with higher accuracy. Over the measurement range in the z direction (Zmin to Zmax), the absolute phase of the coarse lattice fringes of FIG. 14A changes from 0 to 2π, that of the medium lattice fringes of FIG. 14B from 0 to 2πB, and that of the fine lattice fringes of FIG. 14C from 0 to 2πC (C > B).

  Hereinafter, the z coordinate calculation procedure (steps s31 to s35) will be described with further reference to FIG.

(s31) First, as shown in FIG. 15A, the phase-distance relationship calculation unit 350 of FIG. 1 obtains the coordinate Za(i, j) from the phase φa(i, j) of the captured image of the coarse lattice fringes of FIG. 14A. Za(i, j) is calculated using the approximate expression (4).

(s32) Similarly, as shown in FIG. 15B, Zb(i, j) is obtained from the phase φb(i, j) of the captured image of the medium lattice fringes of FIG. 14B. Here, as shown in FIG. 15B, there are B candidates for Zb(i, j); multiples of 2π are added to φb(i, j) in turn (phase connection: φb′(i, j) = 2πk + φb(i, j), k = 0 to B), and the corresponding Zb(i, j) are obtained.

(s33) Next, the phase-distance relationship calculation unit 350 of FIG. 1 (the calculation may be performed by another calculation unit) obtains the differences between the candidates Zb(i, j) and Za(i, j), and adopts as the approximate value the Zb(i, j) calculated from the k = b that minimizes the difference ε = |Zb(i, j) − Za(i, j)| (φb′(i, j) = 2πb + φb(i, j)).

(s34) Further, as shown in FIG. 15C, Zc(i, j) is obtained from the phase φc(i, j) of the captured image of the fine lattice fringes of FIG. 14C. Since there are C candidates for Zc(i, j), multiples of 2π are added to φc(i, j) in turn (phase connection: φc′(i, j) = 2πk + φc(i, j), k = 0 to C), and the corresponding Zc(i, j) are obtained.

(s35) The phase-distance relationship calculation unit 350 obtains the differences between Zc(i, j) and Zb(i, j), and takes as the measured value the Zc(i, j) obtained from the k = c that minimizes the difference ε = |Zc(i, j) − Zb(i, j)| (φc′(i, j) = 2πc + φc(i, j)).

Since the measured value obtained using the fine lattice fringes has the highest accuracy, Zc(i, j) obtained from the phase of the fine lattice fringes is used as the calculated value of the z coordinate. The phase-distance relationship calculation unit 350 then obtains the relationship between the determined z coordinate and the phase, and this relationship is stored in the storage unit 390 of FIG. 1.
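The coarse-to-medium disambiguation of steps s31 to s33 can be sketched for one pixel as follows. Linear phase-to-z maps stand in for the calibrated polynomials, and B = 10 is illustrative; the fine stage (s34, s35) repeats the same candidate-selection idea with C candidates.

```python
import numpy as np

z_min, z_max = 285.0, 315.0
B = 10                                   # medium fringes: 0..2*pi*B over the range

def z_from_coarse(phi):
    """Unambiguous but rough z from the coarse fringe (phase 0..2*pi)."""
    return z_min + (z_max - z_min) * phi / (2.0 * np.pi)

def z_from_medium(phi_wrapped, z_coarse):
    """s32/s33: build the B candidates, keep the one nearest the coarse z."""
    cands = [z_min + (z_max - z_min) * (2.0 * np.pi * k + phi_wrapped)
             / (2.0 * np.pi * B) for k in range(B)]
    return min(cands, key=lambda zc: abs(zc - z_coarse))

# Simulated phases for a point at z_true, then the two-stage estimate.
z_true = 302.4
phi_a = 2.0 * np.pi * (z_true - z_min) / (z_max - z_min)
phi_b = np.mod(B * phi_a, 2.0 * np.pi)
z_est = z_from_medium(phi_b, z_from_coarse(phi_a))
```

Each refinement stage keeps the unambiguous range of the previous stage while inheriting the finer fringe's resolution.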

  The calibration for the phase φ and the z coordinate (z direction) is completed by the method as described above.

(4. x and y direction calibration)
Next, the calibration between the z coordinate and the x coordinate, and between the z coordinate and the y coordinate, will be described. In brief, first, a grid lattice (reference grid) whose x and y coordinates, orthogonal to the z coordinate direction, are known is placed at a distance z, and the position of each pixel of the captured image relative to the reference grid positions on the image is obtained by linear interpolation. Next, based on this relative position on the image, the x and y coordinates of each pixel at the distance z are calculated. These processes are executed sequentially while changing the distance z, and the relationships of formulas (5) and (6) are obtained from the z coordinates and the calculated x and y coordinates using the least squares method.

Specifically, in this calibration, the ceramic plate 10 shown in FIG. 13A is replaced with a reference grid plate (grid lattice) 14 having a reference grid as shown in FIG. 16, constituting an x, y-direction calibration system. Then, according to the processing procedure described below (steps s41 to s45), the relational expressions between the z coordinate and the x, y coordinates (formulas (5) and (6) above) are obtained for each pixel q(iq, jq) of the captured image.

(s41) First, the grid lattice is moved to the farthest point Z0 and imaged, and the center position of each grid is obtained. The grid center positions are the center coordinates G(s, t), G(s+1, t), G(s, t+1), G(s+1, t+1), etc. of the points indicated by ● in FIG. 16.

(s42) Next, from the pixel q(iq, jq) of the captured image and its position relative to the four grids surrounding it (G(s, t), G(s+1, t), G(s, t+1), G(s+1, t+1)), the coordinates x0(iq, jq), y0(iq, jq) of q(iq, jq) are obtained by linear interpolation from the x and y coordinates of the grids (see FIG. 16B).

(s43) Further, the grid lattice 14 is moved stepwise in the z direction (Z1 = Z0 − Δz), and at each z position the coordinates x1(iq, jq), y1(iq, jq), ... are obtained.

(s44) By repeating the above steps (s42) and (s43) up to the nearest point Zm−1 (m−1 step movements), m pieces of x, y coordinate data xk(iq, jq), yk(iq, jq) are obtained for the z coordinate data Zk (k = 0 to m−1).

(s45) Using the m pieces of x, y coordinate data xk(iq, jq), yk(iq, jq) and the z coordinate data Zk (k = 0 to m−1), formulas (5) and (6) are obtained by the least squares method. The obtained relational expressions (5) and (6) are stored, as coefficients, in, for example, the storage unit 390 of FIG. 1.

  By the method as described above, the relationship between the x coordinate and the y coordinate with respect to the z coordinate can be obtained, and the calibration can be completed.
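The least-squares fit of steps s44 and s45 can be sketched for a single pixel as follows; the plate positions, ray slopes, and noise level are invented for illustration.

```python
import numpy as np

# Simulated per-pixel calibration data: x, y measured at each plate distance Zk.
# For a straight line of sight, x and y are nearly linear in z; a small noise
# term on x mimics measurement error in the grid interpolation.
Zk = 315.0 - 1.0 * np.arange(31)
xk = 0.05 * Zk - 14.0 + np.random.default_rng(0).normal(0.0, 1e-4, Zk.size)
yk = -0.03 * Zk + 8.0

# s45: least-squares fits, degree 2 to absorb residual distortion.
cx = np.polyfit(Zk, xk, 2)   # eq. (5): x = g(z)
cy = np.polyfit(Zk, yk, 2)   # eq. (6): y = h(z)
```

At measurement time, evaluating cx and cy at the z found from the phase gives the pixel's x and y directly, with lens distortion absorbed into the fitted coefficients as the text describes.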

[Concrete example]
Next, a specific example of a three-dimensional shape measurement method using the above-described calibration and measurement principle will be further described with reference to the drawings.

(1. System configuration)
The basic system is as shown in FIG. 1 described above. FIG. 17 shows the system configuration of a three-dimensional shape measuring apparatus 301 according to a specific example. This apparatus employs a data projector 314 as the projection unit 310 of FIG. 1 and a camera (CCD camera) 316 as the imaging unit 312. Further, a computer 322 provided with a video board and an image input board is employed as the measurement processing unit 320 of FIG. 1. The computer 322 comprises a CPU and the like; it includes a calculation unit 332 having the functions of the calculation unit 330 of FIG. 1, and a memory 343, corresponding to the storage unit 390 of FIG. 1, that stores the data necessary for processing (for example, calibration data).

  The data projector 314 and the camera 316 are arranged as shown in FIG. 17, and the measurement area is approximately z = 300 ± 15 mm and x, y = ± 15 mm. The resolution in the x and y directions is about 50 μm (= ± 15 mm / 600 pixels).

Here, the measurement area was set to the central 600 × 600 pixels of the camera. As described with reference to FIG. 6, in the present embodiment the three-dimensional coordinates are calculated on the camera line of sight (ii), so the measurement region takes the rhombic shape shown in FIG. 17.

  In this specific example, the lens position of the projector is shifted in order to set the lattice fringe projection distance to 300 mm.

As will be described later, the lattice fringes numbered one, ten, and fifty across the projection width of 55 mm (at z = 300 mm) (see FIG. 18). The lattice patterns are produced by drawing them on a screen (not shown) of the computer 322 and outputting them to the projector 314 via the video board.

(2.z calibration)
An example of the z-direction calibration of the lattice fringes is shown in FIG. The calibration interval Δz was set to 5 mm for the coarse lattice fringes and 1 mm for the medium and fine lattice fringes. Note that the calibration range was extended ±5 mm beyond the measurement region, which almost eliminated distortion of the approximating function near the ends of its range. In the calibration example, the center pixel of the captured image is approximated.

Next, as described with reference to FIG. 13, the ceramic plate was moved in the z direction on the precision stage, and the difference between the measured z coordinate of the ceramic plate and the stage position was evaluated. FIG. 19 shows the comparison results. The positioning accuracy of the stage used was 7 μm.

The difference between the measured z coordinate of the ceramic plate and the stage position was at most 25 μm in average value (at z = 312.5 mm) and 28 μm in standard deviation (at z = 315 mm). In addition, the maximum differences over all measured coordinates (600 × 600 pixels) at each measurement position are shown as max. and min. in FIG. 19. The maximum difference was 157 μm, showing that practical accuracy was realized.

(3. z-x, zy calibration)
Next, an example of an algorithm used for z-x, zy calibration will be described. Of course, the employable algorithm is not limited to the following specific examples.

(1) Calculation of grid position (s51) Using the commercially available image processing library MIL, the grid center coordinates G[s][t], s = 0 to 50, t = 0 to 50, were obtained from the grid image. As the reference grid 14, for example, a grid distortion chart (57983-I manufactured by Edmund Optics) is used, on which 51 × 51 (vertical × horizontal) = 2,601 circular dots (grids) of φ0.5 mm are precisely drawn at an interval of 1 mm.

(2) Calculation of x, y coordinates (s61) For the coordinate calculation pixel (target pixel) q of the captured image, the coordinates G [s ′] [t ′] of the grid closest to this pixel were obtained.

(s62) As shown in FIG. 20A, the configuration is divided into patterns (I) to (IV) according to the position of the coordinate calculation pixel q relative to the four grids surrounding it.

  (S63) After dividing into patterns, in the pattern (region) to which the coordinate calculation pixel q belongs, the x coordinate of this pixel q is calculated (see FIG. 20B). The calculation was performed by linear interpolation from the positions of the four grids surrounding the pixel q as described above.

  The y coordinate of the coordinate calculation pixel q was also calculated by the same method.
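Steps s62 and s63 amount to bilinear interpolation from the four surrounding grid centers; a minimal sketch with illustrative numbers (not taken from the actual chart) follows.

```python
def bilinear_xy(q, g00, g10, g01, g11):
    """Interpolate a pixel's world x, y from the four surrounding grid centers.

    q and the g** are dicts with image coordinates (i, j); the g** also carry
    known world coordinates (x, y). g00=G(s,t), g10=G(s+1,t), g01=G(s,t+1),
    g11=G(s+1,t+1).
    """
    u = (q["i"] - g00["i"]) / (g10["i"] - g00["i"])   # horizontal fraction
    v = (q["j"] - g00["j"]) / (g01["j"] - g00["j"])   # vertical fraction
    x = (1-u)*(1-v)*g00["x"] + u*(1-v)*g10["x"] + (1-u)*v*g01["x"] + u*v*g11["x"]
    y = (1-u)*(1-v)*g00["y"] + u*(1-v)*g10["y"] + (1-u)*v*g01["y"] + u*v*g11["y"]
    return x, y

# Illustrative values: 1 mm grid pitch imaged at about 20 px per grid cell.
g00 = {"i": 100.0, "j": 100.0, "x": 0.0, "y": 0.0}
g10 = {"i": 120.0, "j": 100.0, "x": 1.0, "y": 0.0}
g01 = {"i": 100.0, "j": 120.0, "x": 0.0, "y": 1.0}
g11 = {"i": 120.0, "j": 120.0, "x": 1.0, "y": 1.0}
q = {"i": 110.0, "j": 105.0}
x, y = bilinear_xy(q, g00, g10, g01, g11)
```

Because the four corners need not form a perfect square, the same interpolation absorbs local grid distortion, which is one of the advantages claimed for this calibration.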

(4. Actual measurement results)
Next, the result of actual measurement of coins having fine irregularities after the above calibration will be described with reference to FIG.

FIGS. 21A to 21D show the captured images of the fine lattice fringes; the amplitude image obtained from them (a luminance image with the illumination light removed) and the phase image are shown in FIGS. 21E and 21F; and the shape measurement result is shown in FIG. 21G.

From the shape measurement result of FIG. 21G, it can be confirmed that the coin shape can actually be measured. Although it is difficult to see in the rendering of FIG. 21G, the measured coin height actually differs between the upper and lower sides; the cause is a slight inclination of the board to which the coin is attached. Put the other way around, this shows that the inclination of the surface of the object to be measured can be measured accurately.

The black parts of the measurement image shown in FIG. 21F could not be measured, for the following reasons:
- the luminance value is saturated in one or more of the images (a) to (d), so calculation is impossible;
- the brightness of the amplitude image is small, so the error is large.

  For example, a black-brown board portion on which a coin is pasted has a low luminance and a large phase noise (see FIGS. 21E and 21F).

Since metal such as a coin reflects specularly, as shown in FIG. 21E, it appears too bright or too dark depending on the surface angle; the luminance change is very large, the imaging dynamic range of the camera becomes insufficient, and regions that cannot be measured occur.

FIG. 22 shows an example in which the same coin is measured at different z positions. As described in the section (1. System configuration) above, since the measurement area is a rhombus, the position of the coin shifts from left to right, but the shape can still be measured in detail. In both FIG. 22A, where z = 290 mm, and FIG. 22B, where z = 310 mm, the coin measurement results are almost equal in surface unevenness information, apart from the left-right shift in position. In both cases, the slight tilt of the coin surface (the slight tilt of the board), specifically the fact that the upper side of the board leans away from the lower side in the z direction, is measured almost equally. Thus, it can be confirmed that the configuration disclosed in this specific example can accurately measure the three-dimensional shape of the measurement object.

[Summary of specific examples]
(1) A three-dimensional shape measuring instrument based on the phase shift method could be constructed using a commercially available projector and TV camera.
(2) High accuracy could be achieved by incorporating the following two methods:
(a) correction of the nonlinear error between the projection luminance and imaging luminance of the projector;
(b) removal of lens distortion by replacing the relationship between the phase on the line of sight of each pixel and the three-dimensional coordinates with a function.
(3) As a result of placing the ceramic plate on the precision stage and measuring its position, the difference (z) from the stage position at a distance of 300 ± 15 mm was found to be as small as 25 μm in average value and at most 28 μm in standard deviation.

  10 reference flat plate, 14 reference grid flat plate, 12 stage, 16 object to be measured, 300 three-dimensional shape measuring apparatus, 310 projection unit, 312 imaging unit, 320 measurement processing unit, 330 calculation unit, 340 phase calculation unit, 350 phase-distance Relationship calculation unit, 360 Grid-pixel coordinate relationship calculation unit, 370 Distance-two-dimensional coordinate relationship calculation unit, 380 Three-dimensional coordinate calculation unit, 390 Storage unit.

Claims (9)

1. A three-dimensional shape measuring method in which a plurality of lattice fringes having different phases are projected onto an object to be measured and, by a phase shift method, a three-dimensional shape is obtained from the captured lattice-fringe images in terms of a distance-direction coordinate to the object to be measured and two-dimensional coordinates orthogonal to the distance-direction coordinate, the method comprising:
    A reference plate is arranged at a position where the distance from the projection unit and the imaging unit is known, and a plurality of lattice fringes with different phases are projected onto the reference plate, and each phase of a plurality of pixels is calculated from the obtained captured image of the lattice fringes. And calculating a phase-distance relationship from the calculated phase and the known distance,
    A reference grid plate having a reference grid with a known two-dimensional coordinate on a plane orthogonal to the distance direction is arranged at a position where the distance from the projection unit and the imaging unit is known, and imaging is performed based on the reference grid. Calculating each two-dimensional coordinate for a plurality of pixels of the image, calculating a distance-two-dimensional coordinate relationship from each two-dimensional coordinate of the calculated plurality of pixels and the known distance,
    During actual measurement,
    The object to be measured is arranged at a predetermined distance from the projection unit and the imaging unit,
    Projecting a plurality of lattice fringes with different phases onto the object to be measured, calculating the phase of each pixel from the obtained lattice fringe image, and calculating the distance for the corresponding pixel based on the phase-distance relationship,
A three-dimensional shape measuring method characterized in that, based on the distance-two-dimensional coordinate relationship, the two-dimensional coordinates of the pixel are calculated from the calculated distance of the corresponding pixel, to obtain the three-dimensional shape of the object to be measured.
  2. The three-dimensional shape measuring method according to claim 1,
When calculating the phase-distance relationship, lattice fringes having a period over which the phase change is 2π or less and lattice fringes having a period over which the phase change is larger than 2π are projected onto the reference flat plate, and among the plurality of distance coordinate candidates calculated from the phase of the captured image obtained when projecting the lattice fringes whose phase change exceeds 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the lattice fringes whose phase change is 2π or less is taken as the calculation result of the distance.
  3. In the three-dimensional shape measuring method according to claim 1 or 2,
    In calculating the phase-distance relationship, the reference plate is set to a plurality of different distances, and the phase of each pixel is calculated at each distance,
    The interval between the plurality of distances to be set is set so that the phase difference of the correspondingly calculated phase satisfies less than 2π.
  4. In the three-dimensional shape measuring method according to claim 3,
    In calculating the distance-two-dimensional coordinate relationship, the reference grid plate is set to a plurality of different distances, and the two-dimensional coordinates of each pixel are calculated corresponding to each distance,
    The interval between the plurality of distances to be set is set so that the phase difference of the phase corresponding to each distance satisfies less than 2π.
5. A three-dimensional shape measuring apparatus in which a plurality of lattice fringes having different phases are projected onto an object to be measured and, by a phase shift method, a three-dimensional shape is obtained from the captured lattice-fringe images in terms of a distance-direction coordinate to the object to be measured and two-dimensional coordinates orthogonal to the distance-direction coordinate, the apparatus comprising:
    A stage for placing the object to be measured at a predetermined position;
    A projecting unit that projects a plurality of lattice fringes with different phases onto the object arranged on the stage;
    An imaging unit for imaging an object placed on the stage;
    A measurement processing unit for obtaining a three-dimensional shape of the object to be measured based on a captured image,
    The measurement processing unit includes a phase calculation unit, a phase-distance relationship calculation unit, a pixel two-dimensional coordinate calculation unit, a distance-two-dimensional coordinate relationship calculation unit, and a three-dimensional coordinate calculation unit,
    The phase calculation unit calculates the phase of each of a plurality of pixels from the captured images of the lattice fringes obtained when a reference flat plate is arranged at a position whose distance from the projection unit and the imaging unit is known and a plurality of lattice fringes having different phases are projected onto the reference flat plate,
    The phase-distance relationship calculating unit calculates a phase-distance relationship from the calculated phase and the known distance,
    The pixel two-dimensional coordinate calculation unit calculates the two-dimensional coordinates of each of a plurality of pixels of the captured image, based on the two-dimensional coordinates of the reference grid, when a reference grid plate having a reference grid whose two-dimensional coordinates on a plane orthogonal to the distance direction are known is arranged at a position whose distance from the projection unit and the imaging unit is known,
    The distance-two-dimensional coordinate relationship calculating unit calculates a distance-two-dimensional coordinate relationship from each of the calculated two-dimensional coordinates of the plurality of pixels and the known distance,
    During actual measurement,
    The phase calculation unit calculates the phase of each pixel from the captured images obtained when the object to be measured is arranged at a predetermined distance from the projection unit and the imaging unit and the plurality of lattice fringes having different phases are projected onto the object to be measured, and
    The three-dimensional coordinate calculation unit calculates, for each pixel, a distance from its phase based on the phase-distance relationship, and calculates the two-dimensional coordinates of the pixel from the calculated distance based on the distance-two-dimensional coordinate relationship, thereby obtaining the three-dimensional shape of the object to be measured; a three-dimensional shape measuring apparatus characterized by the above.
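The per-pixel processing attributed to the phase calculation unit and the three-dimensional coordinate calculation unit can be sketched roughly as follows, using the standard N-step phase shift formula and linear interpolation through the calibrated samples. The patent does not specify these exact formulas; all function names and parameters are hypothetical:

```python
import numpy as np

def wrapped_phase(images):
    """Per-pixel wrapped phase from N equally spaced phase-shifted
    fringe images I_n = A + B*cos(phi + 2*pi*n/N)."""
    n = len(images)
    deltas = 2 * np.pi * np.arange(n) / n
    num = sum(img * np.sin(d) for img, d in zip(images, deltas))
    den = sum(img * np.cos(d) for img, d in zip(images, deltas))
    return np.arctan2(-num, den)

def distance_from_phase(phase, cal_phases, cal_distances):
    """Convert a measured phase to a distance coordinate by linear
    interpolation through the calibrated phase-distance samples
    (cal_phases must be increasing for np.interp)."""
    return np.interp(phase, cal_phases, cal_distances)
```

The same interpolation idea applies to the distance-two-dimensional coordinate relationship, with distance as the interpolation variable.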
  6. The three-dimensional shape measuring apparatus according to claim 5,
    The projection unit can project a lattice fringe having a period of 2π or less in phase change in the projection region and a lattice fringe having a period of phase change greater than 2π,
    When calculating the phase-distance relationship, the projection unit projects, onto the reference flat plate, lattice fringes having a period with a phase change of 2π or less in the projection region and lattice fringes having a period with a phase change greater than 2π. ,
    A three-dimensional shape measuring apparatus characterized in that the phase-distance relationship calculation unit sets, among the plurality of distance coordinate candidates calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change is greater than 2π, the candidate closest to the distance coordinate calculated from the phase of the captured image obtained when projecting the lattice fringe whose phase change is 2π or less, as the distance corresponding to the phase of the captured image.
  7. In the three-dimensional shape measuring apparatus according to claim 5 or 6,
    When calculating the phase-distance relationship, the reference flat plate is placed on the stage, and when calculating the distance-two-dimensional coordinate relationship, the reference grid plate is placed on the stage,
    The reference flat plate and the reference grid plate can each be set by the stage at a plurality of different distance positions with respect to the projection unit and the imaging unit, and
    The interval between the plurality of distances set by the stage is set so that the difference between the correspondingly calculated phases is less than 2π.
  8. The three-dimensional shape measuring apparatus according to any one of claims 5 to 7, wherein the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.
  9. The three-dimensional shape measurement method according to claim 1, wherein the plurality of lattice fringes having different phases projected onto the object to be measured are sinusoidal lattice fringes.
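Claims 8 and 9 specify sinusoidal lattice fringes; such projector patterns could be generated, for instance, as below (an illustrative sketch, not from the patent; the function name, normalized intensity range, and vertical-stripe orientation are all assumptions):

```python
import numpy as np

def sinusoidal_fringes(width, height, period_px, n_steps):
    """Generate n_steps phase-shifted vertical sinusoidal fringe
    patterns with intensity in [0, 1], one per projection step."""
    x = np.arange(width)
    patterns = []
    for n in range(n_steps):
        shift = 2 * np.pi * n / n_steps  # equal phase steps over 2*pi
        row = 0.5 + 0.5 * np.cos(2 * np.pi * x / period_px + shift)
        patterns.append(np.tile(row, (height, 1)))
    return patterns
```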
JP2009048662A 2009-03-02 2009-03-02 3D shape measuring method and 3D shape measuring apparatus Expired - Fee Related JP5375201B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009048662A JP5375201B2 (en) 2009-03-02 2009-03-02 3D shape measuring method and 3D shape measuring apparatus

Publications (2)

Publication Number Publication Date
JP2010203867A JP2010203867A (en) 2010-09-16
JP5375201B2 true JP5375201B2 (en) 2013-12-25

Family

ID=42965499

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009048662A Expired - Fee Related JP5375201B2 (en) 2009-03-02 2009-03-02 3D shape measuring method and 3D shape measuring apparatus

Country Status (1)

Country Link
JP (1) JP5375201B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3381350A1 (en) * 2017-03-31 2018-10-03 Nidek Co., Ltd. Subjective optometry apparatus and subjective optometry program

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012202771A (en) * 2011-03-24 2012-10-22 Fujitsu Ltd Three-dimensional surface shape calculation method of measuring target and three-dimensional surface shape measuring apparatus
CN102628676B (en) * 2012-01-19 2014-05-07 东南大学 Adaptive window Fourier phase extraction method in optical three-dimensional measurement
JP6041513B2 (en) * 2012-04-03 2016-12-07 キヤノン株式会社 Image processing apparatus, image processing method, and program
JP6299150B2 (en) * 2013-10-31 2018-03-28 セイコーエプソン株式会社 Control device, robot, control system, control method, and control program
JP2015099050A (en) * 2013-11-18 2015-05-28 セイコーエプソン株式会社 Calibration method and shape measuring device
JP6602867B2 (en) * 2014-12-22 2019-11-06 サイバーオプティクス コーポレーション How to update the calibration of a 3D measurement system
CN104729429B (en) * 2015-03-05 2017-06-30 深圳大学 A kind of three dimensional shape measurement system scaling method of telecentric imaging
JP2017126267A (en) * 2016-01-15 2017-07-20 株式会社Pfu Image processing system, image processing method and computer program
JP2019105458A (en) * 2017-12-08 2019-06-27 株式会社日立ハイテクファインシステムズ Defect inspection device and defect inspection method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2913021B2 (en) * 1996-09-24 1999-06-28 和歌山大学長 Shape measuring method and device
JPH11166818A (en) * 1997-12-04 1999-06-22 Suzuki Motor Corp Calibrating method and device for three-dimensional shape measuring device
JP3417377B2 (en) * 1999-04-30 2003-06-16 日本電気株式会社 Three-dimensional shape measuring method and apparatus, and recording medium


Similar Documents

Publication Publication Date Title
US10563978B2 (en) Apparatus and method for measuring a three dimensional shape
US10677591B2 (en) System and method for measuring three-dimensional surface features
TWI480832B (en) Reference image techniques for three-dimensional sensing
US9322643B2 (en) Apparatus and method for 3D surface measurement
JP2015057612A (en) Device and method for performing non-contact measurement
EP1777487B1 (en) Three-dimensional shape measuring apparatus, program and three-dimensional shape measuring method
KR101257188B1 (en) Three-dimensional shape measuring device, three-dimensional shape measuring method, and computer readable recording medium for three-dimessional shape measuring program
JP4112858B2 (en) Method and system for measuring unevenness of an object
EP1596158B1 (en) Three-dimensional shape input device
US20150015701A1 (en) Triangulation scanner having motorized elements
JP5395507B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, and computer program
TWI396823B (en) Three dimensional measuring device
US10812694B2 (en) Real-time inspection guidance of triangulation scanner
KR100615576B1 (en) Three-dimensional image measuring apparatus
KR101461068B1 (en) Three-dimensional measurement apparatus, three-dimensional measurement method, and storage medium
TWI460394B (en) Three-dimensional image measuring apparatus
Wang et al. Three-dimensional shape measurement with a fast and accurate approach
EP2475954B1 (en) Non-contact object inspection
KR101121691B1 (en) Three-dimensional measurement device
US8199335B2 (en) Three-dimensional shape measuring apparatus, three-dimensional shape measuring method, three-dimensional shape measuring program, and recording medium
US7548324B2 (en) Three-dimensional shape measurement apparatus and method for eliminating 2π ambiguity of moire principle and omitting phase shifting means
US6611344B1 (en) Apparatus and method to measure three dimensional data
CN100338434C (en) Thrre-dimensional image measuring apparatus
JP5390900B2 (en) Method and apparatus for determining 3D coordinates of an object
Xu et al. Phase error compensation for three-dimensional shape measurement with projector defocusing

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20120111

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20121126

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20130108

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20130301

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20130528

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20130612

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20130827

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20130909

LAPS Cancellation because of no payment of annual fees