WO2016171263A1 - Shape measuring device and shape measuring method - Google Patents
Shape measuring device and shape measuring method
- Publication number
- WO2016171263A1 (PCT/JP2016/062801)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- disturbance
- rigid body
- cutting line
- shape
- measured
- Prior art date
Classifications
- G01B11/02: Measuring arrangements characterised by the use of optical techniques for measuring length, width or thickness
- G01B11/06: Optical measurement of thickness, e.g. of sheet material
- G01B11/0608: Height gauges
- G01B11/25: Measuring contours or curvatures by projecting a pattern, e.g. one or more lines or moiré fringes, on the object
- G01B11/2513: Pattern projection with several lines projected in more than one direction, e.g. grids
- G06T7/521: Depth or shape recovery from laser ranging or from the projection of structured light
- G06T7/60: Analysis of geometric attributes
- H04N23/90: Arrangement of cameras or camera modules, e.g. multiple cameras
- G06T2207/10016: Video; image sequence
- G06T2207/30108: Industrial image inspection
- G06T2207/30136: Metal
Definitions
- the present invention relates to a shape measuring device and a shape measuring method.
- in the prior art, in addition to the light cutting line formed in the width direction of the rigid body to be measured for measuring the original shape,
- it has been proposed to form further light cutting lines in directions oblique to that light cutting line (directions not parallel to each other).
- measurement of the same point of the rigid body to be measured, which should originally have the same surface height, is performed twice for each of a plurality of points at different longitudinal and width direction positions. The magnitude of the disturbance (vertical movement or rotation) that best matches the surface heights of the plurality of points is then derived by an optimization calculation, and the influence of the disturbance is removed from the measurement result.
- the optimization calculation may not converge correctly if the measurement error increases in the surface height measurement at each measurement point.
- the technique disclosed in Patent Document 1 cannot cope with the case in which the three movements that can exist as disturbances occur simultaneously: vertical movement (parallel translation in the height direction), rotation about the longitudinal axis, and rotation about the width direction axis.
- in such a case, an error is superimposed on the measurement result.
- the present invention has been made in view of the above problems, and its object is to provide a shape measuring device and a shape measuring method capable of measuring the surface height of a rigid body to be measured more accurately even when any of the three disturbances occurs during conveyance: parallel translation in the height direction, rotation about the longitudinal axis, or rotation about the width direction axis.
- in order to solve the above problems, according to an aspect of the present invention, there is provided a shape measuring device that measures the shape of a rigid body to be measured by means of a plurality of light cutting lines formed by a plurality of linear laser beams irradiated onto the surface of the rigid body from a plurality of linear laser light sources that move relative to the rigid body along its longitudinal direction.
- the device includes an imaging device that irradiates the surface of the rigid body with the three linear laser beams and images the reflected light from the surface at predetermined intervals in the longitudinal direction, and an arithmetic processing unit that performs image processing on the captured images of the light cutting lines and calculates the surface shape of the rigid body.
- the imaging device has: a first linear laser light source that emits a shape measuring light cutting line, extending in the width direction of the rigid body, used for calculating the surface shape of the rigid body; a second linear laser light source that emits a first correction light cutting line, parallel to the longitudinal direction of the rigid body, that intersects the shape measuring light cutting line and is used to correct the influence of disturbance acting on the rigid body; and a third linear laser light source that emits a second correction light cutting line, parallel to the longitudinal direction, that intersects the shape measuring light cutting line, exists at a width direction position different from the first correction light cutting line, and is likewise used to correct the influence of disturbance.
- the imaging device further has a first camera that images the shape measuring light cutting line at each time corresponding to the predetermined longitudinal interval and generates captured images of the shape measuring light cutting line, and a second camera that images the correction light cutting lines at each such time and generates captured images of the correction light cutting lines.
- the arithmetic processing unit has a shape data calculation unit that, from the captured images of the shape measuring light cutting line generated by the first camera at each time, calculates shape data that represents the three-dimensional shape of the surface of the rigid body and on which a measurement error due to the disturbance is superimposed.
- the arithmetic processing unit also has a disturbance estimation unit that performs a height change value acquisition process, in which the height change value caused by the disturbance at a given position is acquired from height measurement values of the surface obtained at two different times for the same position of the rigid body, for a plurality of points at different longitudinal positions on the first correction light cutting line using its captured images and for a plurality of points at different longitudinal positions on the second correction light cutting line using its captured images, and thereby estimates the height fluctuation amount caused by the disturbance that is superimposed on the shape data.
- the shape measuring device further has a correction unit that corrects the measurement error caused by the disturbance by subtracting the height fluctuation amount from the shape data.
- preferably, the disturbance estimation unit approximates the height change values caused by the disturbance at a plurality of points on the first correction light cutting line by a straight line and estimates the height change value caused by the disturbance at the intersection of that straight line with the shape measuring light cutting line; it likewise approximates the height change values at a plurality of points on the second correction light cutting line by a straight line and estimates the height change value at the intersection of that straight line with the shape measuring light cutting line; the height fluctuation amount is then estimated by the straight line connecting the height change values caused by the disturbance at the two intersections.
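The linear approximation just described can be sketched in a few lines of code. This is an illustrative reconstruction rather than the patent's implementation: the function names, the coordinate layout, and the assumption that per-point height change values are already available are all hypothetical.

```python
def fit_line(xs, ys):
    """Least-squares straight line y = a*x + b through the points (xs, ys)."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    return a, my - a * mx

def estimate_fluctuation(l_pts_b, dz_b, l_pts_c, dz_c, l_cross, w_b, w_c, widths):
    """Estimate the height fluctuation across the width direction.

    l_pts_b/dz_b: longitudinal positions and height change values on the
    first correction line Lb (l_pts_c/dz_c likewise for Lc); l_cross is
    the longitudinal position of the shape-measuring line La; w_b, w_c
    are the width positions of Lb and Lc; widths are the width positions
    at which the fluctuation is to be evaluated."""
    # Fit a straight line along Lb and read it off at intersection A.
    a_b, b_b = fit_line(l_pts_b, dz_b)
    dz_at_A = a_b * l_cross + b_b
    # Fit a straight line along Lc and read it off at intersection B.
    a_c, b_c = fit_line(l_pts_c, dz_c)
    dz_at_B = a_c * l_cross + b_c
    # The fluctuation along La is the straight line through A and B.
    slope = (dz_at_B - dz_at_A) / (w_c - w_b)
    return [dz_at_A + slope * (w - w_b) for w in widths]
```

With noise-free synthetic data the two fits reproduce the underlying lines exactly; with noisy data the least-squares fit absorbs the per-point measurement error, which is the stated purpose of the approximation.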
- preferably, the first camera and the second camera capture images at each time corresponding to the predetermined longitudinal interval, each generating N captured images (N being an integer of 2 or more), and the disturbance estimation unit calculates the height fluctuation amount on the assumption that no disturbance occurs in the first captured image.
- preferably, the imaging timing of the first camera and the second camera is controlled so that captured images of the second camera taken at adjacent imaging times share a common irradiation region, that is, a part of the measured rigid body irradiated by the correction light cutting lines in both images, and the disturbance estimation unit calculates the height change values caused by the disturbance for the plurality of points corresponding to the common irradiation region on each of the first correction light cutting line and the second correction light cutting line.
- preferably, the optical axes of the first linear laser light source, the second linear laser light source, and the third linear laser light source are perpendicular to the plane defined by the longitudinal direction and the width direction of the rigid body to be measured.
- preferably, the angle formed by the optical axis of the first camera and the optical axis of the first linear laser light source, the angle formed by the line of sight of the second camera and the optical axis of the second linear laser light source, and the angle formed by the line of sight of the second camera and the optical axis of the third linear laser light source are each, independently of one another, 30 degrees or more and 60 degrees or less.
- in order to solve the above problems, according to another aspect of the present invention, there is provided a shape measuring method in which the shape of a rigid body to be measured is measured by a plurality of light cutting lines formed by a plurality of linear laser beams irradiated onto the surface of the rigid body from a plurality of linear laser light sources that move relative to the rigid body along its longitudinal direction.
- the method includes an irradiation step of emitting, from a first linear laser light source, a shape measuring light cutting line extending in the width direction of the rigid body and used for calculating its surface shape; from a second linear laser light source, a first correction light cutting line that is parallel to the longitudinal direction of the rigid body, intersects the shape measuring light cutting line, and is used to correct the influence of disturbance acting on the rigid body; and, from a third linear laser light source, a second correction light cutting line that is parallel to the longitudinal direction, intersects the shape measuring light cutting line, exists at a width direction position different from the first correction light cutting line, and is likewise used to correct the influence of disturbance.
- the method further includes an imaging step of imaging the reflected light from the surface of the rigid body at predetermined longitudinal intervals, and a shape data calculating step of calculating, based on the captured images of the shape measuring light cutting line generated by the first camera at each time, shape data that represents the three-dimensional shape of the surface of the rigid body and on which a measurement error caused by the disturbance is superimposed.
- the method further includes a disturbance estimation step in which a height change value acquisition process, in which the height change value caused by the disturbance at a given position is acquired from height measurement values of the surface obtained at two different times for the same position of the rigid body, is performed for a plurality of points at different longitudinal positions on the first correction light cutting line using its captured images and for a plurality of points at different longitudinal positions on the second correction light cutting line using its captured images; the height fluctuation amount caused by the disturbance superimposed on the shape data is then estimated using the plurality of height change values obtained from the captured images of the first correction light cutting line and the plurality of height change values obtained from the captured images of the second correction light cutting line.
- the method further includes a correction step of correcting the measurement error caused by the disturbance by subtracting the height fluctuation amount from the shape data.
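The correction step amounts to a pointwise subtraction of the estimated fluctuation from the shape data. A minimal sketch under an assumed data layout (one row of surface heights per imaging time); none of the names come from the patent:

```python
def correct_shape_data(shape_data, fluctuation):
    """Subtract the estimated disturbance-induced height fluctuation
    from the measured shape data. Both arguments are lists of rows
    (one row per imaging time), each row holding surface heights at
    the same width positions."""
    return [[h - f for h, f in zip(row, fl)]
            for row, fl in zip(shape_data, fluctuation)]
```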
- preferably, in the disturbance estimation step, the height change values caused by the disturbance at a plurality of points on the first correction light cutting line are approximated by a straight line and the height change value caused by the disturbance at the intersection of that straight line with the shape measuring light cutting line is estimated; the height change values at a plurality of points on the second correction light cutting line are likewise approximated by a straight line and the height change value at the intersection of that straight line with the shape measuring light cutting line is estimated; the height fluctuation amount is then estimated by the straight line connecting the height change values caused by the disturbance at the two intersections.
- preferably, the first camera and the second camera capture images at each time corresponding to the predetermined longitudinal interval, each generating N captured images (N being an integer of 2 or more), and the height fluctuation amount is calculated on the assumption that no disturbance occurs in the first captured image.
- preferably, the imaging timing of the first camera and the second camera is controlled so that captured images of the second camera taken at adjacent imaging times share a common irradiation region, that is, a part of the measured rigid body irradiated by the correction light cutting lines in both images, and in the disturbance estimation step the height change values caused by the disturbance are calculated for the plurality of points corresponding to the common irradiation region on each of the first correction light cutting line and the second correction light cutting line.
- preferably, the height change values in the i-th (i = 2, ..., N) captured image of the second camera are calculated with reference to the first captured image of the second camera.
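With the first captured image taken as disturbance-free, the height change of each later image relative to the first can be obtained by accumulating the changes measured between adjacent images over their common irradiation region. A sketch under that assumption, with illustrative data:

```python
def accumulate_changes(adjacent_changes):
    """adjacent_changes[k] is the height change measured between the
    (k+1)-th and (k+2)-th captured images. Returns the change of each
    image relative to the first image; the first entry is 0 because no
    disturbance is assumed to occur in the first image."""
    totals = [0.0]
    for d in adjacent_changes:
        totals.append(totals[-1] + d)
    return totals
```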
- preferably, the optical axes of the first linear laser light source, the second linear laser light source, and the third linear laser light source are perpendicular to the plane defined by the longitudinal direction and the width direction of the rigid body to be measured.
- preferably, the angle formed by the optical axis of the first camera and the optical axis of the first linear laser light source, the angle formed by the line of sight of the second camera and the optical axis of the second linear laser light source, and the angle formed by the line of sight of the second camera and the optical axis of the third linear laser light source are each, independently of one another, 30 degrees or more and 60 degrees or less.
- as described above, according to the present invention, the surface height of the rigid body to be measured can be measured more accurately even when any of the three disturbances occurs: parallel translation in the height direction, rotation about the longitudinal axis, or rotation about the width direction axis.
- Explanatory diagrams for explaining Experimental Example 1, and graphs showing the results of Experimental Example 1.
- Explanatory diagrams for explaining Experimental Example 2, and graphs showing the results of Experimental Example 2.
- Explanatory diagrams for explaining Experimental Example 3, and graphs showing the results of Experimental Example 3.
- FIG. 1 is an explanatory diagram schematically showing the configuration of the shape measuring apparatus according to the present embodiment.
- the shape measuring apparatus 10 measures the shape of a rigid body to be measured by the so-called light cutting method, using a plurality of light cutting lines formed by a plurality of linear laser beams irradiated onto the surface of the rigid body from a plurality of linear laser light sources that move relative to the rigid body along its longitudinal direction.
- in the spatial coordinate system, the width direction of the measured rigid body S is taken as the C-axis direction, the longitudinal direction of the measured rigid body S (that is, the transport direction) as the L-axis direction, and the height direction of the measured rigid body S as the Z-axis direction.
- the measured rigid body S of interest in the present embodiment is an object whose shape and volume can be considered not to change during the shape measurement process described below. Therefore, for example, slabs and thick plates that are semi-finished products in the steel industry can be handled as the measured rigid body S in the present embodiment. Moreover, not only slabs and thick plates in the steel industry, but also slabs and thick plates of various metals other than iron, such as titanium, copper, and aluminum, of ceramics, and of composite materials can be handled as the measured rigid body S in this embodiment.
- the shape measuring apparatus 10 includes an imaging apparatus 100 that irradiates the surface of the rigid body S to be measured with a plurality of linear laser beams and captures the reflected light of the linear laser beams on the surface of the rigid body S, and an arithmetic processing unit 200 that performs predetermined image processing on the images captured by the imaging apparatus 100 and calculates the three-dimensional shape of the rigid body S (that is, the surface height at each position in the L-axis/C-axis plane).
- the imaging apparatus 100 irradiates the surface of the rigid body S to be measured with three linear laser beams, sequentially captures images of the surface of the rigid body S along the longitudinal direction at each time corresponding to a predetermined longitudinal interval, and outputs the captured images (light section images) obtained as a result to the arithmetic processing apparatus 200 described later.
- the irradiation timing of the linear laser beam to the measured rigid body S, the imaging timing of the surface of the measured rigid body S, and the like are controlled by the arithmetic processing apparatus 200 described later.
- such an imaging apparatus 100 performs imaging in synchronization with, for example, a PLG (Pulse Logic Generator) signal provided by a drive mechanism or the like that controls the conveyance of the measured rigid body S, so that imaging follows the change in the longitudinal position of the measured rigid body S with respect to the imaging apparatus 100.
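The synchronization with a PLG signal can be pictured as follows: an image is triggered each time the accumulated pulse count shows that the rigid body has advanced by one more predetermined longitudinal interval. This is a hypothetical sketch; the pulse-per-interval value and the sampling scheme are illustrative, not taken from the patent.

```python
def capture_indices(pulse_counts, pulses_per_interval):
    """pulse_counts: cumulative PLG pulse counts sampled over time.
    Returns the sample indices at which a new image should be captured,
    i.e. whenever another full longitudinal interval has been covered."""
    triggers = []
    next_threshold = pulses_per_interval
    for i, count in enumerate(pulse_counts):
        # A sample may cover more than one interval, hence the loop.
        while count >= next_threshold:
            triggers.append(i)
            next_threshold += pulses_per_interval
    return triggers
```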
- the arithmetic processing apparatus 200 is a device that calculates the three-dimensional shape of the measured rigid body S by performing image processing, as described below, on the light section images generated by the imaging apparatus 100 at each time.
- FIGS. 2 to 7 are explanatory diagrams schematically showing the configuration of the imaging apparatus according to the present embodiment.
- the imaging apparatus 100 includes three linear laser light sources 101a, 101b, and 101c (hereinafter collectively referred to as the "linear laser light source 101"), each of which emits linear laser light, and two area cameras 111 and 113.
- the linear laser light source 101a is an example of a first linear laser light source, the linear laser light source 101b is an example of a second linear laser light source, and the linear laser light source 101c is an example of a third linear laser light source.
- the area camera 111 is an example of a first camera
- the area camera 113 is an example of a second camera.
- here, the case where the imaging apparatus 100 includes two area cameras will be described as an example; however, the number of area cameras included in the imaging apparatus 100 according to the present embodiment is not limited to this example. The case where the imaging apparatus 100 includes three area cameras will be described later.
- the linear laser light source 101 is an apparatus that irradiates the surface of the measured rigid body (hereinafter also simply referred to as the "rigid body") S, which is the object to be measured, with linear laser light.
- the linear laser light source 101 according to the present embodiment can use any light source as long as it can irradiate the surface of the rigid body S with linear laser light.
- as the laser light source, for example, a CW (Continuous Wave) laser light source that continuously performs laser oscillation can be used.
- the wavelength of the laser light oscillated by the laser light source is preferably a wavelength belonging to the visible light band of about 400 nm to 800 nm, for example.
- Such a laser light source oscillates laser light based on an oscillation timing control signal sent from an arithmetic processing unit 200 described later.
- even a pulse laser light source can be handled in the same way as a CW laser light source by synchronizing its oscillation timing with the imaging timing of the area cameras 111 and 113.
- the rod lens is a lens that spreads the laser light emitted from the laser light source in a fan-shaped surface toward the surface of the rigid body S.
- the laser beam emitted from the laser light source becomes a linear laser beam and is irradiated onto the surface of the rigid body S.
- a lens other than a rod lens such as a cylindrical lens or a Powell lens may be used as long as the laser light can be expanded in a fan shape.
- a bright linear portion (shown as a black line in FIG. 2 and the like) is formed on the surface of the rigid body S irradiated with the linear laser beam.
- since the three linear laser light sources 101a, 101b, and 101c are used, three bright portions are formed. These linear bright portions are called light cutting lines.
- the reflected light of the light cutting line on the surface of the rigid body S propagates to the area camera, forms an image on an image sensor provided in the area camera, and is imaged by the area camera.
- the light cutting line produced by the linear laser light source 101a is referred to as light cutting line La, the light cutting line produced by the linear laser light source 101b as light cutting line Lb, and the light cutting line produced by the linear laser light source 101c as light cutting line Lc; the light cutting lines La, Lb, and Lc are collectively referred to as "light cutting lines L".
- the light cutting line La is an example of the shape measuring light cutting line, while the light cutting lines Lb and Lc are examples of the correction light cutting lines: the light cutting line Lb corresponds to the first correction light cutting line, and the light cutting line Lc corresponds to the second correction light cutting line.
- the linear laser light source 101 is installed on the transport line so as to satisfy all of the following three conditions:
- the light cutting line La and the light cutting line Lb have an intersection A;
- the light cutting line La and the light cutting line Lc have an intersection B;
- the light cutting lines Lb and Lc are both parallel to the L axis, and exist at different width direction positions on the surface of the rigid body S.
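The three installation conditions can be expressed as a simple geometric check in the (L, C) plane, modeling La as a width-direction segment at a fixed longitudinal position and Lb, Lc as longitudinal segments at fixed width positions. The function and its coordinates are illustrative assumptions, not part of the patent:

```python
def check_layout(la_pos, la_span, lb_width, lb_span, lc_width, lc_span):
    """la_pos: longitudinal position of La; la_span: (c_min, c_max) width
    extent of La. lb_width/lc_width: width positions of Lb/Lc;
    lb_span/lc_span: (l_min, l_max) longitudinal extents of Lb/Lc.
    Returns True if all three installation conditions hold."""
    # Condition 1: La and Lb have an intersection A.
    has_A = (la_span[0] <= lb_width <= la_span[1]
             and lb_span[0] <= la_pos <= lb_span[1])
    # Condition 2: La and Lc have an intersection B.
    has_B = (la_span[0] <= lc_width <= la_span[1]
             and lc_span[0] <= la_pos <= lc_span[1])
    # Condition 3: Lb and Lc lie at different width positions
    # (both are parallel to the L axis by construction here).
    distinct = lb_width != lc_width
    return has_A and has_B and distinct
```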
- the resulting surface height of the entire rigid body S can be obtained by connecting the lengths in the longitudinal direction according to relative movement between the rigid body S and the imaging device (for example, conveyance of the rigid body S).
- the surface height obtained by the light section method using a single light section line is an apparent surface height that includes the disturbance, i.e., a measured value that differs from the true surface height by an error.
- in the shape measuring apparatus 10, as described in detail below, the light section line L b extending in the longitudinal direction of the rigid body S is added, and the relationship between the longitudinal position of each point on the light section line L b and the change in surface height due to disturbance is approximated by a straight line.
- the value of the approximate straight line at the longitudinal position where the light section line L a exists (that is, at the intersection A of the light section line L a and the light section line L b ) is thereby uniquely determined as the change in surface height caused by the disturbance on the light section line L a .
- because the object is a rigid body, the deviation of the apparent surface height from the surface height with the disturbance removed (that is, the change in apparent height from the true surface height caused by the disturbance) varies linearly along the longitudinal direction.
- linearly approximating the measured values at each point on the light section line L b also has the effect of absorbing variations in the values due to measurement error.
- by adding such a light section line L b , it is possible to uniquely determine the magnitudes of two types of disturbance: vertical movement in the Z direction (the approximate straight line takes a constant value regardless of the longitudinal position, that is, its slope is 0) and rotation about the C axis (the approximate straight line has a nonzero inclination with respect to the longitudinal position).
- by using the three light section lines as described above, the shape measuring apparatus 10 can measure the surface height of the rigid body to be measured more accurately even when any of the three disturbances occurs during conveyance: translation in the height direction, rotation about the longitudinal axis, or rotation about the width-direction axis.
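The linear approximation described above can be sketched in code: fitting a straight line to the height-change values measured along L b separates the two disturbance components (a slope of approximately zero indicates pure vertical movement in the Z direction; a nonzero slope indicates rotation about the C axis) and gives the disturbance value at the longitudinal position of the intersection A. This is a minimal illustrative sketch, not the apparatus's actual implementation; all names and sample values are hypothetical.

```python
def fit_line(xs, ys):
    # Ordinary least-squares fit y = a*x + b to the height-change
    # values measured along the correction light section line L_b.
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical height-change values (mm) at longitudinal positions (mm)
# along L_b; the scatter mimics measurement error.
xs = [0.0, 10.0, 20.0, 30.0, 40.0]
ys = [0.52, 0.49, 0.51, 0.50, 0.48]   # slope ~ 0 -> pure Z translation

slope, intercept = fit_line(xs, ys)
x_A = 25.0                            # longitudinal position of intersection A
dz_at_A = slope * x_A + intercept     # disturbance-induced height change at A
```

Because the fit averages over every point on L b , a single noisy measurement does not dominate the estimated disturbance at the intersection A.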
- although the drawings illustrate the case where the light section line L a is orthogonal to the light section line L b and to the light section line L c , the arrangement of the light section lines is not limited to the case shown in these drawings. That is, the following description is equally valid when the light section line L a is not orthogonal to the light section lines L b and L c .
- this is because the magnitude of the disturbance at the intersections A and B is calculated using the above approximate straight lines, which does not require the two light section lines to be orthogonal to each other.
- the specific length of the light section line L is not particularly limited; the length may be determined appropriately so that the luminance distribution of the light section line is uniform on the surface of the rigid body S.
- the widthwise positions of the light section lines L b and L c are not particularly limited; the positions may be set so that L b and L c lie on the surface of the rigid body S regardless of the width of the rigid body S conveyed on the transport line.
- the area cameras 111 and 113 are each equipped with a lens having a predetermined focal length and an image sensor such as a CCD (Charge Coupled Device) or CMOS (Complementary Metal Oxide Semiconductor) sensor. Each of the area cameras 111 and 113 generates a light section image by capturing the light section line, which is the reflected light of the linear laser light irradiating the surface of the rigid body S, every time the rigid body S moves a predetermined distance, and outputs the generated light section image to the arithmetic processing device 200 described later.
- the area cameras 111 and 113 are controlled by the arithmetic processing device 200 described later, and a trigger signal for imaging is output from the arithmetic processing device 200 every time the rigid body S moves by a predetermined distance.
- upon receiving this trigger signal, the area cameras 111 and 113 capture the surface of the rigid body S irradiated with the linear laser light to generate light section images, and output the generated light section images to the arithmetic processing device 200.
- N (N is an integer of 2 or more)
- the linear laser light source 101a is arranged so that the plane containing the linear laser light emitted from the light source is perpendicular to the L-axis-C-axis plane (in other words, The optical axis of the linear laser light source 101a is set to be substantially parallel to the Z axis).
- otherwise, the linear laser light would irradiate portions at different longitudinal positions of the rigid body S due to the disturbance described later, making it difficult to measure the surface shape accurately.
- similarly, the linear laser light sources 101b and 101c are each installed on the transport line so that the plane containing the emitted linear laser light is perpendicular to the L axis-C axis plane (in other words, so that the optical axes of the linear laser light sources 101b and 101c are substantially parallel to the Z axis).
- strictly speaking, when the rigid body S rotates due to disturbance about an axis parallel to a light section line (for example, rotation about an axis parallel to the C axis with respect to the light section line L a , or rotation about the L axis with respect to the light section lines L b and L c ), the irradiation position of the linear laser light on the rigid body S is not exactly the same.
- however, if the true surface height of the rigid body S changes smoothly and the amount of rotation of the rigid body S is not large, the linear laser light can be regarded as irradiating the same position on the surface of the rigid body S even when such rotation occurs.
- the latter assumption can be said to be appropriate.
- the magnitude of the angle θ 1 formed, on the L axis-Z axis plane, between the optical axis of the area camera 111 and the optical axis of the linear laser light source 101a (that is, the Z axis) can be set to an arbitrary value.
- the size of the angle θ 1 is preferably about 30 to 60 degrees.
- if the angle θ 1 is made too large, the area camera 111 moves away from the specular reflection direction of the linear laser light source 101a, the light section line L a captured by the area camera 111 becomes dark, and a higher-power laser is required to capture images with the same brightness.
- it is preferable to install the area camera 111 so that the optical axis of the area camera 111 projected onto the L axis-C axis plane is perpendicular to the light section line L a . This makes it possible to make the C-axis direction resolution (the length in mm corresponding to one pixel) uniform along the light section line L a as seen from the area camera 111.
- the light section line L a need not be perpendicular to the light section lines L b and L c . That is, the light section line L a may or may not be parallel to the width direction (C axis). This is because, as described above, the light section line L a need not be orthogonal to the light section lines L b and L c in order to calculate the disturbance magnitudes at the intersections A and B.
- the imaging area AR1 of the area camera 111 is set so that the entire light section line L a is included in the imaging field of view.
- the angles θ 2 and θ 3 formed, in the C axis-Z axis plane, between the lines of sight of the area camera 113 toward the light section lines L b and L c and the Z axis can likewise be set to arbitrary values, as with the angle θ 1 .
- the sizes of the angles θ 2 and θ 3 are each preferably about 30 to 60 degrees.
- as with the relationship between the light section line L a and the area camera 111, the light section line L b and the optical axis of the area camera 113 projected onto the L axis-C axis plane are preferably orthogonal to each other. Since the light section lines L b and L c are parallel to each other, if this condition is satisfied for the light section line L b , it is automatically satisfied for the light section line L c as well.
- the imaging region AR2 of the area camera 113 is set so that the intersection A and the intersection B are included in the imaging field of view.
- FIG. 6 illustrates a case where the entire optical cutting lines L b and L c are included in the imaging field of view.
- as long as the intersections A and B are included in the imaging field of view, the disturbance estimation process described later can be performed; however, in order to increase the accuracy of the disturbance estimation process, it is preferable to include the entire light section lines L b and L c in the imaging field of view.
- the imaging timing of the area cameras 111 and 113 is set so that, at two adjacent imaging times (for example, the i-th imaging time (where i is an integer equal to or greater than 1) and the (i+1)-th imaging time), a common portion of the rigid body S is irradiated by the light section lines L b and L c (a common irradiation portion exists).
- the arithmetic processing device 200 calculates the magnitude of the disturbance by paying attention to the light cutting lines L b and L c in the common irradiation portion.
- FIG. 7 illustrates a case where the surface of the rigid body S is flat and no disturbance occurs between the two consecutive images; however, a common irradiation portion exists even when the surface of the rigid body S is not flat or when a disturbance occurs between the two consecutive images.
- the shape measuring apparatus 10 measures the surface height of the rigid body S when a rigid body such as a slab or a thick plate is continuously conveyed.
- there are various causes of measurement errors such as vibration caused by a drive mechanism provided in a conveyance line or the like.
- in the following illustrations, attention is paid to the case where the surface of the rigid body S is flat.
- however, the following description is not limited to the cases illustrated in FIG. 9 to FIG.; the same holds true when the surface of the rigid body S is not flat. If the surface of the rigid body S is not flat, the light section line itself is a curve, but the change in the light section line due to the presence or absence of disturbance still varies linearly along the longitudinal direction, as in the flat case.
- when vertical movement in the Z direction occurs as a disturbance, each of the light section lines L a , L b , and L c translates vertically in the image by the same amount, as shown in FIG.
- when rotation about the L axis occurs as a disturbance during the (i+1)-th imaging, the inclination and length of the light section line L a change, and the light section lines L b and L c translate in the image by different amounts, as shown in FIG. 10.
- when rotation about the C axis occurs as a disturbance, the inclinations of the light section lines L b and L c change, as shown in FIG.
- therefore, in the shape measuring apparatus 10, the change in surface height (the change in the Z coordinate) caused by the disturbance generated in the rigid body S is obtained at each imaging time by comparing two consecutive images obtained by the area camera 113. Thereafter, based on the change in surface height caused by the disturbance thus obtained (in other words, the magnitude of the disturbance), the surface height obtained from the light section image of the area camera 111, on which the measurement error due to the disturbance is superimposed, is corrected, and the true surface height is output.
- FIG. 12 is a block diagram showing an example of the configuration of the image processing unit of the arithmetic processing device provided in the shape measuring apparatus according to the present embodiment.
- FIGS. 14 and 15 and FIGS. 17 to 23 are explanatory diagrams for explaining the disturbance estimation process performed by the disturbance estimation unit according to the present embodiment.
- FIG. 16 is a block diagram illustrating an example of a configuration of a disturbance estimation unit included in the image processing unit according to the present embodiment.
- FIG. 24 is an explanatory diagram for describing shape data calculation processing performed by the shape data calculation unit according to the present embodiment.
- FIGS. 25 and 26 are explanatory diagrams for explaining the correction process performed by the correction unit according to the present embodiment.
- the arithmetic processing device 200 mainly includes an imaging control unit 201, an image processing unit 203, a display control unit 205, and a storage unit 207.
- the imaging control unit 201 is realized by, for example, a CPU (Central Processing Unit), a ROM (Read Only Memory), a RAM (Random Access Memory), a communication device, and the like.
- the imaging control unit 201 performs overall control of imaging processing of the rigid body S by the imaging device 100 according to the present embodiment.
- the imaging control unit 201 sends a control signal for starting oscillation of the linear laser light source 101 to the imaging device 100 when imaging of the rigid body S is started.
- every time the imaging control unit 201 acquires a PLG signal periodically transmitted from a drive mechanism or the like that controls the conveyance of the rigid body S (for example, a signal output each time the rigid body S moves 1 mm), the imaging control unit 201 sends a trigger signal for starting imaging to the area cameras 111 and 113.
- the image processing unit 203 is realized by, for example, a CPU, a ROM, a RAM, a communication device, and the like.
- the image processing unit 203 acquires the imaging data generated by the area cameras 111 and 113 (that is, captured image data of the light section images), performs the image processing described below on the imaging data, and calculates the height of the entire surface of the rigid body S as three-dimensional shape data.
- the image processing unit 203 transmits information on the obtained calculation result to the display control unit 205 and the storage unit 207, or to various devices provided outside the shape measuring apparatus 10.
- the image processing unit 203 will be described in detail later.
- the display control unit 205 is realized by, for example, a CPU, a ROM, a RAM, an output device, a communication device, and the like.
- the display control unit 205 displays the measurement result of the rigid body S transmitted from the image processing unit 203 on an output device such as a display provided in the arithmetic processing device 200, an output device provided outside the arithmetic processing device 200, or the like. Display control. Thereby, the user of the shape measuring apparatus 10 can grasp the measurement result regarding the three-dimensional shape of the rigid body S on the spot.
- the storage unit 207 is an example of a storage device included in the arithmetic processing device 200, and is realized by, for example, a ROM, a RAM, a storage device, or the like.
- the storage unit 207 stores calibration data related to the light section lines L used in the image processing performed by the image processing unit 203. The storage unit 207 also stores information related to the design parameters of the shape measuring apparatus 10, such as information indicating the optical positional relationship between the linear laser light source 101 and the area cameras 111 and 113 included in the imaging apparatus 100, and information transmitted from a host computer provided outside the shape measuring apparatus 10 (for example, a management computer that generally manages the conveyance line).
- in the storage unit 207, various parameters that need to be saved when the arithmetic processing device 200 according to the present embodiment performs any processing, the progress of such processing (for example, measurement results transmitted from the image processing unit 203), pre-stored calibration data, various databases, programs, and the like are recorded as appropriate.
- the storage unit 207 can be freely read / written by the imaging control unit 201, the image processing unit 203, the display control unit 205, the host computer, and the like.
- the image processing unit 203 includes an imaging data acquisition unit 211, a disturbance estimation unit 213, a shape data calculation unit 215, a correction unit 217, and a result output unit 219.
- the imaging data acquisition unit 211 is realized by, for example, a CPU, a ROM, a RAM, a communication device, and the like.
- the imaging data acquisition unit 211 acquires optical cutting line imaging data (that is, image data related to an optical cutting image) output from the area cameras 111 and 113 of the imaging device 100.
- the imaging data acquisition unit 211 acquires, from the area camera 113, imaging data related to the optical cutting lines L b and L c used as the correction optical cutting lines (in other words, imaging data obtained by imaging the imaging area AR2 in FIG. 6). Then, the imaging data is output to a disturbance estimation unit 213 described later.
- the imaging data acquisition unit 211 acquires, from the area camera 111, imaging data related to the light section line L a used as the shape-measuring light section line (in other words, imaging data obtained by imaging the imaging area AR1 in FIG. 5), and outputs the imaging data to the shape data calculation unit 215 described later.
- the imaging data acquisition unit 211 may associate the imaging data related to the light section lines acquired from the imaging device 100 with time information on the date and time when the imaging data was acquired, and store it as history information in the storage unit 207 or the like.
- the disturbance estimation unit 213 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- the disturbance estimation unit 213 is a processing unit that estimates the magnitude of the disturbance generated in the rigid body S using the image data of the correction light section lines (that is, the light section lines L b and L c ) captured by the area camera 113.
- the disturbance estimation unit 213 performs, on the captured images obtained from the area camera 113, a height change value acquisition process that obtains, for the same position on the rigid body S, the height change value caused by the disturbance at that position from the surface height measurement values acquired at two different times. This height change value acquisition process is performed for a plurality of positions.
- the disturbance estimation unit 213 then estimates the height fluctuation amount superimposed on the shape data calculated by the shape data calculation unit 215 described later, using the height change value at the intersection A obtained from the light section line L b and the height change value at the intersection B obtained from the light section line L c .
- the disturbance estimation process in the disturbance estimation unit 213 will be described in detail later.
- the disturbance estimation unit 213 outputs the obtained disturbance estimation result to the correction unit 217, which will be described later, after completing the disturbance estimation process described in detail below. Further, the disturbance estimation unit 213 may associate the time information regarding the date and time when the data is generated with the data representing the estimation result regarding the obtained disturbance and store the data in the storage unit 207 or the like as history information.
- the shape data calculation unit 215 is realized by a CPU, a ROM, a RAM, and the like, for example.
- the shape data calculation process in the shape data calculation unit 215 will be described again below.
- when the shape data calculation unit 215 finishes the shape data calculation process described below, it outputs the obtained shape data to the correction unit 217 described later.
- the shape data calculation unit 215 may associate the obtained shape data with time information related to the date and time when the shape data is generated and store it as history information in the storage unit 207 or the like.
- the correction unit 217 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- the correction unit 217 corrects the measurement error caused by the disturbance by subtracting the height fluctuation amount calculated by the disturbance estimation unit 213 from the shape data calculated by the shape data calculation unit 215.
- as a result, true shape data of the rigid body S is generated, from which the measurement error due to the disturbance that may occur in the rigid body S has been removed.
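The correction performed by the correction unit 217 amounts to a pointwise subtraction of the estimated height fluctuation from the measured shape data. A minimal sketch with hypothetical names and sample values:

```python
def correct_shape(apparent_heights, height_fluctuations):
    # Subtract the height fluctuation estimated by the disturbance
    # estimation unit from the shape data calculated by the shape data
    # calculation unit, leaving the true surface height.
    return [h - d for h, d in zip(apparent_heights, height_fluctuations)]

# Hypothetical apparent surface heights (mm) with a superimposed
# disturbance of 0.5 mm at every point:
true_heights = correct_shape([10.5, 10.7, 10.6], [0.5, 0.5, 0.5])
```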
- when the correction unit 217 finishes the correction process described below, it outputs the corrected shape data to the result output unit 219 described later.
- the result output unit 219 is realized by, for example, a CPU, a ROM, a RAM, an output device, a communication device, and the like.
- the result output unit 219 outputs information regarding the surface shape of the rigid body S output from the correction unit 217 to the display control unit 205. Thereby, the information regarding the surface shape of the rigid body S will be output to a display part (not shown).
- the result output unit 219 may output the obtained measurement result of the surface shape to an external device such as a manufacturing control computer, or may create various reports using the obtained measurement result.
- the result output unit 219 may store information on the surface shape of the rigid body S as history information in the storage unit 207 or the like in association with time information on the date and time when the information is calculated.
- Disturbance Estimation Processing in Disturbance Estimation Unit 213: the disturbance estimation process performed by the disturbance estimation unit 213 will be described in detail with reference to FIGS. First, before explaining the disturbance estimation process, the calibration data used in the disturbance estimation process will be described.
- the storage unit 207 stores in advance calibration data relating to the light section lines L, which is used in the disturbance estimation process in the disturbance estimation unit 213 and in the shape calculation process in the shape data calculation unit 215.
- the calibration data stored in advance in the storage unit 207 includes two types of calibration data: first calibration data and second calibration data.
- the first calibration data is the calibration data necessary for converting the amount of change (unit: pixels) of the position of the light section line on the captured images captured by the area cameras 111 and 113 into an amount in real space (unit: a unit of length such as mm or m; mm is used in the following description).
- the first calibration data is calculated from the imaging resolution (mm/pixel) of the area cameras and the angles θ 1 , θ 2 , and θ 3 formed between the lines of sight toward the light section lines L a , L b , and L c and the Z-axis direction. Note that the imaging resolution and the angles θ 1 , θ 2 , and θ 3 are not constants, but depend on the height of the rigid body S.
- the first calibration data is referred to as a calibration curve.
- such a calibration curve is set for each of the light section lines L a , L b , and L c .
- the first calibration data can be calculated by calculation or can be obtained by actual measurement.
- when calculating the first calibration data by calculation, the focal length f of the lens mounted on the area cameras 111 and 113, the distance a from the lens to the measurement target (that is, the rigid body S), and the distance b from the image sensor provided in the area cameras 111 and 113 to the lens are used. More specifically, using these parameters, the first calibration data can be calculated by obtaining the magnification m given by Expression 103 from the imaging formula given by Expression 101 below.
- the imaging resolution D (mm/pixel) is the value given by Expression 105 below. Since the imaging resolution D is the resolution in a plane perpendicular to the line of sight, when the angle formed between the line of sight and the normal direction is θ degrees, the amount of vertical movement H (mm) of the measurement target corresponding to one pixel is the value given by Expression 107 below.
- the amount of vertical movement H of the measurement target corresponding to one pixel, obtained as described above, is the conversion coefficient for converting the amount of change (unit: pixels) of the light section line on the captured images captured by the area cameras 111 and 113 into an amount in real space (unit: mm, for example).
- the value given by Expression 107 above, based on the optical positional relationship between the area cameras 111 and 113 and the corresponding light section lines L a , L b , and L c , can be used as the calibration curves C a , C b , and C c (that is, the first calibration data) for the respective light section lines L a , L b , and L c .
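As a numerical sketch of the route through Expressions 101 to 107: the exact forms of the expressions appear only in the patent figures, so the standard thin-lens form 1/a + 1/b = 1/f is assumed for Expression 101 and D = (sensor pixel pitch)/m for Expression 105; all parameter values below are hypothetical.

```python
import math

def calibration_value(f_mm, a_mm, pixel_pitch_mm, theta_deg):
    # Expression 101 (imaging formula, assumed thin-lens form):
    #   1/a + 1/b = 1/f  ->  b = a * f / (a - f)
    b_mm = a_mm * f_mm / (a_mm - f_mm)
    # Expression 103: magnification m = b / a
    m = b_mm / a_mm
    # Expression 105 (assumed form): imaging resolution D = pixel pitch / m,
    # i.e. the real-space length imaged onto one pixel (mm/pixel)
    D = pixel_pitch_mm / m
    # Expression 107: vertical movement per pixel H = D / cos(theta),
    # where theta is the angle between the line of sight and the normal
    return D / math.cos(math.radians(theta_deg))

# Hypothetical setup: 50 mm lens, 2 m working distance,
# 5 micrometre pixels, 45-degree viewing angle.
H = calibration_value(50.0, 2000.0, 0.005, 45.0)  # mm of height per pixel
```

Evaluating H over a range of target heights (which change a and θ) yields the calibration curve for each light section line.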
- the second calibration data is data representing the amount of horizontal movement (unit: pixels) within the image corresponding to the conveyance distance (unit: a unit of length such as mm or m) of the rigid body S in real space between two consecutive imaging times shown in FIG.
- the second calibration data is set for each of the light cutting lines L b and L c .
- the second calibration data is calibration data used to estimate the magnitude of the disturbance.
- the second calibration data can also be calculated by calculation or can be obtained by actual measurement.
- the second calibration data is data indicating how many pixels in the captured image correspond to the transport distance Δs (Δs shown in FIG. 13) of the rigid body S in real space between the generation of two consecutive captured images. Therefore, when obtaining the second calibration data by calculation, the imaging resolution D given by Expression 105 above is calculated for each of the light section lines L b and L c , and the set value of the transport distance Δs in real space is divided by the obtained imaging resolutions D b and D c .
- when obtaining the second calibration data by actual measurement, captured images may be generated while the rigid body S is translated; the obtained captured images are then analyzed, and the horizontal movement amounts ΔL b and ΔL c in the captured images are measured.
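The calculation route for the second calibration data reduces to dividing the set transport distance by each imaging resolution. A sketch with hypothetical values:

```python
delta_s_mm = 2.0         # transport distance per imaging interval (assumed)
D_b_mm_per_px = 0.195    # imaging resolution for L_b from Expression 105 (assumed)
D_c_mm_per_px = 0.210    # imaging resolution for L_c (assumed)

# Second calibration data: in-image horizontal shift, in pixels,
# corresponding to the real-space transport distance delta_s.
delta_L_b = delta_s_mm / D_b_mm_per_px
delta_L_c = delta_s_mm / D_c_mm_per_px
```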
- the height-direction position at which the light section line L b is imaged is taken as the reference position of the Y coordinate Y b for the light section line L b (that is, the position where Y b = 0), and the left edge of the captured image is taken as the reference position of the X coordinate X b . The X coordinate X b for the light section line L b is defined along the extending direction of the light section line L b , and the X-axis direction X b and the Y-axis direction Y b for the light section line L b are defined as shown in FIG. 14.
- similarly, the height-direction position at which the light section line L c is imaged is taken as the reference position of the Y coordinate Y c for the light section line L c , and the left edge of the captured image is taken as the reference position of the X coordinate X c . The X coordinate X c for the light section line L c is defined along the extending direction of the light section line L c , and the X-axis direction X c and the Y-axis direction Y c for the light section line L c are defined as shown in FIG. 15.
- a specific example of the coordinate system for the light section line L a will be described later with reference to FIG. 24.
- in the following, a value obtained by converting a "height" in the captured image into real space (unit: mm) using the calibration curves C a , C b , and C c is expressed as "a height in the Z coordinate" or the like.
- the disturbance estimation processing performed by the disturbance estimation unit 213 will be described in detail with reference to FIGS. 16 to 23.
- the disturbance estimation unit 213 calculates, based on the captured images that are captured by the area camera 113 and include the light section lines L b and L c , the height change value due to disturbance (that is, the amount of change in the Z coordinate in real space) for the portions of the surface of the rigid body S on which the light section lines L b and L c exist.
- the disturbance estimation unit 213 approximates the distribution of the amount of change in the Yb coordinate along the Xb direction with a straight line.
- by this linear approximation, the disturbance estimation unit 213 can accurately calculate the amount of change in the Y b coordinate value at the X b coordinate corresponding to the intersection A shown in FIG. 2, while suppressing variations in the values due to measurement error at each point on the light section line L b .
- the amounts of change in the Z coordinate at the intersections A and B calculated as described above can be plotted on a plane whose horizontal axis is the C coordinate and whose vertical axis is the amount of change in the Z coordinate. Since the measurement target of the shape measuring apparatus 10 according to the present embodiment is a rigid body, the amount of change in the Z coordinate at each widthwise point of the rigid body S located between the intersections A and B in real space should change linearly.
- the disturbance estimation unit 213 can obtain the change in the Z coordinate due to the disturbance at each position in the width direction connecting the two intersections by obtaining the straight line as described above on the C-axis-Z-axis plane.
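The straight line on the C axis-Z axis plane can be sketched as follows (hypothetical names and values): given the height change values at the intersections A and B, the disturbance at any widthwise position between them follows by linear interpolation, precisely because the target is a rigid body.

```python
def disturbance_at(c, c_A, dz_A, c_B, dz_B):
    # Straight line through (c_A, dz_A) and (c_B, dz_B) on the
    # C-axis / Z-axis plane; valid because the target is a rigid body.
    slope = (dz_B - dz_A) / (c_B - c_A)
    return dz_A + slope * (c - c_A)

# Hypothetical intersections: A at C = 100 mm with dZ = 0.4 mm,
# B at C = 900 mm with dZ = 0.8 mm.
dz_mid = disturbance_at(500.0, 100.0, 0.4, 900.0, 0.8)
```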
- the disturbance estimation unit 213 that performs the above disturbance estimation process includes a common irradiation portion disturbance estimation unit 221 and an intersection position disturbance estimation unit 223.
- the common irradiation portion disturbance estimation unit 221 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- the common irradiation portion disturbance estimation unit 221 performs the process of calculating the amounts of change in the Y b and Y c coordinate values due to disturbance, as described above, for the common irradiation portion shown in FIG.
- the processing performed by the common irradiation partial disturbance estimation unit 221 will be described in detail with reference to FIGS. 17 to 20.
- for the light section line L c , if the i-th captured image captured by the area camera 113 is translated by ΔL c in the negative direction of the X c axis based on the second calibration data, the X c coordinates of the common irradiation portion in the i-th image can be matched with the X c coordinates of the common irradiation portion in the (i+1)-th image. Since the common irradiation portion is at the same position on the rigid body S, the true surface height of the common irradiation portion in real space is the same. Therefore, after aligning the X coordinates, the magnitude of the disturbance can be estimated by comparing the Y coordinates of the i-th common irradiation portion with the Y coordinates of the (i+1)-th common irradiation portion.
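The X-coordinate alignment and Y-coordinate comparison can be sketched as follows. This is a simplified illustration with hypothetical names: the light section line is represented as a list of Y values indexed by X pixel, and the shift given by the second calibration data is assumed to be an integer number of pixels.

```python
def height_changes(y_prev, y_curr, shift_px):
    # y_prev: Y coordinate of the light section line per X pixel, i-th image.
    # y_curr: the same for the (i+1)-th image.
    # shift_px: second calibration data, i.e. in-image shift in pixels
    #           corresponding to the transport distance between captures.
    # After aligning the X coordinates, the Y difference over the common
    # irradiation portion is the height change caused by the disturbance.
    diffs = []
    for x in range(len(y_curr)):
        x_prev = x + shift_px   # same material point in the i-th image
        if 0 <= x_prev < len(y_prev):
            diffs.append(y_curr[x] - y_prev[x_prev])
    return diffs

# Inclined line conveyed 2 px between captures, plus a 0.5-pixel
# vertical disturbance in the second image (hypothetical values):
d = height_changes([1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
                   [3.5, 4.5, 5.5, 6.5, 7.5, 8.5], 2)
```

Points outside the common irradiation portion drop out automatically, and the surviving differences are what the straight-line fit described earlier is applied to.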
- the common irradiation partial disturbance estimation unit 221 includes an apparent surface height including a disturbance component obtained from the (i + 1) th captured image (hereinafter referred to as “apparent height”), and i.
- FIG. 17 is an explanatory diagram for explaining the method by which the common irradiated portion disturbance estimation unit 221 calculates the change in the Y b coordinate value caused by the disturbance. Note that FIG. 17 illustrates a case in which a translation in the Z-axis direction occurs as the disturbance between two consecutive captured images; however, the following description is not limited to this case, and the same holds true where rotation about the L axis or rotation about the C axis occurs. This is because, for any of the three disturbances, the changes in the Y b coordinate and the Y c coordinate caused by the disturbance can be linearly approximated, since the object to be measured is a rigid body.
- the common irradiated portion disturbance estimation unit 221 carries out, for the light cutting line L c , the same processing as that performed for the light cutting line L b . Therefore, in the following figures and description, the processing for the light cutting line L b is described as a representative example.
- the common irradiated portion disturbance estimation unit 221 executes the following process, for each X b coordinate belonging to the common irradiated portion, on the two captured images (the i-th and the (i + 1)-th) captured by the area camera 113.
- the light cutting line L b in the i-th captured image, in the (X b , Y b ) coordinate system, is regarded as a function of X b and expressed as Y b = F obs b (i, X b ).
- F obs b (i, X b ) is referred to as the "apparent height" of the light cutting line L b .
- the vertical movement of the light cutting line caused by such a disturbance is represented as the disturbance component d b (i, X b ).
- in other words, this is a method in which the vertical movement of the position of the light cutting line in the (i + 1)-th captured image is specified based on the position of the light cutting line in the i-th captured image, that is, a method of estimating the magnitude of the disturbance between captured image frames.
- note that, as just mentioned and as described in detail later, the light section method according to the present embodiment ultimately estimates the magnitude of the disturbance with reference to the position of the light cutting line in the first captured image.
- considering the apparent height of the light cutting line L b in the i-th captured image with reference to FIGS. 9 to 11 and the like, it can be regarded as the surface height that would be observed if no disturbance were present, to which the change in the position of the light cutting line due to the disturbance component has been added. That is, as schematically shown in FIG. 17, the apparent height of the light cutting line L b in the i-th captured image is the sum of the disturbance component and the surface height after the disturbance is removed (that is, the surface height that would be observed in the absence of the disturbance; hereinafter simply referred to as the "surface height after disturbance removal").
- the disturbance component d b (i, X b ) can be regarded as a linear function with respect to X b , that is, a straight line.
- it is assumed that "the disturbance component in the first captured image is zero." That is, in the first captured image and in the second and subsequent captured images in which the common irradiated portion of the first captured image exists, the following Expression 121 is assumed to hold for all X b coordinates belonging to the common irradiated portion: d b (1, X b ) = 0 (Equation 121).
- if a disturbance is actually present at the time the first image is captured, the surface height finally output by the image processing according to the present embodiment differs from the original surface height by a plane, determined by the magnitude of the disturbance component already present when the first image was captured, that is uniformly added over the entire length.
- in this case, a reference plane is determined according to the rigid body S (for example, a steel semi-finished product such as a slab), and a correction is made by subtracting the flat plane so that the surface height over the full length that is finally output matches the reference plane.
- in this way, the surface height viewed from the reference plane can be obtained. Therefore, the following description will be made assuming that the above Expression 121 is satisfied.
- the surface height after disturbance removal of the portion irradiated by the light cutting line L b at the i-th imaging time can be obtained by subtracting the disturbance component from the apparent height. That is, the surface height H b (i, X b ) after disturbance removal of the rigid body S irradiated by the light cutting line L b in the i-th captured image can be obtained according to the following Expression 123.
- H b (i, X b ) = F obs b (i, X b ) − d b (i, X b ) (Equation 123)
- the disturbance component in the (i + 1)-th captured image can be obtained by subtracting the surface height after disturbance removal from the apparent height in the (i + 1)-th captured image. That is, the following Expression 125 is established: d b (i + 1, X b ) = F obs b (i + 1, X b ) − H b (i + 1, X b ) (Equation 125).
- however, the surface height H b (i + 1, X b ) after disturbance removal in the (i + 1)-th captured image cannot be measured from the (i + 1)-th image alone.
- here, for the common irradiated portion, the surface height after disturbance removal in the (i + 1)-th captured image is equal to the surface height after disturbance removal in the i-th captured image.
- therefore, the common irradiated portion disturbance estimation unit 221 uses the surface height H b (i, X b ) after disturbance removal in the i-th image, already obtained by Expression 123, translated by ΔL b in the negative direction of the X b axis (that is, with the common irradiated portions aligned in the transport direction), as the surface height H b (i + 1, X b ) after disturbance removal in the (i + 1)-th captured image. That is, the relationship represented by the following Expression 127 is used: H b (i + 1, X b ) = H b (i, X b + ΔL b ) (Equation 127).
- the (i + 1)-th disturbance component d b (i + 1, X b ) can therefore be obtained from the apparent height obtained from the (i + 1)-th image and the surface height after disturbance removal in the i-th image, according to the following Expression 129: d b (i + 1, X b ) = F obs b (i + 1, X b ) − H b (i, X b + ΔL b ) (Equation 129). In this way, the surface height H b (i + 1, X b ) after disturbance removal in the (i + 1)-th captured image can also be obtained.
- in this way, starting from the surface height after disturbance removal in the first image, the surface heights after disturbance removal and the disturbance components in the second and subsequent images can be sequentially calculated.
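Under the assumption of Expression 121 (zero disturbance in the first frame), Expressions 123, 127, and 129 define a simple recursion. The following is a hedged sketch of that recursion, with made-up flat profiles and an integer pixel shift; it is not the actual processing of the disturbance estimation unit 213.

```python
import numpy as np

def estimate_disturbance_components(f_obs, delta_l):
    """Sequentially estimate disturbance components d(i, X) in pixels.

    f_obs: list of 1-D arrays, f_obs[i][X] = apparent line height in frame i.
    delta_l: inter-frame translation in pixels (assumed integer here).
    Frame 0 is taken as disturbance-free (Expression 121).
    """
    d = [np.zeros_like(f_obs[0])]            # Expression 121: d(0, X) = 0
    h = [f_obs[0] - d[0]]                    # Expression 123: height after removal
    for i in range(len(f_obs) - 1):
        h_shifted = h[i][delta_l:]           # Expression 127: H(i+1, X) = H(i, X+dL)
        d_next = f_obs[i + 1][:h_shifted.size] - h_shifted   # Expression 129
        d.append(d_next)
        h.append(f_obs[i + 1][:h_shifted.size] - d_next)     # Expression 123 again
    return d

frames = [np.full(8, 10.0), np.full(8, 13.0), np.full(8, 13.0)]
d = estimate_disturbance_components(frames, 2)
print(d[1], d[2])  # both frames show a 3-pixel disturbance on the overlap
```

In this toy version the usable overlap shrinks by delta_l pixels per step; in the actual method each comparison only needs the common irradiated portion of one consecutive frame pair.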
- FIG. 19 is an explanatory diagram for explaining processing based on Expression 129 in the common irradiation portion of the first captured image and the second captured image.
- as shown in FIG. 19, the disturbance component d b (2, X b ) in the second captured image is the difference between the apparent height F obs b (2, X b ) in the second captured image and the surface height H b (2, X b ) after disturbance removal.
- by Expression 127, the surface height H b (2, X b ) after disturbance removal in the second captured image is the surface height after disturbance removal in the first captured image, H b (1, X b ), translated by ΔL b , that is, H b (1, X b + ΔL b ), as shown by the broken line in FIG. 19.
- moreover, since the disturbance component in the first captured image is zero (Expression 121), H b (1, X b ) is equal to F obs b (1, X b ), and hence H b (1, X b + ΔL b ) is equal to F obs b (1, X b + ΔL b ).
- consequently, the disturbance component d b (2, X b ) in the second captured image is equal to the apparent height F obs b (2, X b ) minus the first apparent height translated by ΔL b . That is, the situation shown in FIG. 19 corresponds to Expression 129 above.
- in FIG. 19, since the disturbance generated in the rigid body S is a parallel movement in the Z direction, the disturbance component d b (2, X b ) (the magnitude indicated by the one-dot chain line in FIG. 19) is constant regardless of the X b coordinate.
- FIG. 20 is an explanatory diagram for explaining processing based on Expression 123 and Expression 129 in the common irradiation portion of the second captured image and the third captured image.
- by subtracting the disturbance component d b (2, X b ), already calculated as described with reference to FIG. 19, from the apparent height F obs b (2, X b ) obtained from the second image, the surface height H b (2, X b ) after disturbance removal can be calculated. This corresponds to the relationship represented by Expression 123 above.
- by translating the surface height H b (2, X b ) after disturbance removal in the common irradiated portion of the second captured image by ΔL b and subtracting it from the apparent height of the third captured image, the disturbance component d b (3, X b ) can be calculated.
- here, the disturbance component d b (3, X b ) of the third captured image is obtained by subtracting the (translated) surface height H b (2, X b ) after disturbance removal in the second captured image from the apparent height F obs b (3, X b ) of the third captured image, and this surface height H b (2, X b ) is itself obtained by subtracting the disturbance component d b (2, X b ) of the second captured image from the apparent height F obs b (2, X b ) of the second captured image.
- in other words, the disturbance component d b (3, X b ) of the third captured image can be regarded as an amount based on the disturbance component d b (2, X b ) of the second captured image, and the disturbance component d b (2, X b ) of the second captured image can in turn be regarded as an amount based on the disturbance component d b (1, X b ) of the first captured image. Thus, the disturbance estimation process according to the present embodiment specifies the disturbance component d b (i, X b ) in the i-th captured image as the result of accumulating all the disturbances from the first captured image up to the i-th captured image.
- when the disturbance is a parallel movement in the Z-axis direction, the magnitude of the disturbance component on the light cutting line L b is constant regardless of the X b coordinate.
- similarly, the disturbance component d c (i, X c ) on the light cutting line L c , which exists at a different position in the width direction in real space, is also constant regardless of the coordinate X c .
- if the values of the disturbance component d b and the disturbance component d c differ, it can be grasped that a rotation about the L axis is present.
- when the common irradiated portion disturbance estimation unit 221 performs the above processing, the magnitude of the disturbance component d b (i, X b ) on the light cutting line L b can be calculated using two consecutive captured images. By applying the above processing to the light cutting line L c in the same manner, the common irradiated portion disturbance estimation unit 221 can also calculate the magnitude of the disturbance component d c (i, X c ) on the light cutting line L c .
- the common irradiation partial disturbance estimation unit 221 outputs information on the magnitude of the disturbance component on each of the light cutting lines L b and L c calculated in this way to the intersection position disturbance estimation unit 223 described later.
- intersection position disturbance estimation unit 223 is realized by, for example, a CPU, a ROM, a RAM, and the like.
- the intersection position disturbance estimation unit 223 linearly approximates the distribution of the disturbance magnitude along the X coordinate using the magnitude of the disturbance in the common irradiated portion calculated by the common irradiated portion disturbance estimation unit 221, and calculates the magnitude of the disturbance at intersection A and intersection B by extrapolating (or, in some cases, interpolating) the obtained approximate straight line to the intersection positions.
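This fit-and-extrapolate step can be sketched as follows. It is a hedged illustration: the least-squares routine, the coordinates, and the synthetic linear disturbance are assumptions, not the patent's actual data.

```python
import numpy as np

# Fit the per-pixel disturbance estimates along a correction light cutting
# line to a straight line (valid because the object is a rigid body), then
# evaluate the line at the intersection's X coordinate, which lies outside
# the fitted range (extrapolation).
def disturbance_at_intersection(x_common, d_common, x_intersection):
    slope, intercept = np.polyfit(x_common, d_common, 1)  # least squares
    return slope * x_intersection + intercept

x = np.arange(100.0, 200.0)        # X coordinates of the common portion
d = 0.01 * x + 1.5                 # synthetic linear disturbance profile
print(disturbance_at_intersection(x, d, 50.0))  # extrapolated to X = 50
```

Averaging over the whole common portion in this way absorbs the per-pixel noise that would corrupt a single-point estimate at the intersection.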
- in this way, the variation occurring at each point on the light cutting lines L b and L c is absorbed, and the value of the disturbance at intersection A and intersection B can be obtained more accurately than with conventional light section methods, including the invention described in Patent Document 1.
- thereafter, the intersection position disturbance estimation unit 223 converts the surface heights expressed in units of pixels into Z-coordinate values (unit: mm) using the calibration curves C b and C c , which are the first calibration data, and thereby calculates the magnitude of the disturbance in the Z coordinate at the intersections A and B.
- the intersection position disturbance estimation unit 223 can determine the Z-coordinate change ΔZ b (i) (unit: mm) due to the disturbance component at intersection A in the i-th image for the following two reasons.
- the first reason is that, because the measurement object is a rigid body, the disturbance component d a (i, X a ) along the light cutting line L a in the image captured by the area camera 111, and the disturbance component in the Z coordinate obtained by converting d a (i, X a ) with the calibration curve C a , are straight lines, as in the case of the light cutting lines L b and L c .
- the second reason is that, if the value of the disturbance component can be identified at two points on the straight line corresponding to the light cutting line L a , it becomes possible to estimate the value of the disturbance component at locations other than the intersections along the light cutting line L a .
- note that FIG. 21 illustrates a case where a parallel movement in the Z-axis direction occurs as the disturbance between two consecutive captured images; however, the following description is not limited to the case illustrated in FIG. 21, and can be similarly applied to the cases where rotation about the L axis or rotation about the C axis occurs.
- at the imaging time of the i-th captured image, the apparent Z coordinate containing the disturbance component at the intersection A between the light cutting line L a and the light cutting line L b is expressed as Z b (i), and the apparent Z coordinate containing the disturbance component at the intersection B between the light cutting line L a and the light cutting line L c is expressed as Z c (i).
- the surface height in the Z coordinate that would be observed if no disturbance occurred up to the i-th image, taking the time when the first captured image is captured as the reference (that is, the Z coordinate after disturbance removal), is expressed as Z b t (i) for intersection A and Z c t (i) for intersection B.
- then, the difference between the apparent surface height Z b (i) at intersection A in the Z coordinate and the surface height Z b t (i) after disturbance removal in the Z coordinate is defined as the Z-coordinate change ΔZ b (i) due to the disturbance component.
- likewise, the difference between the apparent surface height Z c (i) at intersection B in the Z coordinate and the surface height Z c t (i) after disturbance removal in the Z coordinate is defined as the Z-coordinate change ΔZ c (i) due to the disturbance component.
- the intersection position disturbance estimation unit 223 considers, as shown in FIG. 22, how the magnitude of the disturbance component d b (i, X b ) is distributed along the X b direction.
- first, the intersection position disturbance estimation unit 223 linearly approximates the distribution of the disturbance component d b (i, X b ) along the X b direction by a known statistical process such as the least-squares method. Thereafter, using the X b coordinate of intersection A and the calculated approximate straight line, the intersection position disturbance estimation unit 223 obtains the disturbance component d b (i, A) (unit: pixel), which is the magnitude of the disturbance component at intersection A.
- next, the intersection position disturbance estimation unit 223 uses the calibration curve C b , which is the first calibration data, to convert the magnitude of the disturbance component d b (i, A) (unit: pixel) into the disturbance component ΔZ b (i) (unit: mm) in the Z coordinate.
- here, the calibration curve C b is, in general, a curve, and, as mentioned above, the disturbance component d b (i, A) is based on the first captured image. Specifically, in order to obtain ΔZ b (i) by applying the calibration curve C b as shown in FIG. 23, it is necessary to perform the conversion from pixel units to mm units at two points on the calibration curve and then take the difference in the Z coordinate.
- that is, as shown in FIG. 23, the intersection position disturbance estimation unit 223 calculates the apparent surface height Z b (i) at intersection A in the Z coordinate using the apparent height F obs b (i, A) of intersection A and the calibration curve C b . Similarly, the intersection position disturbance estimation unit 223 calculates the surface height Z b t (i) after disturbance removal in the Z coordinate for the i-th image using the surface height H b (i, A) after disturbance removal and the calibration curve C b .
- intersection position disturbance estimation unit 223 calculates the disturbance component ⁇ Z b (i) in the Z coordinate at the intersection A by calculating the difference between the two obtained surface heights. Further, the intersection position disturbance estimation unit 223 also calculates the disturbance component ⁇ Z c (i) in the Z coordinate at the intersection B in exactly the same manner.
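The two-point conversion described above can be sketched as follows. The quadratic stand-in for the calibration curve C b and all numeric values are assumptions for illustration; the point is that, because the curve is generally nonlinear, the pixel-valued disturbance must not be scaled directly: both heights are converted to mm first, and the difference is then taken in the Z coordinate.

```python
def calibration_curve_b(pixels):
    # Assumed nonlinear stand-in for C b (pixel height -> Z in mm).
    return 0.5 * pixels + 0.001 * pixels ** 2

def delta_z_at_intersection(f_obs_at_a, d_at_a):
    h_at_a = f_obs_at_a - d_at_a                   # H b (i, A), pixels
    z_apparent = calibration_curve_b(f_obs_at_a)   # Z b (i), mm
    z_removed = calibration_curve_b(h_at_a)        # Z b t (i), mm
    return z_apparent - z_removed                  # Delta Z b (i), mm

print(delta_z_at_intersection(100.0, 10.0))
```

With a strictly linear curve the two approaches would coincide; the two-point form stays correct either way.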
- intersection position disturbance estimation unit 223 outputs the information regarding the magnitude of the disturbance component at the intersections A and B calculated in this manner to the correction unit 217.
- the disturbance estimation process performed by the disturbance estimation unit 213 has been described in detail above with reference to FIGS.
- next, the shape data calculation processing in the shape data calculation unit 215 will be described in detail with reference to FIG. 24. FIG. 24 illustrates the case where rotation about the L axis occurs as the disturbance, but the following description is not limited to the case illustrated in FIG. 24.
- the shape data calculation unit 215 first refers to the captured image data captured by the area camera 111 and output from the imaging data acquisition unit 211, and, as illustrated in FIG. 24, identifies the apparent height F obs a (i, X a ) (unit: pixel) of the light cutting line L a in the i-th captured image.
- here, the apparent height is the position in the height direction at which the light cutting line L a is captured, and the X coordinate X a related to the light cutting line L a can be defined along the extending direction of the light cutting line L a with the left edge of the captured image as the reference.
- thereafter, the shape data calculation unit 215 converts the apparent height F obs a (i, X a ) (unit: pixel) obtained from the i-th captured image into the apparent height Z(i, X a ) in the Z coordinate (in a unit of length such as mm), using the calibration curve C a , which is the first calibration data stored in the storage unit 207.
- the apparent height Z (i, X a ) calculated in this way is a value on which a change in the Z coordinate (that is, a measurement error) caused by a disturbance is superimposed.
- the shape data calculation unit 215 outputs information regarding the apparent height Z (i, X a ) in the Z coordinate calculated in this way to the correction unit 217 described later.
- the correction unit 217 performs a correction process using the shape data including the measurement error calculated by the shape data calculation unit 215 (the apparent height Z(i, X a ) in the Z coordinate) and the disturbance components calculated by the disturbance estimation unit 213 (the disturbance components ΔZ b (i) and ΔZ c (i) in the Z coordinate), and calculates the true surface height of the rigid body S, which is the measurement object. By repeating this correction process for all images captured by the area camera 111 and superposing the results in the longitudinal direction, the true surface height of the entire rigid body S can be calculated.
- the correction unit 217 first uses the disturbance components ΔZ b (i) and ΔZ c (i) in the Z coordinate at intersection A and intersection B calculated by the disturbance estimation unit 213 to obtain a straight line as shown in FIG. As mentioned before, because the measurement object is a rigid body, the disturbance component ΔZ(i, X a ) in the Z coordinate along the light cutting line L a is a linear function of (that is, a straight line in) the coordinate X a . Therefore, by calculating the straight line connecting the disturbance components ΔZ b (i) and ΔZ c (i) in the Z coordinate at intersection A and intersection B, the disturbance component ΔZ(i, X a ) in the Z coordinate along the light cutting line L a can be specified.
- thereafter, the correction unit 217 calculates the true surface height Z out (i, X a ) in the Z coordinate by subtracting the change in the Z coordinate due to the disturbance (that is, the disturbance component ΔZ(i, X a )) from Z(i, X a ) obtained by the shape data calculation unit 215.
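The correction step can be sketched as follows (a hedged example with made-up coordinates and disturbance values; an L-axis rotation appears as a linear tilt that the subtraction removes):

```python
import numpy as np

# The disturbance along the shape-measuring line L a is the straight line
# through its values at intersections A and B (rigid-body assumption).
# Subtracting that line from the apparent height gives the true profile.
def corrected_profile(z_apparent, x_a, x_at_a, x_at_b, dz_a, dz_b):
    slope = (dz_b - dz_a) / (x_at_b - x_at_a)
    dz = dz_a + slope * (x_a - x_at_a)   # Delta Z(i, X a) along the line
    return z_apparent - dz               # Z out (i, X a)

x = np.linspace(0.0, 10.0, 11)           # X a positions; A at 0, B at 10
z_app = 100.0 + 0.2 * x                  # apparent height tilted by rotation
out = corrected_profile(z_app, x, 0.0, 10.0, 2.0, 4.0)
print(out)  # flat profile of 98.0 once the tilt is removed
```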
- correction processing performed by the correction unit 217 according to the present embodiment has been described above with reference to FIGS. 25 and 26.
- each component described above may be configured using a general-purpose member or circuit, or may be configured by hardware specialized for the function of each component.
- the CPU or the like may perform all functions of each component. Therefore, it is possible to appropriately change the configuration to be used according to the technical level at the time of carrying out the present embodiment.
- a computer program for realizing each function of the arithmetic processing apparatus according to the present embodiment as described above can be produced and mounted on a personal computer or the like.
- a computer-readable recording medium storing such a computer program can be provided.
- the recording medium is, for example, a magnetic disk, an optical disk, a magneto-optical disk, a flash memory, or the like.
- the above computer program may be distributed via a network, for example, without using a recording medium.
- FIGS. 27 and 28 are explanatory diagrams schematically showing a modification of the imaging device according to the present embodiment.
- the imaging apparatus 100 is described with the two area cameras 111 and 113 provided, but the configuration of the imaging apparatus 100 according to the present embodiment is not limited to such an example. .
- it is also possible to use three area cameras in total, with the light cutting line L b imaged by the area camera 115 and the light cutting line L c imaged by the area camera 117, together with the area camera 111.
- as in the case where the two area cameras 111 and 113 are used as the imaging apparatus 100, and as shown in FIG. 28, the area cameras 115 and 117 are installed so that the projection of the light cutting line L b onto the L-axis-C-axis plane is orthogonal to the optical axis of the area camera 115, and the projection of the light cutting line L c onto the L-axis-C-axis plane is orthogonal to the optical axis of the area camera 117.
- the imaging area AR3 of the area camera 115 and the imaging area AR4 of the area camera 117 include the intersection point A and the intersection point B in the imaging field, respectively, as in the case where the two area cameras 111 and 113 are used as the imaging device 100.
- angles ⁇ 4 and ⁇ 5 formed by the optical axis of each area camera and the Z axis are preferably set to, for example, about 30 to 60 degrees for the same reason as in the case of two area cameras.
- the angles ⁇ 4 and ⁇ 5 may be the same value or different values. In any case, it is possible to measure the desired shape by the same calculation process as when one area camera is used.
- FIGS. 27 and 28 show a case where the two area cameras 115 and 117 are arranged on one side in the width direction of the rigid body S; however, provided that the direction of the parallel translation is handled with care in the disturbance estimation unit 213, it is also possible to arrange the area camera 115 on the light cutting line L b side of the rigid body S and the area camera 117 on the light cutting line L c side of the rigid body S.
- FIG. 29A and FIG. 29B are flowcharts showing an example of the flow of the shape measuring method according to the present embodiment.
- first calibration data and the second calibration data are appropriately generated and stored in the storage unit 207 using various methods as described above.
- the imaging apparatus 100 of the shape measuring apparatus 10 images the measured rigid body S being transported by the area cameras 111 and 113 under the control of the imaging control unit 201 in the arithmetic processing apparatus 200. Then, N captured images are generated (step S101). Each time the area cameras 111 and 113 of the imaging apparatus 100 generate one captured image, the imaging data of the generated captured image is output to the arithmetic processing apparatus 200.
- when the imaging data acquisition unit 211 of the arithmetic processing device 200 acquires the imaging data from the imaging device 100, the imaging data acquisition unit 211 outputs the imaging data generated by the area camera 111 to the shape data calculation unit 215, and outputs the imaging data generated by the area camera 113 to the disturbance estimation unit 213.
- the disturbance estimation process in the disturbance estimation unit 213 and the shape data calculation process in the shape data calculation unit 215 may be performed in parallel, or, needless to say, the process in one of the processing units may be performed prior to the process in the other.
- the shape data calculation unit 215 uses the shape-measuring light cutting line (that is, the light cutting line L a ) and the calibration curve C a while referring to the i-th captured image by the method described above. Then, shape data in the real space (surface height in the Z coordinate) is calculated (step S107). When the shape data calculation unit 215 calculates shape data in real space for the i-th captured image, the shape data calculation unit 215 outputs information about the obtained shape data to the correction unit 217.
- on the other hand, the disturbance estimation unit 213 calculates the disturbance component of the common irradiated portion based on each correction light cutting line (that is, the light cutting lines L b and L c ) with reference to the i-th captured image by the method described above (step S109).
- the disturbance estimation unit 213 calculates an approximate straight line using the calculated disturbance component, and then calculates the disturbance component at the intersection A and the intersection B (step S111).
- the disturbance estimation unit 213 converts the disturbance components at the intersections A and B into quantities in real space using the calibration curves C b and C c (step S113).
- the disturbance estimation unit 213 outputs information regarding the magnitude of the disturbance component in the obtained real space to the correction unit 217.
- the correction unit 217 outputs the disturbance component at the position of the shape-measuring optical cutting line by the method described above based on the disturbance component in the real space of the intersection point A and the intersection point B output from the disturbance estimation unit 213. Calculate (step S115). Thereafter, the correction unit 217 calculates the true surface height by subtracting the disturbance component in the real space from the shape data in the real space output from the shape data calculation unit 215 (step S117).
- FIG. 30 is a block diagram for explaining a hardware configuration of the arithmetic processing device 200 according to the embodiment of the present invention.
- the arithmetic processing apparatus 200 mainly includes a CPU 901, a ROM 903, and a RAM 905.
- the arithmetic processing device 200 further includes a bus 907, an input device 909, an output device 911, a storage device 913, a drive 915, a connection port 917, and a communication device 919.
- the CPU 901 functions as an arithmetic processing unit and a control unit, and controls all or part of the operations in the arithmetic processing device 200 according to various programs recorded in the ROM 903, the RAM 905, the storage device 913, or the removable recording medium 921.
- the ROM 903 stores programs used by the CPU 901, calculation parameters, and the like.
- the RAM 905 primarily stores programs used by the CPU 901, parameters that change as appropriate during execution of the programs, and the like. These are connected to each other by a bus 907 constituted by an internal bus such as a CPU bus.
- the bus 907 is connected to an external bus such as a PCI (Peripheral Component Interconnect / Interface) bus via a bridge.
- the input device 909 is an operation means operated by the user such as a mouse, a keyboard, a touch panel, a button, a switch, and a lever.
- the input device 909 may be, for example, a remote control means (a so-called remote controller) using infrared rays or other radio waves, or may be an external connection device 923, such as a PDA, that supports the operation of the arithmetic processing device 200.
- the input device 909 includes, for example, an input control circuit that generates an input signal based on information input by a user using the operation unit and outputs the input signal to the CPU 901. By operating the input device 909, the user can input various data to the shape measuring apparatus 10 and instruct processing operations.
- the output device 911 is configured by a device that can notify the user of the acquired information visually or audibly.
- Such devices include display devices such as CRT display devices, liquid crystal display devices, plasma display devices, EL display devices and lamps, audio output devices such as speakers and headphones, printer devices, mobile phones, and facsimiles.
- the output device 911 outputs results obtained by various processes performed by the arithmetic processing device 200, for example. Specifically, the display device displays the results obtained by various processes performed by the arithmetic processing device 200 as text or images.
- the audio output device converts an audio signal composed of reproduced audio data, acoustic data, and the like into an analog signal and outputs the analog signal.
- the storage device 913 is a data storage device configured as an example of a storage unit of the arithmetic processing device 200.
- the storage device 913 includes, for example, a magnetic storage device such as an HDD (Hard Disk Drive), a semiconductor storage device, an optical storage device, or a magneto-optical storage device.
- the storage device 913 stores programs executed by the CPU 901, various data, various data acquired from the outside, and the like.
- the drive 915 is a recording medium reader / writer, and is built in or externally attached to the arithmetic processing unit 200.
- the drive 915 reads information recorded in a removable recording medium 921 such as a mounted magnetic disk, optical disk, magneto-optical disk, or semiconductor memory, and outputs the information to the RAM 905.
- the drive 915 can write a record in a removable recording medium 921 such as a mounted magnetic disk, optical disk, magneto-optical disk, or semiconductor memory.
- the removable recording medium 921 is, for example, a CD medium, a DVD medium, a Blu-ray (registered trademark) medium, or the like.
- the removable recording medium 921 may be a compact flash (registered trademark) (CompactFlash: CF), a flash memory, an SD memory card (Secure Digital memory card), or the like. Further, the removable recording medium 921 may be, for example, an IC card (Integrated Circuit card) on which a non-contact IC chip is mounted, an electronic device, or the like.
- The connection port 917 is a port for connecting a device directly to the arithmetic processing device 200.
- Examples of the connection port 917 include a USB (Universal Serial Bus) port, an IEEE 1394 port, a SCSI (Small Computer System Interface) port, and an RS-232C port.
- The communication device 919 is, for example, a communication interface configured with a communication device for connecting to the communication network 925.
- The communication device 919 is, for example, a communication card for a wired or wireless LAN (Local Area Network), Bluetooth (registered trademark), or WUSB (Wireless USB).
- The communication device 919 may also be a router for optical communication, a router for ADSL (Asymmetric Digital Subscriber Line), or a modem for various kinds of communication.
- The communication device 919 can transmit and receive signals and the like to and from the Internet and other communication devices in accordance with a predetermined protocol such as TCP/IP.
- The communication network 925 connected to the communication device 919 is configured as a wired or wireless network, and may be, for example, the Internet, a home LAN, an in-house LAN, infrared communication, radio communication, or satellite communication.
- Each of the components described above may be configured using general-purpose members, or may be configured with hardware specialized for the function of that component. The hardware configuration to be used can therefore be changed as appropriate according to the technical level at the time this embodiment is carried out.
- The shape measurement apparatus and shape measurement method according to the present invention will now be described concretely with reference to examples.
- The examples shown below are merely illustrations of the shape measurement apparatus and shape measurement method according to the present invention; the apparatus and method according to the present invention are not limited to these examples.
- In Examples 1 to 3 below, an aluminum plate whose surface is known to be flat was used as the rigid body S to be measured. The shape measurement apparatus used for the shape measurement was the shape measurement apparatus 10 according to the present embodiment, shown in FIGS. 1 and 2.
- In Example 1, one aluminum plate as described above was conveyed for 60 seconds at a constant speed of 5 mm/second, one image was captured with each of the two area cameras every 0.2 seconds, and 60 captured images were obtained with each area camera. The calibration curves Ca, Cb, and Cc and the horizontal movement amounts ΔLb and ΔLc were created in advance, and the obtained data were stored in the storage unit.
- In Example 2, a rotation about the L axis as shown in FIG. 32A (the rotation axis was the widthwise center position of the aluminum plate, and the positive direction of the rotation angle was clockwise facing the positive direction of the L axis) was applied as a disturbance while the aluminum plate was being conveyed.
- The positional relationship between the positions of the light-section lines and the rotation axis is as shown in FIG. 32B.
- It can be seen that the change in the Z-axis direction caused by the rotation about the L axis is superimposed on Z(i, Xa), so that the surface height of the corresponding portion is not flat. This result shows that Z(i, Xa) cannot represent the accurate surface height.
- In Example 3, a rotation about the C axis as shown in FIG. 33A (the rotation axis was the longitudinal center position of the aluminum plate, and the positive direction of the rotation angle was clockwise facing the positive direction of the C axis) was applied as a disturbance while the aluminum plate was being conveyed. The positional relationship between the positions of the light-section lines and the rotation axis is as shown in FIG. 33B. As a result, as shown in FIG. 33C, it can be seen that the change in the Z-axis direction caused by the rotation about the C axis is superimposed on Z(i, Xa), so that the surface height of the corresponding portion is not flat.
- Shape measurement apparatus: 100 Imaging apparatus; 101a, 101b, 101c Linear laser light source; 111, 113, 115, 117 Area camera; 200 Arithmetic processing device; 201 Imaging control unit; 203 Image processing unit; 205 Display control unit; 207 Storage unit; 211 Imaging data acquisition unit; 213 Disturbance estimation unit; 215 Shape data calculation unit; 217 Correction unit; 219 Result output unit; 221 Common irradiation portion disturbance estimation unit; 223 Intersection position disturbance estimation unit
Abstract
Description
In the following, the overall configuration of a shape measurement apparatus 10 according to an embodiment of the present invention will first be described with reference to FIG. 1. FIG. 1 is an explanatory diagram schematically showing the configuration of the shape measurement apparatus according to the present embodiment.
Next, the imaging apparatus 100 included in the shape measurement apparatus 10 according to the present embodiment will be described in detail with reference to FIGS. 2 to 7. FIGS. 2 to 7 are explanatory diagrams schematically showing the configuration of the imaging apparatus according to the present embodiment.
- The light-section line La and the light-section line Lb have an intersection point A.
- The light-section line La and the light-section line Lc have an intersection point B.
- The light-section lines Lb and Lc are both parallel to the L axis, and lie at mutually different widthwise positions on the surface of the rigid body S.
Next, the disturbances arising in the rigid body S to be measured, and the captured images (light-section images) taken under such disturbances, will be described concretely with reference to FIGS. 8 to 11. FIGS. 8 to 11 are schematic diagrams for explaining the disturbances that can arise in the rigid body to be measured.
(1) Translation in the Z-axis direction (the height direction of the rigid body S)
(2) Rotation about the L axis (the longitudinal direction of the rigid body S)
(3) Rotation about the C axis (the width direction of the rigid body S)
Hereinafter, these three causes of measurement error are collectively referred to as disturbances.
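As a rough numerical illustration of how these three disturbances perturb the measured surface height, the combined effect at a surface point can be written, for small angles, as the sum of the Z translation and the two tilt terms. This is a small-angle rigid-body sketch, not the formulation used in the patent; the position references, sign conventions, and tangent form are assumptions of this example:

```python
import math

def disturbance_height_change(l_pos_mm, c_pos_mm, dz_mm, rot_l_deg, rot_c_deg):
    """Illustrative small-angle rigid-body model of the three disturbances.

    dz_mm     : translation along the Z axis (height direction)
    rot_l_deg : rotation about the L axis (longitudinal direction),
                which tilts the surface across the width
    rot_c_deg : rotation about the C axis (width direction),
                which tilts the surface along the length
    l_pos_mm, c_pos_mm : position of the surface point, measured from the
                respective rotation axes (sign conventions are assumed here)
    """
    return (dz_mm
            + c_pos_mm * math.tan(math.radians(rot_l_deg))
            + l_pos_mm * math.tan(math.radians(rot_c_deg)))
```

Because all three terms add to the apparent height, a single height measurement cannot separate the true surface shape from the disturbance, which is what motivates the correction light-section lines below.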
Next, the arithmetic processing device 200 included in the shape measurement apparatus 10 according to the present embodiment will be described in detail with reference to FIG. 1 and FIGS. 12 to 26. FIG. 12 is a block diagram showing an example of the configuration of the image processing unit of the arithmetic processing device included in the shape measurement apparatus according to the present embodiment. FIGS. 14 and 15 and FIGS. 17 to 23 are explanatory diagrams for explaining the disturbance estimation process performed by the disturbance estimation unit according to the present embodiment. FIG. 16 is a block diagram showing an example of the configuration of the disturbance estimation unit included in the image processing unit according to the present embodiment. FIG. 24 is an explanatory diagram for explaining the shape data calculation process performed by the shape data calculation unit according to the present embodiment. FIGS. 25 and 26 are explanatory diagrams for explaining the correction process performed by the correction unit according to the present embodiment.
Returning to FIG. 1, the overall configuration of the arithmetic processing device 200 included in the shape measurement apparatus 10 according to the present embodiment will be described.
As shown in FIG. 1, the arithmetic processing device 200 according to the present embodiment mainly includes an imaging control unit 201, an image processing unit 203, a display control unit 205, and a storage unit 207.
Next, the configuration of the image processing unit 203 included in the arithmetic processing device 200 will be described with reference to FIGS. 12 to 26.
As shown in FIG. 12, the image processing unit 203 according to the present embodiment includes an imaging data acquisition unit 211, a disturbance estimation unit 213, a shape data calculation unit 215, a correction unit 217, and a result output unit 219.
In the following, the disturbance estimation process performed by the disturbance estimation unit 213 will be described in detail with reference to FIGS. 13 to 23.
Before describing the disturbance estimation process itself, the calibration data used in the process will be explained.
As mentioned earlier, the storage unit 207 stores in advance calibration data concerning the light-section lines L, which are used in the disturbance estimation process of the disturbance estimation unit 213 and in the shape calculation process of the shape data calculation unit 215. There are two kinds of calibration data stored in advance in the storage unit 207: first calibration data and second calibration data.
When the first calibration data are obtained by calculation, the focal length f of the lenses mounted on the area cameras 111 and 113, the distance a from the lens to the measurement object (that is, the rigid body S), and the distance b from the imaging element provided in each of the area cameras 111 and 113 to the lens are used. More specifically, using these parameters, the magnification m expressed by Equation 103 can be obtained from the imaging formula expressed by Equation 101 below, and the first calibration data can then be calculated.
Imaging formula: 1/f = 1/a + 1/b   (Equation 101)
Magnification: m = b/a   (Equation 103)
H = D/sin α   (Equation 107)
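A minimal sketch of how these first-calibration quantities can be computed follows. The expression for the imaging resolution D in terms of the magnification and a sensor pixel pitch is an assumed form of Equation 105, which is not reproduced in this text, and the interpretation of α as the angle between the camera's line of sight and the laser light plane is likewise an assumption of this example:

```python
import math

def magnification(a_mm: float, b_mm: float) -> float:
    # Thin-lens magnification m = b / a (Equation 103), with a the lens-to-object
    # distance and b the imaging-element-to-lens distance.
    return b_mm / a_mm

def imaging_resolution(pixel_pitch_mm: float, a_mm: float, b_mm: float) -> float:
    # Assumed form of Equation 105: one sensor pixel of pitch p covers
    # D = p / m on the object surface (mm per pixel).
    return pixel_pitch_mm / magnification(a_mm, b_mm)

def height_per_pixel(D_mm: float, alpha_deg: float) -> float:
    # First calibration datum H = D / sin(alpha) (Equation 107): real-space
    # height change corresponding to one pixel of vertical line displacement.
    return D_mm / math.sin(math.radians(alpha_deg))
```

For instance, with a = 1000 mm, b = 50 mm, and a 5 µm pixel pitch, D is 0.1 mm/pixel, and at α = 45° one pixel of vertical displacement corresponds to roughly 0.14 mm of surface height.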
The second calibration data represent the amount of horizontal movement (unit: pixels) within the image that corresponds to the conveyance distance of the rigid body S in real space (unit: a length unit such as mm or m) between the capture times of two consecutive images shown in FIG. 13. The second calibration data are set for each of the light-section lines Lb and Lc. As described later, by translating the captured image taken by the area camera 113 horizontally (in the direction corresponding to the L-axis direction in real space) by this amount of movement, the vertical movement of the same point on the rigid body S can be compared between two consecutive captured images. The second calibration data are thus calibration data used for estimating the magnitude of the disturbance.
As described above, the second calibration data indicate how many pixels in the generated captured images correspond to the conveyance distance Δs of the rigid body S in real space (Δs shown in FIG. 13) between the generation of two consecutive captured images. Accordingly, when the second calibration data are obtained by calculation, the imaging resolution D given by Equation 105 above is calculated for both of the light-section lines Lb and Lc, and the set value of the conveyance distance Δs in real space is divided by the obtained imaging resolutions Db and Dc. That is, when the horizontal movement amount for the light-section line Lb is denoted ΔLb and that for the light-section line Lc is denoted ΔLc, these values can be calculated by Equations 109 and 111 below.
ΔLb = Δs/Db   (Equation 109)
ΔLc = Δs/Dc   (Equation 111)
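A minimal sketch of computing the second calibration data and using them to align two consecutive frames follows. Rounding the shift to whole pixels, and the use of `numpy.roll` (which wraps around at the image edge), are simplifications of this illustration rather than the patent's procedure:

```python
import numpy as np

def pixel_shift(delta_s_mm: float, D_mm: float) -> float:
    # Second calibration data: ΔL = Δs / D (Equations 109 and 111), i.e. the
    # conveyance distance between two consecutive frames expressed in pixels.
    return delta_s_mm / D_mm

def align_previous_frame(prev_image: np.ndarray,
                         delta_s_mm: float, D_mm: float) -> np.ndarray:
    # Shift the previous frame horizontally by ΔL pixels so that the same point
    # on the rigid body S sits at the same X coordinate as in the current frame,
    # allowing its vertical (Y) movement between the frames to be compared.
    shift = int(round(pixel_shift(delta_s_mm, D_mm)))
    return np.roll(prev_image, shift, axis=1)
```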
Next, the coordinate systems used in the disturbance estimation process will be described concretely with reference to FIGS. 14 and 15.
In the disturbance estimation process performed by the disturbance estimation unit 213 according to the present embodiment, image processing is carried out using a coordinate system fixed to the captured image taken by the area camera 113. That is, in the light-section image generated by the area camera 113, the direction corresponding to the longitudinal direction of the rigid body S (that is, the horizontal direction of the light-section image) is taken as the X-axis direction, and the direction orthogonal to the X-axis direction (that is, the height direction of the light-section image) is taken as the Y-axis direction.
The disturbance estimation process performed by the disturbance estimation unit 213 will now be described in detail with reference to FIGS. 16 to 23.
The disturbance estimation unit 213 according to the present embodiment calculates, on the basis of the captured images of the light-section lines Lb and Lc taken by the area camera 113, the disturbance-induced height change values (that is, the amounts of change of the Z coordinate in real space) at the portions of the surface of the rigid body S lying on the light-section lines Lb and Lc.
In a light-section method such as that proposed in Patent Document 1, surface height measurements are performed at different times for a plurality of points at different longitudinal positions on a light-section line, and the differences between the surface height measurement results at each point (that is, the disturbance-induced changes) are used as they are in calculating the magnitude of the disturbance. In the light-section method implemented by the shape measurement apparatus 10 according to the present embodiment, however, the disturbance estimation process performed by the disturbance estimation unit 213 identifies, using a plurality of captured images taken at different times, the relationship between the longitudinal position of each point on the light-section line Lb (that is, the value of its Xb coordinate) and the disturbance-induced change of its Yb-coordinate value. The disturbance estimation unit 213 then approximates the distribution of the Yb-coordinate changes along the Xb direction by a straight line. By using this approximating line, the disturbance estimation unit 213 can accurately calculate the change of the Yb-coordinate value at the Xb coordinate corresponding to the intersection A shown in FIG. 2 while suppressing the scatter caused by measurement errors at the individual points on the light-section line Lb. The disturbance estimation unit 213 then uses the calibration curve Cb described earlier to convert the change of the Yb-coordinate value, expressed in pixels, into the change of the Z coordinate in real space (that is, the disturbance-induced height variation).
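The fit-and-convert step just described can be sketched as follows. Treating the calibration curve Cb as a single linear pixel-to-mm factor is a simplifying assumption of this example; the patent's calibration curve need not be linear:

```python
import numpy as np

def disturbance_at_intersection(x_coords, dy_pixels, x_intersection, mm_per_pixel):
    """Fit the disturbance-induced ΔY values along the correction line with a
    straight line (least squares, suppressing per-point measurement noise),
    evaluate the fit at the X coordinate of the intersection A, and convert
    the result from pixels to mm via an assumed linear calibration factor."""
    slope, intercept = np.polyfit(np.asarray(x_coords, dtype=float),
                                  np.asarray(dy_pixels, dtype=float), 1)
    dy_at_intersection = slope * x_intersection + intercept
    return dy_at_intersection * mm_per_pixel
```

Evaluating the fitted line rather than a single measured point is what makes the estimate at the intersection robust against the scatter of individual points.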
The common irradiation portion disturbance estimation unit 221 is realized by, for example, a CPU, a ROM, and a RAM. Of the processing briefly mentioned in the outline of the disturbance estimation process above, the common irradiation portion disturbance estimation unit 221 is the processing unit that identifies, using a plurality of captured images taken at different times, the relationship between the longitudinal positions of the points on the light-section lines Lb and Lc (that is, the values of their Xb and Xc coordinates) and the disturbance-induced changes of their Yb- and Yc-coordinate values.
... (Equation 125)
... (Equation 129)
The intersection position disturbance estimation unit 223 is realized by, for example, a CPU, a ROM, and a RAM. Of the processing briefly mentioned in the outline of the disturbance estimation process above, the intersection position disturbance estimation unit 223 is the processing unit that approximates, for the light-section line Lb, the distribution of the Yb-coordinate changes along the Xb direction by a straight line, likewise approximates, for the light-section line Lc, the distribution of the Yc-coordinate changes along the Xc direction by a straight line, and thereby estimates the magnitudes of the disturbance at the positions of the intersections A and B.
More specifically, it is the processing unit that calculates each of the following:
- the change ΔZb(i) (unit: mm) of the Z coordinate caused by the disturbance component at the intersection A; and
- the change ΔZc(i) (unit: mm) of the Z coordinate caused by the disturbance component at the intersection B.
ΔZc(i) = Zc(i) - Zc_t(i)   (Equation 133)
Next, the shape data calculation process performed by the shape data calculation unit 215 will be described in detail with reference to FIG. 24. Although FIG. 24 illustrates a case in which rotation about the L axis occurs as the disturbance, the following description, like the preceding ones, is not limited to the case shown in FIG. 24.
Next, the correction process performed by the correction unit 217 will be described in detail with reference to FIGS. 25 and 26.
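In outline, the correction combines the two intersection estimates ΔZb and ΔZc into a height-variation profile along the shape-measurement light-section line and subtracts that profile from the shape data. The following minimal sketch follows that reading; the variable names are illustrative, and the linear interpolation between the two intersections reflects the straight-line estimate described above:

```python
def correct_shape_profile(x_a_coords, z_apparent, x_A, dZ_A, x_B, dZ_B):
    """Estimate the disturbance-induced height variation along the
    shape-measurement light-section line as the straight line through
    (x_A, ΔZb) and (x_B, ΔZc), then subtract it from the apparent heights."""
    slope = (dZ_B - dZ_A) / (x_B - x_A)
    return [z - (dZ_A + slope * (x - x_A))
            for x, z in zip(x_a_coords, z_apparent)]
```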
Next, modifications of the imaging apparatus 100 according to the present embodiment will be described briefly with reference to FIGS. 27 and 28. FIGS. 27 and 28 are explanatory diagrams schematically showing modifications of the imaging apparatus according to the present embodiment.
Next, the flow of the shape measurement method performed by the shape measurement apparatus 10 according to the present embodiment will be described briefly with reference to FIGS. 29A and 29B. FIGS. 29A and 29B are flowcharts showing an example of the flow of the shape measurement method according to the present embodiment.
Next, the hardware configuration of the arithmetic processing device 200 according to the embodiment of the present invention will be described in detail with reference to FIG. 30. FIG. 30 is a block diagram for explaining the hardware configuration of the arithmetic processing device 200 according to the embodiment of the present invention.
In Example 1, a translation in the Z direction as shown in FIG. 31A was applied as a disturbance while the aluminum plate was being conveyed. The positions of the light-section lines are as shown in FIG. 31B. As a result, as shown in FIG. 31C, the change in the Z-axis direction caused by the disturbance is superimposed on Z(i, Xa), and the surface height of the corresponding portion is not flat. This result shows that Z(i, Xa) does not represent the accurate surface height. On the other hand, as shown in FIG. 31D, Zout(i, Xa) (i = 1, 2, ..., 60) is flat, confirming that the accurate surface height was measured.
In Example 2, a rotation about the L axis as shown in FIG. 32A (the rotation axis was the widthwise center position of the aluminum plate, and the positive direction of the rotation angle was clockwise facing the positive direction of the L axis) was applied as a disturbance while the aluminum plate was being conveyed. The positional relationship between the light-section lines and the rotation axis is as shown in FIG. 32B. As a result, as shown in FIG. 32C, the change in the Z-axis direction caused by the rotation about the L axis is superimposed on Z(i, Xa), and the surface height of the corresponding portion is not flat. This result shows that Z(i, Xa) does not represent the accurate surface height. On the other hand, as shown in FIG. 32D, Zout(i, Xa) (i = 1, 2, ..., 60) is flat, confirming that the accurate surface height was measured.
In Example 3, a rotation about the C axis as shown in FIG. 33A (the rotation axis was the longitudinal center position of the aluminum plate, and the positive direction of the rotation angle was clockwise facing the positive direction of the C axis) was applied as a disturbance while the aluminum plate was being conveyed. The positional relationship between the light-section lines and the rotation axis is as shown in FIG. 33B. As a result, as shown in FIG. 33C, the change in the Z-axis direction caused by the rotation about the C axis is superimposed on Z(i, Xa), and the surface height of the corresponding portion is not flat. This result shows that Z(i, Xa) does not represent the accurate surface height. On the other hand, as shown in FIG. 33D, Zout(i, Xa) (i = 1, 2, ..., 60) is flat, confirming that the accurate surface height was measured.
100 Imaging apparatus
101a, 101b, 101c Linear laser light source
111, 113, 115, 117 Area camera
200 Arithmetic processing device
201 Imaging control unit
203 Image processing unit
205 Display control unit
207 Storage unit
211 Imaging data acquisition unit
213 Disturbance estimation unit
215 Shape data calculation unit
217 Correction unit
219 Result output unit
221 Common irradiation portion disturbance estimation unit
223 Intersection position disturbance estimation unit
Claims (16)
- A shape measurement apparatus that measures the shape of a rigid body to be measured by means of a plurality of light-section lines formed by a plurality of linear laser beams radiated onto the surface of the rigid body from a plurality of linear laser light sources moving relative to the rigid body along its longitudinal direction, the apparatus comprising:
an imaging apparatus that radiates three of the linear laser beams onto the surface of the rigid body moving relatively along the longitudinal direction, and images the light of the three linear laser beams reflected from the surface of the rigid body at predetermined longitudinal intervals; and
an arithmetic processing device that performs image processing on the captured images of the light-section lines taken by the imaging apparatus and calculates the surface shape of the rigid body,
wherein the imaging apparatus includes:
a first linear laser light source that emits a shape-measurement light-section line, which is a light-section line extending in the width direction of the rigid body and used for calculating the surface shape of the rigid body;
a second linear laser light source that emits a first correction light-section line, which is parallel to the longitudinal direction of the rigid body, intersects the shape-measurement light-section line, and is used for correcting the influence of a disturbance acting on the rigid body;
a third linear laser light source that emits a second correction light-section line, which is parallel to the longitudinal direction of the rigid body, intersects the shape-measurement light-section line, lies at a widthwise position of the rigid body different from that of the first correction light-section line, and is used for correcting the influence of a disturbance acting on the rigid body;
a first camera that images the shape-measurement light-section line at times corresponding to the predetermined longitudinal intervals and generates a captured image of the shape-measurement light-section line at each of those times; and
a second camera that images the correction light-section lines at times corresponding to the predetermined longitudinal intervals and generates a captured image of the correction light-section lines at each of those times,
and wherein the arithmetic processing device includes:
a shape data calculation unit that calculates, on the basis of the captured images of the shape-measurement light-section line generated by the first camera at the respective times, shape data that represent the three-dimensional shape of the surface of the rigid body and on which a measurement error caused by the disturbance is superimposed;
a disturbance estimation unit that performs a height-change-value acquisition process, in which a disturbance-induced height change value at a given position is acquired from height measurement values of the surface height of the rigid body obtained at two different times for that same position, on a plurality of points at different longitudinal positions of the first correction light-section line using the captured images of the first correction light-section line, and also on a plurality of points at different longitudinal positions of the second correction light-section line using the captured images of the second correction light-section line, and that estimates the disturbance-induced height variation superimposed on the shape data by using the plurality of disturbance-induced height change values obtained from the captured images of the first correction light-section line and the plurality of disturbance-induced height change values obtained from the captured images of the second correction light-section line; and
a correction unit that corrects the measurement error caused by the disturbance by subtracting the height variation from the shape data.
- The shape measurement apparatus according to claim 1, wherein the disturbance estimation unit:
approximates the disturbance-induced height change values at the plurality of points on the first correction light-section line by a straight line and estimates the disturbance-induced height change value at the intersection of that line with the shape-measurement light-section line;
approximates the disturbance-induced height change values at the plurality of points on the second correction light-section line by a straight line and estimates the disturbance-induced height change value at the intersection of that line with the shape-measurement light-section line; and
estimates the height variation by the straight line connecting the disturbance-induced height change values at the two intersections.
- The shape measurement apparatus according to claim 1 or 2, wherein the first camera and the second camera each perform imaging at times corresponding to the predetermined longitudinal intervals to generate N captured images (N being an integer of 2 or more), and
the disturbance estimation unit calculates the height variation on the assumption that no disturbance is present in the first captured image.
- The shape measurement apparatus according to any one of claims 1 to 3, wherein the imaging timings of the first camera and the second camera are controlled such that captured images of the second camera taken at mutually adjacent imaging times contain a common irradiation region, that is, a portion of the rigid body irradiated with the correction light-section line in both images, and
the disturbance estimation unit calculates the disturbance-induced height change values for the plurality of points corresponding to the common irradiation region of each of the first correction light-section line and the second correction light-section line.
- The shape measurement apparatus according to claim 4, wherein the disturbance estimation unit calculates the height change value in the (i+1)-th captured image of the second camera (i = 1, 2, ..., N-1) and the surface height after removal of that height change value, using the apparent surface height, including the height change value, obtained from the (i+1)-th captured image of the second camera, and the surface height obtained from the i-th captured image of the second camera after removal of the height change value in the common irradiation region of that image.
- The shape measurement apparatus according to claim 4 or 5, wherein the disturbance estimation unit calculates the height change value in the i-th captured image of the second camera (i = 2, ..., N) with reference to the first captured image of the second camera.
- The shape measurement apparatus according to any one of claims 1 to 6, wherein the first linear laser light source, the second linear laser light source, and the third linear laser light source are arranged such that the optical axis of each light source is perpendicular to the plane defined by the longitudinal direction and the width direction of the rigid body.
- The shape measurement apparatus according to any one of claims 1 to 7, wherein the angle between the optical axis of the first camera and the optical axis of the first linear laser light source, the angle between the line of sight of the second camera and the optical axis of the second linear laser light source, and the angle between the line of sight of the second camera and the optical axis of the third linear laser light source are each, independently of one another, not less than 30 degrees and not more than 60 degrees.
- A shape measurement method for measuring the shape of a rigid body to be measured by means of a plurality of light-section lines formed by a plurality of linear laser beams radiated onto the surface of the rigid body from a plurality of linear laser light sources moving relative to the rigid body along its longitudinal direction, the method comprising:
an imaging step of radiating three of the light-section lines onto the surface of the rigid body moving relatively along the longitudinal direction, and imaging the light of the three light-section lines reflected from the surface of the rigid body at predetermined longitudinal intervals, from an imaging apparatus having: a first linear laser light source that emits a shape-measurement light-section line, which is a light-section line extending in the width direction of the rigid body and used for calculating the surface shape of the rigid body; a second linear laser light source that emits a first correction light-section line, which is parallel to the longitudinal direction of the rigid body, intersects the shape-measurement light-section line, and is used for correcting the influence of a disturbance acting on the rigid body; a third linear laser light source that emits a second correction light-section line, which is parallel to the longitudinal direction of the rigid body, intersects the shape-measurement light-section line, lies at a widthwise position of the rigid body different from that of the first correction light-section line, and is used for correcting the influence of a disturbance acting on the rigid body; a first camera that images the shape-measurement light-section line at times corresponding to the predetermined longitudinal intervals and generates a captured image of the shape-measurement light-section line at each of those times; and a second camera that images the correction light-section lines at times corresponding to the predetermined longitudinal intervals and generates a captured image of the correction light-section lines at each of those times;
a shape data calculation step of calculating, on the basis of the captured images of the shape-measurement light-section line generated by the first camera at the respective times, shape data that represent the three-dimensional shape of the surface of the rigid body and on which a measurement error caused by the disturbance is superimposed;
a disturbance estimation step of performing a height-change-value acquisition process, in which a disturbance-induced height change value at a given position is acquired from height measurement values of the surface height of the rigid body obtained at two different times for that same position, on a plurality of points at different longitudinal positions of the first correction light-section line using the captured images of the first correction light-section line, and also on a plurality of points at different longitudinal positions of the second correction light-section line using the captured images of the second correction light-section line, and of estimating the disturbance-induced height variation superimposed on the shape data by using the plurality of disturbance-induced height change values obtained from the captured images of the first correction light-section line and the plurality of disturbance-induced height change values obtained from the captured images of the second correction light-section line; and
a correction step of correcting the measurement error caused by the disturbance by subtracting the height variation from the shape data.
- The shape measurement method according to claim 9, wherein, in the disturbance estimation step:
the disturbance-induced height change values at the plurality of points on the first correction light-section line are approximated by a straight line, whereby the disturbance-induced height change value at the intersection of that line with the shape-measurement light-section line is estimated;
the disturbance-induced height change values at the plurality of points on the second correction light-section line are approximated by a straight line, whereby the disturbance-induced height change value at the intersection of that line with the shape-measurement light-section line is estimated; and
the height variation is estimated by the straight line connecting the disturbance-induced height change values at the two intersections.
- The shape measurement method according to claim 9 or 10, wherein the first camera and the second camera each perform imaging at times corresponding to the predetermined longitudinal intervals to generate N captured images (N being an integer of 2 or more), and
in the disturbance estimation step, the height variation is calculated on the assumption that no disturbance is present in the first captured image.
- The shape measurement method according to any one of claims 9 to 11, wherein the imaging timings of the first camera and the second camera are controlled such that captured images of the second camera taken at mutually adjacent imaging times contain a common irradiation region, that is, a portion of the rigid body irradiated with the correction light-section line in both images, and
in the disturbance estimation step, the disturbance-induced height change values are calculated for the plurality of points corresponding to the common irradiation region of each of the first correction light-section line and the second correction light-section line.
- The shape measurement method according to claim 12, wherein, in the disturbance estimation step, the height change value in the (i+1)-th captured image of the second camera (i = 1, 2, ..., N-1) and the surface height after removal of that height change value are calculated using the apparent surface height, including the height change value, obtained from the (i+1)-th captured image of the second camera, and the surface height obtained from the i-th captured image of the second camera after removal of the height change value in the common irradiation region of that image.
- The shape measurement method according to claim 12 or 13, wherein, in the disturbance estimation step, the height change value in the i-th captured image of the second camera (i = 2, ..., N) is calculated with reference to the first captured image of the second camera.
- The shape measurement method according to any one of claims 9 to 14, wherein the first linear laser light source, the second linear laser light source, and the third linear laser light source are arranged such that the optical axis of each light source is perpendicular to the plane defined by the longitudinal direction and the width direction of the rigid body.
- The shape measurement method according to any one of claims 9 to 15, wherein the angle between the optical axis of the first camera and the optical axis of the first linear laser light source, the angle between the line of sight of the second camera and the optical axis of the second linear laser light source, and the angle between the line of sight of the second camera and the optical axis of the third linear laser light source are each, independently of one another, not less than 30 degrees and not more than 60 degrees.
Priority Applications (9)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
BR112017022305-8A BR112017022305B1 (pt) | 2015-04-22 | 2016-04-22 | Aparelho de medição de formato e método de medição de formato |
US15/567,933 US10451410B2 (en) | 2015-04-22 | 2016-04-22 | Shape measurement apparatus and shape measurement method |
JP2017514217A JP6380667B2 (ja) | 2015-04-22 | 2016-04-22 | 形状測定装置及び形状測定方法 |
KR1020177032769A KR101950634B1 (ko) | 2015-04-22 | 2016-04-22 | 형상 측정 장치 및 형상 측정 방법 |
ES16783284T ES2743217T3 (es) | 2015-04-22 | 2016-04-22 | Aparato de medición de forma y método de medición de forma |
EP16783284.9A EP3270104B8 (en) | 2015-04-22 | 2016-04-22 | Shape measuring apparatus and shape measuring method |
PL16783284T PL3270104T3 (pl) | 2015-04-22 | 2016-04-22 | Urządzenie do pomiaru kształtu i sposób pomiaru kształtu |
CN201680036936.7A CN107735646B (zh) | 2015-04-22 | 2016-04-22 | 形状测定装置以及形状测定方法 |
CA2981970A CA2981970C (en) | 2015-04-22 | 2016-04-22 | Shape measurement apparatus and shape measurement method |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015087517 | 2015-04-22 | ||
JP2015-087517 | 2015-04-22 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016171263A1 true WO2016171263A1 (ja) | 2016-10-27 |
Family
ID=57143946
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/062801 WO2016171263A1 (ja) | 2015-04-22 | 2016-04-22 | 形状測定装置及び形状測定方法 |
Country Status (10)
Country | Link |
---|---|
US (1) | US10451410B2 (ja) |
EP (1) | EP3270104B8 (ja) |
JP (1) | JP6380667B2 (ja) |
KR (1) | KR101950634B1 (ja) |
CN (1) | CN107735646B (ja) |
BR (1) | BR112017022305B1 (ja) |
CA (1) | CA2981970C (ja) |
ES (1) | ES2743217T3 (ja) |
PL (1) | PL3270104T3 (ja) |
WO (1) | WO2016171263A1 (ja) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018119851A (ja) * | 2017-01-25 | 2018-08-02 | 東芝三菱電機産業システム株式会社 | 平坦度計測装置 |
KR102044852B1 (ko) * | 2018-06-29 | 2019-11-13 | 대한민국(농촌진흥청장) | 젖소 유두 자동인식장치 및 방법 |
JP2020190458A (ja) * | 2019-05-21 | 2020-11-26 | 株式会社小野測器 | 速度計測装置 |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP7027049B2 (ja) * | 2017-06-15 | 2022-03-01 | キヤノン株式会社 | 画像処理装置、画像処理方法及びプログラム |
US10890441B2 (en) * | 2017-11-27 | 2021-01-12 | Nippon Steel Corporation | Shape inspection apparatus and shape inspection method |
JP2019168315A (ja) * | 2018-03-23 | 2019-10-03 | 三菱電機株式会社 | 測定装置、回路基板、表示装置、および測定方法 |
KR102048364B1 (ko) * | 2018-04-13 | 2019-11-25 | 엘지전자 주식회사 | 로봇 청소기 |
EP3713767B1 (en) * | 2018-12-20 | 2023-11-01 | Kornit Digital Ltd. | Printing head height control |
CN111366065B (zh) * | 2020-02-28 | 2021-11-05 | 深圳冰河导航科技有限公司 | 一种平地机的自动校准方法 |
CN112648981B (zh) * | 2020-12-04 | 2023-01-13 | 中国航空工业集团公司成都飞机设计研究所 | 一种基于激光定位的旋转机构运动过程摆动量测量方法 |
CN112747679A (zh) * | 2020-12-23 | 2021-05-04 | 河南中原光电测控技术有限公司 | 测宽设备、测宽方法、存储有测宽程序的计算机可读介质 |
KR102413483B1 (ko) * | 2021-07-28 | 2022-06-28 | 주식회사 프로시스템 | 3차원 곡면 형상 검사 장치 및 3차원 곡면 형상 검사 방법 |
CN115889974A (zh) * | 2022-12-02 | 2023-04-04 | 无锡奥特维光学应用有限公司 | 一种接线盒激光焊接装置及焊接方法 |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012032271A (ja) * | 2010-07-30 | 2012-02-16 | Kobe Steel Ltd | 測定装置 |
JP2013221799A (ja) * | 2012-04-13 | 2013-10-28 | Nippon Steel & Sumitomo Metal | 形状計測装置及び形状計測方法 |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10160437A (ja) * | 1996-12-03 | 1998-06-19 | Bridgestone Corp | タイヤの外形状判定方法及び装置 |
KR100552469B1 (ko) | 2003-01-13 | 2006-02-15 | 삼성전자주식회사 | 위상차 제거기능을 갖는 트랙에러검출장치 및 그의위상차제거방법 |
DE102007054906B4 (de) | 2007-11-15 | 2011-07-28 | Sirona Dental Systems GmbH, 64625 | Verfahren zur optischen Vermessung der dreidimensionalen Geometrie von Objekten |
JP5180608B2 (ja) * | 2008-01-30 | 2013-04-10 | 株式会社日立ハイテクノロジーズ | ディスク表面の欠陥検査方法及び欠陥検査装置 |
DE102008048963B4 (de) | 2008-09-25 | 2011-08-25 | Technische Universität Braunschweig Carolo-Wilhelmina, 38106 | 3D-Geometrie-Erfassungsverfahren und -vorrichtung |
JP2011047857A (ja) | 2009-08-28 | 2011-03-10 | Toyota Motor Corp | 三次元形状計測方法 |
JP4666272B1 (ja) * | 2009-10-19 | 2011-04-06 | 住友金属工業株式会社 | 板材の平坦度測定方法及びこれを用いた鋼板の製造方法 |
US9116504B2 (en) * | 2010-09-07 | 2015-08-25 | Dai Nippon Printing Co., Ltd. | Scanner device and device for measuring three-dimensional shape of object |
CN102353684B (zh) * | 2011-06-23 | 2013-10-30 | 南京林业大学 | 基于双激光三角法的激光肉图像采集方法 |
WO2015133287A1 (ja) * | 2014-03-07 | 2015-09-11 | 新日鐵住金株式会社 | 表面性状指標化装置、表面性状指標化方法及びプログラム |
JP6482196B2 (ja) * | 2014-07-09 | 2019-03-13 | キヤノン株式会社 | 画像処理装置、その制御方法、プログラム、及び記憶媒体 |
CN105302151B (zh) * | 2014-08-01 | 2018-07-13 | 深圳中集天达空港设备有限公司 | 一种飞机入坞引导和机型识别的系统及方法 |
WO2016098400A1 (ja) * | 2014-12-15 | 2016-06-23 | ソニー株式会社 | 撮像装置組立体、3次元形状測定装置及び動き検出装置 |
JP6478713B2 (ja) * | 2015-03-04 | 2019-03-06 | キヤノン株式会社 | 計測装置および計測方法 |
JP6061059B1 (ja) * | 2015-05-29 | 2017-01-18 | 新日鐵住金株式会社 | 金属体の形状検査装置及び金属体の形状検査方法 |
KR102044196B1 (ko) * | 2016-07-19 | 2019-11-13 | 닛폰세이테츠 가부시키가이샤 | 조도 측정 장치 및 조도 측정 방법 |
- 2016
- 2016-04-22 WO PCT/JP2016/062801 patent/WO2016171263A1/ja active Application Filing
- 2016-04-22 CA CA2981970A patent/CA2981970C/en not_active Expired - Fee Related
- 2016-04-22 PL PL16783284T patent/PL3270104T3/pl unknown
- 2016-04-22 JP JP2017514217A patent/JP6380667B2/ja active Active
- 2016-04-22 CN CN201680036936.7A patent/CN107735646B/zh active Active
- 2016-04-22 EP EP16783284.9A patent/EP3270104B8/en active Active
- 2016-04-22 BR BR112017022305-8A patent/BR112017022305B1/pt active IP Right Grant
- 2016-04-22 ES ES16783284T patent/ES2743217T3/es active Active
- 2016-04-22 KR KR1020177032769A patent/KR101950634B1/ko active IP Right Grant
- 2016-04-22 US US15/567,933 patent/US10451410B2/en active Active
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012032271A (ja) * | 2010-07-30 | 2012-02-16 | Kobe Steel Ltd | 測定装置 |
JP2013221799A (ja) * | 2012-04-13 | 2013-10-28 | Nippon Steel & Sumitomo Metal | 形状計測装置及び形状計測方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP3270104A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2018119851A (ja) * | 2017-01-25 | 2018-08-02 | 東芝三菱電機産業システム株式会社 | 平坦度計測装置 |
KR102044852B1 (ko) * | 2018-06-29 | 2019-11-13 | 대한민국(농촌진흥청장) | 젖소 유두 자동인식장치 및 방법 |
JP2020190458A (ja) * | 2019-05-21 | 2020-11-26 | 株式会社小野測器 | 速度計測装置 |
JP7267097B2 (ja) | 2019-05-21 | 2023-05-01 | 株式会社小野測器 | 速度計測装置 |
Also Published As
Publication number | Publication date |
---|---|
BR112017022305A2 (ja) | 2018-07-10 |
CA2981970C (en) | 2019-08-06 |
JPWO2016171263A1 (ja) | 2018-02-08 |
EP3270104B8 (en) | 2019-07-17 |
JP6380667B2 (ja) | 2018-08-29 |
US20180106608A1 (en) | 2018-04-19 |
US10451410B2 (en) | 2019-10-22 |
CN107735646A (zh) | 2018-02-23 |
EP3270104A1 (en) | 2018-01-17 |
BR112017022305B1 (pt) | 2022-08-09 |
KR101950634B1 (ko) | 2019-02-20 |
KR20170136618A (ko) | 2017-12-11 |
EP3270104B1 (en) | 2019-06-12 |
PL3270104T3 (pl) | 2019-12-31 |
CN107735646B (zh) | 2019-12-17 |
ES2743217T3 (es) | 2020-02-18 |
EP3270104A4 (en) | 2018-08-22 |
CA2981970A1 (en) | 2016-10-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6380667B2 (ja) | 形状測定装置及び形状測定方法 | |
KR101257188B1 (ko) | 3차원 형상 계측 장치, 3차원 형상 계측 방법 및 3차원 형상 계측 프로그램을 기록한 기록 매체 | |
JP5857858B2 (ja) | 形状計測装置及び形状計測方法 | |
JP2007114071A (ja) | 三次元形状計測装置、プログラム、コンピュータ読み取り可能な記録媒体、及び三次元形状計測方法 | |
JP2014028415A (ja) | バラ積みされた物品をロボットで取出す装置 | |
TWI493153B (zh) | 非接觸式物件空間資訊量測裝置與方法及取像路徑的計算方法 | |
JP2011203108A (ja) | 3次元距離計測装置及びその方法 | |
JP4058421B2 (ja) | 振動計測装置及びその計測方法 | |
JP5383853B2 (ja) | 工具形状測定装置、及び工具形状測定方法 | |
JP2010276447A (ja) | 位置計測装置、位置計測方法およびロボットシステム | |
JP2006284531A (ja) | 工具形状測定装置、及び工具形状測定方法 | |
JP2007093412A (ja) | 3次元形状測定装置 | |
JP5494234B2 (ja) | 三次元形状計測装置、キャリブレーション方法、およびロボット | |
KR102177726B1 (ko) | 가공품 검사 장치 및 검사 방법 | |
JP2004309318A (ja) | 位置検出方法、その装置及びそのプログラム、並びに、較正情報生成方法 | |
JP2007121124A (ja) | Ccdカメラ式3次元形状測定機の精度保証治具 | |
KR102196286B1 (ko) | 3차원 형상 계측 시스템 및 계측 시간 설정 방법 | |
JP2010025803A (ja) | 位置決め機能を有する検査装置、位置決め機能を有する検査装置用プログラム、位置決め機能を有する検査装置の検査方法 | |
KR20210000791A (ko) | 레이저를 이용한 가공품 검사 장치 및 검사 방법 | |
JP2016095243A (ja) | 計測装置、計測方法、および物品の製造方法 | |
JP6091092B2 (ja) | 画像処理装置、及び画像処理方法 | |
JP2006153654A (ja) | 三次元計測装置および三次元計測方法 | |
WO2021193236A1 (ja) | 画像処理装置及び画像処理方法 | |
JP2011220752A (ja) | 三次元形状計測装置および三次元形状計測装置のキャリブレーション方法 | |
JP2013007722A (ja) | 3次元計測装置およびその方法 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16783284 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 2017514217 Country of ref document: JP Kind code of ref document: A Ref document number: 2981970 Country of ref document: CA |
|
WWE | Wipo information: entry into national phase |
Ref document number: 15567933 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
REG | Reference to national code |
Ref country code: BR Ref legal event code: B01A Ref document number: 112017022305 Country of ref document: BR |
|
ENP | Entry into the national phase |
Ref document number: 20177032769 Country of ref document: KR Kind code of ref document: A |
|
ENP | Entry into the national phase |
Ref document number: 112017022305 Country of ref document: BR Kind code of ref document: A2 Effective date: 20171017 |