WO2021107027A1 - Shape restoration method and image measuring device - Google Patents
- Publication number
- WO2021107027A1 (PCT/JP2020/044058)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- measured
- light
- solid angle
- normal vector
- image
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01B—MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
- G01B11/00—Measuring arrangements characterised by the use of optical techniques
- G01B11/24—Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8806—Specially adapted optical and illumination features
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/9515—Objects of complex shape, e.g. examined with use of a surface follower device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8806—Specially adapted optical and illumination features
- G01N2021/8845—Multiple wavelengths of illumination or detection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8854—Grading and classifying of flaws
- G01N2021/8861—Determining coordinates of flaws
- G01N2021/8864—Mapping zones of defects
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8887—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges based on image processing techniques
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N2201/00—Features of devices classified in G01N21/00
- G01N2201/10—Scanning
- G01N2201/104—Mechano-optical scan, i.e. object and beam moving
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- the present invention relates to a shape restoration method and an image measuring device, and more particularly to a shape restoration method and an image measuring device capable of quickly restoring information of each point of the object to be measured in an image of the object to be measured.
- an image measuring device that irradiates an object to be measured with illumination light, processes the captured image, and restores the shape information of the object to be measured.
- an image measuring device that images an object to be measured with a telecentric imaging optical system and measures its shape corresponds to this. Since a telecentric imaging optical system has a deep depth of field, the image blurs little even if there is a step in the optical axis direction, so it is well suited to measuring the two-dimensional shape of the surface of the object to be measured. However, with a telecentric imaging optical system it is difficult to detect information in the height direction of the object to be measured, so it is not suitable for measuring the three-dimensional shape of the object to be measured.
- in Patent Document 1, by using a specific inspection lighting device, it is possible to obtain tilt information of each point of an object to be measured based on a single captured image.
- such systems are being developed; they make it possible to extract information on defects such as minute irregularities and foreign substances on the object to be measured.
- the present invention has been made to solve the above-mentioned conventional problems, and its object is to provide a shape restoration method and an image measuring device capable of quickly restoring information of each point of an object to be measured from an image of the object to be measured.
- the invention according to claim 1 of the present application is a shape restoration method that irradiates an object to be measured with illumination light and processes the captured image to restore the shape of the object to be measured. The above-mentioned problem is solved by including an illumination step of irradiating the object with illumination light whose specific irradiation solid angle is separated into a plurality of solid angle regions having different optical attributes, an imaging step of receiving the object light at a predetermined observation solid angle and capturing the image, a calculation step of obtaining the normal vector of each point from the inclusion relationship between the plurality of solid angle regions constituting the object light and the predetermined observation solid angle, based on the optical attributes identified in each pixel of the image, and a shape restoration step of restoring the shape from the normal vectors.
- in the invention according to claim 2 of the present application, the irradiation solid angle is the same for each point of the object to be measured.
- in the invention according to claim 3 of the present application, the plurality of solid angle regions are provided around the irradiation optical axis of the irradiation solid angle of the illumination light.
- in the invention according to claim 4 of the present application, the optical attribute is a wavelength region of light.
- the invention according to claim 5 of the present application has a pre-stage process before the illumination step; in the pre-stage process, the illumination step and the imaging step are performed on the object to be measured or on a specific jig used in its place, and a correspondence relationship generation step for obtaining the correspondence between the light attribute and the normal vector is further performed.
- in the invention according to claim 6 of the present application, the specific jig is a reference sphere or a reference plane.
- in the invention according to claim 7 of the present application, the correspondence relationship is configured as a correspondence table.
- in the invention according to claim 8 of the present application, the correspondence relationship is configured as a complementary function.
- in the invention according to claim 9 of the present application, the normal vector is normalized.
- after the imaging step, a rotation step of rotating the object to be measured around the observation optical axis by a predetermined angle is performed, and the calculation step is performed after the illumination step and the imaging step have been carried out a predetermined number of times.
- the invention according to claim 11 of the present application is an image measuring device that measures the shape of an object to be measured, including an illuminating device that irradiates the object to be measured with illumination light, an imaging device that images the object to be measured and outputs an image, and a processing device that processes the image. The illuminating device includes a light source unit that emits the illumination light, a lens unit that irradiates the object to be measured with the illumination light at a specific irradiation solid angle, and a filter unit between the light source unit and the lens unit that separates the inside of the specific irradiation solid angle into a plurality of solid angle regions having different light attributes. Each pixel of the imaging device can distinguish the different light attributes from one another, and the processing device obtains the normal vector of each point of the object to be measured from the object light.
- the filter unit is arranged in the vicinity of the focal length of the lens unit on the irradiation optical axis of the illumination light.
- the filter unit includes different filter regions around the irradiation optical axis so that the plurality of solid angle regions are provided around the irradiation optical axis of the illumination light.
- the invention according to claim 14 of the present application is such that the filter unit has different wavelength regions of light as the light attribute.
- the processing apparatus includes a storage unit that stores the correspondence between the optical attribute and the normal vector, and the calculation unit obtains the normal vector based on the correspondence.
- in the invention according to claim 16 of the present application, the processing apparatus normalizes the normal vector.
- in the invention according to claim 17 of the present application, a rotating table capable of rotating the object to be measured around the observation optical axis is provided.
- further, the processing device includes a matching determination unit that compares the normal vector of each point of the object to be measured stored in advance with the normal vector of each point of the newly imaged object to be measured, and extracts the portions where they differ.
- FIG. 1: Schematic diagram showing an image measuring apparatus according to the first embodiment of the present invention.
- Schematic diagrams of filter units and the corresponding irradiation solid angles: the irradiation solid angle (F) corresponding to the filter unit (B) having three filter regions around the irradiation optical axis; the irradiation solid angle (G) corresponding to the filter unit (C) having four concentric filter regions; and the irradiation solid angle (H) corresponding to the filter unit (D) having three concentric filter regions around the irradiation optical axis.
- Schematic diagram showing the relationship between the irradiation solid angle, the reflection solid angle, and the observation solid angle in the image measuring device: (A) when the normal vector of the surface of the object to be measured coincides with the observation optical axis, and (B) when the normal vector deviates from the observation optical axis.
- Comparative schematic diagram of the irradiation solid angle of conventional illumination light and the irradiation solid angle of the illumination light of the present embodiment.
- FIG. 7: Flow chart showing the procedure of shape restoration in the image measuring apparatus of FIG. 1.
- FIG. 8: Flow charts showing the contents of the pre-stage process of FIG. 7 ((A) overview; (B) details of the pre-stage correspondence generation step).
- Schematic diagram showing the reference sphere used in the pre-stage process and the range of slopes of the obtained normal vectors.
- FIG. 10: Example of a correspondence table between the optical attributes and the normal vectors obtained in the pre-stage process.
- Flow chart showing the procedure of shape restoration in a further embodiment, and processing block diagram of the image measuring apparatus according to the fifth embodiment of the present invention.
- hereinafter, the first embodiment of the present invention will be described with reference to FIGS. 1 to 10.
- the present invention is not limited to the contents described in the following embodiments.
- the constituent requirements in the embodiments described below include those that can be easily assumed by those skilled in the art and those that are substantially the same, that is, those within a so-called equivalent range.
- the components disclosed in the embodiments described below may be appropriately combined or appropriately selected and used.
- the image measuring device 100 includes an illumination device 110 that irradiates the object W to be measured with illumination light, an image pickup device CM that receives the reflected light from the object W to be measured, captures an image of the object, and outputs it, a processing device 120 that processes the image, and a display device DD.
- the processing device 120 includes an image capture IMC and an image processing device IMP.
- the image measuring device 100 can irradiate the object W to be measured with illumination light, process the captured image, measure the shape of the object to be measured, and restore the shape of the object to be measured.
- the object W to be measured preferably has a surface close to a glossy surface, even if its surface shape is complicated.
- the illumination device 110 includes a light source unit 112 that emits illumination light, a filter unit 114, a lens unit 116 that irradiates the object W to be measured with the illumination light at a specific irradiation solid angle IS, and a half mirror 118.
- the light source unit 112 may be one in which one or more chip-type LEDs are arranged, an organic EL panel, or a light guide plate lit from the side.
- the light source unit 112 is movable along the irradiation optical axis L1.
- the filter unit 114 is between the light source unit 112 and the lens unit 116 and separates the specific irradiation solid angle IS into a plurality of solid angle regions IS1, IS2, and IS3 (see FIG. 3E) having different wavelength regions (optical attributes) of light: reference numeral R denotes a red wavelength region, reference numeral G a green wavelength region, and reference numeral B a blue wavelength region. So that the plurality of solid angle regions IS1, IS2, and IS3 are provided around the irradiation optical axis L1 of the irradiation solid angle IS of the illumination light, the filter unit 114 includes a diaphragm (opening radius R0) that limits the light emitted from the light source unit 112 and mutually different filter regions CF1, CF2, and CF3 arranged around the irradiation optical axis L1 inside the diaphragm.
- the filter regions CF1, CF2, and CF3 are fan-shaped at 120 degrees and are composed of red, green, and blue color filters, respectively.
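As a minimal sketch of this geometry (the boundary placement and the CF1=red, CF2=green, CF3=blue ordering are assumptions for illustration, not taken from the patent's figures), the mapping from a ray's azimuth around the irradiation optical axis L1 to the fan-shaped filter region it crosses could look like:

```python
def filter_region(azimuth_deg: float) -> str:
    """Return which fan-shaped colour filter (CF1=red, CF2=green, CF3=blue)
    a ray crosses, given its azimuth around the irradiation optical axis L1.
    Each region spans 120 degrees; the boundary placement is an assumption."""
    a = azimuth_deg % 360.0                       # wrap into [0, 360)
    return ("red", "green", "blue")[int(a // 120.0)]
```

Each object-light ray thus acquires a wavelength region determined solely by where it passes through the aperture, which is what later lets the observed colour balance encode the surface tilt.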
- the filter unit 114 is arranged in the vicinity of the focal length f of the lens unit 116 on the irradiation optical axis L1 of the illumination light. Further, the filter unit 114 is also movable along the irradiation optical axis L1.
- the filter unit 114 is an optical element in which a diaphragm (a light-shielding mask that blocks illumination light) and a filter that changes the wavelength region of light are combined into one; however, the present invention is not limited to this, and the diaphragm and the filter may be provided separately. Alternatively, a liquid crystal shutter or the like capable of electrically changing the transmittance and color may be used for the filter unit.
- the filter unit is of the transmissive type, but may be of the reflective type.
- the lens unit 116 irradiates the object W to be measured with the illumination light emitted from the light source unit 112 and passed through the filter unit 114 at a specific irradiation solid angle IS.
- the lens unit 116 is, for example, a refraction type lens, which may be a single lens, or may be composed of a plurality of lenses.
- the lens unit 116 is also movable along the irradiation optical axis L1.
- the half mirror 118 is arranged so that the irradiation optical axis L1 and the observation optical axis L2 are aligned with each other and the irradiation light is coaxial epi-illumination. Therefore, the irradiation solid angle IS and the observation solid angle DS are formed in the same direction as shown in FIGS. 4A and 4B.
- by adjusting the positions of the light source unit 112, the filter unit 114, and the lens unit 116 and by changing the filter regions of the filter unit 114, the wavelength regions of light can be changed arbitrarily, and an irradiation solid angle IS of arbitrary shape can be realized with respect to the object W to be measured. Further, by arranging the filter unit 114 in the vicinity of the focal length f of the lens unit 116, the irradiation light can be applied under the same conditions to every position in the entire field of view of the object W imaged by the image pickup apparatus CM.
- FIG. 5A shows the irradiation solid angles IS and IS' at different positions P and P' on the object W to be measured when it is irradiated with conventional general illumination LS. The shapes of the irradiation solid angles IS and IS' and the directions of their irradiation optical axes differ between the positions P and P'.
- with the illumination device 110 of the present embodiment, as shown in FIG. 5(B), the irradiation light can be applied under the same conditions at every position in the entire visual field range of the object W to be measured. That is, the irradiation solid angle IS is the same for each point of the object W to be measured. Therefore, the illumination device 110 of the present embodiment can extract minute changes that cannot be captured by conventional illumination.
- the image pickup device CM uses, for example, a telecentric image pickup optical system (an image pickup optical system with an AF function may also be used) to receive the object light from the object to be measured, generated by the illumination light of the illumination device 110, at a predetermined observation solid angle DS, and outputs a two-dimensional image as a color image.
- the image pickup device CM is, for example, a color CCD camera or a color CMOS camera, and each pixel of the image pickup device CM can distinguish the different optical attributes from one another. That is, in the present embodiment the different light attributes are the different wavelength regions R, G, and B of light; for example, each pixel consists of a set of pixel elements with red, green, and blue color filters (e.g., a Bayer pattern composed of four elements).
- the color image is processed by the processing device 120.
- the processing device 120 includes an image holding unit 122, a calculation unit 124, a storage unit 126, and a shape restoration unit 128, and is connected to the imaging device CM and the display device DD. Therefore, the processing device 120 can process the image from the image pickup device CM and output a display signal to the display device DD.
- the display device DD can display a captured color image, a three-dimensional image, and various types of information based on the output of the shape restoration unit 128.
- the image holding unit 122 is an internal circuit of the image capture IMC, and makes it possible to hold the image from the image pickup device CM in frame units. In the present embodiment, images of each of the wavelength regions R, G, and B of light can be retained.
- the calculation unit 124 obtains the normal vector Vn of each point of the object W to be measured corresponding to each pixel from the inclusion relationship between the plurality of solid angle regions RS1, RS2, RS3 constituting the object light from the object W and the predetermined observation solid angle DS. The principle will be described with reference to FIGS. 4(A) and 4(B). The solid lines depict the irradiation solid angle IS formed by the irradiation light and the observation solid angle DS of the image pickup apparatus CM; the dotted lines depict the reflection solid angle RS formed by the object light.
- consider first the case where the reflected optical axis L3 and the observation optical axis L2 coincide, as shown in FIG. 4(A). In this case, when the object W to be measured is irradiated with illumination light having the irradiation solid angle IS, the luminances Rc, Gc, and Bc of the wavelength regions R, G, and B of light corresponding to the solid angle regions RS1, RS2, and RS3 of the reflection solid angle RS are detected equally within the observation solid angle DS. Therefore, the untilted normal vector Vn can be obtained based on the ratio of the detected luminances Rc, Gc, and Bc.
- next, consider the case where the reflected optical axis L3 and the observation optical axis L2 do not coincide, as shown in FIG. 4(B). In this case, when the object W to be measured is irradiated with illumination light having the irradiation solid angle IS, almost no luminance Rc of the wavelength region R corresponding to the solid angle region RS1 of the reflection solid angle RS is received within the observation solid angle DS, while the luminances Gc and Bc of the wavelength regions G and B corresponding to the solid angle regions RS2 and RS3 are detected substantially equally. Therefore, the tilted normal vector Vn can be obtained based on the ratio of the detected luminances Rc, Gc, and Bc.
- the calculation unit 124 can obtain the normal vector Vn based on the correspondence between the optical attribute (in this embodiment, each of the wavelength regions R, G, and B of light) and the normal vector Vn.
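A minimal sketch of this table-based lookup (the table entries below are invented for illustration; the patent's actual table is generated in the pre-stage process and stored in the storage unit 126):

```python
import numpy as np

# Hypothetical correspondence table: (Rt, Gt, Bt, Vtnx, Vtny) rows, with
# luminances on a 0-100 scale as in the patent's FIG. 10. Values invented.
TABLE = np.array([
    [33.0, 33.0, 33.0,  0.00, 0.00],   # equal channels -> untilted normal
    [ 5.0, 47.0, 48.0,  0.30, 0.00],   # red region misses the observation cone
    [47.0,  5.0, 48.0, -0.15, 0.26],
])

def lookup_normal(rc, gc, bc):
    """Nearest-neighbour lookup of the normal's X/Y components from the
    measured channel luminances; the Z component follows from unit length."""
    d = np.linalg.norm(TABLE[:, :3] - np.array([rc, gc, bc]), axis=1)
    vnx, vny = TABLE[np.argmin(d), 3:5]
    vnz = np.sqrt(max(0.0, 1.0 - vnx**2 - vny**2))
    return vnx, vny, vnz
```

In practice the complementary functions fx and fy described below would interpolate between table entries instead of snapping to the nearest one.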
- the calculation unit 124 also obtains the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn.
- the correspondence relationship can be obtained by the correspondence table and the complementary functions fx and fy.
- the complementary functions fx and fy interpolate the normal vector Vn between the discretely provided entries of the correspondence table.
- the storage unit 126 can store various initial setting values, various programs, various tables, various functions, and various data.
- the storage unit 126 stores the correspondence between the wavelength regions R, G, and B of the light of the object W to be measured and the normal vector Vn.
- the correspondence between the wavelength regions R, G, B of light and the normal vector Vn is configured as a correspondence table as shown in FIG. 10, and this correspondence table is stored in the storage unit 126.
- the codes Rt, Gt, and Bt in FIG. 10 are the luminances (0 ≤ Rt, Gt, Bt ≤ 100) of the wavelength regions R, G, and B of light recorded in the correspondence table.
- the symbols Vtnx and Vtny are the X component and the Y component of the normalized normal vector Vtn recorded in the corresponding table, respectively.
- the storage unit 126 also stores the complementary functions fx and fy obtained from the corresponding table.
- the shape restoration unit 128 restores the shape of the object W to be measured by obtaining the inclination information of each point of the object W to be measured from the normal vector Vn obtained for each pixel. Specifically, the shape of the object W to be measured is restored by converting the normal vector Vn into the inclination information of each pixel and connecting the inclination information at pixel intervals.
- the tilt information and the shape information are output to the display device DD and stored in the storage unit 126.
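The conversion from per-pixel normal vectors to connected tilt information can be sketched as follows (a naive path-integration scheme is assumed for illustration; the patent does not specify the integration method):

```python
import numpy as np

def restore_height(vnx, vny, pitch=1.0):
    """Turn per-pixel unit-normal components (Vnx, Vny) into surface slopes
    and connect them at pixel intervals by cumulative summation."""
    vnz = np.sqrt(np.clip(1.0 - vnx**2 - vny**2, 1e-12, None))
    gx = -vnx / vnz                                  # dz/dx implied by the normal
    gy = -vny / vnz                                  # dz/dy
    h = np.zeros_like(gx)
    h[1:, 0] = np.cumsum(gy[1:, 0]) * pitch          # walk down the first column
    h[:, 1:] = h[:, [0]] + np.cumsum(gx[:, 1:], axis=1) * pitch  # then along rows
    return h
```

For a uniformly tilted surface, the integrated height map is a plane whose slope matches the normals, which is the sense in which "connecting the inclination information at pixel intervals" restores the shape.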
- first, the pre-stage process (FIG. 7, step S2) is performed.
- the pre-stage process is a step of obtaining in advance the correspondence between the wavelength regions R, G, and B of light used to restore the shape of the object W to be measured and the normal vector Vn.
- the pre-stage process includes a pre-stage illumination step, a pre-stage imaging step, and a pre-stage correspondence generation step.
- a reference sphere (specific jig) is used instead of the object W to be measured.
- the reference sphere is a calibrated sphere whose size (radius r) has been measured and whose accuracy does not affect the variation of the normal vector. It is desirable that the material and surface treatment of the reference sphere be the same as those of the object W to be measured.
- the pre-stage lighting process (FIG. 8 (A), step S21) is performed.
- the illumination device 110 irradiates the reference sphere with illumination light having a specific irradiation solid angle IS having a plurality of solid angle regions IS1, IS2, and IS3 having different wavelength regions R, G, and B. To do.
- the irradiation solid angle IS is made the same for each point of the reference sphere.
- next, the pre-stage imaging step (FIG. 8(A), step S22) is performed.
- the object light from the reference sphere generated by the illumination light is received at a predetermined observation solid angle DS and an image is captured.
- the pre-stage correspondence relationship generation step is a step of obtaining the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn by the calculation unit 124.
- the pre-stage correspondence relationship generation step includes a range setting step, a correspondence table generation step, and a complementary function calculation step.
- a range setting step (FIG. 8B, step S231) is performed.
- a range in which the direction of the normal vector Vn can be obtained from the captured image JG_IMG of the reference sphere is calculated. For example, by extracting the pixel region whose brightness exceeds the noise level, or by extracting the pixel region from the image JG_IMG through difference processing with the lighting device 110 switched ON and OFF, the range L over which the object light from the reference sphere arrives is found. Then, denoting the maximum surface inclination angle on the reference sphere (radius r) by θ, it can be obtained as follows.
- θ = acos((L / 2) / r) (2)
- the correspondence table generation step creates a correspondence table between the wavelength regions R, G, B of light and the normal vector Vn for each pixel in the measurable range of the object light in the image JG_IMG of the reference sphere.
- let Cx and Cy be the center coordinates of the sphere's projected image in the image JG_IMG of the reference sphere, and let X and Y be the pixel coordinates within the measurable range of the object light. The normal vector V (Vx, Vy, Vz) and its normalized form Vn are then obtained as follows (Px and Py being the pixel-to-length conversion factors).
- Vx = (X - Cx) * Px (3)
- Vy = (Y - Cy) * Py (4)
- Vz = sqrt(r*r - Vx*Vx - Vy*Vy) (5)
- Vnx = Vx / r (6)
- Vny = Vy / r (7)
- Vnz = sqrt(1 - Vnx*Vnx - Vny*Vny) (8)
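Equations (3) to (8) map a pixel of the reference-sphere image to a unit normal vector. A minimal sketch (the function name is illustrative; Px and Py are the pixel pitches):

```python
import numpy as np

def pixel_to_normal(X, Y, Cx, Cy, Px, Py, r):
    """Normal vector at pixel (X, Y) of the reference-sphere image, eqs. 3-8.

    Cx, Cy: center of the sphere projection image; Px, Py: pixel pitches;
    r: reference-sphere radius (same length unit as Px, Py).
    """
    Vx = (X - Cx) * Px                          # eq. (3)
    Vy = (Y - Cy) * Py                          # eq. (4)
    Vz = np.sqrt(r * r - Vx * Vx - Vy * Vy)     # eq. (5)
    Vnx, Vny = Vx / r, Vy / r                   # eqs. (6), (7)
    Vnz = np.sqrt(1.0 - Vnx * Vnx - Vny * Vny)  # eq. (8)
    return np.array([Vnx, Vny, Vnz])            # unit vector, so eq. (1) holds
```

The returned vector satisfies Vnx²+Vny²+Vnz²=1, i.e. equation (1).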
- the X component Vnx and the Y component Vny of the normal vector Vn are obtained for the brightness Rc, Gc, and Bc of the light wavelength regions R, G, and B of the image coordinates X and Y of the image JG_IMG of the reference sphere.
- the correspondence table shown in FIG. 10 can be generated (when the correspondence table is used, the symbols Rc, Gc, Bc, Vnx, and Vny are replaced by the symbols Rt, Gt, Bt, Vtnx, and Vtny, respectively).
- a complementary function calculation step (FIG. 8B, step S233) is performed.
- the complementary functions fx and fy are obtained from the correspondence table. Specifically, the luminances Rt, Gt, and Bt of the light wavelength regions R, G, and B in the table are first normalized, and two variables are chosen from the resulting luminance rates (for example, only Rn and Gn).
- the complementary function fx (fy) is then obtained, with the luminance rates Rn and Gn as variables, so that it returns the X component Vtnx (Y component Vtny) of the normal vector in the correspondence table.
- the complementary functions fx and fy can be obtained, for example, by spline interpolation fitting a free-form surface. Note that N (N ≥ 4) correspondences are used to obtain the complementary functions fx and fy.
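As one possible stand-in for the complementary functions fx and fy, the sketch below builds an interpolating function over correspondence-table entries. Inverse-distance weighting is used here in place of the free-form-surface spline fit described above, purely for illustration; all names are assumptions.

```python
import numpy as np

def make_complementary_function(Rn_t, Gn_t, Vtn_component):
    """Build an interpolating function f(Rn, Gn) -> normal-vector component.

    Rn_t, Gn_t, Vtn_component: hypothetical arrays of table entries
    (luminance rates and one component Vtnx or Vtny of the normal vector).
    A spline fit is described in the text; inverse-distance weighting is a
    simple stand-in with the same interface.
    """
    Rn_t = np.asarray(Rn_t, float)
    Gn_t = np.asarray(Gn_t, float)
    Vtn = np.asarray(Vtn_component, float)

    def f(Rn, Gn, eps=1e-12):
        d2 = (Rn_t - Rn) ** 2 + (Gn_t - Gn) ** 2
        hit = d2 < eps                  # exact table match: return it directly
        if hit.any():
            return float(Vtn[hit][0])
        w = 1.0 / d2                    # inverse-distance weights
        return float(np.sum(w * Vtn) / np.sum(w))

    return f
```

For production use, a smoothing spline over the same (Rn, Gn) samples would follow the text more closely.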
- the obtained complementary functions fx and fy are stored in the storage unit 126.
- the process returns to FIG. 7 and the lighting process (FIG. 7, step S4) is performed.
- the object W is irradiated with illumination light having a specific irradiation solid angle IS having a plurality of solid angle regions having different wavelength regions R, G, and B of light.
- the irradiation solid angle IS is made the same for each point of the object W to be measured.
- an imaging step (FIG. 7, step S6) is performed.
- the object light from the object W to be measured generated by the illumination light is received at a predetermined observation solid angle DS, and an image is captured.
- the calculation step (FIG. 7, step S8) is performed.
- based on the wavelength regions R, G, and B of light identified at each pixel of the image, the normal vector Vn of each point of the object W to be measured corresponding to each pixel is obtained from the inclusion relationship between the plurality of solid angle regions RS1 (IS1), RS2 (IS2), and RS3 (IS3) constituting the object light and the predetermined observation solid angle DS.
- the correspondence table is read from the storage unit 126; if the identified luminances Rc, Gc, and Bc of the light wavelength regions R, G, and B match the luminances Rt, Gt, and Bt in the table, the corresponding normal vector Vn is obtained directly. If they do not match, the identified luminances Rc, Gc, and Bc are normalized to obtain the luminance rates Rn and Gn. Then, the complementary functions fx and fy are read from the storage unit 126, and the corresponding normal vector Vn is calculated.
- alternatively, the luminance rates Rn and Gn may be obtained by immediately normalizing the identified luminances Rc, Gc, and Bc without consulting the correspondence table, after which the complementary functions fx and fy are read from the storage unit 126 to calculate the corresponding normal vector Vn.
- the corresponding normal vector Vn may be approximately calculated by using a plurality of correspondences in the correspondence table without using the complementary functions fx and fy. The explanation will be given below.
- the sum of squared luminance differences SUM between the identified luminances Rc, Gc, and Bc and the luminances Rt, Gt, and Bt is computed for M entries (M sets) in the correspondence table whose values can be judged close to Rc, Gc, and Bc (M ≥ N ≥ 4; M may be all entries in the table).
- SUM = (Rc-Rt)*(Rc-Rt) + (Gc-Gt)*(Gc-Gt) + (Bc-Bt)*(Bc-Bt) (12)
- N entries (N sets) of luminances Rt, Gt, and Bt are then selected in ascending order of SUM, and the N corresponding normal vectors Vn are obtained from the correspondence table.
- from these N normal vectors, the normal vector for the identified luminances Rc, Gc, and Bc of the light wavelength regions R, G, and B can be obtained approximately.
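The table-only approximation can be sketched as follows: evaluate SUM of equation (12) for every entry, keep the N nearest, and average their normal vectors. The function name, the array layout, and the use of a plain average are assumptions of this illustration.

```python
import numpy as np

def approx_normal(Rc, Gc, Bc, table_rgb, table_vn, N=4):
    """Approximate the normal vector without the complementary functions.

    table_rgb: (M, 3) luminances (Rt, Gt, Bt) from the correspondence table;
    table_vn:  (M, 3) corresponding normal vectors Vtn (hypothetical arrays).
    """
    table_rgb = np.asarray(table_rgb, float)
    table_vn = np.asarray(table_vn, float)
    diff = table_rgb - np.array([Rc, Gc, Bc], float)
    SUM = np.sum(diff * diff, axis=1)     # eq. (12), one value per table entry
    nearest = np.argsort(SUM)[:N]         # N entries closest in luminance
    vn = table_vn[nearest].mean(axis=0)   # combine the N candidate normals
    return vn / np.linalg.norm(vn)        # re-normalize so eq. (1) holds
```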
- the shape restoration step (FIG. 7, step S10) is performed.
- the shape restoration step the slope information of each point of the object to be measured W is obtained from the normal vector Vn, and the shape of the object to be measured W is restored in consideration of the pixel size.
- as described above, the object W to be measured is irradiated with illumination light having a specific irradiation solid angle IS comprising a plurality of (three) solid angle regions IS1, IS2, and IS3 with different wavelength regions R, G, and B of light. Then, based on the light wavelength regions R, G, and B identified at each pixel of the image, the normal vector Vn of each point of the object W corresponding to each pixel is obtained from the inclusion relationship between the plurality of solid angle regions RS1, RS2, and RS3 constituting the object light and the predetermined observation solid angle DS.
- the filter unit 114 is arranged near the focal length f of the lens unit 116 on the irradiation optical axis L1, so the irradiation solid angle IS is the same for each point of the object W to be measured. Therefore, homogeneous information can be captured in the image from every point of the object W. That is, the information on the surface of the object W can be quantified equally regardless of location, and its shape can be restored and evaluated.
- the filter unit may not be arranged near the focal length f of the lens unit on the irradiation optical axis L1. This is because, depending on the object W to be measured, it may be sufficient to obtain highly accurate information only at each point of the object W to be measured in the immediate vicinity of the irradiation optical axis L1.
- the filter unit 114 has mutually different filter regions CF1, CF2, and CF3 around the irradiation optical axis L1, so that the plurality of solid angle regions IS1, IS2, and IS3 are provided around the irradiation optical axis L1 of the illumination light. Therefore, when there are a plurality of normal vectors Vn having the same inclination angle about the irradiation optical axis L1 as the rotation axis, they can be obtained in a mutually distinguishable state. That is, the inclination of the surface of the object to be measured (the direction of the inclination angle about the irradiation optical axis L1) can be faithfully reproduced from the normal vector Vn.
- the filter unit 114 shown in FIG. 3 (A) is used, but the present invention is not limited to this, and may be as shown in FIG. 3 (B).
- the filter unit 114 has the same filter regions CF1, CF2, and CF3 as in FIG. 3A, with only the vicinity of the irradiation optical axis L1 made a uniform filter region CF4. By using this filter unit 114, slight slopes of the normal vector Vn are not detected, which reduces the man-hours for obtaining the normal vector Vn and allows only the necessary slopes to be detected.
- the filter unit 114 concentrically includes two configurations similar to those in FIG. 3A. That is, the filter unit 114 includes different filter regions CF21, CF22, and CF23 around the irradiation optical axis L1, and further includes different filter regions CF11, CF12, and CF13 on the outside thereof. Therefore, by using this filter unit 114, the slope of the normal vector Vn can be detected more finely than the filter unit 114 of FIG. 3A.
- the filter unit 114 includes filter regions CF1, CF2, CF3, and CF4 that are concentric about, and mutually different with respect to, the irradiation optical axis L1. That is, when a plurality of normal vectors having the same inclination angle with respect to the irradiation optical axis L1 exist, this filter unit 114 finely resolves the steepness of the inclination angle without distinguishing between them. Although information on the rotation direction around the irradiation optical axis L1 is lost, when judging the quality of the object W to be measured, the processing time and man-hours for shape restoration can be shortened and the judgment is made easier.
- FIGS. 3 (F) to 3 (H) show the irradiation solid angle IS and the solid angle regions IS1, IS2, IS3, IS4, IS11, IS12, IS13, IS21, IS22, and IS23 corresponding to the filter units 114 of FIGS. 3 (B) to 3 (D), respectively.
- the filter unit 114 uses different wavelength regions R, G, and B of light as the optical attributes. Therefore, when the normal vector Vn is not tilted (the object W to be measured is not tilted), the light is white, and it is easy to recognize visually and intuitively that the normal vector Vn has no tilt. Further, since the light is white where there is no inclination, the hue of the directly facing surface of the object W itself can be easily determined. At the same time, an ordinary color CCD camera or color CMOS camera can be used as-is for the image pickup device CM. Therefore, identification of the optical attribute can be realized easily and at low cost.
- the number of wavelength regions of light may be two or more, rather than the three of R, G, and B. Further, instead of the red, green, and blue wavelength regions, wavelength regions of other colors may be used in combination.
- the optical attributes may include the polarization state, the brightness, and so on, in addition to the wavelength regions R, G, and B of light. That is, for example, the light attribute may be a polarization state. In that case, a polarizing plate or the like that changes the polarization state of light is used in the filter unit, and the light attribute may be identified by using a corresponding polarizing plate on the image pickup apparatus CM.
- a pre-stage step is provided before the lighting step; in the pre-stage step, a reference sphere is used as a specific jig instead of the object W to be measured, and the pre-stage lighting step and the pre-stage imaging step are performed. Further, a pre-stage correspondence relationship generation step of finding the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn is performed. That is, since the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn is obtained in advance, the object W to be measured can be imaged and its shape measured and restored quickly and stably.
- except that the object W to be measured is replaced with the specific jig, the arrangement and configuration of the image measuring device 100 used for measuring the object W can be used as they are. Therefore, the steps from the pre-stage step to the shape restoration step can be performed efficiently and quickly.
- since the specific jig is the reference sphere, the pre-stage imaging step need be performed only once, and the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn can be obtained easily and quickly.
- the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn may also be found in other ways. For example, the operator may directly specify the normal vector for one of the most dominant wavelength regions of light using an input device (not shown).
- alternatively, the normal vector may be obtained by using a simulation such as ray tracing.
- the pre-stage process may be performed with a different configuration or a different method. For example, a device different from the image measuring device 100 may be used, or a different lighting device or an imaging device CM may be used in the image measuring device 100.
- a reference plane may be used instead of the reference sphere (the reference plane is a plane whose waviness and roughness are negligible relative to the slope of the normal vectors to be measured; it may belong to the object W that is about to be measured, to another object of the same shape, or to an object of a completely different shape).
- the reference plane is used as a specific jig, the following steps are performed.
- the lighting device 110 irradiates the reference plane and images the reference plane.
- the reference plane is tilted with respect to the observation optical axis L2 at different angles, and imaging is performed a plurality of times (N ≥ 4).
- the normal vector Vn corresponding to the tilted angle is obtained.
- the brightnesses Rc, Gc, and Bc of the wavelength regions R, G, and B of the light corresponding to the respective normal vectors Vn are obtained.
- the brightness Rc, Gc, and Bc at this time are obtained by averaging only the portion of the reference plane in the captured image.
- a correspondence table showing the correspondence between the wavelength regions R, G, B of light and the normal vector as shown in FIG. 10 is obtained.
- the procedure described here shows the steps excluding step S233 in FIGS. 8A and 8B.
- the object to be measured W itself may be used as it is. In that case, the following steps are performed.
- the lighting device 110 irradiates the object to be measured W to determine a temporary reference plane.
- this provisional reference plane can be determined by calculating, within the image, the amount of change in the luminances Rc, Gc, and Bc of the wavelength regions R, G, and B of light over the portion showing the object W, and finding the region in which that amount of change is smallest. After the provisional reference plane is determined, the rest of the procedure is the same as when the above-mentioned reference plane is used, so the following description is omitted.
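The selection of the provisional reference plane can be sketched as follows. Scanning non-overlapping windows and scoring them by total R, G, B variance is an assumption of this illustration, as are the function name and window size.

```python
import numpy as np

def find_provisional_reference_plane(img, win=15):
    """Pick the window whose R, G, B luminances vary least.

    img: (H, W, 3) image of the object W; win: window size in pixels.
    Returns the top-left corner (y, x) of the most uniform win x win region,
    i.e. the candidate provisional reference plane.
    """
    H, W, _ = img.shape
    best, best_var = (0, 0), np.inf
    for y in range(0, H - win + 1, win):
        for x in range(0, W - win + 1, win):
            patch = img[y:y + win, x:x + win, :].astype(float)
            v = patch.var(axis=(0, 1)).sum()   # total R + G + B variance
            if v < best_var:
                best, best_var = (y, x), v
    return best
```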
- the processing device 120 includes a storage unit 126 that stores the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn, and the calculation unit 124 has a normal based on the correspondence. Find the vector Vn. Therefore, even if the correspondence is complicated, the correspondence can be appropriately read out and used by the calculation unit 124.
- the correspondence relationship is configured as a correspondence table. Therefore, the amount of calculation by the calculation unit 124 is small, and the normal vector Vn can be obtained quickly.
- the correspondence is also configured as the complementary functions fx and fy. Therefore, by using the complementary functions fx and fy, the normal vector Vn can be obtained quickly even for luminances Rc, Gc, and Bc of the light wavelength regions R, G, and B that are not covered by the correspondence table.
- the above-mentioned correspondence may be directly read into the calculation unit from the outside.
- the configuration may be such that the correspondence relationship is obtained every time the normal vector Vn is obtained.
- only the complementary functions fx and fy may be configured without constructing the corresponding table.
- neither the correspondence table nor the complementary functions need be configured. In that case, the operator may directly determine the normal vector for the obtained luminances Rc, Gc, and Bc of the wavelength regions R, G, and B of light.
- the normal vector Vn is normalized. Therefore, the parameters for obtaining the correspondence table and the complementary functions fx and fy that define the correspondence between the wavelength regions R, G, and B of light and the normal vector Vn can be reduced. Therefore, the storage capacity required for the corresponding table can be reduced, and the amount of calculation for calculating the complementary functions fx and fy can be reduced.
- the present invention is not limited to this, and the normal vector V which is not normalized may be used.
- the lighting device 110 includes a light source unit 112, a filter unit 114, a lens unit 116, and a half mirror 118, but the present invention is not limited to this.
- it may be as in the second embodiment shown in FIG.
- a second filter unit 213 is further provided between the light source unit 212 and the filter unit 214. For the elements other than the second filter unit 213, only the hundreds digit of the reference numerals is changed, and duplicate description is omitted.
- the second filter unit 213 is arranged on the irradiation optical axis L1 between the light source unit 212 and the filter unit 214. Similar to the filter unit 214, the second filter unit 213 can include a diaphragm that blocks the illumination light and a filter region that changes the light attribute. The second filter unit 213 is arranged near the focal position so that the image is formed on the surface of the object W to be measured. Therefore, the second filter unit 213 can prevent stray light, further homogenize the illumination light, and change complicated light attributes.
- the image measuring device receives the reflected light of the object W to be measured as an object light to measure the object W to be measured, but the present invention is not limited to this.
- the present invention may be as in the third embodiment shown in FIG.
- the light transmitted through the object W to be measured is received as the object light, and the object W to be measured is measured. Therefore, in the present embodiment, the shape of the object W to be measured can be measured and restored even if the object W to be measured is a material that does not easily reflect the illumination light and easily transmits the illumination light.
- the irradiation optical axis L1 and the observation optical axis L2 are coaxial, but the present invention is not limited to this.
- it may be as in the fourth embodiment shown in FIGS. 13 and 14.
- the irradiation optical axis L1 and the observation optical axis L2 intersect at the surface of the object to be measured W.
- for the elements other than the changed or added ones, only the hundreds digit of the reference numerals is changed, and duplicate description is omitted.
- a turntable RT capable of rotating the object W to be measured around the observation optical axis L2 is provided.
- the processing device 420 includes an image holding unit 422, a calculation unit 424, a control unit 425, a storage unit 426, and a shape restoration unit 428.
- the control unit 425 outputs a signal for controlling the rotation drive of the turntable RT to the turntable RT.
- the rotation angle is instructed by an input device (not shown) or a program stored in the storage unit 426. Further, the control unit 425 outputs the rotation angle signal of the turntable RT to the calculation unit 424.
- the calculation unit 424 links the rotation angle signal of the turntable RT with the image obtained at that time, and from the inclusion relationship between the plurality of solid angle regions IS1, IS2, IS3 and the predetermined observation solid angle DS, each The normal vector of each point of the object to be measured corresponding to the pixel is obtained.
- the pre-stage step (FIG. 14, step S2) is performed. Then, the lighting step (FIG. 14, step S4) is performed. Then, an imaging step (FIG. 14, step S6) is performed.
- a rotation step (FIG. 14, step S7) is performed.
- the plurality of solid angle regions IS1, IS2, and IS3 by the lighting device 410 are not rotationally symmetric with respect to the observation optical axis L2 of the observation solid angle DS. Therefore, in the rotation step, the object W to be measured is rotated around the observation optical axis L2 at a predetermined angle ⁇ 1 in each imaging step.
- the predetermined angle ⁇ 1 is set to be equal to or less than the plane angle occupied when the irradiation solid angle IS of the lighting device 410 is projected onto the surface of the object W to be measured.
- the calculation step is performed after performing the illumination step and the imaging step (only the imaging step is sufficient if the lighting conditions do not change even after the rotation step) a predetermined number of times.
- the calculation step (FIG. 14, step S8) is performed.
- the normal vector Vn is obtained after considering the angle ⁇ at which the irradiation optical axis L1 and the observation optical axis L2 intersect and the predetermined angle ⁇ 1.
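The angle bookkeeping of the rotation step can be illustrated as follows: after the k-th rotation step the object has turned k * θ1 about the observation optical axis L2, so a normal vector measured in the camera frame can be rotated back before shape restoration. Taking L2 as the z-axis, and the function name, are assumptions of this sketch.

```python
import numpy as np

def derotate_normal(vn, k, theta1_deg):
    """Rotate a measured normal vector back into the object frame.

    vn: normal vector measured after the k-th rotation step;
    theta1_deg: the predetermined rotation angle theta1 per step.
    The observation optical axis L2 is taken as the z-axis here.
    """
    a = -np.deg2rad(k * theta1_deg)            # undo k steps of theta1
    Rz = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
    return Rz @ np.asarray(vn, float)
```

One full revolution then takes NN = 360 / θ1 imaging steps, matching equation (13).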
- a shape restoration step (FIG. 14, step S10) is performed.
- the inclination can be measured and reproduced isotropically without depending on the measurement direction.
- this turntable RT is effective even if it is coaxial epi-illumination light in which the irradiation optical axis L1 and the observation optical axis L2 coincide with each other.
- the measurement accuracy of the normal vector Vn may be directionally dependent. Therefore, by using such a turntable RT in an image measuring device as in the first embodiment, it is possible to improve the direction dependence of the measurement accuracy of the normal vector Vn.
- the image measuring apparatus of the above embodiment shows processing the image of the object to be measured W to measure the shape of the object to be measured and restore the shape of the object to be measured, but the present invention is not limited to this.
- the calculation unit 524 further includes a matching determination unit 524A that compares the normal vector Vnb of each point of the object W to be measured, stored in advance, with the normal vector Vn of each point obtained from the newly imaged object W, and extracts the portions that differ from each other. For the elements other than the calculation unit 524, the storage unit 526, and the shape restoration unit 528, which relate to the function of the matching determination unit 524A, only the hundreds digit of the reference numerals is changed, and duplicate description is omitted.
- the calculation unit 524 first obtains all the normal vectors of the object W to be measured and associates them with each pixel in two dimensions (XY plane) (this is called a normal vector group). Next, this normal vector group is rotated 360 times in 1 deg increments and stored in the storage unit 526. That is, 360 normal vector groups are stored in the storage unit 526 (normal vector Vn is standardized in advance). This is the normal vector Vn of each point of the object to be measured W stored in advance.
- the calculation unit 524 obtains the normal vector Vn of each point of the object W to be measured and forms a normal vector group by associating the vectors with the pixels in two dimensions (XY plane). The calculation unit 524 then computes the sum of squared differences (pattern matching) between this group and each of the 360 normal vector groups previously stored in the storage unit 526, and reads into the matching determination unit 524A the one stored group for which the value is smallest (the patterns match best). The matching determination unit 524A then compares the best-matching normal vector group read from the storage unit 526 with the newly obtained normal vector group.
- the matching determination unit 524A obtains a portion where the normal vectors Vn are different from each other, and calculates the difference between the normal vectors of the different portions. When the difference is equal to or greater than a certain threshold value, the matching determination unit 524A adds information that the position is a defect (this is referred to as defect information). Then, the matching determination unit 524A outputs the defect information and the newly obtained normal vector group to the shape restoration unit 528.
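The matching and defect-extraction flow can be sketched as follows; the array layout, the threshold value, and the function name are assumptions of this illustration.

```python
import numpy as np

def find_defects(stored_groups, measured, threshold=0.1):
    """Match a measured normal-vector group against stored rotations.

    stored_groups: list of (H, W, 3) normal-vector groups (the pre-rotated
    references held in the storage unit); measured: (H, W, 3) newly obtained
    group.  The best-matching reference is selected by the smallest sum of
    squared differences, and pixels whose normals differ from it by more
    than the threshold are flagged as defects.
    """
    sums = [np.sum((g - measured) ** 2) for g in stored_groups]
    best = stored_groups[int(np.argmin(sums))]   # pattern-matched reference
    diff = np.linalg.norm(best - measured, axis=2)
    return diff > threshold                      # boolean defect map
```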
- the shape restoration unit 528 restores the shape of the object W to be measured together with the defect information, based on the output from the matching determination unit 524A. Alternatively, only the portion carrying the defect information may be restored together with the defect information.
- by providing the matching determination unit 524A, differing portions between objects W to be measured can be discriminated, and defects can be detected easily.
- the present invention can be widely applied to a shape restoration method that irradiates an object to be measured with illumination light, processes the captured image, and restores the shape of the object, and to an image measuring device using the method.
Description
Vnx*Vnx+Vny*Vny+Vnz*Vnz=1 (1)
θ=acos((L/2)/r) (2)
Vx=(X-Cx)*Px (3)
Vy=(Y-Cy)*Py (4)
Vz=sqrt(r*r-Vx*Vx-Vy*Vy) (5)
Vnx=Vx/r (6)
Vny=Vy/r (7)
Vnz=sqrt(1-Vnx*Vnx-Vny*Vny) (8)
Rn=Rt/sqrt(Rt*Rt+Gt*Gt+Bt*Bt) (9)
Gn=Gt/sqrt(Rt*Rt+Gt*Gt+Bt*Bt) (10)
Bn=sqrt(1-(Rt*Rt)/(Rt*Rt+Gt*Gt+Bt*Bt)-(Gt*Gt)/(Rt*Rt+Gt*Gt+Bt*Bt)) (11)
SUM=(Rc-Rt)*(Rc-Rt)+(Gc-Gt)*(Gc-Gt)+(Bc-Bt)*(Bc-Bt) (12)
NN=360/θ1 (13)
110, 210, 310, 410, 510 ... Lighting device
112, 212, 312 ... Light source unit
114, 214, 314 ... Filter unit
116, 216, 316 ... Lens unit
118, 218 ... Half mirror
120, 420, 520 ... Processing device
122, 422, 522 ... Image holding unit
124, 424, 524 ... Calculation unit
126, 426, 526 ... Storage unit
128, 428, 528 ... Shape restoration unit
213 ... Second filter unit
425 ... Control unit
524A ... Matching determination unit
B, G, R ... Wavelength region
Bc, Bt, Gc, Gt, Rc, Rt ... Luminance
Bn, Gn, Rn ... Luminance rate
CF1, CF2, CF3, CF4, CF11, CF12, CF13, CF21, CF22, CF23 ... Filter region
CM ... Imaging device
Cx, Cy ... Center of sphere projection image
DD ... Display device
DS ... Observation solid angle
DS1, DS2, DS3, IS1, IS2, IS3, IS4, IS5, IS11, IS12, IS13, IS21, IS22, IS23, RS1, RS2, RS3 ... Solid angle region
f ... Focal length
fx, fy ... Complementary function
IMC ... Image capture
IMP ... Image processing device
IS, IS' ... Irradiation solid angle
JG ... Reference sphere
JG_IMG ... Image of reference sphere
L ... Range
L1 ... Irradiation optical axis
L2 ... Observation optical axis
L3 ... Reflected optical axis
LS ... Conventional lighting
M, NN, N ... Number of times
P, P' ... Position
r, R0 ... Radius
RS ... Reflected solid angle
RT ... Turntable
V, Vn, Vnb, Vtn ... Normal vector
Vnx, Vtnx, Vx ... X component
Vny, Vtny, Vy ... Y component
Vnz, Vtnz, Vz ... Z component
W ... Object to be measured
θ, θ1, φ, ω ... Angle
Claims (18)
- 1. A shape restoration method for irradiating an object to be measured with illumination light, processing a captured image, and restoring a shape of the object to be measured, the method comprising: an illumination step of irradiating the object to be measured with the illumination light having a specific irradiation solid angle comprising a plurality of solid angle regions with mutually different optical attributes; an imaging step of receiving object light from the object to be measured, generated by the illumination light, at a predetermined observation solid angle and capturing the image; a calculation step of obtaining, based on the optical attribute identified at each pixel of the image, a normal vector of each point of the object to be measured corresponding to each pixel from an inclusion relationship between the plurality of solid angle regions constituting the object light and the predetermined observation solid angle; and a shape restoration step of obtaining inclination information of each point of the object to be measured from the normal vectors and restoring the shape of the object to be measured.
- 2. The shape restoration method according to claim 1, wherein the irradiation solid angle is made the same for each point of the object to be measured.
- 3. The shape restoration method according to claim 1 or 2, wherein the plurality of solid angle regions are provided around an irradiation optical axis of the irradiation solid angle of the illumination light.
- 4. The shape restoration method according to any one of claims 1 to 3, wherein the optical attribute is a wavelength region of light.
- 5. The shape restoration method according to any one of claims 1 to 4, comprising a pre-stage step before the illumination step, wherein in the pre-stage step, the object to be measured itself or a specific jig is used in place of the object to be measured, the illumination step and the imaging step are performed, and a correspondence relationship generation step of finding a correspondence between the optical attribute and the normal vector is further performed.
- 6. The shape restoration method according to claim 5, wherein the specific jig is a reference sphere or a reference plane.
- 7. The shape restoration method according to claim 5 or 6, wherein the correspondence is configured as a correspondence table.
- 8. The shape restoration method according to any one of claims 5 to 7, wherein the correspondence is configured as a complementary function.
- 9. The shape restoration method according to any one of claims 1 to 8, wherein the normal vectors are normalized.
- 10. The shape restoration method according to any one of claims 1 to 9, wherein, when the plurality of solid angle regions are not rotationally symmetric with respect to an observation optical axis of the observation solid angle, a rotation step of rotating the object to be measured around the observation optical axis by a predetermined angle is performed after the imaging step, and the calculation step is performed after the illumination step and the imaging step have been performed a predetermined number of times.
- 11. An image measuring device comprising: a lighting device that irradiates an object to be measured with illumination light; an imaging device that images the object to be measured and outputs an image; and a processing device that processes the image, the image measuring device measuring a shape of the object to be measured, wherein the lighting device has a light source unit that emits the illumination light, a lens unit that irradiates the object to be measured with the illumination light at a specific irradiation solid angle, and a filter unit, arranged between the light source unit and the lens unit, that separates the specific irradiation solid angle into a plurality of solid angle regions with mutually different optical attributes; the imaging device receives object light from the object to be measured, generated by the illumination light, at a predetermined observation solid angle, and each pixel of the imaging device can distinguish the different optical attributes from one another; and the processing device comprises a calculation unit that obtains a normal vector of each point of the object to be measured corresponding to each pixel from an inclusion relationship between the plurality of solid angle regions constituting the object light and the predetermined observation solid angle, and a shape restoration unit that obtains inclination information of each point of the object to be measured from the normal vectors and restores the shape of the object to be measured.
- 12. The image measuring device according to claim 11, wherein the filter unit is arranged near a focal length of the lens unit on an irradiation optical axis of the illumination light.
- 13. The image measuring device according to claim 11 or 12, wherein the filter unit has mutually different filter regions around an irradiation optical axis of the illumination light so that the plurality of solid angle regions are provided around the irradiation optical axis.
- 14. The image measuring device according to any one of claims 11 to 13, wherein the filter unit makes wavelength regions of light mutually different as the optical attributes.
- 15. The image measuring device according to any one of claims 11 to 14, wherein the processing device comprises a storage unit that stores a correspondence between the optical attribute and the normal vector, and the calculation unit obtains the normal vector based on the correspondence.
- 16. The image measuring device according to any one of claims 11 to 15, wherein the processing device normalizes the normal vectors.
- 17. The image measuring device according to any one of claims 11 to 16, comprising a turntable capable of rotating the object to be measured around an observation optical axis.
- 18. The image measuring device according to any one of claims 11 to 17, wherein the calculation unit further comprises a matching determination unit that compares the normal vector of each point of the object to be measured stored in advance with the normal vector of each point obtained from a newly imaged object to be measured, and extracts portions that differ from each other.
Priority Applications (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP20893506.4A EP4067811A4 (en) | 2019-11-29 | 2020-11-26 | SHAPE RECONSTRUCTION METHOD AND IMAGE MEASUREMENT DEVICE |
US17/780,735 US20220412727A1 (en) | 2019-11-29 | 2020-11-26 | Shape reconstruction method and image measurement device |
CN202080082861.2A CN114746716B (zh) | 2019-11-29 | 2020-11-26 | 形状复原方法和图像测量装置 |
JP2021561500A JPWO2021107027A1 (ja) | 2019-11-29 | 2020-11-26 | |
KR1020227020636A KR20220105656A (ko) | 2019-11-29 | 2020-11-26 | 형상 복원 방법 및 화상 측정 장치 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2019-217429 | 2019-11-29 | ||
JP2019217429 | 2019-11-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021107027A1 true WO2021107027A1 (ja) | 2021-06-03 |
Family
ID=76129531
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2020/044058 WO2021107027A1 (ja) | 2019-11-29 | 2020-11-26 | 形状復元方法及び画像測定装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20220412727A1 (ja) |
EP (1) | EP4067811A4 (ja) |
JP (1) | JPWO2021107027A1 (ja) |
KR (1) | KR20220105656A (ja) |
CN (1) | CN114746716B (ja) |
WO (1) | WO2021107027A1 (ja) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0151821B2 (ja) | 1981-08-10 | 1989-11-06 | Goo Kagaku Kogyo Kk | |
JPH07306023A (ja) * | 1994-05-10 | 1995-11-21 | Shigeki Kobayashi | 形状計測装置、検査装置及び製品製造方法 |
JP2011232087A (ja) * | 2010-04-26 | 2011-11-17 | Omron Corp | 形状計測装置およびキャリブレーション方法 |
JP6451821B1 (ja) * | 2017-12-05 | 2019-01-16 | マシンビジョンライティング株式会社 | 検査システム及び検査方法 |
Family Cites Families (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7948514B2 (en) * | 2008-06-02 | 2011-05-24 | Panasonic Corporation | Image processing apparatus, method and computer program for generating normal information, and viewpoint-converted image generating apparatus |
US8441532B2 (en) * | 2009-02-24 | 2013-05-14 | Corning Incorporated | Shape measurement of specular reflective surface |
JP2011145171A (ja) * | 2010-01-14 | 2011-07-28 | Nikon Corp | Shape detection device |
WO2012105157A1 (ja) * | 2011-02-01 | 2012-08-09 | Panasonic Corporation | Stereoscopic image capturing device and endoscope |
JP5914850B2 (ja) * | 2011-11-30 | 2016-05-11 | Panasonic IP Management Co., Ltd. | Three-dimensional measuring device and illumination device used therein |
JP6029394B2 (ja) * | 2012-09-11 | 2016-11-24 | Keyence Corporation | Shape measuring device |
JP2014235066A (ja) * | 2013-05-31 | 2014-12-15 | Bridgestone Corporation | Surface shape measuring device |
JP5866586B1 (ja) * | 2015-09-22 | 2016-02-17 | Machine Vision Lighting Inc. | Illumination device for inspection and inspection system |
JP6762608B2 (ja) * | 2016-09-06 | 2020-09-30 | Hitachi High-Tech Science Corporation | Three-dimensional shape measurement method using a scanning white-light interference microscope |
2020
- 2020-11-26 CN CN202080082861.2A patent/CN114746716B/zh active Active
- 2020-11-26 KR KR1020227020636A patent/KR20220105656A/ko unknown
- 2020-11-26 US US17/780,735 patent/US20220412727A1/en active Pending
- 2020-11-26 WO PCT/JP2020/044058 patent/WO2021107027A1/ja unknown
- 2020-11-26 EP EP20893506.4A patent/EP4067811A4/en active Pending
- 2020-11-26 JP JP2021561500A patent/JPWO2021107027A1/ja active Pending
Non-Patent Citations (1)
Title |
---|
See also references of EP4067811A4 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2021107027A1 (ja) | 2021-06-03 |
CN114746716A (zh) | 2022-07-12 |
EP4067811A4 (en) | 2023-12-27 |
EP4067811A1 (en) | 2022-10-05 |
KR20220105656A (ko) | 2022-07-27 |
US20220412727A1 (en) | 2022-12-29 |
CN114746716B (zh) | 2024-06-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9232117B2 (en) | Digital Schlieren imaging | |
TWI490445B (zh) | Method, device, and machine-readable non-transitory storage medium for estimating a three-dimensional surface shape of an object | |
CN112595496B (zh) | Defect detection method, apparatus, device, and storage medium for a near-eye display device | |
CN114280075B (zh) | Online visual inspection system and method for surface defects of tubular parts | |
Rachakonda et al. | Sources of errors in structured light 3D scanners | |
WO2021107027A1 (ja) | Shape restoration method and image measuring device | |
US11100629B2 (en) | Appearance inspecting apparatus for article and appearance inspecting method for article using the same | |
KR20200046789A (ko) | Method and apparatus for generating three-dimensional data of a moving object | |
TWI604221B (zh) | Image depth measurement method and image capturing device applying the same | |
CN110443750A (zh) | Method for detecting motion in a video sequence | |
JP2023501525A (ja) | Offline troubleshooting and development for automated visual inspection stations | |
WO2021053852A1 (ja) | Appearance inspection device, calibration method for appearance inspection device, and program | |
JP2004239870A (ja) | Spatial filter, spatial filter creation method, spatial filter creation program, and screen defect inspection method and device | |
JPH07306152A (ja) | Optical distortion inspection device | |
JPH0522176B2 (ja) | ||
JPS6048683B2 (ja) | Method and device for inspecting the surface condition of an object | |
CN112838018B (zh) | Optical measurement method | |
WO2021153057A1 (ja) | Three-dimensional shape measuring device, three-dimensional shape measuring method, and program | |
WO2021153056A1 (ja) | Three-dimensional shape measuring device, three-dimensional shape measuring method, and program | |
KR102449421B1 (ko) | EUV mask inspection method | |
RU54216U1 (ru) | Device for evaluating and comparing characteristics of elements of optical, photographic, and television systems | |
JP2007285753A (ja) | Defect detection method and defect detection device | |
CN111380869B (zh) | Optical inspection system and method with height information | |
JP3222729B2 (ja) | Optical member inspection device with magnification adjustment function | |
JP2001264031A (ja) | Shape measuring method and device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 20893506; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2021561500; Country of ref document: JP; Kind code of ref document: A |
| ENP | Entry into the national phase | Ref document number: 20227020636; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| ENP | Entry into the national phase | Ref document number: 2020893506; Country of ref document: EP; Effective date: 20220629 |