WO2007135915A1 - Surface Inspection Apparatus (表面検査装置) - Google Patents
- Publication number: WO2007135915A1 (application PCT/JP2007/060041)
- Authority: WIPO (PCT)
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/95—Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
- G01N21/954—Inspecting the inner surface of hollow bodies, e.g. bores
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/60—Analysis of geometric attributes
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01N—INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
- G01N21/00—Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
- G01N21/84—Systems specially adapted for particular applications
- G01N21/88—Investigating the presence of flaws or contamination
- G01N21/8851—Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
- G01N2021/8854—Grading and classifying of flaws
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30164—Workpiece; Machine component
Definitions
- The present invention relates to a surface inspection apparatus that either scans the surface of an inspection object with inspection light, receives the reflected light from the surface, and detects defects on the surface based on the amount of reflected light, or
- captures an image of the surface of the inspection object, acquires a two-dimensional image of it, and determines the presence or absence of defects based on the density values of the pixels in that two-dimensional image.
- In a known apparatus of this kind, a shaft-like inspection head is inserted into the inspection object while being rotated about its axis and fed in the axial direction.
- Laser light serving as the inspection light is emitted from the outer periphery of the inspection head onto the object, so that the inner peripheral surface of the object is scanned sequentially from one axial end to the other.
- The reflected light returned from the object in the course of this scanning is received via the inspection head, and the presence or absence of defects on the inner peripheral surface is determined based on the amount of reflected light received (see, for example, Patent Document 1).
- Patent Document 1: Japanese Patent Laid-Open No. 11-281582
- Since the surface inspection apparatus described above uses laser light as the inspection light, minute defects can be detected by narrowing the irradiation range of the inspection light.
- However, if the resolution of detectable defects is raised more than necessary, even fine irregularities that would not be treated as defects in a visual inspection are judged to be defects, so the results of visual inspection and machine inspection may differ.
- To avoid this, it is effective to set a threshold for the size of the defects to be detected and to treat only those exceeding the threshold as defects.
- However, when small irregularities that individually fall below the threshold are gathered within a relatively close range, an inspector's eye can identify the set as a single defect.
- The surface inspection apparatus, by contrast, distinguishes each minute irregularity individually, compares each against the size threshold, and judges all of them to be non-defective. Therefore, if the result of visual inspection is taken as the reference, the apparatus is evaluated as having missed a defect, and the reliability of the inspection may be impaired.
- In addition, although the axial position of a machined portion's image can be uniquely specified based on the edge of the inner peripheral surface, its circumferential position cannot;
- the position of the machined portion's image changes according to the positional relationship between the scanning start position and the machined portion. Therefore, if a single mask image containing all the images of the machined portions existing on the inner peripheral surface is prepared and superimposed on the image of the inner peripheral surface, the mask position may shift in the circumferential direction relative to the machined-portion images and affect the defect determination. Even if the mask image is moved within the two-dimensional image so as to align the mask position with the machined-portion images, processing the large amount of data in a mask image sized to cover the entire inner peripheral surface takes a long time.
- An object of the present invention is to provide a surface inspection apparatus that eliminates the possibility that minute irregularities on the surface of the inspection object are individually detected as defects, while still detecting, as a defect or a defect-equivalent region, a region in which minute irregularities that are not individually detected as defects are densely gathered within a relatively close range. Another object of the present invention is to provide a surface inspection apparatus capable of performing an accurate inspection by eliminating the influence of machined portions existing on the surface of the inspection object on the determination of the presence or absence of defects, and of performing that processing at high speed. Means for Solving the Problem
- To solve the above problems, a surface inspection apparatus according to the present invention comprises detection means that scans the surface of an object to be inspected with inspection light, receives the reflected light from the surface, and outputs a signal corresponding to the amount of reflected light.
- Image generation means generates a two-dimensional image having a gradation distribution corresponding to the amount of reflected light from the surface of the object. The pixels contained in the two-dimensional image are classified into a first pixel group having gradations corresponding to defects and a second pixel group having gradations not corresponding to defects, and each region of the first pixel group surrounded by the second pixel group is extracted as a defect candidate portion.
- Among the defect candidate portions, those of a predetermined size or larger are identified as defects, while when relatively small defect candidate portions that are not individually identified as defects are gathered to a predetermined density or higher within an inspection region,
- that inspection region is identified as a defect region. Therefore, while eliminating the possibility that minute irregularities on the surface of the inspection object are individually detected as defects, when minute irregularities that are not individually detected as defects are densely gathered within a relatively close range,
- the dense region can be identified as a defect region, and that region can be detected as a defect or a defect-equivalent region and presented to the user of the apparatus.
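As a concrete illustration of the pixel classification described above, the following sketch (hypothetical Python, not taken from the patent; the threshold value and image data are illustrative assumptions) splits a gradation image into the first (dark, defect-like) and second (bright, normal) pixel groups:

```python
# Hypothetical sketch: split a grayscale image (0-255) into the "first
# pixel group" (dark, gradation corresponds to a defect) and the "second
# pixel group" (bright, normal surface). The threshold is illustrative.
def classify_pixels(image, threshold=128):
    """Return a binary map: 1 = first pixel group, 0 = second pixel group."""
    return [[1 if px < threshold else 0 for px in row] for row in image]

image = [
    [200, 200,  90, 200],
    [200,  80,  70, 200],
    [200, 200, 200, 200],
]
binary = classify_pixels(image)
```

The dark pixels (90, 80, 70) form one region of the first pixel group surrounded by the second pixel group, i.e. one defect candidate portion.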
- In this surface inspection apparatus, the defect candidate extraction means may extract the defect candidate portions by applying a labeling process to the first pixel group contained in the two-dimensional image.
- The labeling process is known as a method of grouping the pixels contained in a two-dimensional image using their gradations as a clue.
- By using it, the first pixel group having gradations corresponding to defects can be extracted from the two-dimensional image of the surface of the object, as one defect candidate portion for each range surrounded by the second pixel group.
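A minimal sketch of such a labeling process, under the assumption of 4-connectivity (the patent does not specify the connectivity rule), could look like this:

```python
from collections import deque

def label_components(binary):
    """Hypothetical 4-connected labeling: give every connected region of
    1-pixels (first pixel group) its own label number; 0-pixels stay 0."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                count += 1                      # start a new candidate
                labels[y][x] = count
                queue = deque([(y, x)])
                while queue:                    # flood-fill the region
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy - 1, cx), (cy + 1, cx),
                                   (cy, cx - 1), (cy, cx + 1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = count
                            queue.append((ny, nx))
    return labels, count

binary = [[1, 1, 0, 0],
          [0, 1, 0, 1],
          [0, 0, 0, 1]]
labels, count = label_components(binary)  # two defect candidate portions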
- In the surface inspection apparatus, the defect region identifying means may determine the density from at least one of the total area and the number of the defect candidate portions smaller than the predetermined size within the inspection region.
- Since the density of minute defect candidate portions in the inspection region correlates with their number and their total area, the density can be determined appropriately by referring to at least one of the two.
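The density determination could be sketched as follows (hypothetical Python; all threshold values are illustrative assumptions, not taken from the patent):

```python
def is_defect_region(candidate_areas, max_single_size=50,
                     count_threshold=5, area_threshold=120):
    """Hypothetical density check for one inspection region: given the
    pixel areas of the defect candidate portions it contains, flag the
    region when the small candidates (each below max_single_size) are
    dense enough by count or by total area. Thresholds are illustrative."""
    small = [a for a in candidate_areas if a < max_single_size]
    return len(small) >= count_threshold or sum(small) >= area_threshold

# Five small candidates: dense enough to mark the region as a defect region.
dense = is_defect_region([10, 10, 10, 10, 10])
# Two small candidates: not dense enough.
sparse = is_defect_region([10, 10])
```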
- The apparatus may further comprise position calculating means that calculates, as a position representing the group of defect candidate portions contained in an inspection region identified as a defect region, the position of the center of gravity of that group.
- A dense cluster of minute defect candidates is very likely to be perceived as a single defect in an inspector's visual inspection. Therefore, by calculating the position of the center of gravity as a position representative of those defect candidate portions, instead of their individual positions, the position of the dense cluster can be presented to the inspector more appropriately.
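The center-of-gravity calculation is a plain centroid over the candidate pixels; a minimal sketch (uniform pixel weight is an assumption):

```python
def centroid(pixels):
    """Hypothetical sketch: center of gravity of a cluster of defect
    candidate pixels given as (x, y) coordinates on the 2-D image."""
    n = len(pixels)
    cx = sum(x for x, _ in pixels) / n
    cy = sum(y for _, y in pixels) / n
    return cx, cy

# Representative position of three clustered candidate pixels:
cx, cy = centroid([(10, 4), (12, 4), (11, 6)])
```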
- To solve the other problem described above, a separate reference image is held for each machined portion of different shape or size, together with the position of the machined portion's image in the axis-equivalent direction (corresponding to the axial direction of the surface) and the number of images of the same machined portion that appear in the circumference-equivalent direction (corresponding to the circumferential direction of the surface).
- The apparatus comprises reference image holding means that holds each reference image in association with this position and number, and defect discriminating means that uses the reference image and its associated position and number to identify regions on the two-dimensional image to be excluded from the defect discrimination target, and discriminates the presence or absence of defects based on the density values of the pixels outside the identified regions. This solves the problem.
- That is, using the position in the axis-equivalent direction associated with a specific machined portion's image as a clue, the range in which that image can exist is narrowed to part of the axis-equivalent direction of the two-dimensional image. The density distribution of the reference image is then compared with the two-dimensional image within the narrowed range to determine whether an image matching the reference image exists, and the region where a matching image exists is excluded from the defect discrimination target.
- In this case, the defect discriminating means refers to the position associated with the reference image to restrict the range on the two-dimensional image to be compared with the reference image,
- narrowing it to part of the axis-equivalent direction, compares the reference image with the two-dimensional image within the narrowed range, and, based on the comparison result,
- may include excluded area specifying means that specifies, as areas to be excluded from the defect discrimination target, the same number of areas as the number associated with the reference image.
- Restricting the comparison to the narrowed range is faster than sequentially comparing the reference image against the entire two-dimensional image. Furthermore, since only as many areas matching the reference image
- as the associated number are excluded from the defect discrimination target, it can be avoided that, for example, when a defect resembling a machined portion coexists with the machined portion in the circumferential direction, the defect's image is excluded from the defect discrimination target while the machined portion's image remains as a discrimination target
- and the machined portion's image is instead judged to be a defect; as a result, a correct determination of the presence or absence of defects can be made.
- In this surface inspection apparatus, the excluded area specifying means may sequentially change the position of the reference image relative to the two-dimensional image in the circumference-equivalent direction within the narrowed range,
- determine at each position the degree of coincidence between the reference image and an inspection target image of the same shape and size on the two-dimensional image, and, when the determined degree of coincidence exceeds a predetermined threshold, identify the area of that inspection target image
- as an area to be excluded from the defect discrimination target.
- The degree of coincidence is determined between the reference image and an inspection target image of the same shape and size, and the position of the inspection target image may also be changed in the axis-equivalent direction within the range associated with the reference image.
- The degree of coincidence may be calculated by normalized correlation between the reference image and the inspection target image. In that case, the calculation of the degree of coincidence between images and the discrimination against the threshold can be processed at high speed.
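A minimal sketch of the normalized-correlation matching, reduced to one dimension for brevity (the apparatus operates on 2-D patches; the 0.95 threshold and the pixel data are illustrative assumptions):

```python
def normalized_correlation(a, b):
    """Normalized cross-correlation of two equal-sized patches given as
    flat pixel lists; 1.0 means identical up to brightness/contrast."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    if da == 0 or db == 0:
        return 0.0          # flat patch: treat as no match
    return num / (da * db)

def find_matches(row, template, threshold=0.95):
    """Slide the template along one row (standing in for the circumference-
    equivalent direction) and return the positions whose degree of
    coincidence exceeds the threshold."""
    w = len(template)
    return [x for x in range(len(row) - w + 1)
            if normalized_correlation(row[x:x + w], template) > threshold]

matches = find_matches([10, 10, 10, 80, 10, 10], [10, 80, 10])
```

Only the window starting at position 2 coincides with the template, so only that area would be excluded from the defect discrimination target.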
- the reference image may correspond to an image obtained by extracting a minimum necessary rectangular area including a single processed part image from the two-dimensional image. According to this, the size of the reference image can be suppressed to the minimum necessary, and the periphery of the image of the processed part can be handled as a defect determination target as much as possible.
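Extracting such a minimal rectangular reference image can be sketched as a bounding-box crop (hypothetical Python; the `mask` marking the machined-portion pixels is an assumption for illustration):

```python
def minimal_reference_image(image, mask):
    """Hypothetical sketch: crop from `image` the smallest rectangle that
    contains all machined-portion pixels (value 1 in `mask`); the cropped
    patch serves as the reference image."""
    coords = [(y, x) for y, row in enumerate(mask)
              for x, v in enumerate(row) if v]
    y0, y1 = min(y for y, _ in coords), max(y for y, _ in coords)
    x0, x1 = min(x for _, x in coords), max(x for _, x in coords)
    return [row[x0:x1 + 1] for row in image[y0:y1 + 1]]

image = [[9, 9, 9, 9],
         [9, 5, 6, 9],
         [9, 7, 8, 9],
         [9, 9, 9, 9]]
mask = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
ref = minimal_reference_image(image, mask)  # 2 x 2 reference image
```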
- According to the surface inspection apparatus described above, among the defect candidate portions extracted from the two-dimensional image of the surface of the object, those of a predetermined size or larger are identified as defects. In addition, when relatively small defect candidate portions that are not individually identified as defects are gathered to a predetermined density or higher within an inspection region, that inspection region is identified as a defect region. Therefore, while eliminating the possibility that minute irregularities on the surface of the inspection object are individually detected as defects, when minute irregularities that are not individually detected as defects are densely gathered within a relatively close range, the dense region is identified as a defect region and can be detected as a defect or a defect-equivalent region and presented to the user of the apparatus.
- Further, by holding a separate reference image for each machined portion of different shape or size, each reference image can be kept small. Since the position of the machined portion's image in the axis-equivalent direction on the two-dimensional image is held in association with the reference image, when specifying the areas to be excluded from the defect discrimination target,
- the range in which the machined portion's image can exist, that is, the range to be compared with the reference image, can be narrowed to part of the axis-equivalent direction of the two-dimensional image; combined with the reduced amount of reference image data, this increases the processing speed.
- Moreover, since the number of images of the same machined portion that should appear in the circumference-equivalent direction of the two-dimensional image is held in association with the reference image,
- the possibility of misjudging an excess of areas as areas to be excluded can be eliminated. As a result, an accurate inspection can be performed by eliminating the influence of machined portions existing on the surface of the inspection object on the determination of the presence or absence of defects.
- FIG. 1 is a diagram showing a schematic configuration of an embodiment of a surface inspection apparatus according to the present invention.
- FIG. 2 is a view showing an example of a two-dimensional image of the inner peripheral surface of the inspection object generated by the surface inspection apparatus in FIG.
- FIG. 4 is an enlarged view of part A in FIG.
- FIG. 5 is a diagram showing a state where a labeling process is performed on the image of FIG.
- FIG. 7 is a diagram showing a state in which the label numbers assigned to the pixels in FIG. 6 are further arranged.
- FIG. 8 is an enlarged view of part B in FIG.
- FIG. 9 is a diagram showing how the inspection area is shifted.
- FIG. 10 is a perspective view showing a part to be inspected with a part broken away.
- FIG. 11 is a view showing another example of the two-dimensional image of the inner peripheral surface of the inspection object generated by the surface inspection apparatus in FIG.
- FIG. 12 is a diagram showing a data structure of a reference image stored in a storage unit.
- FIG. 13 is a flowchart showing defect detection processing executed by the arithmetic processing unit of the surface inspection apparatus in FIG.
- FIG. 1 shows a schematic configuration of a surface inspection apparatus according to an embodiment of the present invention.
- The surface inspection apparatus 1 is suitable for inspecting the cylindrical inner peripheral surface 100a of an object to be inspected 100, and includes an inspection mechanism 2 for performing the inspection and a control unit 3 for controlling the operation of the inspection mechanism 2 and processing its measurement results. The inspection mechanism 2 includes a detection unit 5, serving as detection means, that projects the inspection light onto the inspection object 100 and receives the reflected light from it, and a drive unit 6 that imparts predetermined motions to the detection unit 5.
- The detection unit 5 includes a laser diode (hereinafter, LD) 11 as the light source of the inspection light, a photodetector (hereinafter, PD) 12 that receives the reflected light from the inspection object 100 and outputs a current or voltage signal corresponding to the amount of reflected light per unit time (the reflected light intensity),
- a light projecting fiber 13 that guides the inspection light emitted from the LD 11 to the object 100, a light receiving fiber 14 that guides the reflected light from the object 100 to the PD 12, a holding cylinder 15 that holds the fibers 13 and 14 in a bundled state, and a hollow shaft-like inspection head 16 arranged coaxially outside the holding cylinder 15.
- the inspection head 16 is rotatably supported via a bearing (not shown).
- the inspection light guided through the light projecting fiber 13 is emitted in the form of a beam along the direction of the axis AX of the inspection head 16 (hereinafter referred to as the axial direction).
- a lens 17 is provided for collecting the reflected light traveling in the direction opposite to the inspection light along the axial direction of the inspection head 16 on the light receiving fiber 14.
- A mirror 18 as optical path changing means is fixed to the tip of the inspection head 16 (the right end in FIG. 1).
- A translucent window 16a is provided in the inspection head 16 so as to face the mirror 18. The mirror 18 deflects the optical path of the inspection light emitted from the lens 17 toward the translucent window 16a, and deflects the optical path of the reflected light entering the inspection head 16 through the translucent window 16a toward the lens 17.
- the drive unit 6 includes a linear drive mechanism 30, a rotation drive mechanism 40, and a focus adjustment mechanism 50.
- the linear drive mechanism 30 is provided as a linear drive means for moving the inspection head 16 in the axial direction thereof.
- The linear drive mechanism 30 includes a base 31, a pair of rails 32 fixed to the base 31, a slider 33 that is movable along the rails 32 in the axial direction of the inspection head 16, a feed screw 34 disposed parallel to the axis AX of the inspection head 16, and an electric motor 35 that rotationally drives the feed screw 34.
- the slider 33 functions as a means for supporting the entire detection unit 5.
- The LD 11 and PD 12 are fixed to the slider 33, the inspection head 16 is attached to the slider 33 via the rotation drive mechanism 40, and the holding cylinder 15 is attached to the slider 33 via the focus adjustment mechanism 50. Further, a nut 36 is fixed to the slider 33, and the feed screw 34 is screwed into the nut 36. Accordingly, when the feed screw 34 is rotationally driven by the electric motor 35, the slider 33 moves along the rails 32 in the axial direction of the inspection head 16, and the entire detection unit 5 supported by the slider 33 moves with it. By driving the detection unit 5 with the linear drive mechanism 30, the irradiation position (scanning position) of the inspection light on the inner peripheral surface 100a of the inspection object 100 can be changed in the axial direction of the inspection head 16.
- a wall 31a is provided at the front end (right end in FIG. 1) of the base 31, and a through hole 31b coaxial with the inspection head 16 is provided in the wall 31a.
- a sample piece 37 is attached to the through hole 31b.
- the sample piece 37 is provided as a sample for determining the operating state of the surface inspection apparatus 1, and a through hole 37a coaxial with the inspection head 16 is provided on the center line thereof.
- the through hole 37a has an inner diameter through which the inspection head 16 can pass, and the inspection head 16 passes through the through hole 37a and is drawn out into the inspection object 100.
- the rotation drive mechanism 40 is provided as a rotation drive means for rotating the inspection head 16 about the axis AX.
- The rotary drive mechanism 40 includes an electric motor 41 as a drive source and a transmission mechanism 42 that transmits the rotation of the electric motor 41 to the inspection head 16.
- the transmission mechanism 42 may be a known rotation transmission mechanism such as a belt transmission device or a gear train. In this embodiment, a belt transmission device is used.
- By rotating the inspection head 16 with the rotation drive mechanism 40, the irradiation position of the inspection light on the inner peripheral surface 100a can be changed in the circumferential direction of the inspection object 100.
- the rotary drive mechanism 40 is provided with a rotary encoder 43 that outputs a pulse signal each time the inspection head 16 rotates by a predetermined unit angle. The number of pulse signals output from the rotary encoder 43 correlates with the rotation amount (rotation angle) of the inspection head 16, and the period of the pulse signals correlates with the rotation speed of the inspection head 16.
- the focus adjusting mechanism 50 is provided as a focus adjusting means for driving the holding cylinder 15 in the direction of the axis AX so that the inspection light is focused on the inner peripheral surface 100a of the inspection object 100.
- The focus adjustment mechanism 50 includes a support plate 51 fixed to the base end portion of the holding cylinder 15, a rail 52 disposed between the slider 33 of the linear drive mechanism 30 and the support plate 51 to guide the support plate 51 in the axial direction of the inspection head 16, a feed screw 53 arranged parallel to the axis AX of the inspection head 16 and screwed into the support plate 51, and an electric motor 54 that rotationally drives the feed screw 53.
- When the feed screw 53 is rotationally driven by the electric motor 54, the support plate 51 moves along the rail 52 and the holding cylinder 15 moves in the axial direction of the inspection head 16. Thereby, the length of the optical path from the lens 17 via the mirror 18 to the inner peripheral surface 100a can be adjusted so that the inspection light is focused on the inner peripheral surface 100a of the inspection object 100.
- The control unit 3 includes an arithmetic processing unit 60, a computer unit that manages the inspection process of the surface inspection apparatus 1 and processes the measurement results of the detection unit 5;
- an operation control unit 61 that controls the operation of each part of the detection unit 5 according to instructions from the arithmetic processing unit 60; a signal processing unit 62 that executes predetermined processing on the output signal of the PD 12; an input unit 63 through which a user inputs instructions to the arithmetic processing unit 60; an output unit 64 that presents the inspection results processed by the arithmetic processing unit 60 to the user; and a storage unit 65 that stores the computer programs to be executed by the arithmetic processing unit 60, measured data, and the like.
- the arithmetic processing unit 60, the input unit 63, the output unit 64, and the storage unit 65 can be configured using general-purpose computer equipment such as a personal computer.
- the input unit 63 is provided with input devices such as a keyboard and a mouse, and the output unit 64 is provided with a monitor device.
- An output device such as a printer may be added to the output unit 64.
- For the storage unit 65, a hard disk storage device or a storage device using semiconductor storage elements capable of storing data is used.
- the operation control unit 61 and the signal processing unit 62 may be realized by a hardware control circuit or may be realized by a computer unit.
- each of the arithmetic processing unit 60, the operation control unit 61, and the signal processing unit 62 operates as follows.
- the inspection object 100 is arranged coaxially with the inspection head 16.
- the arithmetic processing unit 60 instructs the operation control unit 61 to start an operation necessary for inspecting the inner peripheral surface 100a of the inspection object 100 in accordance with an instruction from the input unit 63.
- The operation control unit 61 causes the LD 11 to emit light at a predetermined intensity, and controls the operation of the motors 35 and 41 so that the inspection head 16 moves in the axial direction and rotates about the axis AX at a constant speed.
- the operation control unit 61 controls the operation of the motor 54 so that the inspection light is focused on the inner peripheral surface 100a as the surface to be inspected.
- Thereby, the inner peripheral surface 100a is scanned by the inspection light from one end to the other.
- the driving of the inspection head 16 in the axial direction may be a feed operation at a constant speed, or an intermittent feed operation that moves by a predetermined pitch each time the inspection head 16 rotates.
- the output signal of the PD 12 is sequentially guided to the signal processing unit 62 in conjunction with the scanning of the inner peripheral surface 100a described above.
- The signal processing unit 62 performs the analog signal processing required before the arithmetic processing unit 60 can process the output signal of the PD 12, and then A/D-converts the processed analog signal with a predetermined number of bits,
- outputting the resulting digital signal to the arithmetic processing unit 60 as a reflected light signal.
- The analog signal processing includes, as appropriate, various processes such as nonlinear amplification of the output signal to increase the contrast of the reflected light detected by the PD 12 and removal of noise components from the output signal.
- the A/D conversion by the signal processing unit 62 is performed using the pulse train output from the rotary encoder 43 as a sampling clock signal. As a result, a digital signal having a gradation correlated with the amount of light received by the PD 12 while the inspection head 16 rotates by a predetermined angle is generated and output from the signal processing unit 62.
- the arithmetic processing unit 60 that has received the reflected light signal from the signal processing unit 62 stores the acquired signal in the storage unit 65. Further, the arithmetic processing unit 60 uses the reflected light signal stored in the storage unit 65 to generate a two-dimensional image in which the inner peripheral surface 100a of the inspection object 100 is developed in a plane.
- An example of the two-dimensional image is shown in Fig. 2.
- the two-dimensional image 200 corresponds to an image in which the inner peripheral surface 100a is developed on a plane defined by an orthogonal two-axis coordinate system in which the circumferential direction of the inspection object 100 is the X-axis direction and the axial direction of the inspection head 16 is the y-axis direction.
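The development just described can be sketched as follows: since one revolution of the inspection head yields one row of samples along the circumference, the unwrapped image is obtained by slicing the sample stream row by row. This is an illustrative sketch, not the patent's implementation; the function name and the row-major acquisition order are assumptions.

```python
def develop_surface(samples, steps_per_rev):
    """Arrange a stream of reflected-light gradation samples into a 2D image.

    samples: flat list of gradation values, assumed acquired row-major
             (one full rotation = one image row along y; steps_per_rev
             samples per rotation along the circumference = x).
    Returns a list of rows (y-axis) of pixel values (x-axis).
    """
    assert len(samples) % steps_per_rev == 0
    return [samples[i:i + steps_per_rev]
            for i in range(0, len(samples), steps_per_rev)]
```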
- in the two-dimensional image 200, uneven portions such as defects existing on the inner peripheral surface 100a are expressed as dark parts 201, and normal portions of the inner peripheral surface 100a are expressed as bright parts 202.
- since the inspection object 100 is a casting, defects such as blowholes (cavities) existing on the inner peripheral surface 100a and flaws produced during cutting are imaged as the dark parts 201.
- the arithmetic processing unit 60 inspects the obtained two-dimensional image 200 and determines the dark part 201 that satisfies a certain condition as a defect. The detailed procedure for defect detection will be described below with reference to FIG.
- FIG. 3 shows a defect detection routine executed by the arithmetic processing unit 60 in order to detect defects in the inspection object 100.
- the arithmetic processing unit 60 first generates a two-dimensional image 200 of the inner peripheral surface 100a based on the reflected light signal received from the signal processing unit 62 in step S1.
- the two-dimensional image 200 is an image virtually generated on the RAM of the arithmetic processing unit 60.
- the size of one pixel 203 of the two-dimensional image 200 may be set as appropriate; as one example, it is 150 μm in the X-axis direction and 50 μm in the y-axis direction.
- in step S2, the arithmetic processing unit 60 compares the gradation of each pixel 203 constituting the two-dimensional image 200 with a predetermined threshold, sets the gradation of a pixel darker than the threshold to 1, and sets the gradation of a brighter pixel to 0. As a result, the gradation of the pixels corresponding to the dark parts 201 of the two-dimensional image 200 in FIG. 2 is converted to 1, and the gradation of the other pixels is converted to 0.
- FIG. 4 shows the binarized image of part A in FIG. 2. In FIG. 4, the pixels 203 with gradation 1 are hatched.
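A minimal sketch of the binarization of step S2, assuming (as in this embodiment) that defects appear as dark parts, i.e. lower gradation values; the function name and threshold value are illustrative:

```python
def binarize(image, threshold):
    """Step S2 sketch: pixels darker than the threshold (smaller gradation
    = less reflected light) become 1, brighter pixels become 0."""
    return [[1 if px < threshold else 0 for px in row] for row in image]
```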
- for reference, the shapes of the unevenness and the like existing on the inner peripheral surface 100a of the inspection object 100 are also shown by imaginary lines L1 and L2.
- the pixel group with gradation 1 (the pixel group with hatching) corresponds to the first pixel group with gradation corresponding to the defect on the inner peripheral surface 100a of the inspection object 100.
- the pixel group with gradation 0 corresponds to the second pixel group with gradation not corresponding to the defect.
- the arithmetic processing unit 60 proceeds to step S3, and performs a labeling process on the binarized image.
- the labeling process is a known process for adding group attributes to the pixels included in a two-dimensional image, and is performed for all the pixels constituting the two-dimensional image. Below, the labeling process is described taking the binarized image of FIG. 4, corresponding to part A of FIG. 2, as an example.
- in the labeling process, the gradation of each pixel of the binarized image is sequentially inspected in a predetermined direction. When a pixel whose gradation is 1 and to which no label has yet been attached is found, that pixel is detected as a target pixel.
- the pixel 203a indicated by the bold line in FIG. 5 is first detected as the target pixel. When the target pixel 203a is detected, it is subsequently checked whether the gradation of a predetermined number of pixels (usually 4 or 8) adjacent to the target pixel 203a is 1. Then, a unique label number not yet used on the binarized image is assigned to the target pixel 203a and to the contiguous pixels of gradation 1. In FIG. 5, label number 1 is assigned to the target pixel 203a and the contiguous pixels of gradation 1.
- the arithmetic processing unit 60 repeats such processing every time a target pixel is detected.
- the target pixels 203b and 203c are subsequently detected in order; label number 2 is assigned to the target pixel 203b and the gradation-1 pixels adjacent to it, and label number 3 is assigned to the target pixel 203c and the gradation-1 pixels adjacent to it. Note that the pixel 203d on the right side of the target pixel 203b is not detected as a target pixel, because label number 2 is assigned to it when the target pixel 203b is inspected. Further, at the time of the inspection relating to the target pixel 203c, label number 3 is not attached to the pixel 203d, since label number 2 is already attached to it. The labeling process is repeated until no further target pixel is detected on the binarized image, and then ends.
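The labeling of steps S3 and S4 can be sketched with a flood fill; note that a flood fill assigns one label per connected region directly, so the label-conflict reconciliation of step S4 is subsumed rather than reproduced literally. Function and variable names are illustrative assumptions.

```python
from collections import deque

def label_components(binary, connectivity=4):
    """Sketch of steps S3-S4: group gradation-1 pixels into connected
    components. Returns (label image, number of labels); background is 0."""
    h, w = len(binary), len(binary[0])
    if connectivity == 4:
        nbrs = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    else:  # 8-connectivity
        nbrs = [(dy, dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)]
    labels = [[0] * w for _ in range(h)]
    current = 0
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and labels[y][x] == 0:
                current += 1              # a new target pixel was found
                q = deque([(y, x)])
                labels[y][x] = current
                while q:                  # flood-fill the whole component
                    cy, cx = q.popleft()
                    for dy, dx in nbrs:
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1
                                and labels[ny][nx] == 0):
                            labels[ny][nx] = current
                            q.append((ny, nx))
    return labels, current
```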
- the arithmetic processing unit 60 proceeds to step S4 and arranges the label numbers.
- in step S4, in arranging the label numbers, portions where adjacent pixels have different label numbers are detected, and the label numbers are reassigned so that adjacent pixels have the same label number.
- in the example of FIG. 5, the pixels 203c and 203d are adjacent to each other but bear the different label numbers 3 and 2; to resolve this, the label number of all the pixels 203c and 203e bearing label number 3 is changed to 2.
- FIG. 6 shows the state after the label change; in FIG. 6, pixels with the same label number are surrounded by a bold line. As is clear from a comparison between FIG. 4 and FIG. 6, all pixels given gradation 1 corresponding to the dark part 201 on the left side are grouped under label number 1, and all pixels with gradation 1 corresponding to the dark part 201 on the right side are grouped under label number 2.
- the pixels grouped in this way correspond to the defect candidate portions 210, each extracted as a range of the first pixel group (pixels whose gradation corresponds to a defect on the inner peripheral surface 100a of the inspection object 100) surrounded by the second pixel group (pixels whose gradation does not correspond to a defect).
- the minimum unit of a defect candidate portion 210 is one pixel.
- each defect candidate portion 210 corresponds to a dark part 201 in the two-dimensional image shown in FIG. 2.
- further, the label numbers of the groups are reassigned in descending order of the number of pixels.
- in this example, the right pixel group has more pixels than the left pixel group, so the label number of the right pixel group is changed to 1 and that of the left pixel group is changed to 2, as shown in FIG. 7.
- although FIGS. 4 to 7 exemplify the case where there are two dark parts 201, the label numbers are actually changed over all regions of the binarized image. Therefore, the label numbers illustrated in FIGS. 6 and 7 do not necessarily match the processing results for the entire two-dimensional image of FIG. 2.
- in step S5, the arithmetic processing unit 60 calculates the area, the lengths of the long side and the short side, and the position on the two-dimensional image for every defect candidate portion 210 extracted by the processes of steps S3 and S4, and stores the calculation results in the RAM or the storage unit 65.
- the area may be expressed by the number of pixels included in the defect candidate portion 210, or the actual area of the defect candidate portion 210 may be obtained by the product of the area occupied by one pixel and the number of pixels.
- the lengths of the long side and the short side of a defect candidate portion 210 can be obtained by multiplying the numbers of pixels of the defect candidate portion 210 in the X-axis and y-axis directions by the actual size per pixel.
- the position of the defect candidate part 210 can be represented by a position representing the defect candidate part 210 (for example, the position of the center of gravity) in the X-axis coordinates and the y-axis coordinates.
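The quantities of step S5 can be sketched as follows, using the example pixel pitches from the text (150 μm × 50 μm). The dictionary layout and all names are assumptions for illustration; the result list is sorted by pixel count, largest first, mirroring the relabeling in descending order of pixel number:

```python
def region_features(labels, px_x_mm=0.15, px_y_mm=0.05):
    """Step S5 sketch: per defect candidate, compute area, bounding-box
    side lengths in mm, and the centroid (in pixel coordinates)."""
    feats = {}
    for y, row in enumerate(labels):
        for x, lb in enumerate(row):
            if lb == 0:
                continue
            f = feats.setdefault(lb, {"n": 0, "sx": 0, "sy": 0,
                                      "minx": x, "maxx": x,
                                      "miny": y, "maxy": y})
            f["n"] += 1
            f["sx"] += x
            f["sy"] += y
            f["minx"] = min(f["minx"], x); f["maxx"] = max(f["maxx"], x)
            f["miny"] = min(f["miny"], y); f["maxy"] = max(f["maxy"], y)
    out = []
    for f in sorted(feats.values(), key=lambda f: -f["n"]):
        w_mm = (f["maxx"] - f["minx"] + 1) * px_x_mm   # x extent in mm
        h_mm = (f["maxy"] - f["miny"] + 1) * px_y_mm   # y extent in mm
        out.append({
            "area_mm2": f["n"] * px_x_mm * px_y_mm,
            "long_mm": max(w_mm, h_mm),
            "short_mm": min(w_mm, h_mm),
            "centroid": (f["sx"] / f["n"], f["sy"] / f["n"]),
        })
    return out
```

A size gate such as step S6 ("short side 0.2 mm or more") then reduces to a comparison against `short_mm`.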
- in step S6, the arithmetic processing unit 60 detects the defect candidate portions 210 having a predetermined size or larger and identifies all of them as defects; for example, a defect candidate portion 210 whose short side is 0.2 mm or longer is identified as a defect. Further, in the next step S7, the arithmetic processing unit 60 inspects, for each predetermined inspection region on the two-dimensional image, the density of the defect candidate portions 210 that were not handled as defects in step S6, that is, the defect candidate portions 210 smaller than the predetermined size.
- even when minute dark parts 201 are not individually determined as defects in step S6, they may be recognized as a defect by an inspector's eye if they are concentrated within a certain range; step S7 and the subsequent steps are processes for identifying such a dense area as a defect area.
- an inspection region B having a predetermined size is set in the two-dimensional image, and the density of the defect candidate portion 210 is inspected for each inspection region B.
- FIG. 8 is an enlarged view of inspection area B in FIG.
- the sizes xd and yd in the X-axis direction and the y-axis direction of the inspection area B may be set as appropriate.
- in FIG. 8, small defect candidate portions 210, each too small to be identified as a defect by itself, are concentrated in a relatively narrow range.
- in step S7, the arithmetic processing unit 60 determines, based on information such as the areas of the defect candidate portions 210 acquired in step S5, the number of defect candidate portions 210 of a predetermined area or larger existing in the inspection region B, and thereby discriminates the density of the defect candidate portions 210 in the inspection region B.
- in step S8, the arithmetic processing unit 60 identifies an inspection region B in which the density of the defect candidate portions 210 is equal to or higher than a predetermined level as a defect area; for example, when the number of defect candidate portions 210 having the predetermined area or larger is equal to or greater than a predetermined value, the density is regarded as high and that inspection region B is identified as a defect area. Further, when a region is recognized as a defect area in step S8, the position of the center of gravity of the group of minute defect candidate portions 210 included in the region is calculated as a position representing those defect candidate portions 210.
- as shown in FIG. 2, the inspection region B is set by sequentially changing its position in the X-axis direction while partially overlapping the preceding region on the two-dimensional image.
- after the inspection region B makes a full round of the inner peripheral surface 100a in the X-axis direction, the inspection region B is shifted in the y-axis direction while partially overlapping the preceding position, and the setting of the inspection region B and the density inspection within the region are repeated in the same manner.
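The window scan of steps S7 and S8 can be sketched as below. The representation of candidates as centroid coordinates, all parameter names, and the wraparound handling in the circumferential direction are illustrative assumptions, not the patent's implementation:

```python
def dense_defect_regions(candidates, circ_mm, win_x, win_y,
                         step_x, step_y, min_count):
    """Steps S7-S8 sketch: slide an inspection window B over the unwrapped
    surface, wrapping around in the circumferential (x) direction, and flag
    windows holding at least `min_count` small candidates.

    candidates: list of (x_mm, y_mm) centroids of sub-threshold candidates.
    Returns the centre of gravity of the candidates in each flagged window.
    """
    flagged = []
    max_y = max((cy for _, cy in candidates), default=0.0)
    y = 0.0
    while y <= max_y:
        x = 0.0
        while x < circ_mm:
            inside = [(cx, cy) for cx, cy in candidates
                      if y <= cy < y + win_y
                      and (x <= cx < x + win_x
                           or cx + circ_mm < x + win_x)]  # x wraparound
            if len(inside) >= min_count:
                gx = sum(cx for cx, _ in inside) / len(inside)
                gy = sum(cy for _, cy in inside) / len(inside)
                flagged.append((gx, gy))
            x += step_x
        y += step_y
    return flagged
```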
- note that the defect candidate portions 210 identified as defects in step S6 may or may not be excluded from this inspection; even if they are not excluded, it is possible to identify an area where only minute defect candidate portions 210 not identified as defects in step S6 are concentrated as a defect area.
- in step S9, the arithmetic processing unit 60 stores the identification results of steps S6 and S8 in the storage unit 65 as inspection results and outputs them to the output unit 64.
- the defects identified in step S6 and the defect areas identified in step S8 may be presented to the user as the same type of defect without distinction, or the two may be distinguished and presented to the user. In either case, an area where minute blowholes or the like are concentrated can be detected as a defect and the user can be informed of its presence.
- the arithmetic processing unit 60 then ends the defect detection routine. It should be noted that the correspondence between the defect candidate portions and the like shown in FIGS. 4 to 8 and the pixel size is merely illustrative and does not indicate an actual inspection state.
- as described above, a defect candidate portion 210 having a predetermined size or larger is identified as a defect on the two-dimensional image of the inner peripheral surface 100a, and when minute defect candidate portions 210 that are smaller than that size and are not treated as defects by themselves are densely packed within a relatively narrow range, that region can be identified as a defect area. As a result, the size threshold of the defect candidate portion 210 for determining a defect need not be made smaller than necessary, so that the possibility of excessively detecting minute, isolated defect candidate portions 210 as defects can be eliminated.
- the resolution of the two-dimensional image, that is, the size of one pixel, only needs to be set so that a minute dark part 201 to be evaluated in the defect area inspection occupies at least one pixel on the two-dimensional image; it is therefore unnecessary to set the inspection resolution finer than necessary. For this reason, even if the inspection head 16 is rotated at a relatively high speed, defects and defect areas can be detected with high accuracy, and a decrease in inspection efficiency due to unnecessary enhancement of the resolution can be prevented.
- in this embodiment, the arithmetic processing unit 60 functions as image generation means by executing step S1 of FIG. 3, functions as defect candidate extraction means by executing steps S2 to S5, functions as defect identification means by executing step S6, and functions as defect area identification means by executing steps S7 and S8.
- in the above embodiment, the inspection head is rotated and fed in the axial direction so that the inner peripheral surface is scanned with the inspection light; however, at least one of the rotational movement and the linear movement of the inspection head may be omitted and replaced by rotating or linearly moving the inspection object itself. The present invention can thus also be applied to a surface inspection apparatus that scans the surface of the inspection object in this manner.
- the process of distinguishing the first pixels, whose gradation corresponds to a defect in the two-dimensional image, from the other, second pixels is not limited to the example of binarizing the image; the pixels corresponding to defects may instead be distinguished on a grayscale image or a color image.
- the process of extracting the defect candidate part is not limited to the labeling process, and various image processing methods may be used.
- in the above embodiment, the density of the defect candidate portions in the inspection region is determined using the number of defect candidate portions having a predetermined area or larger, but the density may instead be determined from the ratio of the total area of the defect candidate portions to the area of the inspection region. Alternatively, attention may be paid to one of the small defect candidate portions that cannot be identified as a defect by itself, and the density may be determined by referring to the distance between the noticed defect candidate portion and the adjacent defect candidate portions. The density determination of the defect candidates can thus be performed using various kinds of information.
- the configuration of the surface inspection apparatus according to this embodiment is the same as the configuration described with reference to FIG. 1, but the processing in the arithmetic processing unit 60 differs from that of the above-described embodiment; this is described below.
- the arithmetic processing unit 60, as image generating means, generates a two-dimensional image in which the inner peripheral surface 300a of the inspection object 300 shown in FIG. 10 is developed on a plane. That is, as shown in FIG. 11, the arithmetic processing unit 60 generates a two-dimensional image developed on a plane defined by an orthogonal two-axis coordinate system in which the X axis runs along the circumferential direction of the inspection object 300 and the y axis runs along its axial direction.
- the two-dimensional image is, for example, an 8-bit grayscale image.
- the X-axis direction is the direction corresponding to the circumference on the two-dimensional image 400
- the y-axis direction is the direction corresponding to the axis on the two-dimensional image 400.
- on the inner peripheral surface 300a, machined holes 303a, 303b, 303c and 304 exist as machined portions, forming regions having a lower reflectance than the ground surface 300b.
- the ground surface 300b is a cut surface with no defects.
- the machined holes 303a to 303c have the same shape and size, and their positions in the y-axis direction are also equal to each other. In the following, when it is not necessary to distinguish the machined holes 303a to 303c, they are described as the machined holes 303.
- the machined hole 304 differs in shape and size from the machined holes 303, and is also displaced from them in the y-axis direction.
- an example of the two-dimensional image generated by the arithmetic processing unit 60 in correspondence with the inner peripheral surface 300a of FIG. 10 is shown in FIG. 11.
- the two-dimensional image 400 is configured by arranging a large number of pixels 400a in the X-axis direction and the y-axis direction.
- the size that one pixel 400a occupies on the inner peripheral surface 300a may be set as appropriate; as an example, the width of one pixel 400a in the X-axis direction corresponds to 150 μm on the inner peripheral surface 300a, and its width in the y-axis direction corresponds to 50 μm on the inner peripheral surface 300a.
- on the two-dimensional image 400, defect images 401 and 402 corresponding to the defects 301 and 302 appear, together with machined-hole images 403a to 403c (collectively denoted by reference numeral 403) and 404 corresponding to the machined holes 303a to 303c and 304.
- the density values of these images are lower (darker) than those of the background region 405 corresponding to the ground surface 300b of the inner peripheral surface 300a. That is, in this embodiment, the density value of a pixel 400a in the two-dimensional image 400 increases as the amount of reflected light increases, and the defects and the machined portions appear as dark parts.
- the arithmetic processing unit 60 determines whether or not there is a defect on the inner peripheral surface 300a of the inspection object 300 by processing the two-dimensional image 400 according to a predetermined algorithm, and outputs the determination result. Output to part 64.
- in this defect detection, the presence or absence of a defect image is determined by paying attention to the dark parts of the two-dimensional image 400.
- however, the machined-hole images 403 and 404 also appear as dark parts in the same manner as the defect images 401 and 402; therefore, even if the defects 301 and 302 have not occurred in the inspection object 300, the machined-hole images 403 and 404 may be determined to be defects and the inspection object 300 may be erroneously determined to be defective.
- therefore, the arithmetic processing unit 60 executes the defect detection process shown in FIG. 13 to eliminate the influence of machined portions such as the machined holes 303 and 304 on defect detection.
- in this defect detection process, an image of each machined portion that is known in advance to exist on the inner peripheral surface 300a of the inspection object 300 is prepared as a reference image; the image of the machined portion is detected based on its degree of coincidence with the images appearing as dark parts on the two-dimensional image 400, the detected image of the machined portion is excluded from the object of defect determination, and the presence or absence of a defect is then determined.
- in this example, a reference image in which only the machined-hole image 403 is extracted and a reference image in which only the machined-hole image 404 is extracted are prepared. Specifically, images obtained by extracting from the two-dimensional image 400 the rectangular areas 411 and 412 of the minimum necessary size including the machined-hole image 403 or the machined-hole image 404 are prepared as reference images; hereinafter these are referred to as the reference images 411 and 412. Since the machined holes 303a, 303b, and 303c have the same shape and size, the reference image 411 corresponding to them is common; that is, only one is prepared.
- the reference images 411 and 412 can be created by the user specifying the areas to be used as the reference images from a two-dimensional image 400 obtained by actually imaging the inner peripheral surface 300a of the inspection object 300. Alternatively, the reference images 411 and 412 may be generated by calculating the images of the machined holes 303 and 304 from the design data of the inspection object 300 and the imaging conditions. Note that the reference images 411 and 412 are created as grayscale images having the same gradation as the two-dimensional image 400.
- the reference images 411 and 412 are stored in the storage unit 65 in advance.
- FIG. 12 shows an example of the data structure of the reference images 411 and 412 stored in the storage unit 65.
- the data of the reference images 411 and 412 are numerical data in which the density values of the pixels included in the reference images 411 and 412 are described according to the arrangement order of the pixels. These data are stored in the storage unit 65 in association with the representative coordinates y1 and y2 of the corresponding machined-hole images 403 and 404 in the y-axis direction and with the numbers N1 and N2; as the representative coordinates y1 and y2, for example, the y coordinates of the centers of gravity (or center points) of the machined-hole images 403 and 404 are selected.
- the origin of the coordinates in the y-axis direction is set at the upper end 400c of the two-dimensional image 400, that is, at the edge 300c of the inner peripheral surface 300a in the axial direction in FIG. 10.
- the numbers N1 and N2 are values indicating how many machined holes 303 and 304 of the same shape and size exist on the representative coordinates y1 and y2, respectively. In other words, they indicate how many machined-hole images 403 and 404 corresponding to the reference images 411 and 412 should be present at the positions indicated by the y coordinates y1 and y2. In this example, the number N1 is 3 and the number N2 is 1.
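The record structure of FIG. 12 can be sketched as a small data class holding the template pixels, the representative y coordinate, and the expected hole count. The field names and the example pixel values are assumptions for illustration:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ReferenceImage:
    """Sketch of a FIG. 12 record: a grayscale template stored with its
    representative y coordinate and the count of identical machined holes
    expected on that coordinate."""
    pixels: List[List[int]]   # density values, in pixel arrangement order
    y_rep: int                # representative y coordinate (e.g. centroid)
    count: int                # holes of this shape/size expected at y_rep

# Illustrative instances for this embodiment: template 411 covers the three
# identical holes 303a-303c (N1 = 3), template 412 covers hole 304 (N2 = 1).
refs = [
    ReferenceImage(pixels=[[0]], y_rep=40, count=3),   # reference image 411
    ReferenceImage(pixels=[[0]], y_rep=90, count=1),   # reference image 412
]
```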
- when the scanning of the inner peripheral surface 300a is completed, the arithmetic processing unit 60 generates the two-dimensional image 400 of the inner peripheral surface 300a based on the reflected light signal received from the signal processing unit 62.
- the two-dimensional image 400 is a gray scale image virtually generated on the RAM of the arithmetic processing unit 60.
- after generating the two-dimensional image 400, the arithmetic processing unit 60 starts the defect detection process and first, in step S11, sets an initial value for the y coordinate to be inspected; in this case, the smallest y coordinate (y1 in this example) of the representative coordinates y1 and y2 associated with the reference images 411 and 412 is set as the initial value.
- next, the arithmetic processing unit 60 sets the counter value for counting the number of detected machined holes to an initial value of 0.
- in step S14, the arithmetic processing unit 60 selects, from the two-dimensional image 400, an inspection target image to be compared with the reference image.
- the inspection target image 420 is selected so as to have the same shape and size as the reference image 411 on the y coordinate y1 and so as to coincide with the reference image 411 in position in the y-axis direction.
- the position of the inspection target image 420 in the X-axis direction is changed sequentially by a predetermined number of pixels every time step S14 is executed. When step S14 is first executed for the y coordinate y1, the inspection target image 420 is in contact with the left end of the two-dimensional image 400, and thereafter the position of the inspection target image 420 is shifted sequentially to the right in the X-axis direction. This processing is equivalent to sequentially changing, from the left end of the two-dimensional image 400, the position of the reference image 411 in the X-axis direction within the range of the same dimension as the reference image 411 in the y-axis direction, centered on the y coordinate y1.
- in step S15, the arithmetic processing unit 60 calculates the degree of coincidence between the inspection target image and the reference image using normalized correlation.
- the normalized correlation shows a positive correlation when the density values of the compared images tend to vary together, that is, when the images are similar, and shows a negative correlation when they vary oppositely, that is, when the images are dissimilar.
- the correlation value of the normalized correlation is expressed by (A × 10000) ÷ √(B × C), where N indicates the number of pixels of the reference image, I indicates the density value of each pixel of the inspection target image, and T indicates the density value of each pixel of the reference image, with A = N·Σ(I·T) − (ΣI)(ΣT), B = N·ΣI² − (ΣI)², and C = N·ΣT² − (ΣT)².
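A sketch of the correlation computation follows. The excerpt gives only the formula (A × 10000) ÷ √(B × C) and the meanings of N, I, and T; the expansions of A, B, and C used here are the conventional normalized-correlation terms and are therefore an assumption:

```python
import math

def normalized_correlation(inspect_px, ref_px):
    """Normalized correlation sketch: R = (A * 10000) / sqrt(B * C), with
    the (assumed) conventional terms
      A = N*sum(I*T) - sum(I)*sum(T)
      B = N*sum(I^2) - sum(I)^2
      C = N*sum(T^2) - sum(T)^2
    where N is the pixel count of the reference image, I the inspection-image
    densities, and T the reference-image densities. Identical images give
    +10000, inverted images give -10000."""
    n = len(ref_px)
    assert len(inspect_px) == n
    s_i, s_t = sum(inspect_px), sum(ref_px)
    s_it = sum(i * t for i, t in zip(inspect_px, ref_px))
    s_ii = sum(i * i for i in inspect_px)
    s_tt = sum(t * t for t in ref_px)
    a = n * s_it - s_i * s_t
    b = n * s_ii - s_i * s_i
    c = n * s_tt - s_t * s_t
    if b == 0 or c == 0:        # a flat image has no variance to correlate
        return 0
    return (a * 10000) / math.sqrt(b * c)
```

Scaling by 10000 keeps the score in integer-friendly range, so the threshold of step S16 can be an integer such as 7000.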
- in step S16, the arithmetic processing unit 60 determines whether or not the normalized correlation value obtained in step S15 exceeds a predetermined threshold value.
- the threshold value used here may be set to a value at which the normalized correlation value can be considered to show a positive correlation. If the threshold value is exceeded, the arithmetic processing unit 60 proceeds to the next step S17, identifies the area extracted as the inspection target image as a mask area, and stores the positions on the two-dimensional image 400 of the pixel group included in the mask area.
- in the next step S18, the arithmetic processing unit 60 adds 1 to the counter value described above and proceeds to step S19.
- if the threshold value is not exceeded in step S16, the arithmetic processing unit 60 skips steps S17 and S18 and proceeds to step S19.
- in step S19, the arithmetic processing unit 60 determines whether or not the counter value (the number of detected machined holes) matches the number of machined holes acquired in step S13 (for example, N1 when the y coordinate is y1). If the counter value does not match the number of machined holes, the arithmetic processing unit 60 returns to step S14 and calculates the degree of coincidence between the next inspection target image 420 and the reference image. If the counter value matches the number of machined holes in step S19, the arithmetic processing unit 60 proceeds to step S20.
- in step S20, the arithmetic processing unit 60 determines whether or not the inspection has been completed for all the y coordinates associated with the reference image data. If there is a y coordinate that has not yet been inspected, the arithmetic processing unit 60 selects the next y coordinate in step S21 and returns to step S12. If the inspection of all y coordinates has been completed in step S20, the arithmetic processing unit 60 proceeds to the next step S22.
- in step S22, the arithmetic processing unit 60 determines whether or not a defect image exists in the two-dimensional image 400 while excluding the regions identified as mask areas from the object of defect determination.
- specifically, the arithmetic processing unit 60 binarizes the two-dimensional image 400 with a threshold value such that the defect images 401 and 402 are classified as dark parts and the background region 405 as a bright part, and determines the presence or absence of a defect using the size of the dark parts in the obtained binarized image as a clue. At this time, the density values of all the pixels identified as mask areas in step S17 are converted to the same density value as the background region 405. Thus, the images regarded as machined-hole images are deleted from the obtained binarized image; therefore, even when only machined-hole images exist on the inner peripheral surface 300a, there is no possibility that a machined-hole image is erroneously determined to be a defect.
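The masking of step S22 can be sketched as follows: pixels flagged as mask areas are overwritten with the background density before binarization, so machined-hole images cannot surface as dark parts. The function name and the boolean-mask representation are illustrative assumptions:

```python
def mask_and_binarize(image, mask, background_value, threshold):
    """Step S22 sketch: overwrite mask-area (machined-part) pixels with the
    background density, then binarize so only genuine dark parts remain 1."""
    h, w = len(image), len(image[0])
    cleaned = [[background_value if mask[y][x] else image[y][x]
                for x in range(w)]
               for y in range(h)]
    # darker than threshold -> 1 (possible defect), otherwise 0 (background)
    return [[1 if px < threshold else 0 for px in row] for row in cleaned]
```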
- after step S22, the arithmetic processing unit 60 ends the defect detection process of FIG. 13.
- according to the above processing, the mask areas can be identified easily and quickly regardless of the positions of the machined-hole images 403 and 404 in the X-axis direction of the two-dimensional image 400 (corresponding to the circumferential direction of the inspection object 300). This point is described below.
- in the axial direction of the inspection object 300, the edge 400c of the two-dimensional image 400, which corresponds to the edge 300c of the inner peripheral surface 300a, can be used as a reference, so the y coordinates where the machined-hole images 403 and 404 exist can be uniquely specified. However, the X coordinates of the machined-hole images 403 and 404 change according to the relationship between the scanning start position on the inner peripheral surface 300a of the inspection object 300 and the positions of the machined holes 303 and 304, and there is no clear reference on the two-dimensional image 400 that can be used to specify the positions of the machined-hole images 403 and 404 in the X-axis direction.
- on the other hand, since the reference images 411 and 412 are prepared separately for each group of machined-hole images 403 and 404 differing in at least one of shape and size, the reference images 411 and 412 are small in size and small in data amount. For this reason, even when the reference images 411 and 412 are moved relative to the two-dimensional image 400 along the y coordinates y1 and y2, the time required for the processing is short. Moreover, the range in which the reference image 411 and the inspection target image 420 are compared is narrowed down to the range of the same dimension as the reference image 411 in the y-axis direction centered on the y coordinate y1, and likewise the range compared with the reference image 412 is narrowed down to the range of the same dimension as the reference image 412 in the y-axis direction centered on the y coordinate y2; it is therefore unnecessary to compare each of the reference images 411 and 412 with the entire two-dimensional image 400. As a result, the mask process for specifying the regions to be excluded from the defect inspection can be executed at high speed.
- in this embodiment, the storage unit 65 corresponds to the reference image holding means. Further, the arithmetic processing unit 60 functions as defect determination means by executing the processing of FIG. 13, and in particular functions as exclusion area specifying means by executing the processing of steps S14 to S19 of FIG. 13.
- the present invention can be implemented in various forms without being limited to the above forms.
- the number of processed holes associated with the reference image can be used not only as information for identifying the mask area but also as information for determining the presence or absence of a defect.
- in the above embodiment, the center y coordinate is used as the position of the machined-hole image associated with the reference image; however, the position is not limited to this, and any appropriate position may be defined as the position of the machined-hole image.
- the means for acquiring a two-dimensional image of the surface of the inspection object is not limited to the above-described form and can be changed as appropriate. Further, the present invention is not limited to the example of inspecting the inner peripheral surface of the inspection object, and can also be applied to the inspection of a cylindrical outer peripheral surface. Furthermore, the present invention is not limited to the inspection of an inspection object provided with machined holes as machined portions; any machined portion on the cylindrical surface to be inspected can be excluded from the object of defect determination according to the present invention.
- The concept of processing broadly includes anything that artificially alters the material of the object to be inspected, and various types of processing such as printing, coloring, and surface modification fall within this concept.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Analytical Chemistry (AREA)
- General Health & Medical Sciences (AREA)
- Biochemistry (AREA)
- Immunology (AREA)
- Pathology (AREA)
- Chemical & Material Sciences (AREA)
- Quality & Reliability (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Geometry (AREA)
- Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
- Image Processing (AREA)
Description
Claims
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP07743476A EP2023129A4 (en) | 2006-05-23 | 2007-05-16 | SURFACE INSPECTION DEVICE |
US12/300,823 US8351679B2 (en) | 2006-05-23 | 2007-05-16 | Exclusion of recognized parts from inspection of a cylindrical object |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006143181A JP2007315803A (ja) | 2006-05-23 | 2006-05-23 | Surface inspection device |
JP2006-143181 | 2006-05-23 | ||
JP2006-258263 | 2006-09-25 | ||
JP2006258263A JP4923211B2 (ja) | 2006-09-25 | 2006-09-25 | Surface inspection device |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007135915A1 true WO2007135915A1 (ja) | 2007-11-29 |
Family
ID=38723229
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/060041 WO2007135915A1 (ja) | 2006-05-23 | 2007-05-16 | 表面検査装置 |
Country Status (5)
Country | Link |
---|---|
US (1) | US8351679B2 (ja) |
EP (1) | EP2023129A4 (ja) |
KR (1) | KR100996325B1 (ja) |
CN (1) | CN102062739B (ja) |
WO (1) | WO2007135915A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012154639A (ja) * | 2011-01-21 | 2012-08-16 | Jtekt Corp | Surface inspection device for rolling bearings |
Families Citing this family (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8351683B2 (en) | 2007-12-25 | 2013-01-08 | Hitachi High-Technologies Corporation | Inspection apparatus and inspection method |
JP5408333B2 (ja) * | 2010-03-17 | 2014-02-05 | Shimadzu Corporation | TFT array inspection method and TFT array inspection apparatus |
US20130113926A1 (en) * | 2010-07-02 | 2013-05-09 | Huazhong University Of Science & Technology | Detecting device for detecting icing by image and detecting method thereof |
JP2012028987A (ja) * | 2010-07-22 | 2012-02-09 | Toshiba Corp | Image processing apparatus |
US9508047B2 (en) * | 2010-12-06 | 2016-11-29 | The Boeing Company | Rapid rework analysis system |
US10295475B2 (en) | 2014-09-05 | 2019-05-21 | Rolls-Royce Corporation | Inspection of machined holes |
KR101733018B1 (ko) | 2015-02-25 | 2017-05-24 | 동우 화인켐 주식회사 | 광학 필름의 불량 검출 장치 및 방법 |
US10228669B2 (en) | 2015-05-27 | 2019-03-12 | Rolls-Royce Corporation | Machine tool monitoring |
CN105510345A (zh) * | 2015-12-04 | 2016-04-20 | AVIC Composite Materials Co., Ltd. | Optical imaging method for honeycomb core-skin bonding quality |
US9747683B2 (en) * | 2015-12-21 | 2017-08-29 | General Electric Company | Methods and systems for detecting component wear |
EP3400431B1 (en) | 2016-01-07 | 2022-03-23 | Arkema, Inc. | Optical method to measure the thickness of coatings deposited on substrates |
US11125549B2 (en) | 2016-01-07 | 2021-09-21 | Arkema Inc. | Optical intensity method to measure the thickness of coatings deposited on substrates |
MX2018008322A (es) | 2016-01-07 | 2018-09-21 | Arkema Inc | Metodo independiente de la posicion del objeto para medir el espesor de recubrimientos depositados en objetos curvados moviendose a altas velocidades. |
JP6642161B2 (ja) * | 2016-03-18 | 2020-02-05 | Ricoh Co., Ltd. | Inspection apparatus, inspection method, and program |
CN107345918B (zh) * | 2017-08-16 | 2023-05-23 | Guangxi University | Plate quality inspection apparatus and method |
CN107734272B (zh) * | 2017-09-30 | 2020-02-11 | BOE Technology Group Co., Ltd. | Mask plate inspection device, mask plate inspection method, and corresponding light source control method |
CN109325930B (zh) * | 2018-09-12 | 2021-09-28 | 苏州优纳科技有限公司 | Boundary defect detection method, apparatus, and detection equipment |
JP2023039392A (ja) * | 2021-09-08 | 2023-03-20 | Evident Corporation | Inspection support method, inspection support apparatus, inspection support system, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS50158387A (ja) * | 1974-06-10 | 1975-12-22 | ||
JPH08101130A (ja) * | 1994-09-29 | 1996-04-16 | Fuji Xerox Co Ltd | Surface defect inspection apparatus |
JPH11211674A (ja) * | 1998-01-22 | 1999-08-06 | Kawasaki Steel Corp | Surface flaw inspection method and apparatus |
JPH11281582A (ja) * | 1998-03-26 | 1999-10-15 | Tb Optical Kk | Surface inspection device |
JPH11326226A (ja) * | 1998-05-19 | 1999-11-26 | Mega Float Gijutsu Kenkyu Kumiai | Defect detection device for large watertight structures |
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4079416A (en) * | 1975-12-01 | 1978-03-14 | Barry-Wehmiller Company | Electronic image analyzing method and apparatus |
US4691231A (en) * | 1985-10-01 | 1987-09-01 | Vistech Corporation | Bottle inspection system |
US5321767A (en) * | 1992-08-28 | 1994-06-14 | Kabushiki Kaisha Komatsu Seisakusho | Method of forming a mask in image processing operation |
CN2329925Y (zh) * | 1998-08-14 | 1999-07-21 | Hou Zengqi | Flat plate heat pipe radiator |
JP3964267B2 (ja) * | 2002-06-04 | 2007-08-22 | Dainippon Screen Mfg. Co., Ltd. | Defect detection apparatus, defect detection method, and program |
JP4095860B2 (ja) | 2002-08-12 | 2008-06-04 | Hitachi High-Technologies Corporation | Defect inspection method and apparatus |
DE102004024635A1 (de) * | 2004-05-12 | 2005-12-08 | Deutsche Gelatine-Fabriken Stoess Ag | Process for producing shaped bodies based on cross-linked gelatin |
JP2006220644A (ja) * | 2005-01-14 | 2006-08-24 | Hitachi High-Technologies Corp | Pattern inspection method and apparatus |
WO2007060873A1 (ja) * | 2005-11-24 | 2007-05-31 | Kirin Techno-System Corporation | Surface inspection device |
CN102062738B (zh) * | 2006-05-16 | 2012-11-21 | Kirin Techno-System Corporation | Surface inspection device |
JP2009071136A (ja) * | 2007-09-14 | 2009-04-02 | Hitachi High-Technologies Corp | Data management apparatus, inspection system, and defect review apparatus |
- 2007
- 2007-05-16 US US12/300,823 patent/US8351679B2/en active Active
- 2007-05-16 EP EP07743476A patent/EP2023129A4/en not_active Withdrawn
- 2007-05-16 KR KR1020087030089A patent/KR100996325B1/ko not_active IP Right Cessation
- 2007-05-16 WO PCT/JP2007/060041 patent/WO2007135915A1/ja active Application Filing
- 2007-05-16 CN CN2010105938176A patent/CN102062739B/zh active Active
Non-Patent Citations (1)
Title |
---|
See also references of EP2023129A4 |
Also Published As
Publication number | Publication date |
---|---|
EP2023129A1 (en) | 2009-02-11 |
KR100996325B1 (ko) | 2010-11-23 |
CN102062739B (zh) | 2012-08-22 |
US20090148031A1 (en) | 2009-06-11 |
KR20090009314A (ko) | 2009-01-22 |
EP2023129A4 (en) | 2012-02-22 |
US8351679B2 (en) | 2013-01-08 |
CN102062739A (zh) | 2011-05-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2007135915A1 (ja) | Surface inspection device |
JP2007315803A (ja) | Surface inspection device |
US7602487B2 (en) | Surface inspection apparatus and surface inspection head apparatus | |
US6366690B1 (en) | Pixel based machine for patterned wafers | |
KR102233050B1 (ko) | Defect detection on a wafer using defect-specific information |
JP4923211B2 (ja) | Surface inspection device |
JP2012013698A (ja) | Tool wear quantification system and method |
JPH07190959A (ja) | Method and apparatus for automatically characterizing, optimizing, and inspecting analysis by penetrant testing |
EP3187861B1 (en) | Substrate inspection device and substrate inspection method | |
US20140071442A1 (en) | Optical surface defect inspection apparatus and optical surface defect inspection method | |
JP5687014B2 (ja) | Optical surface defect inspection apparatus and optical surface defect inspection method |
JP2008076322A (ja) | Surface inspection device |
WO1995012810A1 (fr) | Method for inspecting the surface condition of a face of a solid, and associated device |
JP2010266366A (ja) | Image feature extraction method, tool defect inspection method, and tool defect inspection apparatus |
JPH11230912A (ja) | Surface defect detection apparatus and method |
CN114363481A (zh) | Microscope with a device for detecting displacement of a sample relative to the objective lens, and detection method therefor |
JP2007315823A (ja) | Surface inspection device |
JP7494672B2 (ja) | Photomask blanks, method for producing photomask blanks, learning method, and method for inspecting photomask blanks |
JP4036712B2 (ja) | Nondestructive inspection apparatus |
JP2015225015A (ja) | Defect determination apparatus and defect determination method |
JP2004020279A (ja) | Nondestructive inspection apparatus |
JP2009068985A (ja) | Surface defect inspection apparatus and method |
JPH0755713A (ja) | Via hole inspection apparatus |
JP2002055054A (ja) | Sheet inspection method and sheet manufacturing method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200780018328.4; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07743476; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 2007743476; Country of ref document: EP |
| WWE | Wipo information: entry into national phase | Ref document number: 1020087030089; Country of ref document: KR |
| WWE | Wipo information: entry into national phase | Ref document number: 12300823; Country of ref document: US |