WO2020121594A1 - Surface characteristics inspection device and machine learning device for surface characteristics inspection - Google Patents

Surface characteristics inspection device and machine learning device for surface characteristics inspection Download PDF

Info

Publication number
WO2020121594A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
converted image
machine learning
test object
Prior art date
Application number
PCT/JP2019/031698
Other languages
French (fr)
Japanese (ja)
Inventor
長岡 英一
Original Assignee
株式会社堀場製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社堀場製作所 filed Critical 株式会社堀場製作所
Priority to JP2020559709A priority Critical patent/JPWO2020121594A1/en
Publication of WO2020121594A1 publication Critical patent/WO2020121594A1/en

Links

Images

Classifications

    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/24Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01BMEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00Measuring arrangements characterised by the use of optical techniques
    • G01B11/30Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/55Specular reflectivity
    • GPHYSICS
    • G01MEASURING; TESTING
    • G01NINVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/55Specular reflectivity
    • G01N21/57Measuring gloss

Definitions

  • The present invention relates to a surface property inspection device that detects reflected light from a test object and calculates the surface properties of the test object, and to a machine learning device used for this surface property inspection.
  • Conventionally, as shown in Patent Document 1, a surface type identification device has been proposed that identifies the surface material or the presence or absence of surface irregularities from a captured image obtained by irradiating a test object with light and imaging the reflected light from the test object.
  • This surface type identification device calculates a feature amount that quantifies the degree of spatial change in the pixel values of the captured image and compares it with pre-registered dictionary data to identify the surface material or the presence or absence of surface irregularities.
  • The feature amount is, for example, the distribution of pixel values with respect to the relative angle ψ, where the specular reflection direction is taken as zero.
  • However, the above surface type identification device assumes a test object whose surface characteristics have no directionality, and directionality is not considered at all in the calculation of the relative angle; it is therefore difficult to accurately measure directional surface characteristics such as streaky irregularities.
  • The present invention has been made to solve the above problems, and its main object is to accurately detect the surface characteristics of a test object.
  • That is, the surface property inspection apparatus according to the present invention comprises a light irradiation unit that irradiates a test object with light, an imaging unit that images reflected light from the test object, an image processing unit that nonlinearly converts the captured image of the imaging unit to generate a converted image, and a surface characteristic calculation unit that calculates the surface characteristics of the test object using the converted image.
  • As the nonlinear conversion, logarithmic conversion, exponential conversion, gamma conversion, and the like can be considered.
  • With this configuration, the captured image of the reflected light is nonlinearly converted and the resulting converted image is used to calculate the surface characteristics of the test object, so changes in the luminance distribution of the reflected light become easier to capture and the surface characteristics of the test object can be detected accurately.
  • If all of the data of the converted image is used, the large data volume may lengthen the processing time. It is therefore preferable that the surface characteristic calculation unit calculates the surface characteristics of the test object using the luminance distribution along a predetermined direction in the converted image. With this configuration, only part of the data of the converted image is processed, so the processing time can be shortened.
  • When using part of the data of the converted image, it is desirable to use the part where the boundary region between the specular reflection component and the diffuse reflection component appears prominently. It is therefore desirable that the surface characteristic calculation unit calculates the surface characteristics of the test object using the luminance distribution along the direction in which the luminance in the converted image extends.
  • In order to enable the surface property inspection device to determine surface properties automatically and accurately, it preferably further comprises a machine learning unit that generates a learning model by machine learning using a learning data set consisting of a converted image obtained by nonlinearly converting the captured image of the imaging unit and a surface characteristic label corresponding to the converted image, with the surface characteristic calculation unit calculating the surface characteristics of the test object based on the learning model.
  • The machine learning device for surface characteristic inspection according to the present invention comprises a captured image acquisition unit that acquires a captured image obtained by irradiating a test object with light and imaging the reflected light from the test object, a converted image generation unit that nonlinearly converts the acquired captured image to generate a converted image, and a machine learning unit that generates a learning model by machine learning using a learning data set consisting of the converted image and a surface characteristic label corresponding to the converted image.
  • With this configuration, the captured image of the reflected light is nonlinearly converted and machine learning is performed using the resulting converted image, so a learning model that captures changes in the luminance distribution of the reflected light can be generated, and the surface characteristics of the test object can be detected with high accuracy.
  • Preferably, the machine learning unit uses, as the converted image of the learning data set, a luminance distribution along a predetermined direction in the converted image.
  • It is further desirable that the machine learning unit uses, as the converted image of the learning data set, a luminance distribution along the direction in which the luminance in the converted image extends.
  • According to the present invention thus configured, the surface characteristics of a test object can be detected with high accuracy.
  • FIG. 1 is an overall schematic diagram of a surface property inspection apparatus according to one embodiment of the present invention. FIG. 2 shows (a) a captured image and (b) a relative luminance distribution diagram.
  • The surface property inspection apparatus 100 of the present embodiment inspects the surface properties of the test object W in a non-contact manner.
  • The surface characteristics serve as indices of the appearance and surface state of the test object W and are, for example, optical characteristics such as surface reflectance and luminance distribution, or mechanical characteristics such as surface roughness and waviness. They include scales indicating the quality of the processed state and indices for judging the quality of special processed states such as hairline finishing.
  • The surface property inspection apparatus 100 comprises a light irradiation unit 2 that irradiates the test object W with light, an imaging unit 3 that images the surface of the test object W by detecting the light reflected by the test object W, and an arithmetic unit 4 that processes the image captured by the imaging unit 3 to calculate the surface characteristics.
  • The light irradiation unit 2 emits light, for example infrared light, toward the imaging range of the imaging unit 3. It is desirable that the light irradiation unit 2 be configured so that the amount of light is uniform over the range imaged by the imaging unit 3. Further, in order to eliminate the influence of ambient light and the like, it is preferable to provide an enclosure or the like that covers the test object W together with the light irradiation unit 2 and the imaging unit 3.
  • The imaging unit 3 captures at least the specular reflection light reflected by the test object W (in the present embodiment, both the specular and diffuse reflection light). It has an image sensor 31 with a plurality of two-dimensionally arranged pixels, and a captured image generation unit 32 that generates a captured image 50, a luminance image of the test object W, based on the accumulated charge amount of each pixel of the image sensor 31.
  • The imaging unit 3 may capture at least the specularly reflected light and its surroundings, specifically at least part of the specularly reflected light and the diffusely reflected light around it.
  • FIG. 2A shows an example of a captured image 50 of a flat plate-shaped test object W having streaky irregularities.
  • FIG. 2B is a relative luminance distribution diagram in which the two axes of the two-dimensional pixel array of the image sensor 31 are taken as the x-axis and the y-axis, and the luminance of each pixel in the captured image 50 is displayed three-dimensionally as height.
  • A sharp, peaked specular reflection component 51 can be observed at the center of the xy plane.
  • The luminance values of the other pixels are displayed relative to the luminance value of the specular reflection component 51 taken as the reference (for example, 100%).
  • A diffuse reflection component 52 exists around the specular reflection component 51, but since its luminance value is smaller by three or more orders of magnitude, its height can be regarded as zero.
  • In the boundary region 53, the luminance value is smaller than that of the specular reflection component 51 by one or more orders of magnitude; therefore, the change in the boundary region 53, which transitions from the specular reflection component 51 to the diffuse reflection component 52, is not noticeable.
  • The arithmetic unit 4 processes the captured image 50 from the imaging unit 3 to calculate the surface characteristics of the test object W.
  • The arithmetic unit 4 is a computer having a CPU, a memory, an input/output interface, an AD converter, a display, input means, and the like; by the CPU and its peripheral devices operating based on a surface characteristic inspection program stored in the memory, at least the functions of the reception unit 41, the image processing unit 42, the surface characteristic calculation unit 43, the machine learning unit 44, the data storage unit 45, and the like shown in FIG. 1 are realized.
  • The reception unit 41 receives, from the captured image generation unit 32 of the imaging unit 3, the captured image data indicating the luminance value of each pixel of the captured image 50.
  • The image processing unit 42 generates converted image data indicating a converted image 54 obtained by nonlinearly converting, pixel by pixel, the captured image data received by the reception unit 41.
  • Logarithmic conversion is used here as the nonlinear conversion, because according to the Weber-Fechner law the human sensation of brightness is proportional to the logarithm of the luminance.
  • Other nonlinear conversions such as exponential conversion or gamma conversion may also be used.
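As an illustrative sketch only (not part of the patent; the function name, clipping floor, and relative scaling are assumptions, and NumPy is assumed available), the logarithmic conversion described above can be expressed as follows:

```python
import numpy as np

def log_convert(captured, floor=1e-6):
    """Nonlinearly (logarithmically) convert a luminance image.

    `floor` clips zero/near-zero luminance so log() is defined; the
    result is rescaled so the brightest pixel maps to 100 (%), mirroring
    the relative display used in the patent's figures.
    """
    img = np.clip(np.asarray(captured, dtype=float), floor, None)
    log_img = np.log10(img)
    log_img -= log_img.min()                 # darkest pixel -> 0
    return 100.0 * log_img / log_img.max()   # brightest pixel -> 100

# A luminance range spanning four orders of magnitude is compressed
# into a range where the boundary region is no longer negligible:
demo = np.array([[1.0, 0.1], [0.01, 0.0001]])
converted = log_convert(demo)
```

Note how a pixel three or four orders of magnitude below the specular peak, which would be invisible on a linear scale, occupies a clearly visible fraction of the converted range.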
  • FIG. 3A shows a converted image 54 obtained by nonlinearly converting the captured image 50.
  • The converted image 54 is an image whose pixel values are the values of the converted image data resulting from the conversion of the captured image data.
  • FIG. 3B is a relative luminance distribution diagram in which the values of the converted image data are displayed three-dimensionally as height; the values of the other converted image data are displayed relative to the nonlinear conversion result of the specular reflection component 51 (for example, 100%).
  • Even after the nonlinear conversion, the height of the diffuse reflection component 52, whose luminance before conversion was three or more orders of magnitude smaller, can still be regarded as very small.
  • In contrast, the boundary region 53 is nonlinearly converted to a value comparable to that of the specular reflection component 51 (a value that cannot be ignored, for example about half). Therefore, the change in the boundary region 53, which transitions from the specular reflection component 51 to the diffuse reflection component 52, can be clearly observed, and the characteristics of the boundary region can be accurately grasped.
  • Note that the captured image 50 generated by the captured image generation unit 32 of the imaging unit 3 needs to have a dynamic range that can withstand logarithmic conversion. Specifically, it is desirable that the captured image generation unit 32 generate the captured image 50 by HDR (high dynamic range) composition, that is, by combining a plurality of captured images taken with different exposure times while taking those exposure times into account. For example, a first captured image is acquired with a relatively long exposure (for example, 1 second), and a second captured image is acquired with the exposure time reduced to 1/100.
  • The pixels that detect the specular reflection component of the reflected light are in a so-called "overexposed" state during the long exposure (the accumulated charge of the image sensor 31 saturates), so the second, short-exposure image is used for them. Conversely, the pixels that detect the diffuse reflection component are buried in the noise component in the short-exposure image (the accumulated charge of the image sensor 31 is insufficient), so the first, long-exposure image is used for them. The luminance values taken from the second captured image are multiplied by 100 to compensate for the exposure ratio. An image obtained by selecting an appropriate luminance value for each pixel from the first and second captured images in this way may be used as the captured image 50.
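The two-exposure composition described above can be sketched as follows (a hypothetical illustration, not the patent's implementation; `hdr_merge`, the saturation level, and the normalized-luminance convention are assumptions):

```python
import numpy as np

def hdr_merge(long_img, short_img, ratio=100.0, sat_level=0.95):
    """Two-exposure HDR composition sketch.

    long_img  : frame exposed ~ratio times longer (diffuse detail kept,
                specular pixels saturated), luminance normalized to [0, 1]
    short_img : short-exposure frame (specular detail preserved)
    Pixels saturated ("overexposed") in the long exposure are replaced by
    the short-exposure values scaled up by the exposure ratio.
    """
    long_img = np.asarray(long_img, dtype=float)
    short_img = np.asarray(short_img, dtype=float)
    saturated = long_img >= sat_level          # overexposed pixels
    return np.where(saturated, short_img * ratio, long_img)

# One diffuse pixel (kept from the long exposure) and one saturated
# specular pixel (taken from the short exposure, scaled by 100):
long_exp = np.array([[0.2, 1.0]])
short_exp = np.array([[0.002, 0.5]])
merged = hdr_merge(long_exp, short_exp)
```

The merged image then spans the dynamic range needed for the logarithmic conversion.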
  • Looking at the converted image 54, white bright lines extending in an oblique direction can be clearly observed, as shown in FIG. 3B.
  • Focusing on a straight line that passes through the apex of the specular reflection component 51 (the maximum pixel value of the converted image 54) and extends in the direction of the white bright line, it can be seen that the pixel values of the converted image 54 are distributed symmetrically about that line. This is a conversion result characteristic of a test object W having streaky irregularities.
  • In such a test object W, the directions of the irregularities are aligned with the direction of the streaks, so the microscopic surface inclination is small along the streak direction and large in the direction orthogonal to it. Therefore, as shown in FIG. 3B, the white bright line extends linearly in a direction substantially orthogonal to the direction of the streaky irregularities.
  • In the converted image 54, a white bright line extends in the direction of the symmetry axis 55.
  • The luminance distribution of the converted image 54 in the direction along the symmetry axis 55 therefore strikingly represents the degree of unevenness and inclination of the test object W.
  • Accordingly, the image processing unit 42 detects the symmetry axis 55 in the converted image 54 indicated by the converted image data and extracts one-dimensional luminance distribution data in the direction along the symmetry axis 55.
  • The image processing unit 42 detects the symmetry axis 55 by applying an edge detection technique known in general image processing, such as the Hough transform, to the converted image 54.
  • Note that, based on the length of the white bright lines in the converted image 54, the symmetry axis 55 may be determined as the main direction and the symmetry axis 56 as the sub direction, and one-dimensional luminance distribution data may be extracted in each direction.
  • Alternatively, the symmetry axis 55 may be set in an arbitrary direction passing through the apex of the specular reflection component 51, the symmetry axis 56 in the direction orthogonal to the symmetry axis 55, and an axis 57 inclined by 45 degrees from the symmetry axes 55 and 56 may further be defined, so that luminance distribution data in three directions is extracted.
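Extracting a one-dimensional luminance distribution along a chosen direction through the specular peak might look like the following sketch (the function name and the simple nearest-pixel sampling are assumptions for illustration, not the patent's method):

```python
import numpy as np

def luminance_profile(img, angle_deg, half_len=None, n=101):
    """Sample a 1-D luminance profile through the brightest pixel.

    The sampling line passes through the maximum of `img` (the specular
    peak) at the given angle. Nearest-pixel sampling keeps the sketch
    short; points falling outside the image read as 0.
    """
    img = np.asarray(img, dtype=float)
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    if half_len is None:
        half_len = max(img.shape) / 2
    t = np.linspace(-half_len, half_len, n)      # signed distance along line
    a = np.deg2rad(angle_deg)
    xs = np.rint(cx + t * np.cos(a)).astype(int)
    ys = np.rint(cy + t * np.sin(a)).astype(int)
    inside = (0 <= xs) & (xs < img.shape[1]) & (0 <= ys) & (ys < img.shape[0])
    profile = np.zeros(n)
    profile[inside] = img[ys[inside], xs[inside]]
    return profile
```

A profile taken along the bright line (the symmetry axis) accumulates far more luminance than one taken across it, which is what makes the axis direction informative.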
  • The surface characteristic calculation unit 43 calculates the surface characteristics of the test object W using the converted image 54 generated by the image processing unit 42.
  • Specifically, it uses the converted image data, that is, the luminance values in the converted image 54 extracted by the image processing unit 42; in the present embodiment, the surface characteristics of the test object W are calculated using the one-dimensional luminance distribution data in the direction along the symmetry axis 55 of the converted image 54 obtained by the image processing unit 42.
  • The surface property inspection apparatus 100 of the present embodiment further includes the machine learning unit 44, which generates a learning model for surface property detection.
  • The machine learning unit 44 generates a learning model by machine learning using a learning data set consisting of converted image data obtained by nonlinearly converting (here, logarithmically converting) the captured image 50 of the imaging unit 3 and a surface characteristic label corresponding to the converted image 54.
  • Specifically, the machine learning unit 44 performs machine learning using a learning data set consisting of the one-dimensional luminance distribution data in the direction along the symmetry axis 55 in the converted image 54 and the surface characteristic label corresponding to that luminance distribution data. The generated learning model is stored in the data storage unit 45.
  • As the machine learning algorithm, one or a combination selected from artificial neural networks (ANN), support vector machines (SVM), decision trees, random forests, k-means clustering, self-organizing maps, genetic algorithms, Bayesian networks, deep learning methods, and the like can be used.
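As a deliberately simple stand-in for the learning algorithms listed above (a nearest-centroid classifier, which is not itself named in the patent; all names and the toy data are assumptions), the following sketch shows how a learning data set of luminance profiles and surface characteristic labels could produce a model:

```python
import numpy as np

class NearestCentroidModel:
    """Toy learning model: each surface-characteristic label is
    represented by the mean of its training luminance profiles, and an
    unknown profile is assigned the label of the nearest mean."""

    def fit(self, profiles, labels):
        profiles = np.asarray(profiles, dtype=float)
        self.labels_ = sorted(set(labels))
        self.centroids_ = np.array(
            [profiles[[lab == c for lab in labels]].mean(axis=0)
             for c in self.labels_])
        return self

    def predict(self, profile):
        d = np.linalg.norm(self.centroids_ - np.asarray(profile, dtype=float),
                           axis=1)
        return self.labels_[int(np.argmin(d))]

# Learning data set: (luminance profile, surface characteristic label) pairs,
# several profiles per label as in the FIG. 5 example.
train = [([1.0, 0.5, 0.1], "A"), ([0.9, 0.6, 0.2], "A"),
         ([1.0, 0.1, 0.0], "B"), ([0.8, 0.2, 0.1], "B")]
model = NearestCentroidModel().fit([p for p, _ in train],
                                   [lab for _, lab in train])
```

In practice any of the listed algorithms (SVM, random forest, deep learning, and so on) would take the place of this toy model, consuming the same (profile, label) data set.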
  • The surface characteristic calculation unit 43 then uses the learning model generated by the machine learning unit 44 to calculate the surface characteristics of the test object W from its captured image data (more precisely, from the converted image data obtained by nonlinear conversion).
  • The captured image 50 received by the reception unit 41, the converted image 54 obtained by the image processing unit 42, various data obtained from the converted image 54 (for example, luminance distribution data), and the surface characteristics obtained by the surface characteristic calculation unit 43 are output to the display or the like.
  • An image of a test object W (inspection surface) whose surface characteristics are known is first taken by the imaging unit 3.
  • The operator inputs the surface characteristic label of the imaged test object W using input means (not shown).
  • The surface characteristic label may also be input by selecting it from a database prepared in advance.
  • Known surface characteristics include, for example, metal surfaces subjected to various hairline treatments, metal surfaces subjected to various spin treatments, and surfaces subjected to various other surface treatments.
  • The image processing unit 42 then nonlinearly (logarithmically) converts the captured image 50 obtained by the imaging unit 3 to generate a converted image 54, and generates from the converted image 54 the luminance distribution data in the direction along the symmetry axis 55 of the luminance distribution.
  • A learning data set (see FIG. 5) consisting of the luminance distribution data thus generated and the surface characteristic label corresponding to that data is input to the machine learning unit 44.
  • FIG. 5 shows an example in which a plurality of luminance distribution data are used for each of the surface characteristic labels A to D.
  • Although this learning data set is generated by the arithmetic unit 4 (specifically, the image processing unit 42) of the surface property inspection apparatus 100, a learning data set prepared in advance may instead be input to the arithmetic unit 4 of the surface property inspection apparatus 100 and thereby supplied to the machine learning unit 44. Alternatively, past surface characteristic inspection data may be stored in the data storage unit 45 as a learning data set, and machine learning may be performed using this data.
  • According to the surface property inspection apparatus 100 of the present embodiment thus configured, the captured image obtained by imaging the reflected light is nonlinearly converted, and the surface characteristics of the test object W are calculated using the resulting converted image.
  • The change in the boundary region between the specular reflection component and the diffuse reflection component can therefore be made relatively large, and the surface characteristics of the test object W can be detected accurately.
  • In addition, since the white bright spots, which are the specular reflection components, no longer dominate after the conversion, the boundary region between the specular reflection component and the diffuse reflection component can be easily confirmed.
  • The machine learning device for surface characteristic inspection that generates the learning model input to such a surface characteristic inspection device also performs machine learning in the same manner as described above.
  • The method of detecting the symmetry axis 55 is not limited to edge detection techniques such as the Hough transform; the following method may also be used. In the converted image 54, the point at which the converted image data is maximum (the luminance is maximum) is detected; this point is the center of the specular reflection component 51. A line segment of predetermined length oriented in an arbitrary direction with this point as its midpoint is then considered, and the converted image data on the line segment is integrated (the cumulative value of the converted image data is obtained at predetermined pixel intervals). The length of the line segment is preferably about the same as the vertical and horizontal size of the converted image 54.
  • In this integration, weighting may be applied in proportion to the distance from the midpoint of the line segment. If the square of the distance is used as the weight, this is equivalent to obtaining the variance under the assumption of a one-dimensional Gaussian distribution on the line segment. By searching for the direction in which this variance is maximized, the direction of the symmetry axis 55 can be accurately detected.
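The variance-maximizing direction search described above can be sketched as follows (the function name, angular sampling, and nearest-pixel interpolation are assumptions for illustration):

```python
import numpy as np

def find_symmetry_axis(img, n_angles=180, n_samples=201):
    """Search for the symmetry-axis direction.

    For each candidate angle, converted-image values are accumulated
    along a line segment through the brightest pixel (the specular
    peak), weighted by the squared distance from the midpoint; the angle
    (in degrees) maximizing this variance-like score is returned.
    """
    img = np.asarray(img, dtype=float)
    cy, cx = np.unravel_index(np.argmax(img), img.shape)
    half = max(img.shape) // 2          # segment roughly spans the image
    t = np.linspace(-half, half, n_samples)
    best_angle, best_score = 0.0, -np.inf
    for angle in np.linspace(0, 180, n_angles, endpoint=False):
        a = np.deg2rad(angle)
        xs = np.rint(cx + t * np.cos(a)).astype(int)
        ys = np.rint(cy + t * np.sin(a)).astype(int)
        ok = (0 <= xs) & (xs < img.shape[1]) & (0 <= ys) & (ys < img.shape[0])
        score = np.sum(img[ys[ok], xs[ok]] * t[ok] ** 2)  # distance^2 weight
        if score > best_score:
            best_angle, best_score = angle, score
    return best_angle
```

The squared-distance weight favors directions in which high luminance persists far from the peak, which is exactly the direction of the bright line.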
  • Alternatively, the symmetry axis may be obtained by fitting a two-dimensional Gaussian distribution using the least squares method. That is, it is assumed that in the converted image 54 the distribution of the converted image data can be approximated by a two-dimensional distribution function of the x-axis and the y-axis on the plane of the image sensor 31, for example a two-dimensional Gaussian distribution function. The contour lines of such a function (lines connecting equal function values) are ellipses whose major axis lies in the direction of the symmetry axis 55 and whose minor axis lies in the direction of the symmetry axis 56.
  • The major-axis and minor-axis directions of this ellipse can be calculated from the coefficients of the two-dimensional Gaussian distribution function. Therefore, a set of x- and y-coordinate values and converted image data may be selected from the converted image 54, and the coefficients of the two-dimensional Gaussian distribution function determined by the least squares method.
  • For the data set used in the least squares fit, it is preferable to select only pixels whose converted image data is larger than a predetermined threshold value, because pixels with values smaller than the threshold act as noise and degrade the ellipse detection accuracy.
  • The two-dimensional distribution function is not limited to a two-dimensional Gaussian distribution function; any function that can approximate the distribution of the converted image data with high accuracy may be used.
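Under the assumption that the converted image is well approximated by a two-dimensional Gaussian, the least-squares fit and major-axis extraction described above might be sketched like this (fitting the logarithm of the data makes the problem linear; the function name and thresholding details are assumptions):

```python
import numpy as np

def gaussian_axes(img, threshold):
    """Fit a 2-D Gaussian to the converted image by linear least squares
    and return the major-axis (symmetry axis) direction in degrees.

    Only pixels above `threshold` are used, as the text recommends, and
    the fit is done on ln(value), where the Gaussian becomes a quadratic:
        ln G = c0 + c1*x + c2*y + c3*x^2 + c4*x*y + c5*y^2
    """
    img = np.asarray(img, dtype=float)
    ys, xs = np.nonzero(img > threshold)
    z = np.log(img[ys, xs])
    A = np.column_stack([np.ones_like(z), xs, ys, xs**2, xs * ys, ys**2])
    c = np.linalg.lstsq(A, z, rcond=None)[0]
    # Quadratic-form matrix of the fitted exponent; its eigenvalues are
    # negative, and the one closest to zero (slowest decay) corresponds
    # to the eigenvector along the ellipse's major axis.
    Q = np.array([[c[3], c[4] / 2], [c[4] / 2, c[5]]])
    w, v = np.linalg.eigh(Q)
    major = v[:, np.argmax(w)]
    return np.degrees(np.arctan2(major[1], major[0])) % 180
```

The same machinery also yields the minor axis (the other eigenvector), giving both symmetry axes 55 and 56 from one fit.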

Abstract

The present invention accurately detects the surface characteristics of a test object. It comprises a light emission part 2 that irradiates a test object W with light, an imaging part 3 that captures an image of the light reflected from the test object W, an image processing part 4 that applies a nonlinear transformation to the image captured by the imaging part 3 to generate a transformed image, and a surface characteristics calculation part 5 that calculates the surface characteristics of the test object using the transformed image.

Description

Surface property inspection device and machine learning device for surface property inspection
 The present invention relates to a surface property inspection device that detects reflected light from a test object and calculates the surface properties of the test object, and to a machine learning device used for this surface property inspection.
 Conventionally, as shown in Patent Document 1, a surface type identification device has been proposed that identifies the surface material or the presence or absence of surface irregularities from a captured image obtained by irradiating a test object with light and imaging the reflected light from the test object.
 This surface type identification device calculates a feature amount that quantifies the degree of spatial change in the pixel values of the captured image and compares it with pre-registered dictionary data to identify the surface material or the presence or absence of surface irregularities. The feature amount is, for example, the distribution of pixel values with respect to the relative angle ψ, where the specular reflection direction is taken as zero.
 However, the above surface type identification device assumes a test object whose surface characteristics have no directionality, and directionality is not considered at all in the calculation of the relative angle; it is therefore difficult to accurately measure directional surface characteristics such as streaky irregularities.
JP 2002-174595 A
 The present invention has been made to solve the above problems, and its main object is to accurately detect the surface characteristics of a test object.
 That is, the surface property inspection apparatus according to the present invention comprises a light irradiation unit that irradiates a test object with light, an imaging unit that images reflected light from the test object, an image processing unit that nonlinearly converts the captured image of the imaging unit to generate a converted image, and a surface characteristic calculation unit that calculates the surface characteristics of the test object using the converted image. As the nonlinear conversion, logarithmic conversion, exponential conversion, gamma conversion, and the like can be considered.
 With this configuration, the captured image of the reflected light is nonlinearly converted and the resulting converted image is used to calculate the surface characteristics of the test object, so changes in the luminance distribution of the reflected light become easier to capture and the surface characteristics of the test object can be detected accurately.
 変換画像の全部のデータを用いた場合には、データ量が大きいため、処理時間がかかってしまう恐れがある。このため、前記表面特性算出部は、前記変換画像における所定方向に沿った輝度分布を用いて前記被検物の表面特性を算出するものであることが望ましい。この構成であれば、変換画像の一部のデータを用いて処理しているので、処理時間を短縮することができる。 If all the data of the converted image is used, the amount of data is large, so processing time may be required. Therefore, it is preferable that the surface characteristic calculation unit calculates the surface characteristic of the test object by using the luminance distribution along the predetermined direction in the converted image. With this configuration, since processing is performed using part of the data of the converted image, the processing time can be shortened.
 When using only part of the data of the converted image, it is desirable to use the part in which the boundary region between the specular reflection component and the diffuse reflection component appears most prominently. For this reason, the surface characteristics calculation unit preferably calculates the surface characteristics of the test object using the luminance distribution along the direction in which the luminance in the converted image extends.
 So that the surface characteristics inspection device can determine surface characteristics automatically and accurately, the device preferably further comprises a machine learning unit that generates a learning model by machine learning using a learning data set consisting of converted images obtained by non-linearly converting captured images of the imaging unit and surface characteristic labels corresponding to those converted images, with the surface characteristics calculation unit calculating the surface characteristics of the test object based on the learning model.
 A machine learning device for surface characteristics inspection according to the present invention comprises: a captured image acquisition unit that acquires a captured image obtained by irradiating a test object with light and imaging the reflected light from the test object; a converted image generation unit that non-linearly converts the acquired captured image to generate a converted image; and a machine learning unit that generates a learning model by machine learning using a learning data set consisting of the converted image and a surface characteristic label corresponding to the converted image.
 With this configuration, the captured image of the reflected light is non-linearly converted and machine learning is performed using the resulting converted image, so a learning model that captures changes in the luminance distribution of the reflected light can be generated, and the surface characteristics of the test object can be detected with high accuracy.
 To shorten the processing time of machine learning, the machine learning unit preferably uses, as the converted image of the learning data set, a luminance distribution along a predetermined direction in the converted image.
 To perform machine learning using the information in which changes in the luminance distribution of the reflected light appear most prominently, the machine learning unit preferably uses, as the converted image of the learning data set, the luminance distribution along the direction in which the luminance in the converted image extends.
 According to the present invention described above, the surface characteristics of a test object can be detected with high accuracy.
FIG. 1 is an overall schematic diagram of a surface characteristics inspection device according to one embodiment of the present invention. FIG. 2 shows (a) a captured image and (b) a relative luminance distribution of the embodiment. FIG. 3 shows (a) a converted image, (b) a relative luminance distribution, and (c) the symmetry axes in the converted image of the embodiment. FIG. 4 shows luminance distributions along the direction of the symmetry axis in the captured image and the converted image of the embodiment. FIG. 5 is a schematic diagram showing a learning data set of the embodiment.
100 ... Surface characteristics inspection device
W   ... Test object
2   ... Light irradiation unit
3   ... Imaging unit
42  ... Image processing unit
43  ... Surface characteristics calculation unit
44  ... Machine learning unit
 A surface characteristics inspection device according to one embodiment of the present invention will now be described with reference to the drawings.
<Overall configuration>
 The surface characteristics inspection device 100 of the present embodiment inspects the surface characteristics of a test object W without contact. Here, the surface characteristics serve as indices of the appearance and surface state of the test object W, and include, for example, optical characteristics such as surface reflectance and luminance distribution, measures of the quality of mechanical machining states such as surface roughness and waviness, and indices for judging the quality of special machining states such as hairline finishing.
 Specifically, as shown in FIG. 1, the surface characteristics inspection device 100 comprises a light irradiation unit 2 that irradiates the test object W with light, an imaging unit 3 that detects the light reflected by the test object W and images its surface, and an arithmetic unit 4 that processes the image captured by the imaging unit 3 to calculate the surface characteristics.
 The light irradiation unit 2 emits, for example, infrared light toward the imaging range of the imaging unit 3. The light irradiation unit 2 is desirably configured so that the light quantity is uniform over the range imaged by the imaging unit 3. To eliminate the influence of ambient light and the like, an enclosure covering the test object W together with the light irradiation unit 2 and the imaging unit 3 may also be provided.
 The imaging unit 3 images at least the specularly reflected light (in the present embodiment, both the specularly and diffusely reflected light) reflected by the test object W, and has an image sensor 31 with a plurality of pixels arranged two-dimensionally and a captured image generation unit 32 that generates a captured image 50, a luminance image of the test object W, based on the accumulated charge of each pixel of the image sensor 31. The imaging unit 3 may instead image at least the specularly reflected light and its surroundings, specifically at least the specularly reflected light and part of the diffusely reflected light around it. FIG. 2(a) shows an example of a captured image 50 of a flat test object W having streak-like irregularities. FIG. 2(b) is a relative luminance distribution diagram in which the two-dimensional array of the image sensor 31 defines the x and y axes and the luminance of each pixel of the captured image 50 is plotted three-dimensionally as height. A sharp, peak-shaped specular reflection component 51 is observed at the center of the xy plane. Taking the luminance value of this specular reflection component 51 as the reference (for example, 100%), the luminance values of the other pixels are displayed relative to it. A diffuse reflection component 52 exists around the specular reflection component 51, but its luminance value is at least three orders of magnitude smaller, so its height can be regarded as zero. The luminance of the boundary region 53 between the specular reflection component 51 and the diffuse reflection component 52 is also at least one order of magnitude smaller than that of the specular reflection component 51. The transition across the boundary region 53 from the specular reflection component 51 to the diffuse reflection component 52 is therefore inconspicuous.
 The arithmetic unit 4 processes the captured image 50 from the imaging unit 3 to calculate the surface characteristics of the test object W. Structurally, the arithmetic unit 4 is a computer having a CPU, memory, input/output interfaces, an AD converter, a display, input means, and the like; by operating the CPU and its peripherals in accordance with a surface characteristics inspection program stored in the memory, it provides at least the functions of a reception unit 41, an image processing unit 42, a surface characteristics calculation unit 43, a machine learning unit 44, and a data storage unit 45, as shown in FIG. 1.
 The reception unit 41 receives, from the captured image generation unit 32 of the imaging unit 3, captured image data indicating the luminance value of each pixel of the captured image 50.
 The image processing unit 42 generates converted image data indicating a converted image 54 obtained by non-linearly converting the captured image data of each pixel received by the reception unit 41. Here, logarithmic conversion is used as the non-linear conversion, because by the Weber-Fechner law the human sensation of brightness is proportional to the logarithm of the brightness. Other non-linear conversions such as exponential conversion or gamma conversion may be used instead. FIG. 3(a) shows the converted image 54 obtained by non-linearly converting the captured image 50; it is an image whose pixel values are the converted image data, the result of converting the captured image data. FIG. 3(b) is a relative luminance distribution diagram in which the converted image data are plotted three-dimensionally as height. Taking the non-linear conversion result of the specular reflection component 51 as the reference (for example, 100%), the other converted values are displayed relative to it. The height of the diffuse reflection component 52, whose luminance before conversion was at least three orders of magnitude smaller, can still be regarded as zero (very small) after the non-linear conversion. The boundary region 53, however, is converted to a value comparable to that of the specular reflection component 51 (smaller, but not negligible, for example about half). The transition across the boundary region 53 from the specular reflection component 51 to the diffuse reflection component 52 can therefore be observed clearly, and the characteristics of the boundary region can be captured accurately.
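As an illustration of this boundary-region effect, the following sketch applies a base-10 logarithmic conversion to a grayscale image held as a nested list of non-negative luminance values and normalizes the result to the specular peak. The function and parameter names are hypothetical, not taken from the patent:

```python
import math

def log_convert(image, floor=1.0):
    """Logarithmically convert a luminance image and rescale so the
    specular peak maps to 100 (percent). `floor` clips near-zero
    pixels so log() stays defined; both names are illustrative."""
    log_img = [[math.log10(max(v, floor)) for v in row] for row in image]
    peak = max(max(row) for row in log_img)
    # Express every converted value relative to the specular peak.
    return [[100.0 * v / peak for v in row] for row in log_img]

# Toy 1x3 "image": specular peak, boundary region, diffuse tail.
raw = [[10000.0, 1000.0, 1.0]]
converted = log_convert(raw)
# Boundary pixel: one order of magnitude below the peak before
# conversion, about 75% of the peak after; diffuse pixel stays ~0.
```

In the toy row, the boundary pixel ends up comparable to the converted peak while the diffuse pixel remains near zero, mirroring the behaviour of the boundary region 53 and the diffuse reflection component 52 described above.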
 Because the image processing unit 42 logarithmically converts the captured image data of the captured image 50, the captured image 50 generated by the captured image generation unit 32 of the imaging unit 3 must have a dynamic range that can withstand logarithmic conversion. Specifically, the captured image generation unit 32 desirably generates the captured image 50 by HDR (high dynamic range) composition, that is, by combining a plurality of images acquired with different exposure times while taking those exposure times into account. For example, a first image is acquired with a relatively long exposure (for example, 1 second), and a second image is then acquired with the exposure time reduced to 1/100; combining the two yields an image whose dynamic range is two orders of magnitude wider. Pixels that detect the specular reflection component of the reflected light enter a so-called "washed-out" state during the long exposure (the accumulated charge of the image sensor 31 saturates), so for these pixels the second, short-exposure image is used. Pixels that detect the diffuse reflection component are buried in noise in the short exposure (the accumulated charge of the image sensor 31 is insufficient), so for these pixels the first, long-exposure image is used. Since the exposure time of the first image is 100 times longer, the luminance values of the two images are scaled by this factor of 100 so that they share a common scale before composition. Selecting an appropriate luminance value for each pixel from the first and second images in this way yields the final captured image 50.
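A minimal sketch of the two-exposure composition follows. The 100:1 exposure ratio and the 8-bit saturation level are illustrative, and scaling the short exposure up by the ratio is one common convention for putting both images on the long-exposure scale:

```python
def hdr_compose(long_img, short_img, ratio=100, saturation=255):
    """Combine a long and a short exposure pixel by pixel. Saturated
    ("washed-out") pixels fall back to the short exposure scaled up
    by the exposure ratio; all other pixels use the long exposure,
    which has the better signal-to-noise ratio."""
    out = []
    for lrow, srow in zip(long_img, short_img):
        row = []
        for lv, sv in zip(lrow, srow):
            if lv >= saturation:        # saturated in the long exposure
                row.append(sv * ratio)  # rescale the short exposure
            else:
                row.append(lv)
        out.append(row)
    return out

long_exp = [[255, 40, 2]]    # specular pixel saturated
short_exp = [[180, 0, 0]]    # only the specular pixel registers
hdr = hdr_compose(long_exp, short_exp)
# → [[18000, 40, 2]]: two extra orders of magnitude of range.
```

The composed row keeps the dim diffuse-reflection values from the long exposure while recovering the true height of the specular peak from the short one.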
 Examining the converted image 54 obtained by non-linearly converting the captured image 50 acquired in this way, a white bright line extending diagonally can be clearly observed, as shown in FIG. 3(b). Considering the straight line that passes through the apex of the specular reflection component 51 (where the pixel value of the converted image 54 is maximal) and runs in the direction in which the white bright line extends, it can be seen that the pixel values of the converted image 54 are distributed roughly line-symmetrically about it. This conversion result is especially pronounced for a test object W having streak-like irregularities.
 When light from the light irradiation unit 2 strikes a test object W whose surface has irregularities, in addition to the specular reflection component 51 reflected in the macroscopic normal direction of the surface, there are components reflected in directions corresponding to the microscopic tilts of the fine surface shapes formed by the irregularities. Their magnitude increases or decreases with the size of the irregularities; on a mirror surface with no irregularities at all, only the specular reflection component 51 remains.
 Furthermore, for a test object W having streak-like irregularities, the irregularities are aligned with the direction of the streaks, so the microscopic tilt is small along the streaks and large in the direction orthogonal to them. For this reason, as shown in FIG. 3(b), the white bright line extends straight in a direction substantially orthogonal to the direction of the streaks.
 That is, the white bright line extends in the direction of the symmetry axis 55 shown in FIG. 3(c), and the luminance distribution of the converted image 54 along the symmetry axis 55 strongly reflects the degree of the irregularities of the test object W and how uniformly their tilts are aligned.
 The image processing unit 42 therefore detects the symmetry axis 55 in the converted image 54 indicated by the converted image data and extracts one-dimensional luminance distribution data along the direction of that axis. This one-dimensional luminance distribution is shown in FIG. 4.
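Once the direction of the symmetry axis is known, the one-dimensional extraction can be sketched as follows, using nearest-neighbour sampling along a line through the peak. All names and the toy image are illustrative:

```python
import math

def profile_along_axis(image, cx, cy, angle_deg, length):
    """Sample a 1-D luminance profile along a line through (cx, cy)
    at the given angle, using nearest-neighbour sampling."""
    h, w = len(image), len(image[0])
    dx = math.cos(math.radians(angle_deg))
    dy = math.sin(math.radians(angle_deg))
    profile = []
    for t in range(-length // 2, length // 2 + 1):
        x = int(round(cx + t * dx))
        y = int(round(cy + t * dy))
        if 0 <= x < w and 0 <= y < h:   # skip samples outside the image
            profile.append(image[y][x])
    return profile

# 3x3 toy image with a bright diagonal through the centre pixel.
img = [[9, 1, 5],
       [1, 9, 1],
       [5, 1, 9]]
p = profile_along_axis(img, 1, 1, 45, 2)   # → [9, 9, 9]
```

The extracted list plays the role of the one-dimensional luminance distribution data fed to the surface characteristics calculation and machine learning units.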
 Here, the image processing unit 42 detects the symmetry axis 55 by applying an edge detection technique known from general image processing, such as the Hough transform, to the converted image 54.
 As can also be seen from the luminance distribution of FIG. 3(b), for a test object W having streak-like irregularities the distribution is line-symmetric not only about the symmetry axis 55 but also about the symmetry axis 56 shown in FIG. 3(c). In such a case, the symmetry axis 55 may be designated the main direction and the symmetry axis 56 the sub direction based on the length of the white bright line in the converted image 54, and one-dimensional luminance distribution data may be extracted along each direction. When the luminance distribution of the converted image 54 is close to point symmetry, the symmetry axis 55 may be set in an arbitrary direction through the apex of the specular reflection component 51, after which the symmetry axis 56 orthogonal to it and an axis 57 inclined 45 degrees from the symmetry axes 55 and 56 are defined, and luminance distribution data may be extracted along the three directions.
 The surface characteristics calculation unit 43 calculates the surface characteristics of the test object W using the converted image 54 produced by the image processing unit 42.
 Specifically, the surface characteristics calculation unit 43 calculates the surface characteristics of the test object W using the converted image data, that is, the luminance in the converted image 54, extracted by the image processing unit 42. In the present embodiment, the surface characteristics of the test object W are calculated using the one-dimensional luminance distribution data along the symmetry axis 55 in the converted image 54 obtained by the image processing unit 42.
 The surface characteristics inspection device 100 of the present embodiment further comprises a machine learning unit 44 that generates a learning model for surface characteristics detection.
 This machine learning unit 44 generates the learning model by machine learning using a learning data set consisting of converted image data obtained by non-linearly converting (here, logarithmically converting) the captured image 50 of the imaging unit 3 and surface characteristic labels corresponding to the converted images 54.
 Specifically, the machine learning unit 44 generates the learning model by machine learning using a learning data set consisting of the one-dimensional luminance distribution data along the symmetry axis 55 in the converted image 54 and the surface characteristic labels corresponding to that luminance distribution data. The generated learning model is stored in the data storage unit 45.
 As the machine learning algorithm of the machine learning unit 44, one or a combination of artificial neural networks (ANN), support vector machines (SVM), decision trees, random forests, k-means clustering, self-organizing maps, genetic algorithms, Bayesian networks, deep learning methods, and the like can be used.
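As a minimal stand-in for any of the algorithms above (it is not the patent's specific choice), a nearest-centroid classifier over labelled one-dimensional luminance profiles shows the shape of the training and inference steps:

```python
def train_centroids(dataset):
    """Learn one mean profile per surface-characteristic label.
    `dataset` maps label -> list of 1-D luminance profiles; a
    nearest-centroid model stands in for the ANN/SVM/etc. options."""
    model = {}
    for label, profiles in dataset.items():
        n = len(profiles)
        model[label] = [sum(col) / n for col in zip(*profiles)]
    return model

def classify(model, profile):
    """Return the label whose centroid is closest in squared error."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda lab: dist(model[lab], profile))

# Labels A/B with a few training profiles each (cf. FIG. 5).
training = {"A": [[100, 80, 10], [100, 78, 12]],
            "B": [[100, 30, 5], [100, 32, 7]]}
model = train_centroids(training)
label = classify(model, [100, 76, 11])   # → "A"
```

The learned `model` plays the role of the learning model stored in the data storage unit 45, and `classify` corresponds to the inference performed by the surface characteristics calculation unit 43.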
 The surface characteristics calculation unit 43 then uses the learning model generated by the machine learning unit 44 to calculate the surface characteristics of the test object W from the captured image data obtained by imaging it (more precisely, from the converted image data obtained by non-linearly converting that captured image data).
 The captured image 50 received by the reception unit 41, the converted image 54 obtained by the image processing unit 42 and the various data derived from it (for example, the luminance distribution data), and the surface characteristics obtained by the surface characteristics calculation unit 43 can all be output, for example by displaying them on the display.
<Machine learning procedure>
 Next, the data processing procedure in machine learning will be briefly described.
 A test object W (test surface) whose surface characteristics are known is imaged by the imaging unit 3. At this time, the operator inputs the surface characteristic label of the imaged test object W via input means (not shown). The surface characteristic label may instead be selected from a database prepared in advance. Examples of known surface characteristics include metal surfaces given various hairline finishes, metal surfaces given various spin finishes, and other variously machined surfaces.
 The image processing unit 42 non-linearly converts (here, logarithmically converts) the captured image 50 obtained by the imaging unit 3 to generate the converted image 54, and from the converted image 54 generates luminance distribution data along the symmetry axis 55 of the luminance distribution.
 A learning data set (see FIG. 5) consisting of the luminance distribution data generated in this way and the surface characteristic labels corresponding to that data is input to the machine learning unit 44. FIG. 5 shows an example in which a plurality of luminance distribution data are used for each of the surface characteristic labels A to D. Although this learning data set is generated in the arithmetic unit 4 (specifically, the image processing unit 42) of the surface characteristics inspection device 100, a learning data set prepared in advance may instead be input to the arithmetic unit 4 and thereby supplied to the machine learning unit 44. Alternatively, past surface characteristics inspection data may be stored in the data storage unit 45 as a learning data set and used for machine learning.
<Effects of the present embodiment>
 According to the surface characteristics inspection device 100 of the present embodiment, the captured image of the reflected light is non-linearly converted and the resulting converted image is used to calculate the surface characteristics of the test object W. The change across the boundary region between the specular reflection component and the diffuse reflection component is thereby made relatively large in the converted image, so the surface characteristics of the test object W can be detected with high accuracy.
 For example, when a metal surface having streak-like irregularities is imaged, only a white bright spot, the specular reflection component, can be confirmed in the captured image, whereas in the non-linearly converted image a white bright line extending straight out of the specular reflection component, that is, the boundary region between the specular reflection component and the diffuse reflection component, can easily be confirmed.
<Other modified embodiments>
 The present invention is not limited to the embodiment described above.
 For example, although the embodiment above describes a surface characteristics inspection device with a built-in machine learning function, a machine learning device for surface characteristics inspection that generates a learning model to be input to a surface characteristics inspection device performs machine learning in the same manner.
 The method of detecting the symmetry axis 55 is also not limited to edge detection techniques such as the Hough transform; the following method may be used instead. First, the point in the converted image 54 at which the converted image data (the luminance) is maximal is detected; this point is the center of the specular reflection component 51. Taking this point as the midpoint, consider a line segment of predetermined length oriented in an arbitrary direction, and integrate the converted image data along it (accumulate the converted image data at predetermined pixel intervals). The calculation is repeated while changing the direction of the line segment, and the direction that maximizes the integral (accumulated value) is judged to be the direction of the symmetry axis 55; a known extremum search algorithm such as hill climbing may be used. The length of the line segment is desirably about the same as the vertical and horizontal size of the converted image 54. Instead of a simple integral, a weighting proportional to the distance from the midpoint of the segment may also be applied. If the square of the distance is used as the weight, this becomes equivalent to computing the variance under an assumed one-dimensional Gaussian distribution on the segment; that is, the search finds the direction of maximum variance, so the direction of the symmetry axis 55 can be detected accurately.
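The segment-integration search can be sketched as a coarse brute-force scan over angles, standing in for the hill-climbing search; the toy image and step size are illustrative:

```python
import math

def find_symmetry_axis(image, step_deg=1):
    """Estimate the symmetry-axis direction: locate the brightest
    pixel, then search for the angle whose line segment through it
    accumulates the largest sum of converted values."""
    h, w = len(image), len(image[0])
    # Brightest pixel = centre of the specular component.
    cy, cx = max(((y, x) for y in range(h) for x in range(w)),
                 key=lambda p: image[p[0]][p[1]])
    half = max(h, w)
    best_angle, best_sum = 0, -1.0
    for deg in range(0, 180, step_deg):
        dx = math.cos(math.radians(deg))
        dy = math.sin(math.radians(deg))
        total, seen = 0.0, set()
        for t in range(-half, half + 1):
            x = int(round(cx + t * dx))
            y = int(round(cy + t * dy))
            # Count each in-bounds pixel once per direction.
            if 0 <= x < w and 0 <= y < h and (x, y) not in seen:
                seen.add((x, y))
                total += image[y][x]
        if total > best_sum:
            best_angle, best_sum = deg, total
    return best_angle

# Bright line along the main diagonal of a 5x5 image.
img = [[8 if x == y else 1 for x in range(5)] for y in range(5)]
angle = find_symmetry_axis(img)   # → 45 (degrees)
```

Replacing the exhaustive scan with a hill-climbing update over `deg`, or adding distance-proportional weights inside the inner loop, gives the variants described in the text.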
This idea may be extended to two dimensions by fitting a two-dimensional Gaussian distribution using the least-squares method. That is, in the converted image 54, it is assumed that the distribution of the converted image data can be approximated by a two-dimensional distribution function of the x-axis and y-axis on the plane of the image sensor 31, for example a two-dimensional Gaussian distribution function. The contour lines of the two-dimensional Gaussian distribution function (lines connecting points of equal function value) are then ellipses whose major axis lies along the direction of the symmetry axis 55 and whose minor axis lies along the symmetry axis 56. The major- and minor-axis directions of these ellipses can be calculated from the coefficients of the two-dimensional Gaussian distribution function. Accordingly, in the converted image 54, sets of x- and y-coordinate values and converted image data are selected, and the coefficients of the two-dimensional Gaussian distribution function are determined by the least-squares method. As the data sets used for the least-squares method, it is preferable to select only pixels whose converted image data exceeds a predetermined threshold, because pixels with values below the threshold act as noise and degrade the accuracy of ellipse detection. The two-dimensional distribution function is not limited to a two-dimensional Gaussian distribution function; any function that accurately approximates the distribution of the converted image data may be used.
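The least-squares fit described above can be sketched by noting that the logarithm of a two-dimensional Gaussian is a quadratic in (x, y), so its coefficients follow from an ordinary linear least-squares solve, and the ellipse axes are then the eigenvectors of the resulting quadratic-form matrix. The function name, threshold value, and log-domain formulation below are illustrative assumptions, not the patent's exact procedure.

```python
import numpy as np

def fit_gaussian_major_axis(img, threshold=0.05):
    """Fit a 2-D Gaussian to the converted image by linear least squares
    and return the major-axis (symmetry-axis) direction in radians."""
    # Keep only pixels above the threshold; dimmer pixels act as noise
    # and degrade the ellipse detection accuracy (as noted in the text).
    ys, xs = np.nonzero(img > threshold)
    z = np.log(img[ys, xs])
    # Design matrix for log I = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([
        np.ones(xs.size), xs.astype(float), ys.astype(float),
        xs.astype(float) ** 2, xs * ys.astype(float), ys.astype(float) ** 2,
    ])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    d, e, f = coef[3], coef[4], coef[5]
    # Quadratic-form matrix; its eigenvectors give the ellipse axes.
    Q = np.array([[d, e / 2.0], [e / 2.0, f]])
    w, v = np.linalg.eigh(Q)
    # Both eigenvalues are negative for a Gaussian; the one closest to
    # zero corresponds to the slowest decay, i.e. the major axis.
    major = v[:, np.argmax(w)]
    return np.arctan2(major[1], major[0])
```

Because the fit is linear in the unknown coefficients, no iterative optimization is needed, which is one practical appeal of the log-domain formulation.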
In addition, various modifications and combinations of the embodiments may be made without departing from the spirit of the present invention.
According to the present invention, it is possible to accurately detect the surface characteristics of the test object.

Claims (7)

  1.  A surface characteristic inspection device comprising:
      a light irradiation unit that irradiates a test object with light;
      an imaging unit that captures reflected light from the test object;
      an image processing unit that generates a converted image by non-linearly converting the captured image of the imaging unit; and
      a surface characteristic calculation unit that calculates a surface characteristic of the test object using the converted image.
  2.  The surface characteristic inspection device according to claim 1, wherein the surface characteristic calculation unit calculates the surface characteristic of the test object using a luminance distribution in a predetermined direction in the converted image.
  3.  The surface characteristic inspection device according to claim 1 or 2, wherein the surface characteristic calculation unit calculates the surface characteristic of the test object using a luminance distribution in the direction in which the luminance in the converted image extends.
  4.  The surface characteristic inspection device according to any one of claims 1 to 3, further comprising a machine learning unit that generates a learning model by machine learning using a learning data set composed of a converted image obtained by non-linearly converting the captured image of the imaging unit and a surface characteristic label corresponding to the converted image,
      wherein the surface characteristic calculation unit calculates the surface characteristic of the test object based on the learning model.
  5.  A machine learning device for surface characteristic inspection, comprising:
      a captured image acquisition unit that acquires a captured image obtained by irradiating a test object with light and imaging reflected light from the test object;
      a converted image generation unit that generates a converted image by non-linearly converting the acquired captured image; and
      a machine learning unit that generates a learning model by machine learning using a learning data set composed of the converted image and a surface characteristic label corresponding to the converted image.
  6.  The machine learning device for surface characteristic inspection according to claim 5, wherein the machine learning unit uses, as the converted image of the learning data set, a luminance distribution in a predetermined direction in the converted image.
  7.  The machine learning device for surface characteristic inspection according to claim 5 or 6, wherein the machine learning unit uses, as the converted image of the learning data set, a luminance distribution in the direction in which the luminance in the converted image extends.
PCT/JP2019/031698 2018-12-14 2019-08-09 Surface characteristics inspection device and machine learning device for surface characteristics inspection WO2020121594A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2020559709A JPWO2020121594A1 (en) 2018-12-14 2019-08-09 Surface property inspection device and machine learning device for surface property inspection

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018234701 2018-12-14
JP2018-234701 2018-12-14

Publications (1)

Publication Number Publication Date
WO2020121594A1 true WO2020121594A1 (en) 2020-06-18

Family

ID=71076673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/031698 WO2020121594A1 (en) 2018-12-14 2019-08-09 Surface characteristics inspection device and machine learning device for surface characteristics inspection

Country Status (2)

Country Link
JP (1) JPWO2020121594A1 (en)
WO (1) WO2020121594A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597666A (en) * 2021-01-08 2021-04-02 北京深睿博联科技有限责任公司 Pavement state analysis method and device based on surface material modeling

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0777416A (en) * 1993-09-08 1995-03-20 Tipton Mfg Corp Surface roughness measuring method and measuring device with neural network
JP2005351645A (en) * 2004-06-08 2005-12-22 Fuji Photo Film Co Ltd Surface-damage detecting method, detector therefor and program
JP2006177937A (en) * 2004-11-26 2006-07-06 Denso It Laboratory Inc Distance measuring device and distance measurement method
US20080008375A1 (en) * 2006-07-06 2008-01-10 Petersen Russell H Method for inspecting surface texture direction of workpieces
JP2010230450A (en) * 2009-03-26 2010-10-14 Panasonic Electric Works Co Ltd Object surface inspection apparatus
US20160274022A1 (en) * 2015-03-18 2016-09-22 CENTRE DE RECHERCHE INDUSTRIELLE DU QUéBEC Optical method and apparatus for identifying wood species of a raw wooden log
US20170109874A1 (en) * 2014-07-01 2017-04-20 Trumpf Werkzeugmaschinen Gmbh + Co. Kg Determining a Material Type and/or a Surface Condition of a Workpiece
WO2019117301A1 (en) * 2017-12-15 2019-06-20 株式会社堀場製作所 Surface characteristic inspection device and surface characteristic inspection program


Also Published As

Publication number Publication date
JPWO2020121594A1 (en) 2021-10-21

Similar Documents

Publication Publication Date Title
US9928592B2 (en) Image-based signal detection for object metrology
KR101122971B1 (en) Biometric authentication device, biometric authentication method, and computer-readable recording medium having biometric authentication program recorded thereon
JP6305171B2 (en) How to detect objects in a scene
US20170262985A1 (en) Systems and methods for image-based quantification for allergen skin reaction
JP6189127B2 (en) Soldering inspection apparatus, soldering inspection method, and electronic component
Karner et al. An image based measurement system for anisotropic reflection
US20170258391A1 (en) Multimodal fusion for object detection
US20170262965A1 (en) Systems and methods for user machine interaction for image-based metrology
WO2019059011A1 (en) Training data creation method and device, and defect inspecting method and device
US20170262977A1 (en) Systems and methods for image metrology and user interfaces
JP2021522591A (en) How to distinguish a 3D real object from a 2D spoof of a real object
US5424823A (en) System for identifying flat orthogonal objects using reflected energy signals
CN107203743B (en) Face depth tracking device and implementation method
CN109751980A (en) Wave height measurement method based on monocular vision laser triangulation
WO2019228471A1 (en) Fingerprint recognition method and device, and computer-readable storage medium
CN115791806B (en) Detection imaging method, electronic equipment and medium for automobile paint defects
KR102559586B1 (en) Structural appearance inspection system and method using artificial intelligence
CN115143895A (en) Deformation vision measurement method, device, equipment, medium and double-shaft measurement extensometer
WO2020121594A1 (en) Surface characteristics inspection device and machine learning device for surface characteristics inspection
JP3871963B2 (en) Surface inspection apparatus and surface inspection method
CN115115653A (en) Refined temperature calibration method for cold and hot impact test box
JP4449576B2 (en) Image processing method and image processing apparatus
Kogumasaka et al. Surface finishing inspection using a fisheye camera system
Malekmohamadi Deep learning based photometric stereo from many images and under unknown illumination
US10341555B2 (en) Characterization of a physical object based on its surface roughness

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19896303

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2020559709

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19896303

Country of ref document: EP

Kind code of ref document: A1