JP2004069698A - Image-processing method - Google Patents


Info

Publication number
JP2004069698A
Authority
JP
Japan
Prior art keywords
image
inspection
teaching
pixel
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2003280362A
Other languages
Japanese (ja)
Other versions
JP3800208B2 (en)
Inventor
Yoshihito Hashimoto
橋本 良仁
Kazutaka Ikeda
池田 和隆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Panasonic Electric Works Co Ltd
Original Assignee
Matsushita Electric Works Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Works Ltd filed Critical Matsushita Electric Works Ltd
Priority to JP2003280362A priority Critical patent/JP3800208B2/en
Publication of JP2004069698A publication Critical patent/JP2004069698A/en
Application granted granted Critical
Publication of JP3800208B2 publication Critical patent/JP3800208B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current

Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To accurately extract defects in an inspection image without mistaking for defects the deformation of the inspection image caused by fluctuations in the imaging environment.

SOLUTION: The least-squares method is applied to the linear relation between the contour lines of a pre-registered teaching image and of the inspection image, both grayscale images, thereby computing conversion parameters that express the deformation of the inspection image relative to the teaching image (S2-S5). Using these parameters, the density difference between corresponding pixels of the deformed teaching image and the inspection image is obtained, and pixels whose density difference exceeds a preset threshold are extracted (S8, S9). The extracted pixels are used for the appearance inspection of the inspection object. The influence of inspection-image deformation caused by fluctuations of the imaging environment is thus removed, so that genuine defects in the inspection image can be identified.

COPYRIGHT: (C) 2004, JPO

Description

 The present invention relates to an image-processing method that performs an appearance (visual) inspection of an inspection object by detecting, within a captured image obtained by imaging the object, the inspection image (the image of the object itself), comparing the inspection image with a preset teaching image, and extracting the portions of the inspection image that do not match the teaching image.

 In general, image-processing techniques are known that extract the inspection image from a captured image of the imaging space containing the inspection object and compare it with a preset teaching image in order to identify the type of the object or to judge its quality (see, for example, Patent Document 1).

 In this type of image processing, the position of the inspection image within the captured image, its rotation angle relative to the teaching image, and its enlargement/reduction ratio relative to the teaching image are first obtained. After aligning the two images in size and position, the inspection image is superimposed on the teaching image and their area difference is computed; the presence or absence of appearance defects (scratches, chips, dirt, etc.) on the inspection object is then determined according to the magnitude of that difference.
Patent Document 1: JP 2001-175865 A

 However, merely computing the pixel-count difference between the inspection image and the teaching image to judge the quality of the object cannot remove distortions of the inspection image such as the first-order (linear) deformation caused by movement of the object relative to the image input means, or the second-order (quadratic) deformation caused by the positional relationship between the optical axis of the lens of the image input means and the object. As a result, defects such as scratches, chips, or dirt in the inspection image may be confused with distortions of the image, leading to erroneous pass/fail judgments.

 The present invention has been made in view of the above circumstances, and its object is to provide an image-processing method capable of accurately extracting defects in an inspection image without mistaking for defects the deformation of the inspection image caused by fluctuations in the imaging environment.

 The invention of claim 1 obtains the contour lines of the inspection image (the image of the inspection object) and of a reference teaching image, regards the inspection image as the teaching image subjected to a deformation expressed by a linear (first-order) expression, and applies the least-squares method to the pixels on the two contour lines to compute conversion parameters representing the deformation of the inspection image relative to the teaching image. The density difference between corresponding pixels of the teaching image deformed by these parameters and the inspection image is then computed, pixels whose density difference exceeds a preset threshold are extracted, and the extracted pixels are used for the appearance inspection of the inspection object.

 According to this method, the contour lines of the inspection image and the teaching image are used, the inspection image is regarded as the teaching image with a linear deformation applied, and the least-squares method is applied to the contours to compute conversion parameters, which are extracted as information about the first-order deformation. The influence of the first-order deformation arising from the imaging environment when the inspection object is imaged by the image input means can therefore be removed, and only the genuinely defective portions of the inspection image are detected, which enables an accurate appearance inspection.

 The invention of claim 2 obtains the contour lines of the inspection image and of a reference teaching image, regards the inspection image as the teaching image subjected to a deformation expressed by a quadratic (second-order) expression, and applies the least-squares method to the pixels on the two contour lines to compute conversion parameters representing the deformation of the inspection image relative to the teaching image. The density difference between corresponding pixels of the teaching image deformed by these parameters and the inspection image is then computed, pixels whose density difference exceeds a preset threshold are extracted, and the extracted pixels are used for the appearance inspection of the inspection object.

 According to this method, the contour lines of the inspection image and the teaching image are used, the inspection image is regarded as the teaching image with a quadratic deformation applied, and the least-squares method is applied to the contours to compute conversion parameters, which are extracted as information about the second-order deformation. The influence of the second-order deformation arising from the imaging environment when the inspection object is imaged by the image input means can therefore be removed, and only the genuinely defective portions of the inspection image are detected, which enables an accurate appearance inspection.

 In the invention of claim 3, based on claim 1 or claim 2, the inspection image is contained in a captured image obtained by imaging, with the image input means, a spatial region that includes the inspection object; as the teaching image, an image is used that is produced by comparing a teaching image registered beforehand in a template storage unit with the captured image and deforming it so that its position, rotation angle, and enlargement/reduction ratio roughly (with low accuracy) match the inspection image.

 According to this method, pre-processing is performed to obtain the approximate position, rotation angle, and enlargement/reduction ratio of the inspection image within the captured image relative to the teaching image, so the teaching image can be roughly aligned with the inspection image. Because the error when applying the least-squares method is thereby reduced, the teaching image and the inspection image can be compared accurately; that is, the reliability of the appearance-inspection result increases.

 In the invention of claim 4, based on claim 1 or claim 2, the contour lines are extracted by applying a Sobel filter to the inspection image and the teaching image and tracing the lines formed by the pixels at which the filter response takes a local maximum.

 With this method, contour lines can be extracted accurately from both the inspection image and the teaching image, and a large amount of information about the distortion of the inspection image can be used, which makes a high-precision appearance inspection possible.

 In the invention of claim 5, based on claim 4, while tracing a contour line, the next candidate pixel is accepted as a contour pixel only when the angle between the direction in which its differential value takes a local maximum and the normal of the contour traced so far is within ±45 degrees.

 According to this method, only pixels that are highly likely to lie on a contour are selected during tracing, which reduces the chance of spurious pixels being extracted as contour points and increases the reliability of the contour-extraction process.

 In the invention of claim 6, based on claim 1 or claim 2, the contour lines are extracted as follows. For each pixel of the inspection image and of the teaching image, a first smoothed image is generated whose pixel values are the average density over a first neighbourhood region, and a second smoothed image is generated whose pixel values are the average density over a second region wider than the first. The pixel values of the first smoothed image are weighted by a factor of one or more and subtracted from the corresponding pixel values of the second smoothed image to obtain a difference image; wherever the sign of the pixel value changes between a pair of adjacent pixels of the difference image, one of the two pixels is extracted as a contour pixel.

 According to this method, smoothing reduces the influence of background noise and of minute fluctuations of the contour, which improves the robustness of detection and, by reducing the amount of information to be processed, speeds up the processing. In general, when the background around the inspection image contains noise, part of that noise may be mistaken for the inspection image and cause an erroneous quality judgment; adopting this method suppresses such noise-induced misjudgments.
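The smoothing-and-difference contour extraction of claim 6 can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation: the function and parameter names (`dog_contours`, `r1`, `r2`, `weight`) are invented, the neighbourhood means use naive box filters, and only horizontal sign changes are tested.

```python
import numpy as np

def dog_contours(img, r1=1, r2=3, weight=1.0):
    """Difference-of-box-means contour detector (sketch of claim 6).

    The first smoothed image averages each pixel over a small
    (2*r1+1)^2 neighbourhood; the second uses a wider (2*r2+1)^2 one.
    The weighted first image is subtracted from the second, and pixels
    where the sign of the difference flips between horizontally
    adjacent pixels are marked as contour pixels."""
    def box_mean(a, r):
        # naive box filter: sum shifted copies of an edge-padded image
        p = np.pad(a.astype(float), r, mode="edge")
        out = np.zeros(a.shape, dtype=float)
        for dy in range(-r, r + 1):
            for dx in range(-r, r + 1):
                out += p[r + dy : r + dy + a.shape[0],
                         r + dx : r + dx + a.shape[1]]
        return out / (2 * r + 1) ** 2

    diff = box_mean(img, r2) - weight * box_mean(img, r1)
    sign = np.sign(diff)
    # a sign change between a pixel and its right neighbour marks a contour
    edges = np.zeros(img.shape, dtype=bool)
    edges[:, :-1] = (sign[:, :-1] * sign[:, 1:]) < 0
    return edges
```

On a vertical step image the detector fires exactly once per row, on the low side of the step, which matches the claim's "one of the two pixels" rule.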

 In the invention of claim 7, based on any of claims 1 to 6, the inspection image whose pixel densities are compared with the teaching image deformed by the conversion parameters consists only of pixels within the region formed by those pixels for which, after weighting the density value of each pixel of the teaching image and taking the difference from the density value of the corresponding pixel of the original inspection image, the difference exceeds a prescribed threshold.

 According to this method, because each pixel of the teaching image is weighted, pixels of low importance in the teaching image are excluded from the comparison with the inspection image, which reduces the processing load of the comparison. Moreover, since low-importance pixels can be left out of consideration, accurate judgments become possible for the regions of high importance.

 In the invention of claim 8, based on any of claims 1 to 7, the appearance inspection obtains the connected regions of the extracted pixels, and the type of appearance defect of the inspection object is determined from the distribution of each connected region on the teaching image, the density distribution within the region, and the shape of the region.

 That is, whereas the conventional configuration judges quality using only the area difference between the inspection image and the teaching image and therefore cannot determine the type of defect, adopting this method also makes it possible to classify the defect.
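The first step of claim 8, grouping the extracted pixels into connected regions, can be sketched as a standard flood-fill labelling (a generic illustration, not the patent's code; `connected_regions` is an invented name, and 8-connectivity is an assumption):

```python
import numpy as np
from collections import deque

def connected_regions(mask):
    """8-connected regions of the extracted defect pixels.

    Returns a list of regions, each a list of (y, x) coordinates, from
    which the distribution on the teaching image, the density
    distribution, and the shape features of claim 8 can be derived."""
    labels = -np.ones(mask.shape, dtype=int)
    regions = []
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x] >= 0:          # already assigned to a region
            continue
        q = deque([(y, x)])
        labels[y, x] = len(regions)
        pix = []
        while q:                        # breadth-first flood fill
            cy, cx = q.popleft()
            pix.append((cy, cx))
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx = cy + dy, cx + dx
                    if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                            and mask[ny, nx] and labels[ny, nx] < 0):
                        labels[ny, nx] = len(regions)
                        q.append((ny, nx))
        regions.append(pix)
    return regions
```

Per-region features such as area, bounding box, or mean density over the region's pixels can then be computed to discriminate scratches (elongated), chips (on the contour), and dirt (compact, dark).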

 According to the method of the present invention, the contour lines of the inspection image and the teaching image are used, the inspection image is regarded as the teaching image with a deformation expressed by a linear or quadratic expression applied, and the least-squares method is applied to the contours to compute conversion parameters, which are extracted as information about the first-order or second-order deformation. The influence of first-order and second-order deformations arising from the imaging environment when the inspection object is imaged by the image input means can therefore be removed, and only the genuinely defective portions of the inspection image are detected, enabling an accurate appearance inspection.

 (Embodiment 1)
 As shown in FIG. 2, the presence or absence of appearance defects of the inspection object Ob is determined using a captured image obtained by imaging the space containing Ob with image input means 1 such as a TV camera. Although not shown, the image input means 1 includes an A/D converter, and the captured image is a grayscale image in which each pixel carries a digital density value. The grayscale image is stored in an image memory 2. The image input means 1 is not limited to a TV camera; a scanner or the like may also be used, and there is no particular restriction as long as a grayscale image is obtained.

 As in the conventional configuration, a teaching image for comparison with the captured image is set in advance and stored in a template storage unit 3. The teaching image is a grayscale image of an inspection object Ob known to be free of defects, captured by the image input means 1 under appropriate illumination. As pre-processing, a Sobel filter (operating on 3 × 3 local image regions) is applied to the teaching image in a contour extraction unit 4 to extract its contour lines. In extracting a contour, the line connecting the pixels at which the Sobel response takes a local maximum is traced: the next pixel is accepted as a contour pixel only when its differential value takes a local maximum in one of the directions 0, 45, 90, 135, 180, 225, 270, or 315 degrees (with, for example, the horizontal rightward direction of the image taken as 0 degrees and counter-clockwise as positive), and the angle between that maximum direction and the normal of the contour traced so far is within ±45 degrees. This processing reduces the chance of false contour detections.
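The 3 × 3 Sobel gradient and the ±45 degree continuity rule can be sketched as follows. This is a minimal illustration, not the patent's code: `sobel_grad` and `continues_contour` are invented names, and the full maximum-tracing loop is omitted.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T  # gradient in +y (downward row) direction

def sobel_grad(img):
    """3x3 Sobel gradient magnitude and direction over the valid
    (h-2, w-2) interior; direction in degrees, 0 = +x, counter-clockwise
    positive, as in the convention described in the text."""
    img = img.astype(float)
    h, w = img.shape
    gx = np.zeros((h - 2, w - 2))
    gy = np.zeros((h - 2, w - 2))
    for dy in range(3):          # correlate with the two 3x3 kernels
        for dx in range(3):
            patch = img[dy : dy + h - 2, dx : dx + w - 2]
            gx += SOBEL_X[dy, dx] * patch
            gy += SOBEL_Y[dy, dx] * patch
    mag = np.hypot(gx, gy)
    ang = np.degrees(np.arctan2(gy, gx)) % 360.0
    return mag, ang

def continues_contour(prev_normal_deg, cand_dir_deg, tol=45.0):
    """Tracking rule of claim 5: accept the next pixel only when its
    gradient-maximum direction lies within +/-45 degrees of the normal
    of the contour traced so far."""
    d = (cand_dir_deg - prev_normal_deg) % 360.0
    return d <= tol or d >= 360.0 - tol
```

On a vertical intensity step the gradient points along +x with direction 0 degrees, so a candidate whose maximum direction deviates by more than 45 degrees from the running normal is rejected.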

 As shown in FIG. 1, the inspection image stored in the image memory 2 is first pre-processed in a pre-processing unit 5 (S1), which obtains the approximate position and rotation angle of the inspection image within the captured image and its approximate enlargement/reduction ratio relative to the teaching image; that is, the position, rotation angle, and scaling ratio of the inspection image relative to the teaching image are obtained with low accuracy. For this purpose the pre-processing unit 5 uses well-known techniques such as the generalized Hough transform or normalized correlation. The position, rotation angle, and scaling ratio obtained by the pre-processing unit 5 each contain comparatively large errors, but because the error ranges are roughly fixed, the search ranges for these quantities are effectively narrowed; the error ranges are, for example, about ±4 pixels, ±9 degrees, and ±10%, respectively. The pre-processing unit 5 can thus roughly bound the position, rotation angle, and scaling ratio, but it does not account for fluctuations of the imaging environment around the inspection object Ob. The teaching image is therefore deformed by applying the position, rotation angle, and scaling ratio obtained by the pre-processing unit 5, and the following processing is performed so that those fluctuations can be handled. In other words, the teaching image used in the processing below is the teaching image deformed by the pre-processing unit 5 so as to roughly match the inspection image.
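The normalized-correlation option for the coarse pre-processing of step S1 can be illustrated with a toy sketch. Assumptions, not from the patent: `coarse_position` is an invented name, only integer translation is searched (the rotation and scaling search, and the generalized Hough transform alternative, are omitted for brevity), and the correlation is the zero-mean normalized form.

```python
import numpy as np

def coarse_position(image, template):
    """Approximate template position by exhaustive zero-mean normalized
    correlation over integer shifts; returns ((y, x), best_score)."""
    ih, iw = image.shape
    th, tw = template.shape
    t = template - template.mean()
    tn = np.sqrt((t * t).sum())
    best, best_pos = -2.0, (0, 0)
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            w = image[y : y + th, x : x + tw]
            wc = w - w.mean()
            denom = np.sqrt((wc * wc).sum()) * tn
            score = (wc * t).sum() / denom if denom > 0 else 0.0
            if score > best:
                best, best_pos = score, (y, x)
    return best_pos, best
```

The peak score is 1.0 at an exact match; the integer-pixel result is exactly the kind of low-accuracy estimate (errors of a few pixels) that the subsequent least-squares refinement is designed to absorb.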

 First, in the neighbourhood of the approximate position determined by the pre-processing, the same technique as for the teaching image is applied to the inspection image to extract its contour lines (S2). Next, a conversion-parameter computation unit 6 obtains the conversion parameters as follows. Using as parameters the approximate position and rotation angle of the inspection image and its approximate scaling ratio relative to the teaching image obtained by the pre-processing, the contour of the teaching image is coordinate-transformed and superimposed on the inspection image (S3). Then, as shown in FIG. 3, from each point on the contour L1 of the teaching image P1 (hereinafter, a "contour point"), the inspection image P2 is searched in the direction of the normal of L1, and the first contour point found on the contour L2 of P2 when searching from each contour point is taken as the corresponding point of that contour point of P1 (S4). Corresponding points are obtained for all contour points of P1, and the error function (accumulated squared error) Q shown as Eq. (1) is formed. As Eqs. (2) and (3) show, Q expresses the linear relation between the contours of the teaching image and the inspection image (the relation in which the inspection image is regarded as the teaching image deformed by the linear expressions (2) and (3)). Next, Q is partially differentiated with respect to each of the conversion parameters A to F (defined below), the derivatives are set to zero, and the resulting simultaneous equations in the unknowns A to F are solved; the conversion parameters, which include the position, rotation angle, and scaling ratio, are thereby obtained (S5).

Q = Σ(Qx² + Qy²)  … (1)

where

Qx = αn(Xn − (A·xn + B·yn + C))  … (2)
Qy = αn(Yn − (D·xn + E·yn + F))  … (3)

xn, yn: contour points of the teaching image P1
Xn, Yn: contour points of the inspection image P2
A = β·cosθ
B = −γ·sinφ
C = dx
D = β·sinθ
E = γ·cosφ
F = dy

Here αn is a weight, β the scaling ratio in the x direction, γ the scaling ratio in the y direction, θ the rotation angle about the x axis, φ the rotation angle about the y axis, dx the displacement (position) in the x direction, and dy the displacement (position) in the y direction.

 Equation (1) accumulates (Qx² + Qy²) over all contour points of the teaching image P1. The conversion parameters A to F are obtained as the values satisfying ∂Q/∂A = 0, ∂Q/∂B = 0, ..., ∂Q/∂E = 0, ∂Q/∂F = 0. The weight αn can be set in various ways: the inner product of the normals of the teaching image P1 and the inspection image P2, the ratio of each contour-point distance to the maximum distance between contour points of P1 and P2, the reciprocal of the distance between corresponding contour points of P1 and P2, and so on. It is also possible to set αn = 1 and apply no weighting. Furthermore, αn can be set for each pixel individually.
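Because Eqs. (2) and (3) are linear in A to F, setting the partial derivatives of Q to zero is an ordinary weighted linear least-squares problem, and X and Y decouple into two independent solves over the same design matrix. A minimal Python sketch (the name `fit_affine` is invented; `numpy.linalg.lstsq` stands in for solving the normal equations directly):

```python
import numpy as np

def fit_affine(src, dst, weights=None):
    """Least-squares conversion parameters [A, B, C, D, E, F] of steps
    S3-S5, fitting X = A*x + B*y + C and Y = D*x + E*y + F.

    src, dst: (N, 2) arrays of corresponding contour points
    (teaching image P1 -> inspection image P2); weights are the
    per-point alpha_n of Eqs. (2)-(3), which multiply the residuals."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    n = len(src)
    w = np.ones(n) if weights is None else np.asarray(weights, float)
    # design matrix rows [x, y, 1]; scale rows and targets by the weights
    M = np.column_stack([src[:, 0], src[:, 1], np.ones(n)])
    Mw = M * w[:, None]
    ABC, *_ = np.linalg.lstsq(Mw, w * dst[:, 0], rcond=None)
    DEF, *_ = np.linalg.lstsq(Mw, w * dst[:, 1], rcond=None)
    return np.concatenate([ABC, DEF])
```

Given A to F, the position, rotation angles, and scaling ratios follow from the definitions A = β·cosθ, B = −γ·sinφ, C = dx, D = β·sinθ, E = γ·cosφ, F = dy.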

 In a judgment processing unit 7, if the values of β, γ, θ, φ, dx, and dy obtained as above fall to half of the specified detection accuracy or less (S6), the processing ends and the position, rotation angle, and scaling ratio are output. If they exceed half of the specified detection accuracy, the contour points of the teaching image P1 are coordinate-transformed using the obtained conversion parameters A to F, and the same processing is repeated under the same termination condition (S3 to S6). When this processing has finished (S7), the teaching image P1 is coordinate-transformed using the obtained conversion parameters A to F (S8), and a defect extraction unit 8 computes the difference between the transformed teaching image P1 and the inspection image P2 (S9). The influence of the first-order deformation of the inspection image P2 caused by fluctuations of the imaging environment is thereby removed, and only the defective portions of P2 can be detected accurately. The extracted defect portions are passed to a defect-type determination unit 9, where the type of defect is determined by applying the technique of Embodiment 4 described later. Of the configuration shown in FIG. 2, everything except the image input means 1 is realized by having a computer execute an appropriate program.
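Steps S8 and S9, warping the teaching image with the fitted parameters and thresholding the density difference, can be sketched as follows. Assumptions for illustration only: `defect_pixels` is an invented name, and nearest-neighbour sampling replaces whatever interpolation the actual implementation uses.

```python
import numpy as np

def defect_pixels(teach, inspect, params, thresh):
    """Map each teaching pixel through the affine parameters
    [A, B, C, D, E, F], compare densities with the inspection image,
    and return a mask of inspection pixels whose density difference
    exceeds the threshold (steps S8-S9)."""
    A, B, C, D, E, F = params
    h, w = teach.shape
    mask = np.zeros(inspect.shape, dtype=bool)
    for y in range(h):
        for x in range(w):
            # transformed coordinates, rounded to the nearest pixel
            X = int(round(A * x + B * y + C))
            Y = int(round(D * x + E * y + F))
            if 0 <= X < inspect.shape[1] and 0 <= Y < inspect.shape[0]:
                if abs(float(inspect[Y, X]) - float(teach[y, x])) > thresh:
                    mask[Y, X] = True
    return mask
```

With identity parameters the mask reduces to a plain thresholded difference image; with the fitted parameters the apparent deformation is cancelled before the comparison, which is the point of the method.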

 For example, when a TV camera without a shutter function, such as a CMOS camera, is used as the image input means 1 and the inspection object Ob is moving along a line, the inspection image P2 is deformed as shown in FIGS. 4(b) and 4(c) relative to the teaching image P1 of FIG. 4(a) (in the illustrated example, it is deformed as if a shear force had acted in the horizontal direction). Such a deformation can be expressed by the linear relation described above. Therefore, with the procedure of this embodiment the position, rotation angle, and scaling ratio of the linear relation can be extracted, so the apparent deformation can be ignored and only the essential defects of the inspection image P2 are inspected.

 (Embodiment 2)
 In this embodiment, as in Embodiment 1, the correspondence between contour points of the teaching image P1 and the inspection image P2 is first obtained. That is, after the preprocessing, the inspection image P2 is searched from each contour point on the contour line L1 of the teaching image P1 in the direction of the normal to L1, and the first contour point on the contour line L2 of the inspection image P2 found in the search started from each contour point is taken as the corresponding point of that contour point. Corresponding points are obtained in this way for all contour points of the teaching image P1. In this embodiment, the value given by equation (4) is used as the error function Q. This error function Q expresses a quadratic relation between the teaching image P1 and the inspection image P2 (the relation in which the inspection image is regarded as the teaching image to which a deformation expressed by quadratic expressions such as (5) and (6) has been applied).
Q = Σ(Qx² + Qy²)  … (4)
where
Qx = αn(Xn − (G·xn² + H·xn·yn + I·yn² + A·xn + B·yn + C))  … (5)
Qy = αn(Yn − (J·xn² + K·xn·yn + L·yn² + D·xn + E·yn + F))  … (6)
xn, yn: contour points of the teaching image P1
Xn, Yn: contour points of the inspection image P2
A–F are the same as in equation (1) of Embodiment 1. Further, αn is a weight; G, H, and I are the contributions of xn², xn·yn, and yn² to the deformation in the x direction; and J, K, and L are the contributions of xn², xn·yn, and yn² to the deformation in the y direction.

 In equation (4), the sum of (Qx² + Qy²) is taken over all contour points of the teaching image P1. The conversion parameters A–L are obtained as the values satisfying ∂Q/∂A = 0, ∂Q/∂B = 0, …, ∂Q/∂K = 0, ∂Q/∂L = 0. The weights αn are the same as in Embodiment 1. By obtaining the conversion parameters A–L in this way, the position, rotation angle, and scaling ratio can be determined for the quadratic case.
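The stationarity conditions ∂Q/∂A = 0, …, ∂Q/∂L = 0 are the normal equations of an ordinary weighted linear least-squares problem, because the model is linear in the parameters A–L even though it is quadratic in the coordinates. A sketch under that reading (function name is ours; `np.linalg.lstsq` solves the same normal equations, one coordinate at a time):

```python
import numpy as np

def fit_quadratic_warp(xy, XY, alpha=None):
    """Fit X = G x^2 + H x y + I y^2 + A x + B y + C and
    Y = J x^2 + K x y + L y^2 + D x + E y + F by least squares,
    i.e. solve the conditions dQ/dA = 0, ..., dQ/dL = 0 of equation (4).
    xy: (n, 2) teaching contour points; XY: (n, 2) corresponding
    inspection contour points; alpha: optional per-point weights."""
    x, y = xy[:, 0].astype(float), xy[:, 1].astype(float)
    M = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    bx, by = XY[:, 0].astype(float), XY[:, 1].astype(float)
    if alpha is not None:          # residuals are alpha_n * (...), eqs (5)(6)
        M = M * alpha[:, None]
        bx, by = bx * alpha, by * alpha
    cx = np.linalg.lstsq(M, bx, rcond=None)[0]   # G, H, I, A, B, C
    cy = np.linalg.lstsq(M, by, rcond=None)[0]   # J, K, L, D, E, F
    return cx, cy
```

With exact quadratic correspondences and enough non-degenerate points, the fit recovers the generating coefficients.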

 Once the conversion parameters A–L have been obtained, the teaching image P1 is coordinate-transformed on the basis of A–L in the same way as in Embodiment 1, and the difference between the transformed teaching image P1 and the inspection image P2 is taken. This removes the influence of second-order deformation of the inspection image P2 caused by fluctuations in the imaging environment, so that only the defective portions of the inspection image P2 can be detected accurately. As in Embodiment 1, the termination condition of the processing is that the values of β, γ, θ, φ, dx, and dy fall to half or less of the set detection accuracy; the conversion parameters G–L are not used in the termination condition.

 For example, relative to the teaching image P1 shown in Fig. 5(a), a deformation such as that in Fig. 5(b) occurs when the inspection image P2 rotates in the optical-axis (depth) direction of the camera, and a deformation such as that in Fig. 5(c) occurs when the lens has distortion. In other words, these deformations can be expressed by the quadratic relation described above. Applying the technique of this embodiment therefore makes it possible to ignore the apparent deformation and inspect only the essential defects of the inspection image P2.

 Consider now the case of tracking a moving image: the movement of the inspection object Ob becomes three-dimensional, and moreover the lens distortion differs from place to place, so deformations such as those in Figs. 5(b) and 5(c) occur in the inspection image P2. By applying the technique of this embodiment, this kind of deformation can be removed, making it possible to track and inspect the inspection image P2. Furthermore, in a moving image the position of the inspection image P2 lies near the position detected in the previous frame, so the preprocessing is unnecessary for all frames after the first.

 (Embodiment 3)
 In each of the embodiments described above, the contour lines of the teaching image P1 and the inspection image P2 were extracted by using a Sobel filter in the contour extraction unit 4; in this embodiment, the contours are extracted with the technique disclosed in JP 2002-183713 A. That is, the teaching image P1 and the inspection image P2 are first smoothed by taking, as the value of each pixel, the average of the pixel values (density values) in a neighbourhood of that pixel, producing two smoothed images with neighbourhoods of different sizes. In other words, a first smoothed image is generated in which each pixel value is the average density of the pixels in a first (neighbourhood) region, along with a second smoothed image in which each pixel value is the average density of the pixels in a second region wider than the first. Next, the pixel values of the first smoothed image are given an appropriate weight (a weighting factor of 1 or more) and subtracted from the corresponding pixel values of the second smoothed image to obtain a difference image. In the difference image obtained in this way, wherever the sign of the pixel value changes between a pair of adjacent pixels, one of the two pixels is extracted as a pixel on the contour line. The technique of JP 2002-183713 A does not itself extract contour lines; it extracts pixels with positive values as pixels of the region of interest, so the points where the pixel value changes from positive to negative correspond to the contour. By choosing the smoothing ranges and the weighting factor appropriately, only the global information about the contour can be extracted. The smoothing, the weighting, and the computation of the difference image may each be performed by separate means.
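The two-scale smoothing-and-difference step can be sketched as follows. This is a rough illustration of the idea, not the implementation of JP 2002-183713 A: box means stand in for the neighbourhood averages, the window radii and weighting factor are arbitrary example values, and sign changes between adjacent pixels of the difference image are taken as contour pixels.

```python
import numpy as np

def box_mean(img, k):
    """Mean over a (2k+1)x(2k+1) neighbourhood via a summed-area table
    (edges replicated)."""
    n = 2 * k + 1
    p = np.pad(img.astype(float), k, mode="edge")
    s = np.pad(p.cumsum(0).cumsum(1), ((1, 0), (1, 0)))
    return (s[n:, n:] - s[:-n, n:] - s[n:, :-n] + s[:-n, :-n]) / (n * n)

def contour_mask(img, k1=1, k2=3, weight=1.2):
    """Difference image: wide-neighbourhood mean minus weighted
    narrow-neighbourhood mean; pixels where its sign differs from the
    right or lower neighbour are taken as contour pixels."""
    d = box_mean(img, k2) - weight * box_mean(img, k1)
    pos = d > 0
    edge = np.zeros_like(pos)
    edge[:, :-1] |= pos[:, :-1] != pos[:, 1:]
    edge[:-1, :] |= pos[:-1, :] != pos[1:, :]
    return edge
```

Enlarging `k1`, `k2`, and `weight` strengthens the smoothing and thins out the contour information, which is the trade-off discussed in this embodiment.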

 When a contour is extracted from the teaching image P1 of Fig. 6 by applying a Sobel filter, the contour is extracted with comparatively high accuracy, as shown in Figs. 7(a)–(c); the amount of information is therefore large, and background noise components and minute fluctuations of the contour are extracted as well. In contrast, when the contour is extracted from the difference of smoothed images as described above, even if the smoothing neighbourhood is made comparatively small and the weighting factor comparatively small so that the smoothing effect is weak, the amount of contour information can be reduced compared with the Sobel filter, as shown in Figs. 8(a)–(c). Moreover, because of the smoothing, background noise components and minute fluctuations of the contour are also smoothed out and their influence is reduced. If the smoothing neighbourhood is made comparatively large and the weighting factor comparatively large, the contour information can be reduced further, as in Figs. 9(a)–(c).

 Even with the amount of contour information reduced in this way, the purpose of extracting the contour — roughly extracting the position, rotation angle, and scaling ratio between the teaching image P1 and the inspection image P2 — is still fully achieved; indeed, reducing the amount of information shortens the processing time. For example, if the Sobel-filter example of Fig. 7 required a processing time of 80 ms, the example of Fig. 8 takes about 20 ms and that of Fig. 9 about 10 ms. In practice, adopting this embodiment gave a speed-up of roughly four to eight times compared with using a Sobel filter. Moreover, being less susceptible to noise components and contour fluctuations improves the robustness of the deformation detection. The other configurations and functions are the same as in Embodiments 1 and 2.

 (Embodiment 4)
 In this embodiment, when obtaining the difference between the teaching image P1 and the inspection image P2, the captured image is smoothed at two strengths, strong and weak, and one of the two resulting smoothed images is weighted before the difference is taken. This processing yields an image in which only the inspection image has been extracted from the captured image. The procedure is described in detail in JP 2002-183713 A.

 After the region of the inspection image P2 has been extracted as described above, taking the difference between the pixels inside this region and the pixels of the teaching image P1 removes the information of unneeded regions, which is effective for removing background noise. For example, when the inspection object is a character and another character is present behind it in the captured image, as in Fig. 10(a), the region corresponding to the inspection image P2 (the region enclosed by the thin black line in Fig. 10(b)) can be extracted; if the difference from the teaching image P1 is computed only for the pixels inside this region, the background noise is removed and the defect region can be extracted as in Fig. 10(c). That is, the region Dx corresponding to the defect can be extracted for the inspection image P2 alone, which simplifies the appearance inspection that determines the type of defect as described later.
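The region-restricted difference amounts to gating the thresholded difference with the extracted region mask. An illustrative fragment (names are ours), assuming the mask of Fig. 10(b) has already been obtained:

```python
import numpy as np

def masked_defect_pixels(teach, inspect, region, thresh):
    """Density difference evaluated only inside the extracted
    inspection-image region; background pixels are never flagged."""
    diff = np.abs(teach.astype(int) - inspect.astype(int))
    return (diff > thresh) & region
```

Clutter outside the region (such as a character behind the inspection object) cannot contribute defect pixels, which is the background-noise removal described above.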

 Moreover, with the method of this embodiment only the global shape is extracted, owing to the averaging effect; minute fluctuations of the contour and of the character surface are eliminated and only global defect portions are extracted, so the inspection can be restricted to the essential defects in the inspection image. The other configurations and operations are the same as in Embodiments 1 to 3.

 (Embodiment 5)
 The embodiments described above dealt with techniques for extracting from the inspection image P2, as defect pixels, the portions that do not match the teaching image P1; this embodiment describes a technique for determining the type of defect from the pixels so extracted, that is, the processing in the defect type determination unit 9 shown in Fig. 2. To determine the defect type, in this embodiment the density difference of corresponding pixels between the inspection image P2 and the teaching image P1 is obtained, and connected regions (difference regions) consisting of pixels whose density difference exceeds a predetermined threshold are extracted. In other words, a region of adjacent pixels whose density differences exceed the threshold is extracted as a connected region. Each connected region is then classified according to whether it lies within the inspection image or within the background region.

 First, the processing for the interior of the inspection image P2 will be described. The density distribution and shape of each connected region are judged by performing the following four checks on the inspection image P2.
(1) Trace the contour of the connected region and verify whether it contains part of the contour of the teaching image P1.
(2) If the contour of the teaching image P1 is contained in the contour of the connected region, verify whether the contained portion of the teaching-image contour is divided into multiple parts.
(3) Obtain the density distribution (variance) within the connected region and verify whether the obtained variance lies within a specified range (a value designated by the user, a value determined in a previous inspection, or the like). This check verifies whether a gradation occurs within the connected region.
(4) Verify whether the proportion of the contour of each connected region that coincides with the contour of the teaching image P1 (= length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) lies within a specified range (a value designated by the user, a value determined in a previous inspection, or the like).
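Checks (3) and (4) reduce to a variance test and a length ratio. A minimal sketch under assumptions of ours (a gradation is read as variance above a threshold, and contours are represented as sets of pixel coordinates, whereas the patent traces them as curves):

```python
import numpy as np

def has_gradation(region_values, var_threshold):
    """Check (3): variance of the densities inside the connected region;
    a value above the threshold is read as a gradation."""
    return np.var(region_values) > var_threshold

def contour_match_ratio(region_contour, teach_contour):
    """Check (4): fraction of the connected-region contour that lies on
    the teaching-image contour."""
    region_contour = set(region_contour)
    return len(region_contour & set(teach_contour)) / len(region_contour)
```

These two quantities, together with checks (1) and (2), feed the classification rules that follow.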

 Next, by combining the results of these four checks, defects are classified into four types: "scratch", "chip", "faint", and "thinning". The conditions for each type are as follows.
(A) If the connected region does not contain the contour of the teaching image P1, it is judged a "scratch".
(B) If the portion of the connected-region contour containing the teaching-image contour is divided into multiple parts and the density distribution within the connected region is uniform (variance at or below the threshold), or if that portion is not divided, the value of (length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) is smaller than the specified threshold, and the density distribution within the connected region is uniform (variance at or below the threshold), it is judged a "chip".
(C) If the portion of the connected-region contour containing the teaching-image contour is divided into multiple parts and a gradation occurs in the density distribution within the connected region, or if that portion is not divided, the value of (length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) is smaller than the specified value, and a gradation occurs in the density distribution within the connected region, it is judged "faint".
(D) Cases other than "scratch", "chip", and "faint" are judged "thinning".

 Next, the processing for the outside of the inspection image P2 (that is, the background side) will be described. The density distribution and shape of each connected region are judged by performing the following five checks on the background side.
(1) Trace the contour of the connected region and verify whether it contains part of the contour of the teaching image P1.
(2) If the contour of the teaching image P1 is contained in the contour of the connected region, verify whether the contained portion of the teaching-image contour is divided into multiple parts.
(3) Trace each closed curve of the teaching-image contour and verify whether (length of the portion of the closed curve contained in the connected region / total length of the closed curve of the teaching image P1) lies within a specified range (a value designated by the user, a value determined in a previous inspection, or the like).
(4) Obtain the density distribution (variance) within the connected region and verify whether the obtained variance lies within a specified range (a value designated by the user, a value determined in a previous inspection, or the like). This check verifies whether a gradation occurs within the connected region.
(5) Verify whether the proportion of the contour of each connected region that coincides with the contour of the teaching image P1 (= length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) lies within a specified range (a value designated by the user, a value determined in a previous inspection, or the like).

 Next, by combining the results of these five checks, defects are classified into five types: "background noise", "crush", "excess", "bleed", and "thickening". The conditions for each type are as follows.
(A) If the connected region does not contain the contour of the teaching image P1, it is judged "background noise".
(B) If the teaching-image contour is not divided in the connected-region contour and the value of (length of the portion of the teaching-image contour contained in the connected region / total length of the closed curve of the teaching image P1) is larger than the specified threshold, it is judged a "crush".
(C) If the portion of the connected-region contour containing the teaching-image contour is divided into multiple parts and the density distribution within the connected region is uniform (variance at or below the threshold), or if that portion is not divided, the value of (length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) is larger than the specified value, and the density distribution within the connected region is uniform (variance at or below the threshold), it is judged "excess".
(D) If the portion of the connected-region contour containing the teaching-image contour is divided into multiple parts and a gradation occurs in the density distribution within the connected region, or if that portion is not divided, the value of (length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) is larger than the specified value, and a gradation occurs in the density distribution within the connected region, it is judged "bleed".
(E) Cases other than "background noise", "crush", "excess", and "bleed" are judged "thickening".

 Fig. 11 shows an example of defect determination performed as described above. Fig. 11(a) is the teaching image P1, whose contour can be extracted as in Fig. 11(b). If the inspection image P2 of Fig. 12(a) is obtained, then, as in Fig. 12(b), the portion of the connected-region contour containing the teaching-image contour is not divided and the proportion of the closed curve of the teaching image P1 contained in the connected region is large relative to its total length, so the defect is judged a "crush". If the inspection image P2 of Fig. 13(a) is obtained, then, as in Fig. 13(b), the portion of the connected-region contour containing the teaching-image contour is not divided, the value of (length of the portion coinciding with the teaching-image contour / total length of the connected-region contour) is smaller than the specified threshold, and the variance within the connected region is at or below the threshold, so the defect is judged a "chip". Further, for an inspection image P2 such as that of Fig. 14(a), it can be determined, as is clear from Fig. 14(b), that "chip", "thickening", and "excess" defects have occurred.

 The "scratch", "chip", "faint", and "thinning" judgments may also be made with the following combinations. Logical symbols are used below: ∧ denotes conjunction, ∨ disjunction, and ¬ negation. The checks (1)–(4) in (A)–(D) below correspond to the four checks on the interior described above.
(A) ¬(1) is a "scratch".
(B) (1)∧(((2)∧¬(3))∨(¬(3)∧(4))) is a "chip".
(C) (1)∧(((2)∧(3))∨((3)∧(4))) is "faint".
(D) Combinations other than (A)–(C) are "thinning".

 In this case, the "background noise", "crush", "excess", "bleed", and "thickening" judgments are made with the following combinations. The checks (1)–(5) in (A)–(E) below correspond to the five checks on the background side described above.
(A) ¬(1) is "background noise".
(B) (1)∧(2) is a "crush".
(C) (1)∧¬(2)∧(((3)∧¬(4))∨(¬(4)∧(5))) is "excess".
(D) (1)∧¬(2)∧(((3)∧(4))∨((4)∧(5))) is "bleed".
(E) Combinations other than (A)–(D) are "thickening".

 Fig. 15 shows results of such judgments. Fig. 15(a) is the teaching image P1, and Figs. 15(b)–(e) each show an inspection image P2 together with the judgment result.

 As described above, by combining the nine kinds of information on density distribution and shape in the connected regions, a total of nine defect types can be discriminated: "scratch", "chip", "faint", and "thinning" within the inspection image P2, and "background noise", "crush", "excess", "bleed", and "thickening" on the background side of the inspection image P2. This embodiment can be applied to any of the techniques of Embodiments 1 to 3.

 In the example described above, the conversion parameters are obtained after the teaching image stored in the template storage unit 3 has been deformed by the preprocessing; however, when the deviation between the inspection image and the teaching image is small, as when the position of the inspection object is fixed relative to the image input means 1, the conversion parameters may be obtained without performing the preprocessing.

 Also, in the example described above, the density difference of corresponding pixels is obtained between the image produced by deforming the teaching image with the conversion parameters and the inspection image contained in the captured image. Alternatively, the density value of each pixel of the teaching image may be weighted, the difference between each weighted teaching-image pixel value and the density value of the corresponding pixel of the original inspection image contained in the captured image may be obtained, and the density difference from the image produced by deforming the teaching image with the conversion parameters may then be obtained only for the pixels within the region formed by pixels whose difference value exceeds a specified threshold.

Fig. 1 is an operation explanatory diagram of Embodiment 1 of the present invention.
Fig. 2 is a block diagram of the same.
Fig. 3 is an operation explanatory diagram of the same.
Fig. 4 is an operation explanatory diagram of the same.
Fig. 5 is an operation explanatory diagram of Embodiment 2 of the present invention.
Fig. 6 is an operation explanatory diagram of Embodiment 3 of the present invention.
Fig. 7 is an operation explanatory diagram of the same.
Fig. 8 is an operation explanatory diagram of the same.
Fig. 9 is an operation explanatory diagram of the same.
Fig. 10 is an operation explanatory diagram of Embodiment 4 of the present invention.
Fig. 11 is an operation explanatory diagram of the same.
Fig. 12 is an operation explanatory diagram of the same.
Fig. 13 is an operation explanatory diagram of the same.
Fig. 14 is an operation explanatory diagram of the same.
Fig. 15 is an operation explanatory diagram of the same.

Explanation of Reference Numerals

 1 image input means
 2 image memory
 3 template storage unit
 4 contour extraction unit
 5 preprocessing unit
 6 conversion parameter calculation unit
 7 determination processing unit
 8 defect extraction unit
 9 defect type determination unit
 Ob inspection object
 L1, L2 contour lines
 P1 teaching image
 P2 inspection image

Claims (8)

1. An image-processing method comprising: obtaining contour lines of an inspection image, which is an image of an inspection object, and of a reference teaching image; regarding the inspection image as the teaching image to which a deformation expressed by a first-order (linear) expression has been applied, and calculating conversion parameters that represent the deformation of the inspection image relative to the teaching image by applying the least-squares method to pixels on the contour lines of the inspection image and the teaching image; obtaining the density difference between corresponding pixels of the inspection image and an image formed by deforming the teaching image with the calculated conversion parameters; extracting pixels whose density difference exceeds a preset threshold; and using the extracted pixels for the visual inspection of the inspection object.

2. An image-processing method comprising: obtaining contour lines of an inspection image, which is an image of an inspection object, and of a reference teaching image; regarding the inspection image as the teaching image to which a deformation expressed by a second-order (quadratic) expression has been applied, and calculating conversion parameters that represent the deformation of the inspection image relative to the teaching image by applying the least-squares method to pixels on the contour lines of the inspection image and the teaching image; obtaining the density difference between corresponding pixels of the inspection image and an image formed by deforming the teaching image with the calculated conversion parameters; extracting pixels whose density difference exceeds a preset threshold; and using the extracted pixels for the visual inspection of the inspection object.

3. The image-processing method according to claim 1 or 2, wherein the inspection image is contained in a captured image obtained by imaging, with an image input means, a spatial region including the inspection object, and wherein the teaching image used is an image formed by deforming a teaching image registered in advance in a template storage unit, through comparison with the captured image, so that its position, rotation angle, and scaling ratio coarsely match the inspection image.

4. The image-processing method according to claim 1 or 2, wherein, in extracting the contour lines, a line obtained by tracking the local-maximum pixels produced by applying a Sobel filter to the inspection image and the teaching image is extracted as the contour line.

5. The image-processing method according to claim 4, wherein, in tracking the contour line, the next pixel is accepted as a contour pixel when the angular difference between the direction in which the differential value of that pixel reaches a local maximum and the normal of the contour traced so far is within ±45 degrees.

6. The image-processing method according to claim 1 or 2, wherein, in extracting the contour lines, for each pixel of the inspection image and the teaching image, a first smoothed image is generated whose pixel values are the averages of the density values of the pixels in a neighboring first region, and a second smoothed image is generated whose pixel values are the averages of the density values of the pixels in a second region wider than the first region; each pixel value of the first smoothed image is weighted by a coefficient of one or more and then subtracted from the pixel value of the corresponding pixel of the second smoothed image to obtain a difference image; and, for each pair of adjacent pixels of the difference image whose pixel values change sign, one of the two pixels is extracted as a contour pixel.

7. The image-processing method according to any one of claims 1 to 6, wherein the inspection image for which the density difference from the image formed by deforming the teaching image with the conversion parameters is obtained consists only of the pixels within a region formed by pixels whose difference value exceeds a prescribed threshold when the density value of each pixel of the teaching image is weighted and the difference from the density value of the corresponding pixel of the original inspection image is obtained.

8. The image-processing method according to any one of claims 1 to 7, wherein, in the visual inspection, connected regions of the extracted pixels are obtained, and the type of appearance defect of the inspection object is discriminated using the distribution of the connected regions on the teaching image, the density distribution within the connected regions, and the shape of the connected regions.
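Claims 1 and 2 fit the deformation by least squares over contour pixels. Under the simplifying assumption that teaching/inspection contour-pixel correspondences are already available (the patent derives them from the extracted contour lines), the first-order case of claim 1 reduces to two ordinary least-squares solves; the function name below is illustrative:

```python
import numpy as np

def fit_linear_deformation(src, dst):
    """Least-squares estimate of the first-order deformation
        x' = a*x + b*y + c,   y' = d*x + e*y + f
    mapping teaching-image contour pixels src (N, 2) onto the
    corresponding inspection-image contour pixels dst (N, 2)."""
    A = np.column_stack([src[:, 0], src[:, 1], np.ones(len(src))])
    params_x, *_ = np.linalg.lstsq(A, dst[:, 0], rcond=None)  # (a, b, c)
    params_y, *_ = np.linalg.lstsq(A, dst[:, 1], rcond=None)  # (d, e, f)
    return params_x, params_y
```

The second-order case of claim 2 follows the same pattern with additional columns x², xy, and y² in the design matrix.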
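The contour extraction of claim 6 (two box averages of different window sizes, a weighted subtraction, and sign changes in the difference image) can be sketched as follows; the window sizes, the weight, and the restriction to horizontally adjacent pixel pairs are simplifying assumptions made for illustration:

```python
import numpy as np

def box_mean(img, size):
    """Mean over a size x size neighborhood; edge pixels use the
    partial window. Plain-numpy stand-in for a box filter."""
    img = img.astype(np.float64)
    out = np.empty_like(img)
    r = size // 2
    h, w = img.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = img[max(0, i - r):i + r + 1,
                            max(0, j - r):j + r + 1].mean()
    return out

def contour_pixels(img, small=3, large=7, weight=1.0):
    """Weighted small-window mean subtracted from the large-window
    mean; sign changes between adjacent pixels of the difference
    image mark contour pixels (left pixel of each changing pair)."""
    diff = box_mean(img, large) - weight * box_mean(img, small)
    mask = np.zeros(img.shape, dtype=bool)
    sign_change = np.sign(diff[:, :-1]) * np.sign(diff[:, 1:]) < 0
    mask[:, :-1] |= sign_change
    return mask
```

For a vertical step edge this yields a one-pixel-wide contour at the zero crossing of the difference image, which is the property the claim exploits.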
JP2003280362A 2002-07-26 2003-07-25 Image processing method Expired - Fee Related JP3800208B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2003280362A JP3800208B2 (en) 2002-07-26 2003-07-25 Image processing method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2002218999 2002-07-26
JP2003280362A JP3800208B2 (en) 2002-07-26 2003-07-25 Image processing method

Publications (2)

Publication Number Publication Date
JP2004069698A true JP2004069698A (en) 2004-03-04
JP3800208B2 JP3800208B2 (en) 2006-07-26

Family

ID=32032738

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2003280362A Expired - Fee Related JP3800208B2 (en) 2002-07-26 2003-07-25 Image processing method

Country Status (1)

Country Link
JP (1) JP3800208B2 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011018338A (en) * 2009-07-10 2011-01-27 Palo Alto Research Center Inc Method and system for classifying connected group of foreground pixel in scanned document image according to type of marking
JP2013191064A (en) * 2012-03-14 2013-09-26 Omron Corp Image inspection method and inspection area setting method
CN104020175A (en) * 2013-02-28 2014-09-03 发那科株式会社 Device and method for appearance inspection of object having line pattern
JP2014167431A (en) * 2013-02-28 2014-09-11 Fanuc Ltd Appearance inspection device and appearance inspection method of object including linear pattern
CN104020175B (en) * 2013-02-28 2016-06-29 发那科株式会社 Comprise appearance inspection device and the appearance inspection method of the object of linear pattern
US10113975B2 (en) 2013-02-28 2018-10-30 Fanuc Corporation Appearance inspection device and method for object having line pattern
JP2014238751A (en) * 2013-06-10 2014-12-18 独立行政法人産業技術総合研究所 Image creating device and image creating method

Also Published As

Publication number Publication date
JP3800208B2 (en) 2006-07-26

Similar Documents

Publication Publication Date Title
JP3951984B2 (en) Image projection method and image projection apparatus
US9235902B2 (en) Image-based crack quantification
US8019164B2 (en) Apparatus, method and program product for matching with a template
TWI238366B (en) Image processing method for appearance inspection
KR100810326B1 (en) Method for generation of multi-resolution 3d model
JP2000105829A (en) Method and device for face parts image detection
US20120121127A1 (en) Image processing apparatus and non-transitory storage medium storing image processing program
JP7188201B2 (en) Image processing device, image processing method, and image processing program
Babbar et al. Comparative study of image matching algorithms
JP2005339288A (en) Image processor and its method
CN107909085A (en) A kind of characteristics of image Angular Point Extracting Method based on Harris operators
JP4003465B2 (en) Specific pattern recognition method, specific pattern recognition program, specific pattern recognition program recording medium, and specific pattern recognition apparatus
JP2019036030A (en) Object detection device, object detection method and object detection program
CN110288040A (en) A kind of similar evaluation method of image based on validating topology and equipment
JP2002140713A (en) Image processing method and image processor
JP7230722B2 (en) Image processing device and image processing method
WO2020158726A1 (en) Image processing device, image processing method, and program
JP3800208B2 (en) Image processing method
JP2011107878A (en) Position detection apparatus and position detection method
JP6006675B2 (en) Marker detection apparatus, marker detection method, and program
JP2007140729A (en) Method and device detecting position and attitude of article
JP4470513B2 (en) Inspection method and inspection apparatus
JP3508518B2 (en) Appearance inspection method
JP2010243209A (en) Defect inspection method and defect detection device
McIvor et al. Simple surface segmentation

Legal Events

Date Code Title Description
A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20051228

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060110

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060313

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20060404

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20060417

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090512

Year of fee payment: 3

S533 Written request for registration of change of name

Free format text: JAPANESE INTERMEDIATE CODE: R313533

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100512

Year of fee payment: 4

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110512

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120512

Year of fee payment: 6

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313111

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

LAPS Cancellation because of no payment of annual fees