JPH04104048A - Image processing apparatus - Google Patents

Image processing apparatus

Info

Publication number
JPH04104048A
JPH04104048A (application JP2221034A)
Authority
JP
Japan
Prior art keywords
image
neural network
defect
appearance
inspected
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2221034A
Other languages
Japanese (ja)
Inventor
Tatsumi Furuya
古谷 立美
Atsushi Karakama
厚志 唐鎌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
National Institute of Advanced Industrial Science and Technology AIST
Asahi Chemical Industry Co Ltd
Original Assignee
Agency of Industrial Science and Technology
Asahi Chemical Industry Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Agency of Industrial Science and Technology, Asahi Chemical Industry Co Ltd filed Critical Agency of Industrial Science and Technology
Priority to JP2221034A priority Critical patent/JPH04104048A/en
Publication of JPH04104048A publication Critical patent/JPH04104048A/en
Pending legal-status Critical Current


Landscapes

  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PURPOSE: To make it possible to identify the kind of abnormality by providing a neural network that, when feature data are input, outputs the identification data assigned to whichever learned image is closest to the image of the object under inspection, among images of an object with a normal appearance and images of the object with various kinds of appearance defects.

CONSTITUTION: A neural network 50 learns in advance the features of the image of an object with a normal appearance and the features of images containing various kinds of defects. Feature data are extracted from the image of the object under inspection by a feature extracting means 15 and input to the neural network 50, which performs pattern matching between the appearance image of the object under inspection and the learned features. As a result, the neural network 50 outputs identification data indicating the learned image closest to the appearance image of the object under inspection, so the content of a defect can be identified automatically in addition to its presence or absence. Since the neural network 50 identifies the presence or absence of a defect and its content at the same time, the time needed for visual inspection is shortened.

Description

DETAILED DESCRIPTION OF THE INVENTION

[Industrial Field of Application] The present invention relates to an image processing apparatus that uses image processing to detect the presence or absence of defects, such as scratches and color unevenness, on the surface of an object and to identify the content of those defects.

[Prior Art] Conventionally, an image processing apparatus is known that inspects the external appearance of a product by imaging the product's surface with a light receiver such as an image sensor and processing the resulting image signal.

In this kind of apparatus, the appearance-inspection technique in general use determines whether the appearance is abnormal by comparing the luminance level of each pixel of the image signal with a threshold value.

[Problems to be Solved by the Invention] However, while conventional apparatus of this kind can detect an abnormality in the appearance of a product, it cannot identify what the abnormality is. For example, when the appearance of a textile product is inspected, the inspection targets include thread drop-out, adhering dirt, and color unevenness of the resin or dye.

Conventionally, once an appearance abnormality has been detected by the image processing of such an apparatus (appearance-inspection apparatus), an inspector must visually confirm the shape and nature of the abnormality at the affected location.

SUMMARY OF THE INVENTION: In view of the above, an object of the present invention is to provide an image processing apparatus capable of identifying the kind of abnormality when appearance inspection is performed by image processing.

[Means for Solving the Problems] To achieve this object, the present invention comprises: a neural network that learns in advance first feature information representing image features of an object with a normal appearance, second feature information representing image features of the same object with various appearance defects, and identification information assigned to each of the first and second feature information, and that, when third feature information is input, outputs the identification information corresponding to whichever of the first or second feature information is closest to the third feature information; and a feature extraction circuit that extracts, from an image of the object under inspection, the third feature information to be input to the neural network.

[Operation] The present invention exploits the pattern-matching capability of a neural network. The network is made to learn in advance the image features of the object's normal appearance and the image features of various defects, and then performs pattern matching between those learned features and the image features of the appearance of the object under inspection. As a result, the neural network outputs identification information indicating the learned image closest to the appearance image of the object under inspection, so that not only the presence or absence of a defect but also its content can be identified automatically. Moreover, since the neural network identifies the presence or absence of a defect and its content simultaneously, the appearance inspection takes less time than before.

[Embodiments] Embodiments of the present invention will now be described in detail with reference to the drawings.

FIG. 1 shows the circuit configuration of an embodiment of the present invention.

In FIG. 1, a line sensor 10 performs reading scans perpendicular to the conveyance direction of the product under inspection, as shown in FIG. 2. In this embodiment the number of pixels read by the line sensor 10 in the main-scanning direction is set to 10 for ease of explanation.

When a line sensor 10 with several hundred to several thousand read pixels is used, it suffices to sample, from the output signal of the line sensor 10, the number of image data required for the desired inspection accuracy.

A serial/parallel converter 20 converts the analog image signal output serially from the line sensor 10 into parallel image signals, one per pixel column.

A prediction filter 30 receives the image data of each column in time series, calculates a predicted value of the image data for each column, and computes the prediction error between the latest measured image data and that predicted value.

The calculation of the predicted values and prediction errors is explained next.

Suppose that reading of the conveyed product under inspection starts at a time T and that m main scans (10 in this example) have been performed. The luminance level X̂_{i,j} predicted for the (m+1)-th main scan, at reading time i and pixel position j, is expressed by the following equation in terms of the measured luminance levels X_{i-1,j} to X_{i-m,j} obtained so far in time series.

X̂_{i,j} = a_1·X_{i-1,j} + a_2·X_{i-2,j} + ... + a_m·X_{i-m,j} = Σ_{k=1}^{m} a_k·X_{i-k,j}   (1)

Here the coefficients a_1 to a_m are predetermined so that, when a product with a normal appearance is imaged, the measured luminance level X_{i,j} and the predicted value X̂_{i,j} approximately coincide.

The difference between the measured image data X_{i,j} of the product under inspection at time i and the predicted value X̂_{i,j} calculated by equation (1), or the square of that difference, is called the prediction error.
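The per-column prediction of equation (1) can be sketched as follows. This is an illustrative sketch only: the coefficient values and window length are assumptions, not values given in the patent.

```python
# Sketch of the per-column linear prediction in equation (1): the predicted
# luminance is a weighted sum of the m previous scans of the same column j,
# and the prediction error is the (squared) difference from the actual
# reading. Coefficient values here are illustrative assumptions.

def predict(history, coeffs):
    """history: the m most recent luminance readings of one column,
    newest first (X_{i-1,j}, X_{i-2,j}, ..., X_{i-m,j})."""
    return sum(a * x for a, x in zip(coeffs, history))

def prediction_error(actual, history, coeffs, squared=True):
    e = actual - predict(history, coeffs)
    return e * e if squared else e

# Example: m = 3 taps; a flat surface predicts well, an outlier does not.
coeffs = [0.5, 0.3, 0.2]          # assumed a_1..a_m, chosen to sum to 1
history = [100.0, 100.0, 100.0]   # recent readings of column j
print(prediction_error(100.0, history, coeffs))  # normal pixel -> 0.0
print(prediction_error(140.0, history, coeffs))  # defect pixel -> 1600.0
```

A defect shows up as a sudden departure from the predicted level, so its squared prediction error is large while normal pixels stay near zero.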

For each reading scan of the line sensor 10, the prediction filter 30 calculates the above prediction error for every column and outputs the calculated errors. Since a well-known circuit can be used for the prediction filter 30, a detailed description is omitted.

A statistical calculation circuit 40 computes the following feature quantities from the prediction errors obtained for each column by the prediction filter 30. In this example, six statistics of the prediction error are used as feature parameters representing the image: (1) mean, (2) maximum, (3) minimum, (4) variance, (5) skewness, and (6) kurtosis.
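A minimal sketch of these six statistics, using population (biased) moment formulas; the patent does not specify the exact estimators, so the formulas below are assumptions.

```python
# Compute the six prediction-error statistics used as feature parameters:
# mean, maximum, minimum, variance, skewness, kurtosis. Population moments
# are assumed; degenerate cases (zero spread) return 0 for the shape stats.

def features(errors):
    n = len(errors)
    mean = sum(errors) / n
    var = sum((e - mean) ** 2 for e in errors) / n
    sd = var ** 0.5
    skew = sum((e - mean) ** 3 for e in errors) / (n * sd ** 3) if sd else 0.0
    kurt = sum((e - mean) ** 4 for e in errors) / (n * var ** 2) if var else 0.0
    return [mean, max(errors), min(errors), var, skew, kurt]

print(features([0.1, 0.2, 0.1, 25.0]))  # one large error dominates the stats
```

A single defective pixel inflates the maximum, variance, skewness, and kurtosis at once, which is what makes these six numbers a compact signature of the defect's shape and size.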

The neural network 50 (also called an associative memory or neural net) is a hierarchical network trained by backpropagation, consisting of an input layer 51, an intermediate layer 52, and an output layer 53.

The input layer 51 receives the six kinds of feature quantities. The intermediate layer 52 compares the input feature quantities with the learned ones for similarity. The output layer 53 outputs, as a signal, the identification information of the learned feature quantity closest to the input.

The neural network 50 has learned and stored in advance the feature quantities of images of an object with a normal appearance, the feature quantities of images containing various defects, and the identification information corresponding to each feature quantity. In this embodiment the identification information is a 3-bit identification code representing a value from "0" to "7".

In this embodiment the following eight classes are handled as identification targets:
  (1) normal
  (2) large defect in the vertical direction (parallel to conveyance) ... thread drop-out
  (3) small vertical defect ... fraying
  (4) large horizontal defect ... thread drop-out
  (5) small horizontal defect ... fraying
  (6) large circular defect ... stain
  (7) small circular defect ... dust adhesion
  (8) other ... color unevenness
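The input/output behavior described above amounts to: given a 6-element feature vector, return the identification code (0 to 7) of the closest learned feature pattern. As a behavioral sketch only, nearest-neighbour matching stands in here for the trained backpropagation network, and the learned vectors are entirely hypothetical.

```python
# Behavioral stand-in for network 50: map a feature vector to the code of
# the nearest learned feature vector. The stored vectors are hypothetical
# stand-ins, NOT values from the patent; a real system would use the
# trained backpropagation network instead.

learned = {
    0: [0.1, 0.5, 0.0, 0.2, 0.0, 3.0],    # "normal" pattern
    2: [9.0, 40.0, 0.0, 90.0, 2.5, 9.0],  # "large vertical defect" pattern
    7: [4.0, 12.0, 0.0, 20.0, 1.0, 5.0],  # "other" pattern
}

def identify(feature_vec):
    def dist(code):
        return sum((a - b) ** 2 for a, b in zip(feature_vec, learned[code]))
    return min(learned, key=dist)

print(identify([0.2, 0.6, 0.0, 0.3, 0.1, 2.8]))  # -> 0 (closest to normal)
```

Because one lookup yields a class code rather than a yes/no flag, defect detection and defect identification happen in the same step, which is the source of the inspection-time saving claimed above.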

The circuit operation of this image processing apparatus is explained next.

When the line sensor 10 has performed ten reading scans of the conveyed product under inspection, the prediction filter 30 calculates the current prediction error for each column.

From the prediction errors of the columns, the statistical calculation circuit 40 computes the six feature quantities of the prediction error and inputs them to the neural network 50. In the neural network 50, the intermediate layer 52 compares the input feature quantities with the previously learned ones, and the signal representing the input feature quantities is converted into an identification signal indicating the kind of the closest learned feature quantity.

As a result, if the 10 × 10 image data sampled from the product under inspection contain, for example, a large vertical defect image, the neural network 50 outputs defect information representing identification code "2".

Thereafter, each time the line sensor 10 performs a reading scan, the prediction filter 30 and the statistical calculation circuit 40 extract the features of the group of image data read between the current scan and the scan ten lines earlier. Based on the extracted features, the neural network 50 identifies the presence or absence of a defect and its content at every reading scan of the line sensor 10.

Besides the above embodiment, the following variations are possible.

1) In the embodiment above, the defect content is identified using various statistics of the prediction error, but the magnitude of the prediction error itself may instead be used as the feature quantity indicating the defect content.

In this case, the prediction errors of the columns are summed by the statistical calculation circuit 40, and the neural network 50 determines, as shown in FIG. 3, into which class's threshold range the summed prediction error falls.

At this time, using a moving average of the values obtained over several reading scans as the input to the neural network 50 further improves the accuracy of defect-content identification.
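The moving-average smoothing suggested above can be sketched as follows; the window size of 3 scans is an assumption for illustration.

```python
# Smooth the per-scan sums of prediction error with a sliding window before
# classification, so that one noisy scan does not trigger a misjudgment.

def moving_average(values, window=3):
    out = []
    for i in range(len(values) - window + 1):
        out.append(sum(values[i:i + window]) / window)
    return out

sums = [2.0, 3.0, 40.0, 2.0, 1.0]   # hypothetical per-scan error sums
print(moving_average(sums))         # the single spike is spread and damped
```

Averaging trades a little spatial resolution for robustness: a genuine defect still raises several consecutive windows, while an isolated noise spike is diluted.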

2) In the embodiment above, the multiplication coefficients a_1 to a_m of prediction equation (1) executed in the prediction filter 30 are set in advance. If a prediction filter whose coefficients a_1 to a_m are variable is used, the number of coefficients can also be set variably by the following procedure.

That is, the weighting coefficient a_k is updated according to the following equation (2).

a_{k,new} = a_{k,old} + α·(X_{i,j} - X̂_{i,j})·X_{i-k,j}   (k = 1, 2, ..., m)   (2)

Here a_{k,new} is the weight after updating and a_{k,old} the weight before updating. α is a suitable fixed coefficient, preferably around 0.2. Starting from the initial value a_{k,old} = 0, the weighting coefficients are then changed so that (X_{i,j} - X̂_{i,j})² becomes minimal (a value close to zero).

Next, several tens to several hundreds or more of N (a finite number of) consecutive observed values containing no defects are stored in the prediction filter 30. An appropriately small value of m is then set, and iterations of equations (1) and (2) are executed on the N observed values until all coefficients satisfy a convergence condition, for example until the absolute value of the change in each weighting coefficient before and after updating becomes 10^-3 or less.
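The learning procedure of equation (2) can be sketched as an LMS-style loop over the stored defect-free observations. The α = 0.2 and the 10^-3 tolerance follow the text; normalizing the update by the input energy is an added assumption for numerical stability, not something the patent specifies.

```python
# Hedged sketch of coefficient learning per equation (2): repeat
# a_k += alpha * (x - x_hat) * x_{i-k} over N defect-free samples until
# every coefficient change falls below the tolerance. The normalization
# by the input energy is an assumed stabilization, not from the patent.

def train_coeffs(series, m, alpha=0.2, tol=1e-3, max_sweeps=1000):
    a = [0.0] * m                      # initial weights a_k = 0
    for _ in range(max_sweeps):
        max_delta = 0.0
        for i in range(m, len(series)):
            hist = [series[i - k] for k in range(1, m + 1)]
            err = series[i] - sum(w * x for w, x in zip(a, hist))
            norm = sum(x * x for x in hist) or 1.0
            for k in range(m):
                delta = alpha * err * hist[k] / norm
                a[k] += delta
                max_delta = max(max_delta, abs(delta))
        if max_delta < tol:            # convergence condition from the text
            break
    return a

# A constant (defect-free) signal: the learned taps should predict it well.
data = [100.0] * 50
a = train_coeffs(data, m=3)
pred = sum(w * x for w, x in zip(a, [100.0] * 3))
print(round(pred, 2))   # close to 100.0
```

On defect-free training data the learned predictor tracks the signal closely, which is exactly the condition stated above for choosing the coefficients.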

After the coefficient values have converged, the final prediction error FPE(m), a function of m, is calculated by the following equation.

FPE(m) = ((N + m + 1)/(N - m - 1)) · (1/(N - m)) · Σ (X_{i,j} - X̂_{i,j})²   (3)

Next, m is increased by one and the above procedure is repeated: the weighting coefficients are obtained from the N observed values and FPE(m) is calculated. This is continued while increasing m step by step, and the value of m that minimizes FPE(m) is found. That m is the optimal order, and the weighting coefficients obtained at that point are used. When such processing is carried out automatically in the prediction filter 30, the weighting coefficients a_1 to a_m are set.
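The order selection can be sketched directly from equation (3): fit each candidate order m, collect its residual sum of squares, and keep the m with the smallest FPE. The residual values below are hypothetical, and a simple dictionary stands in for actually refitting the filter at each order.

```python
# FPE-based order selection per equation (3): the first factor penalizes
# model complexity, so an extra coefficient must reduce the residuals
# enough to pay for itself.

def fpe(m, n, sse):
    """Final prediction error for order m, N = n observations, and sse =
    sum of squared residuals of the fitted order-m predictor."""
    return ((n + m + 1) / (n - m - 1)) * (sse / (n - m))

def best_order(n, sse_by_order):
    """sse_by_order: {m: residual sum of squares after fitting order m}."""
    return min(sse_by_order, key=lambda m: fpe(m, n, sse_by_order[m]))

# Hypothetical residuals: going from m=1 to m=2 helps a lot, m=3 barely.
sse = {1: 250.0, 2: 40.0, 3: 39.5}
print(best_order(100, sse))   # -> 2
```

Here m = 3 fits slightly better in raw residuals, but its complexity penalty outweighs the improvement, so m = 2 is chosen as the optimal order.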

[Effects of the Invention] As described above, according to the present invention, not only the presence or absence of defects on the surface of the object under inspection but also the content of the defects can be identified at the same time, so the appearance inspection time is shortened.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the circuit configuration of an embodiment of the present invention; FIG. 2 is a perspective view showing the installation position of the line sensor in the embodiment; FIG. 3 is a waveform diagram for explaining the defect-content identification processing of another embodiment.

5: object under inspection; 10: line sensor; 15: feature extraction circuit; 20: serial/parallel converter; 30: prediction filter; 40: statistical calculation circuit; 50: neural network.

Claims (1)

1) An image processing apparatus comprising: a neural network that learns in advance first feature information representing image features of an object with a normal appearance, second feature information representing image features of said object with various appearance defects, and identification information assigned to each of said first feature information and said second feature information, and that, when third feature information is input, outputs the identification information corresponding to whichever of said first feature information or said second feature information is closest to said third feature information; and a feature extraction circuit that extracts, from an image of said object under inspection, said third feature information to be input to said neural network.
JP2221034A 1990-08-24 1990-08-24 Image processing apparatus Pending JPH04104048A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2221034A JPH04104048A (en) 1990-08-24 1990-08-24 Image processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2221034A JPH04104048A (en) 1990-08-24 1990-08-24 Image processing apparatus

Publications (1)

Publication Number Publication Date
JPH04104048A true JPH04104048A (en) 1992-04-06

Family

ID=16760450

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2221034A Pending JPH04104048A (en) 1990-08-24 1990-08-24 Image processing apparatus

Country Status (1)

Country Link
JP (1) JPH04104048A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH08501386A (en) * 1992-09-07 1996-02-13 Agrovision AB Method and apparatus for automatic evaluation of cereal grains and other granular products


Similar Documents

Publication Publication Date Title
JP7004145B2 (en) Defect inspection equipment, defect inspection methods, and their programs
US5046111A (en) Methods and apparatus for optically determining the acceptability of products
JP2877405B2 (en) Image update detection method and image update detection device
US4378495A (en) Method and apparatus for setup of inspection devices for glass bottles
KR20090101356A (en) Defect detecting device, and defect detecting method
WO2021205573A1 (en) Learning device, learning method, and inference device
KR100204215B1 (en) Methods of inspection
JPH04104048A (en) Image processing apparatus
CN115546141A (en) Small sample Mini LED defect detection method and system based on multi-dimensional measurement
JPH08145907A (en) Inspection equipment of defect
JP3127598B2 (en) Method for extracting density-varying constituent pixels in image and method for determining density-fluctuation block
JP3012295B2 (en) Image processing apparatus and method
CN112070847A (en) Wood floor color sorting method and device
Somwang et al. Image Processing for Quality Control in Manufacturing Process
JPH04364449A (en) Fruit defect detecting apparatus
JPH09319871A (en) Visual check assisting device for printed matter
KR19980086523A (en) Optical nonuniformity inspection device and optical nonuniformity inspection method
KR20070032571A (en) Processing method for calculation water level with image of level dictator
CN110349133A (en) Body surface defect inspection method, device
CN111583247B (en) Image binarization processing method and device
CN113870255B (en) Mini LED product defect detection method and related equipment
JPH08247962A (en) Defect detection method and apparatus for color filter and defect detection method and apparatus for panel display
JPH04297961A (en) Magnetic particle inspection data processing device
JPH03160309A (en) Image quality testing apparatus
JP3785693B2 (en) Image processing inspection equipment