JP2516844B2 - Parts detection method and device - Google Patents

Parts detection method and device

Info

Publication number
JP2516844B2
Authority
JP
Japan
Prior art keywords
component
area
edge
image
detected
Prior art date
Legal status
Expired - Lifetime
Application number
JP3050276A
Other languages
Japanese (ja)
Other versions
JPH04269606A (en)
Inventor
Shinji Inagaki (稲垣 真次)
Current Assignee
Yamaha Motor Co Ltd
Original Assignee
Yamaha Motor Co Ltd
Priority date
Filing date
Publication date
Application filed by Yamaha Motor Co Ltd filed Critical Yamaha Motor Co Ltd
Priority to JP3050276A
Publication of JPH04269606A
Application granted
Publication of JP2516844B2
Anticipated expiration
Legal status: Expired - Lifetime

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Description

Detailed Description of the Invention

[0001]

[Field of Industrial Application] The present invention relates to a component detection method and device for detecting the position of a component by image processing.

[0002]

[Prior Art] Conventionally, one method proposed for detecting the position of a minute component such as an electronic component by image recognition divides the image processing area into a plurality of small areas in advance: a coarse scan first recognizes the area in which the object exists as an area exceeding a certain image density value, and only that small area is then scanned densely, so that the position of the component is detected quickly (see Japanese Patent Laid-Open No. HEI 1-240987).

[0003]

[Problems to Be Solved by the Invention] In the above conventional method, however, the size (area) of the small area to be densely scanned is fixed in advance, so even the portions of that small area that need not be scanned (portions where no component exists) are scanned densely, and processing time is wasted.

[0004] The present invention has been made in view of the above problem, and its object is to provide a component detection method and device capable of detecting the position of a component quickly by shortening the image processing time.

[0005]

[Means for Solving the Problems] In the method of the present invention, an area containing at least one end of a component is designated on the processing screen, and the image within that area is scanned to detect the position of that end. An area containing another end of the component is then determined on the basis of the positional information of the detected end and the outline information of the component, and the image within that area is scanned to detect the position of that other end. The area containing the next end of the component is determined from the positional information of the component ends detected so far and the outline information of the component, and all remaining end positions are detected in the same manner.

[0006] The device of the present invention comprises imaging means for imaging a component; edge detection means for detecting an end of the component by scanning, within a designated area, the image obtained by the imaging means; processing-area determination means for determining, on the basis of the positional information of the component ends detected by the edge detection means and the outline information of the component, the area on the screen that the edge detection means should scan next; and positioning output means for calculating the position of the component by scanning the image on the basis of the end-position information obtained by the edge detection means, and outputting the result.

[0007]

[Operation] According to the present invention, the processing area to be scanned in order to detect a given end of the component is determined from the positional information of the component ends detected so far and the known outline information of the component. The processing area can therefore be narrowed progressively, and the screen area to be scanned is kept smaller than in the conventional method; as a result, the image processing time is shortened and the component position is detected faster.

[0008]

[Embodiment] An embodiment of the present invention will be described below with reference to the accompanying drawings.

[0009] FIG. 1 is a block diagram showing the configuration of a component detection device according to the present invention. In this embodiment, the position of a minute electronic component having a plurality of pins around its periphery is detected.

[0010] First, the configuration of the component detection device will be described with reference to FIG. 1. The device comprises imaging means 1, such as a television camera, for imaging the target component; storage means 2 for storing the image obtained by the imaging means 1; edge detection means 3 for detecting an end (pin edge) of the component by scanning the image stored in the storage means 2 within a designated area on the screen; processing-area determination means 4 for determining, on the basis of the positional information of the pin edges detected by the edge detection means 3 and the known outline information of the component, the area on the screen that the edge detection means 3 should scan next; and positioning output means 5.

[0011] The positioning output means 5 comprises pin-row detection means 6 for detecting a pin row by scanning the image on the basis of the pin-edge position information obtained by the edge detection means 3; calculation means 7 for calculating the position (center position) of the component on the basis of the pin-row position information obtained by the pin-row detection means 6; and output means 8 for outputting the result calculated by the calculation means 7.

[0012] The procedure for detecting the position of the electronic component will now be described concretely with reference to FIGS. 2 to 8. FIG. 2 shows the processing screen; FIGS. 3 and 4 are explanatory views of the pin-edge detection method; FIG. 5 is an explanatory view of the pin-row detection method; FIG. 6 is an enlarged view of portion P (the image density distribution) of FIG. 5; FIG. 7 is an explanatory view of the method of calculating the center position of the component; and FIG. 8 is a flowchart of the position detection procedure.

[0013] The component imaged by the imaging means 1 is displayed as W on the processing screen shown in FIG. 2, and a plurality of pins 9 project from its periphery.

[0014] First, the pin 9A at the upper left of the component image W displayed on the processing screen is detected. For this detection, the processing area S1 of FIG. 2 is set (step 1 in FIG. 8), and the edge detection means 3 searches for the pin 9A within this processing area S1. The processing area S1 is determined by the range of distances over which the component can move within the tray (not shown) in which it is stored.

[0015] In the processing area S1, as shown in FIG. 3, the edge detection means 3 scans rightward on every fourth line, starting from point a, at a predetermined interval (step 2 in FIG. 8). On each scanning line L0, L4, L8, L12, ... the scan looks for the first point at which the image density is at or above a certain threshold Th1 and the width falls within a predetermined range (point b in the example of FIG. 3). The detected point is taken as the primary candidate for the upper-left pin edge, and the following secondary detection is performed.

[0016] In the secondary detection, as shown in FIG. 4, the scan backs up three lines from the scanning line L12 passing through the primary candidate point (point b) and then scans four consecutive lines, one line at a time, from scanning line L9. In the four-line scan from L9 to L12 in FIG. 4, for example, the detected pin position changes from line to line among lines L9, L10, and L11 (pins 9C, 9D, 9E, and 9F in the illustrated example), so these pins are not recognized as the sought upper-left pin 9A. In that case the procedure returns to the primary detection and searches for a candidate point again.

[0017] In the four-line scan of scanning lines L13 to L16 shown in FIG. 4, on the other hand, the positions A, A', and A'' of the pin 9A on the three scanning lines L14, L15, and L16 lie within a predetermined width (for example, within ±1 pixel), so the pin 9A is recognized as the upper-left pin, and the point A on the uppermost scanning line L14 is taken as the pin edge point to be detected.

[0018] When the edge point A of the upper-left pin 9A has been detected in the processing area S1 as described above, the processing-area determination means 4 determines, from the position of this edge point A (x1, y1) and the known outline information of the component (in this embodiment, the distance d between the pins 9 at the two ends of the same side of the component), the processing area S2 that the edge detection means 3 should scan in order to detect the upper-right pin 9B (step 4 in FIG. 8). In terms of the maximum inclination angle θ of the component and the distance d, the edge point B (x2, y2) of the pin 9B is estimated to lie within the range given by the following expressions:

[0019]

[Equation 1]

x1 + d·cosθ ≤ x2 ≤ x1 + d
y1 − d·sinθ ≤ y2 ≤ y1 + d·sinθ

Because the edge point B lies within this range, the area of the processing area S2 containing it is, as shown in FIG. 2, smaller than that of the processing area S1.
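Evaluating [Equation 1] directly gives the rectangle S2. A small sketch follows (coordinates in pixels, θ in radians; the function name is ours):

```python
import math

def area_s2(x1: float, y1: float, d: float, theta: float):
    """Bounding box for edge point B per the inequalities of [Equation 1]:
    x1 + d*cos(theta) <= x2 <= x1 + d
    y1 - d*sin(theta) <= y2 <= y1 + d*sin(theta)
    Returns (x_min, y_min, x_max, y_max)."""
    return (x1 + d * math.cos(theta),
            y1 - d * math.sin(theta),
            x1 + d,
            y1 + d * math.sin(theta))

# Example: pins 200 px apart, tilt at most 5 degrees -> a strip roughly
# 1 px wide and 35 px tall instead of a full-screen search area.
print(area_s2(100.0, 80.0, 200.0, math.radians(5.0)))
```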

[0020] The edge detection means 3 then scans leftward within the processing area S2 (step 5 in FIG. 8) and detects the edge point B of the upper-right pin 9B in the same way as described above (step 6 in FIG. 8). Because the area of the processing area S2 is smaller than that of the processing area S1, the image processing time is shortened and the edge point B is detected faster.

[0021] When the edge points A and B of the pins 9A and 9B have been obtained in this way, a straight line L, obtained by offsetting the straight line connecting points A and B by a predetermined amount, is used as the scanning line, as shown in FIG. 5, and scanning along it yields the image density distribution along the scanning line L shown in that figure (step 7 in FIG. 8). Accuracy can be raised further by taking a plurality of such straight lines (for example, three: the line L and one line on each side of it) and summing the image density distributions along them.
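One possible implementation of the profile of [0021] is to sample the image along one or more lines parallel to the segment AB, shifted along its perpendicular, and sum them. Nearest-pixel sampling and a perpendicular offset direction are assumptions here; the patent says only that the line is offset by a predetermined amount.

```python
import numpy as np

def density_profile(img: np.ndarray, a, b, offset: float = 0.0,
                    n_lines: int = 1, n_samples: int = 512) -> np.ndarray:
    """Sum of image densities sampled along lines parallel to segment AB,
    shifted perpendicular to AB by `offset` (plus one line on each side
    when n_lines == 3, as suggested in paragraph [0021])."""
    a = np.asarray(a, float)
    b = np.asarray(b, float)
    direction = (b - a) / np.linalg.norm(b - a)
    normal = np.array([-direction[1], direction[0]])      # unit perpendicular
    t = np.linspace(0.0, np.linalg.norm(b - a), n_samples)
    total = np.zeros(n_samples)
    for k in range(n_lines):                              # e.g. shifts -1, 0, +1
        shift = offset + (k - (n_lines - 1) / 2.0)
        pts = a + np.outer(t, direction) + shift * normal
        xs = np.clip(np.round(pts[:, 0]).astype(int), 0, img.shape[1] - 1)
        ys = np.clip(np.round(pts[:, 1]).astype(int), 0, img.shape[0] - 1)
        total += img[ys, xs]                              # nearest-pixel sampling
    return total
```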

[0022] Next, the edge point C of the lower-left pin 9C and the edge point D of the lower-right pin 9D (see FIG. 2) are detected by the same procedure (steps 8 to 13 in FIG. 8), and the image density distribution between the left and right end pins 9C and 9D is obtained (step 14 in FIG. 8). The processing areas S3 and S4 (see FIG. 2) that the edge detection means 3 must scan to detect the pins 9C and 9D are set by the processing-area determination means 4 (steps 8 and 11 in FIG. 8); the processing area S3 is determined from the positional information of the already obtained edge points A and B together with the outline information of the component, and the processing area S4 from the edge points A, B, and C together with the outline information. The areas of these processing regions therefore shrink progressively, as shown in FIG. 2, and the areas of S1, S2, S3, and S4 decrease in this order. Accordingly, the image processing time required to detect the edge points A, B, C, and D decreases step by step, and the detection speed rises correspondingly.

[0023] When the image density distributions between the left and right end pins 9A and 9B of the upper side and between 9C and 9D of the lower side have been obtained as described above, the pin-row detection means 6 detects the positions of the upper and lower pin rows 9 from these image density distributions (step 15 in FIG. 8), and the calculation means 7 calculates the centers of the upper and lower pin rows 9 from the pin-row position information detected by the pin-row detection means 6 (step 16 in FIG. 8).

[0024] The position of the pin row 9 on the upper side is detected as follows. In the image density distribution shown in FIG. 6, the four points g, h, i, and j straddling a certain threshold Th2 are found; the left and right edge points m1 and m2 are then obtained by interpolation, and their midpoint m is taken as the midpoint of that pin 9. Calculating the centroid of the midpoint coordinates of all the pins 9 obtained in this way yields the midpoint M1 of the pin row 9 on the upper side (see FIGS. 5 and 7).

[0025] The midpoint M2 of the pin row 9 on the lower side (see FIG. 7) is obtained in the same way.

[0026] The above has described the procedure up to finding the midpoints M1 and M2 of the upper and lower pin rows 9; the midpoints M3 and M4 of the pin rows 9 on the left and right sides (see FIG. 7) are found in exactly the same way (steps 17 and 18 in FIG. 8). That is, the edge points E, F, G, and H (see FIGS. 2 and 7) of the left and right pin rows 9 are found by the same procedure as steps 1 to 14 of FIG. 8, and the image density distributions of these pin rows are obtained. The processing areas used to detect the edge points E, F, G, and H are narrowed progressively for the same reason as above, so the image processing time is shortened further and the detection is performed at high speed. It is also possible to obtain the image density distributions without detecting the edge points E, F, G, and H at all, using the edge points A, B, C, and D and the outline information of the component; this speeds up the image processing even further.

[0027] When the midpoints M1, M2, M3, and M4 of the upper, lower, left, and right pin rows 9 have been obtained, the center of gravity G of the whole component is found, as shown in FIG. 7, as the intersection of the straight line connecting the opposing midpoints M1 and M2 with the straight line connecting M3 and M4. From this the position and inclination of the component are calculated (step 19 in FIG. 8); the result is output by the output means 8 of the positioning output means 5, and the electronic component is positioned on the basis of this output.
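The final step of [0027] is an ordinary line-line intersection. A sketch follows; solving the 2×2 system for the intersection point is standard, while taking the angle of the M3-M4 line as the component inclination is an assumption about how the tilt is derived.

```python
import math

def component_pose(m1, m2, m3, m4):
    """Center G = intersection of lines M1-M2 and M3-M4 (paragraph [0027]),
    plus an inclination taken here as the angle of the M3-M4 line."""
    (x1, y1), (x2, y2) = m1, m2
    (x3, y3), (x4, y4) = m3, m4
    # Solve  m1 + t*(m2 - m1) = m3 + s*(m4 - m3)  via the 2x2 determinant.
    dx12, dy12 = x2 - x1, y2 - y1
    dx34, dy34 = x4 - x3, y4 - y3
    det = dx12 * dy34 - dy12 * dx34
    if det == 0.0:
        raise ValueError("pin-row center lines are parallel")
    t = ((x3 - x1) * dy34 - (y3 - y1) * dx34) / det
    g = (x1 + t * dx12, y1 + t * dy12)           # center of gravity G
    tilt = math.atan2(dy34, dx34)                # angle of the left-right row axis
    return g, tilt
```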

[0028]

[Effects of the Invention] As is clear from the above description, according to the present invention the processing area to be scanned in order to detect a given end of a component is determined from the positional information of the component ends detected so far and the known outline information of the component. The processing area can therefore be narrowed progressively, and the screen area to be scanned is kept smaller than in the conventional method, with the effect that the image processing time is shortened and the component position detection speed is increased.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing the configuration of a component detection device according to the present invention.

FIG. 2 is a diagram showing the processing screen.

FIG. 3 is an explanatory view showing the pin-edge detection method.

FIG. 4 is an explanatory view showing the pin-edge detection method.

FIG. 5 is an explanatory view showing the pin-row detection method.

FIG. 6 is an enlarged view of portion P (the image density distribution) of FIG. 5.

FIG. 7 is an explanatory view showing the method of calculating the center position of the component.

FIG. 8 is a flowchart showing the position detection procedure.

[Explanation of Symbols]

1 Imaging means
2 Storage means
3 Edge detection means
4 Processing-area determination means
5 Positioning output means
6 Pin-row detection means
7 Calculation means
8 Output means
9 Pin
A-H Edge points (ends) of the component
S1-S4 Processing areas
W Component (image)

Claims (2)

(57) [Claims]

[Claim 1] A component detection method characterized in that: an area containing at least one end of a component is designated on a processing screen; the image within that area is scanned to detect the position of that end of the component; an area containing another end of the component is determined on the basis of the positional information of that end and the outline information of the component; the image within that area is scanned to detect the position of that other end; an area containing the next end of the component is determined on the basis of the positional information of the component ends detected so far and the outline information of the component; and all end positions of the component are thereafter detected in the same manner.
[Claim 2] A component detection device comprising: imaging means for imaging a component; edge detection means for detecting an end of the component by scanning, within a designated area, the image obtained by the imaging means; processing-area determination means for determining, on the basis of the positional information of the component end detected by the edge detection means and the outline information of the component, the area on the screen that the edge detection means should scan next; and positioning output means for calculating the position of the component by scanning the image on the basis of the end-position information obtained by the edge detection means, and outputting the result.
JP3050276A 1991-02-25 1991-02-25 Parts detection method and device Expired - Lifetime JP2516844B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3050276A JP2516844B2 (en) 1991-02-25 1991-02-25 Parts detection method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3050276A JP2516844B2 (en) 1991-02-25 1991-02-25 Parts detection method and device

Publications (2)

Publication Number Publication Date
JPH04269606A JPH04269606A (en) 1992-09-25
JP2516844B2 true JP2516844B2 (en) 1996-07-24

Family

ID=12854418

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3050276A Expired - Lifetime JP2516844B2 (en) 1991-02-25 1991-02-25 Parts detection method and device

Country Status (1)

Country Link
JP (1) JP2516844B2 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690819B1 (en) 1999-02-12 2004-02-10 Juki Corporation Method and apparatus for recognizing components

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008151606A (en) * 2006-12-15 2008-07-03 Juki Corp Image processing method and image processing apparatus
JP5060675B2 (en) * 2007-11-13 2012-10-31 株式会社キーエンス Program creation device for image processing controller

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6342410A (en) * 1986-08-09 1988-02-23 Fujitsu Ltd Pattern detecting method
JPH01240987A (en) * 1988-03-23 1989-09-26 Toshiba Corp Flat package-shaped element positioning method

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6690819B1 (en) 1999-02-12 2004-02-10 Juki Corporation Method and apparatus for recognizing components

Also Published As

Publication number Publication date
JPH04269606A (en) 1992-09-25

Similar Documents

Publication Publication Date Title
JP2919284B2 (en) Object recognition method
US7317474B2 (en) Obstacle detection apparatus and method
US6591005B1 (en) Method of estimating image format and orientation based upon vanishing point location
JPH01231183A (en) Linearity deciding device in image processor
JPH0658716A (en) Detecting method of edge on image
JPH08292014A (en) Measuring method of pattern position and device thereof
US6683977B1 (en) Method of taking three-dimensional measurements of object surfaces
JP3066173B2 (en) Pattern matching method
JP2516844B2 (en) Parts detection method and device
JP3066137B2 (en) Pattern matching method
JP2981382B2 (en) Pattern matching method
JPH0875454A (en) Range finding device
US20040146194A1 (en) Image matching method, image matching apparatus, and wafer processor
JP2961140B2 (en) Image processing method
JPH06168331A (en) Patter matching method
JPH11190611A (en) Three-dimensional measuring method and three-dimensional measuring processor using this method
JP2943257B2 (en) High accuracy position recognition method
JP2897439B2 (en) Corner position detection method
JP2010041416A (en) Image processing unit, image processing method, image processing program, and imaging apparatus
JPH07280560A (en) Correlation computation evaluating method
JPH05172531A (en) Distance measuring method
JPH05172535A (en) Peak-point detecting method and calibration method
JP2501150B2 (en) Laser welding method
JP2948617B2 (en) Stereo image association device
JPH1040393A (en) Method for recognizing image and device for recognizing image using the method and device for mounting electronic part

Legal Events

Date Code Title Description
R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20090430

Year of fee payment: 13


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100430

Year of fee payment: 14


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110430

Year of fee payment: 15

EXPY Cancellation because of completion of term