JP2016114433A - Image processing device, image processing method, and component supply device


Info

Publication number
JP2016114433A
Authority
JP
Japan
Prior art keywords
image
posture
recognition
captured
component
Prior art date
Legal status
Granted
Application number
JP2014252498A
Other languages
Japanese (ja)
Other versions
JP6288517B2 (en)
Inventor
Hiroshi Abu (裕志 阿武)
Hidemitsu Jin (英光 神)
Current Assignee
Yaskawa Electric Corp
Original Assignee
Yaskawa Electric Corp
Priority date
Filing date
Publication date
Application filed by Yaskawa Electric Corp
Priority to JP2014252498A
Publication of JP2016114433A
Application granted
Publication of JP6288517B2
Legal status: Active
Anticipated expiration

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Feeding Of Articles To Conveyors (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide an image processing device that determines the posture of each component individually.

SOLUTION: An image processing device determines the posture of a component 100 transferred in a prescribed transfer direction by performing image recognition on a captured image 11 of the component 100. The device executes: a procedure for capturing a detection area image of a detection area 12 defined at a detection position on the transfer path 2a of the component 100 within the captured image 11; a procedure for capturing a recognition area image of a recognition area 13 defined within the captured image 11 at a position adjacent to the detection area 12 on the upstream side in the transfer direction; a procedure for detecting whether the component 100 is present at the detection position based on the detection area image; and a procedure for determining the posture of the component 100 based on the recognition area image when the presence of the component 100 at the detection position is detected.

SELECTED DRAWING: Figure 2

Description

The embodiments disclosed herein relate to an image processing device, an image processing method, and a component supply device.

Patent Document 1 describes a configuration in which articles transported by a vibratory feeder are photographed by a camera, and the shape orientation of each transported article is recognized correctly by performing image recognition on the captured image.

Patent Document 1: JP-A-5-164529

However, because a vibratory feeder cannot control the transfer timing or transfer speed of individual articles, two articles may be transferred while in contact with each other along the transfer direction (a contiguous transfer state), and the conventional technique described above cannot recognize the shape orientation (posture) of each such article individually.

The present invention has been made in view of these problems, and an object thereof is to provide an image processing device, an image processing method, and a component supply device capable of determining the posture of each transported object individually.

In order to solve the above problem, according to one aspect of the present invention, there is provided an image processing device that determines the posture of a transported object transferred in a predetermined transfer direction by performing image recognition on a captured image of the transported object, the image processing device comprising: detection-image capturing means for capturing a detection area image of a detection area defined at a predetermined position on the transfer path of the transported object within the captured image; recognition-image capturing means for capturing a recognition area image of a recognition area defined within the captured image at a position adjacent to the detection area on the upstream side in the transfer direction; detection means for detecting, based on the detection area image captured by the detection-image capturing means, whether the transported object is present at the predetermined position; and determination means for determining, when the detection means detects the presence of the transported object at the predetermined position, the posture of the transported object based on the recognition area image captured by the recognition-image capturing means.

According to another aspect of the present invention, there is provided an image processing method for determining the posture of a transported object transferred in a predetermined transfer direction by performing image recognition on a captured image of the transported object, the method comprising: capturing a detection area image of a detection area defined at a predetermined position on the transfer path of the transported object within the captured image; capturing a recognition area image of a recognition area defined within the captured image at a position adjacent to the detection area on the upstream side in the transfer direction; detecting, based on the captured detection area image, whether the transported object is present at the predetermined position; and determining, when the presence of the transported object at the predetermined position is detected, the posture of the transported object based on the captured recognition area image.

According to another aspect of the present invention, there is provided an image processing program to be executed by a computing device of an image processing device that determines the posture of a transported object transferred in a predetermined transfer direction by performing image recognition on a captured image of the transported object, the program causing the computing device to execute: capturing a detection area image of a detection area defined at a predetermined position on the transfer path of the transported object within the captured image; capturing a recognition area image of a recognition area defined within the captured image at a position adjacent to the detection area on the upstream side in the transfer direction; detecting, based on the captured detection area image, whether the transported object is present at the predetermined position; and determining, when the presence of the transported object at the predetermined position is detected, the posture of the transported object based on the captured recognition area image.

According to still another aspect of the present invention, there is provided a component supply device comprising: transfer means for transferring a plurality of components stored therein in a line along a transfer path; imaging means for imaging the transfer path; an image processing device that determines the posture of each component by performing image recognition on a captured image of the component taken by the imaging means; and exclusion means for removing from the transfer path any component determined by the image processing device not to be in a predetermined posture, wherein the image processing device includes: detection-image capturing means for capturing a detection area image of a detection area defined at a predetermined position on the transfer path within the captured image; recognition-image capturing means for capturing a recognition area image of a recognition area defined within the captured image at a position adjacent to the detection area on the upstream side in the component transfer direction; detection means for detecting, based on the detection area image captured by the detection-image capturing means, whether a component is present at the predetermined position; and determination means for determining, when the detection means detects the presence of a component at the predetermined position, the posture of the component based on the recognition area image captured by the recognition-image capturing means.

According to the present invention, the posture of each transported object can be determined individually.

Brief description of the drawings:
FIG. 1 is a diagram showing the external plan view and control configuration of the parts feeder of the embodiment.
FIG. 2 is a diagram showing an example of a captured image of the transfer path taken by the camera.
FIG. 3 is a diagram showing an example of a captured image in which a component is dropped from the transfer path and removed.
FIG. 4 is a diagram showing the normal posture of a component whose posture is determined in the embodiment.
FIG. 5 is a diagram showing the vertically inverted posture of a component whose posture is determined in the embodiment.
FIG. 6 is a diagram showing the state immediately after a component has reached the recognition area.
FIG. 7 is a diagram showing the state immediately before a component reaches the detection area.
FIG. 8 is a diagram showing the state in which a component has reached the detection area and its posture can be determined.
FIG. 9 is a diagram showing the state in which the downstream component of a contiguous transfer pair is at the detection position.
FIG. 10 is a diagram showing the state in which the upstream component of a contiguous transfer pair is at the detection position.
FIG. 11 is a diagram schematically showing the periodic processing performed by the image processing device.
FIG. 12 is a diagram showing the state, at the n-th imaging, immediately before a component with a short transfer-direction length reaches the detection area.
FIG. 13 is a diagram showing the state, at the (n+1)-th imaging, in which a component with a short transfer-direction length has reached the detection area.
FIG. 14 is a diagram showing the state in which the re-recognition area is defined to match the position of the upstream end of the downstream component.
FIG. 15 is a diagram showing the state in which the recognition area has been returned to its original position after posture determination in the re-recognition area.
FIG. 16 is a diagram showing the state in which two components in the contiguous transfer state are both in the recognition area.
FIG. 17 is a diagram showing the state in which the re-recognition area is defined in order to determine the posture of the downstream component in the contiguous transfer state.
FIG. 18 is a flowchart showing the control procedure executed by the CPU of the image processing device.
FIG. 19 is a front view of a component whose posture is distinguished both vertically and front-to-back, in a posture that is normal in both the front/back and vertical directions.
FIG. 20 is a cross-sectional view taken along line XX-XX in FIG. 19.
FIG. 21 is a front view of a component whose posture is distinguished both vertically and front-to-back, in postures reversed in at least one of the front/back and vertical directions.
FIG. 22 is a diagram explaining an image recognition posture determination technique for a component whose posture is distinguished both vertically and front-to-back.
FIG. 23 is a diagram explaining an image recognition posture determination technique for a component whose posture is distinguished both vertically and front-to-back.
FIG. 24 is a front view showing the normal and reversed postures of a component whose posture is distinguished in the transfer direction.
FIG. 25 is a diagram explaining an image recognition posture determination technique using comparison patterns of shape features for a component whose posture is distinguished in the transfer direction.
FIG. 26 is a diagram showing the state in which the re-recognition area is defined for a contiguous transfer pair, both in the normal posture, of components whose posture is distinguished in the transfer direction.
FIG. 27 is a diagram showing the state in which the re-recognition area is defined for a contiguous transfer pair, both in the reversed posture, of components whose posture is distinguished in the transfer direction.
FIG. 28 is a front view showing the normal and inverted postures of a component whose posture is distinguished in the vertical direction.
FIG. 29 is a diagram explaining an image recognition posture determination technique using comparison patterns of shape features for a component whose posture is distinguished in the vertical direction.
FIG. 30 is a diagram showing the state in which the re-recognition area is defined for a contiguous transfer pair, both in the normal posture, of components whose posture is distinguished in the vertical direction.
FIG. 31 is a diagram showing the state in which the re-recognition area is defined for a contiguous transfer pair of components whose posture is distinguished in the vertical direction, with the downstream component in the normal posture and the upstream component in the inverted posture.

An embodiment will now be described with reference to the drawings. In the following description, directions such as up, down, left, and right are used as appropriate for convenience in describing the configuration of the component supply device and the like, but they do not limit the positional relationships among the components of the component supply device and the like.

<Schematic configuration of the parts feeder>
FIG. 1 shows a plan view of the external appearance and the control configuration of a parts feeder, which is the component supply device of the present embodiment. In FIG. 1, the parts feeder 1 includes a vibratory feeder 2, an illumination unit 3, a camera 4, an air ejection nozzle 5, an electromagnetic valve 6, an image processing device 7, an image display device 8, and an operation interface 9.

The vibratory feeder 2 (corresponding to the transfer means) is a so-called mortar-shaped (bowl-shaped) container body with a bottom at its rear side in plan view, and a transfer path 2a is arranged in a spiral along the inner wall of its outer periphery. The inner peripheral end of the transfer path 2a communicates with a reservoir 2b located at the center bottom of the vibratory feeder 2, and the outer peripheral end of the transfer path 2a extends in a straight line to form a discharge port 2c. The reservoir 2b stores a large number of components 100 to be supplied, all of the same shape. When a vibration mechanism (not shown) applies vibration, for example centrifugal vibration, to the entire vibratory feeder 2, components 100 in the reservoir 2b enter the transfer path 2a. Since the transfer path 2a is formed as a rail whose width is approximately equal to the width of the component 100 in a predetermined direction, a large number of components 100 fit onto the transfer path 2a aligned in a line along its longitudinal direction. Furthermore, fine irregularities in a predetermined pattern are formed on the surface of the transfer path 2a, and each component 100 on the transfer path 2a receives a propulsive force from these irregularities due to the vibration and is transferred in the direction along the transfer path 2a (the path direction toward the outer periphery of the spiral; hereinafter referred to as the transfer direction).

The illumination unit 3 is a lighting fixture that projects light onto a predetermined position on the transfer path 2a.

The camera 4 (corresponding to the imaging means) has the function of imaging the illuminated location and transmitting the captured image data to the image processing device 7 described later. The camera 4 captures images at the same height as the transfer path 2a at the illuminated location, in an imaging direction orthogonal to the transfer direction of the components 100.

The air ejection nozzle 5 (corresponding to the exclusion means) is disposed slightly downstream of the illuminated location in the transfer direction, and has the function of ejecting compressed air supplied from an air pump (not shown).

The electromagnetic valve 6 has the function of controlling the supply and shut-off of compressed air from the air pump (not shown) to the air ejection nozzle 5 based on commands from the image processing device 7 described later.

The image processing device 7 is a computer including a CPU and memory (not shown), and has the function of determining the posture of each component 100 being transferred on the transfer path 2a by performing image recognition on the captured images acquired from the camera 4.

The image display device 8 has the function of displaying, via the image processing device 7, the images captured by the camera 4 as well as various commands and parameter settings.

The operation interface 9 consists of a keyboard and a pointing device such as a mouse, and has the function of receiving various commands and settings input by the operator.

The parts feeder 1 configured as above is a device that aligns a large number of identically shaped components 100, stored in random postures in the central reservoir 2b of the vibratory feeder 2, into a single line all in the same predetermined posture and supplies them from the discharge port 2c. However, as described above, the transfer path 2a of the vibratory feeder 2 is merely a simple rail structure whose width equals the width of the component 100 in a predetermined direction, and components 100 are fitted onto the transfer path 2a only by the momentum of the vibration. Consequently, components 100 often fit onto and are transferred along the transfer path 2a in various postures different from the predetermined posture.

To address this, the image processing device 7 performs image recognition on the images captured by the camera 4 to determine the posture of each component 100 on the transfer path 2a (see FIG. 2 described later), and controls the electromagnetic valve 6 so that compressed air is blown through the air ejection nozzle 5 at any component 100 that is not in the appropriate posture (see FIG. 3 described later). A component 100 hit by the compressed air is removed from the transfer path 2a and returned to the reservoir 2b at the center of the vibratory feeder 2. By repeating this, the components 100 finally supplied from the discharge port 2c are all aligned in a line in the same predetermined posture.

<Features of the present embodiment>
However, with the vibratory feeder 2 configured as above, neither the timing at which the components 100 stored in the reservoir 2b are placed onto the transfer path 2a nor the transfer speed of each component 100 on the transfer path 2a can be controlled precisely, so on the transfer path 2a the components 100 are transferred at random intervals and at uncertain speeds. As a result, two components 100 lined up front to back in the transfer direction may be transferred while in contact with each other (a contiguous transfer state), in which case it is difficult to reliably determine the posture of each one. It is also difficult to reliably determine the posture of each component 100 when its length in the transfer direction is short. In the parts feeder 1 of the present embodiment, the image processing device 7 performs image recognition on the captured images in a way that handles these situations, so that the posture of each component 100 can be determined individually. The image recognition techniques for this purpose are described in order below.

<Basic image recognition technique>
FIG. 2 shows an example of a captured image of the transfer path 2a taken by the camera 4. This captured image 11 is taken from the side of the transfer path 2a, with the transfer direction running from right to left in the figure, and an X-Y coordinate system is defined with the X coordinate in the horizontal direction and the Y coordinate in the vertical direction.

The image processing device 7 defines a detection area 12 at a predetermined detection position (corresponding to the predetermined position) on the transfer path 2a within the captured image 11, and captures that partial image as the detection area image. The detection area image is used to detect whether a component 100 is present in the detection area 12, that is, whether a component 100 has reached the detection position. For this reason, the length Lk of the detection area 12 in the transfer direction (its length in the X direction) is set short enough that only the presence or absence of a component 100 needs to be determined. Conversely, the detection area 12 is not made long in the transfer direction, so as to reduce the possibility that the downstream end of a component 100 lies inside the detection area image and complicates the presence/absence determination.
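
As a purely illustrative sketch (not part of the patent text), capturing the detection area image and testing for the presence of a component could look roughly as follows in Python with NumPy; the pixel coordinates, the dark-part/bright-background contrast, and the thresholds are assumptions for the example, not values from the embodiment.

```python
import numpy as np

# Assumed layout of the regions inside the captured image 11 (pixel values are hypothetical).
DETECT_X0, DETECT_X1 = 400, 410   # short length Lk of the detection area 12 in the transfer (X) direction
REGION_Y0, REGION_Y1 = 120, 260   # height Hy covering the whole component 100

def grab_detection_image(frame: np.ndarray) -> np.ndarray:
    """Cut the detection area 12 out of the captured image 11 (grayscale array)."""
    return frame[REGION_Y0:REGION_Y1, DETECT_X0:DETECT_X1]

def part_present(detection_image: np.ndarray, dark_thresh: int = 80) -> bool:
    """Crude presence test: a component in the narrow detection strip darkens
    a large fraction of its pixels (assumes a dark part on a bright background)."""
    dark_ratio = float(np.mean(detection_image < dark_thresh))
    return dark_ratio > 0.5       # assumed ratio; would be tuned on the real contrast
```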

The image processing device 7 also defines a recognition area 13 within the captured image 11 at a position adjacent to the detection area 12 on the upstream side in the transfer direction, and captures that partial image as the recognition area image. The recognition area image is used to determine the posture of the component 100 present in the recognition area 13. For this reason, the length Ln of the recognition area 13 in the transfer direction is set long enough that the posture of a component 100 can be determined. This point is described in detail later.

The captured image 11 taken by the camera 4 is image data that can be handled pixel by pixel, and the detection area image and the recognition area image described above are captured by software processing in the image processing device 7. Both the detection area image and the recognition area image may be set to a height Hy (length and position in the Y direction) that covers the entire height of the components 100 transferred on the transfer path 2a. When the image processing device 7 detects, based on the detection area image, that a component 100 has reached the detection position, it determines the posture of that component 100 based on the recognition area image. In this way, the image processing device 7 of the present embodiment performs image recognition not on the entire captured image 11 taken by the camera 4 but on partial images captured from parts of it, so the posture of a component 100 can be determined in a short processing time. If the determined posture of the component 100 is not the predetermined correct posture, the electromagnetic valve 6 is opened when the component 100 is positioned in front of the air ejection nozzle 5 as shown in FIG. 3, ejecting compressed air to knock the component 100 off the transfer path 2a and remove it.

Here, FIGS. 4 and 5 show an example of the shape of the component 100 to be supplied in the present embodiment. The component 100 has a rectangular parallelepiped body in which nine round holes 101 are arranged evenly along its entire length (in the transfer direction), each hole penetrating the component 100 in its thickness direction (the direction perpendicular to the page in the figures). All of the round holes 101 are arranged at the same height, and this height is offset in the height direction (Y direction) from the position of the center of gravity 102 of the component 100 as a whole. Because the shape of the component 100 in this embodiment is symmetric in its thickness direction (the direction perpendicular to the page), there is no front/back distinction with respect to the imaging direction of the camera 4; only the difference in the height of the round holes 101 distinguishes the normal posture from the inverted posture. In the example of this embodiment, the posture in which the round holes 101 are higher than the center of gravity 102 as shown in FIG. 4 is the normal posture, and the posture in which the round holes 101 are lower than the center of gravity 102 as shown in FIG. 5 is the inverted posture.

In the posture determination based on the recognition area image, only two postures, normal and inverted, are distinguished, by comparing the average height of the round holes 101 with the height of the center of gravity 102 of the component 100 as a whole. In principle, recognizing a single round hole 101 in the image is sufficient for this determination, but to ensure determination accuracy it is preferable to recognize at least four round holes 101. In that case, the length Ln of the recognition area 13 in the transfer direction is set to at least the length L4 of four pitches of the round holes 101, and the component 100 should be present within the recognition area image over at least this length L4. The component 100 described above corresponds to the transported object recited in the claims, and the round holes 101 correspond to the shape feature recited in the claims.
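
A minimal sketch of this normal/inverted decision, assuming OpenCV's Hough circle transform is used to find the round holes 101 and the image moments of the part silhouette give the centre-of-gravity height, might look as follows; the grayscale contrast, radii, and thresholds are assumptions, and the patent itself does not prescribe these particular operations.

```python
import cv2
import numpy as np

MIN_HOLES = 4   # the embodiment asks for at least four round holes 101 for a reliable decision

def judge_posture(recog_image: np.ndarray) -> str:
    """Return 'normal', 'inverted', or 'unknown' for the component visible in the
    recognition area image (grayscale, dark part on a bright background)."""
    # Silhouette of the part and the height of its centre of gravity 102 (image moments).
    _, silhouette = cv2.threshold(recog_image, 80, 255, cv2.THRESH_BINARY_INV)
    m = cv2.moments(silhouette, binaryImage=True)
    if m["m00"] == 0:
        return "unknown"
    cog_y = m["m01"] / m["m00"]

    # Round holes 101 appear as bright circles inside the dark part.
    circles = cv2.HoughCircles(recog_image, cv2.HOUGH_GRADIENT, dp=1, minDist=10,
                               param1=100, param2=15, minRadius=3, maxRadius=8)
    if circles is None or circles.shape[1] < MIN_HOLES:
        return "unknown"          # not enough holes visible for an accurate decision
    mean_hole_y = float(np.mean(circles[0, :, 1]))

    # Image Y grows downward, so holes above the centre of gravity have the smaller Y.
    return "normal" if mean_hole_y < cog_y else "inverted"
```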

With the area layout and component configuration described above, the present embodiment detects the arrival of a component 100 and determines its posture through the sequence shown in FIGS. 6 to 8. First, immediately after the downstream end of a component 100 has reached the recognition area 13 as shown in FIG. 6 as a result of transfer by the vibratory feeder 2, the arrival of the component 100 at the detection position has not yet been detected from the detection area image, so no posture determination is performed. Even just before the downstream end reaches the detection area 12 as the transfer progresses, as shown in FIG. 7, posture determination is likewise not yet performed. When the component 100 reaches the detection area 12 as shown in FIG. 8, the image processing device detects its arrival at the detection position, and this triggers the posture determination. At that point, at least part of the component 100 is almost certainly present in the recognition area 13 adjacent to the detection area 12 on the upstream side in the transfer direction, so the posture of the component 100 can be determined based on the recognition area image captured from the same captured image 11.

<Periodic processing for the contiguous transfer state>
However, for the reasons described above, two components 100 lined up front to back in the transfer direction may be transferred while in contact with each other (the contiguous transfer state), as shown in FIGS. 9 and 10. In this case, with the detection of components 100 from the detection area image described above, it is difficult to detect the upstream component 100 on its own, separately from the previously detected downstream component 100, and therefore difficult to determine their postures individually. That is, it is difficult to detect the moment at which the state changes from only the downstream component 100 being at the detection position, as shown in FIG. 9, to the upstream component 100 having reached the detection position, as shown in FIG. 10. To deal with this, in the present embodiment the image processing device 7 has the camera 4 capture images successively at the shortest possible cycle and performs posture determination for each captured image 11, so that the posture of each component 100 in the contiguous transfer state can be determined individually.

FIG. 11 schematically shows this periodic processing performed by the image processing device 7 of the present embodiment. As shown in the figure, the image processing device 7 basically repeats the detection process, and performs the determination process only when the detection process detects the presence (arrival) of a component 100 at the detection position. The main steps of the detection process are: first, an imaging step of acquiring a captured image 11 from the camera 4; next, a step of capturing the detection area image from that captured image 11; and then a step of detecting, based on the detection area image, the presence of a component 100 (its arrival at the detection position). The detection process is thus repeated at the shortest processing cycle T, performing only this minimum set of steps. The main steps of the determination process are: first, a step of capturing the recognition area image; and then a step of determining the posture of the component 100 based on that recognition area image. The recognition area image is captured from the captured image 11 taken in the detection process immediately preceding the determination process. This makes it possible to omit the imaging step and shorten the time required for the determination process, and also enables more accurate posture determination with little time lag from the preceding detection.
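
The detection/determination cycle of FIG. 11 could be sketched as below; `capture_frame`, `grab_recognition_image`, and `valve.blow()` are hypothetical stand-ins for the camera acquisition, the recognition-area crop, and the solenoid-valve command, `grab_detection_image`, `part_present`, and `judge_posture` are the illustrative helpers sketched above, and the cycle length T is chosen arbitrarily.

```python
import time

CYCLE_T = 0.02   # assumed shortest processing cycle T, in seconds

def run_feeder_loop(camera, valve):
    """Repeat the detection process every cycle T; run the determination process
    only when a component is detected, reusing the same captured image 11."""
    while True:
        t0 = time.monotonic()
        frame = capture_frame(camera)              # imaging step (hypothetical helper)
        det = grab_detection_image(frame)          # detection-area capture step
        if part_present(det):                      # detection step
            recog = grab_recognition_image(frame)  # same frame, so no extra imaging step
            if judge_posture(recog) == "inverted":
                valve.blow()                       # remove the part as in FIG. 3 (hypothetical API)
        # keep the cycle as close to T as the processing allows
        time.sleep(max(0.0, CYCLE_T - (time.monotonic() - t0)))
```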

By repeating the detection process at the shortest processing cycle T in this way, posture determination can be performed for every component 100 even when multiple components 100 are in the contiguous transfer state. Posture determination may be performed more than once for the same component 100, but this is not a problem, because in the end only components 100 in the normal posture pass through, and any component 100 in the inverted posture is simply removed immediately.

<Re-recognition processing for asynchronous transfer>
However, even when the detection process is repeated at the shortest cycle as described above, the imaging step by the camera 4 and the image recognition step by the image processing device 7 each take time, so the processing cycle T becomes somewhat long. Consequently, as described above, the components 100 are transferred asynchronously with this processing cycle T (out of phase with it) and at indefinite intervals, and particularly when the length of each component 100 in the transfer direction is relatively short, it can be difficult to reliably determine the posture of each one.

Consider, for example, the case of a component 100′ whose overall length in the transfer direction is only five pitches of the round holes 101, as shown in FIGS. 12 and 13. At the n-th detection process shown in FIG. 12, the downstream end of the component 100′ is close to the upstream side of the detection area 12 when the camera 4 captures the image. In this case, at the next, (n+1)-th detection process after the processing cycle T has elapsed, it is highly likely that the length of the component 100′ in the transfer direction within the recognition area image is shorter than the four hole-pitch length required for posture determination, as shown in FIG. 13. If the image of the component 100′ is recognized in the recognition area 13 with a transfer-direction length of less than four pitches of the round holes 101 in this way, the required posture determination accuracy cannot be obtained, as described above.

To address this, in the present embodiment, the determination process that performs image recognition on the recognition area image first gives priority to recognizing the upstream end (in the transfer direction) of the component 100′ located on the downstream side in the transfer direction within the recognition area image. Then, if it is determined that the transfer-direction length of the component 100′ located on the downstream side within the recognition area image is shorter than the discrimination reference length L4 of four pitches of the round holes 101, a new recognition area image is captured from a re-recognition area 14 defined at a position shifted downstream in the transfer direction from the current recognition area 13.

For example, as shown in FIG. 14, the position of the upstream end (in the transfer direction) of the downstream component 100′ is detected within the recognition area image at the initial position. The re-recognition area 14 is then newly defined by shifting the recognition area 13 so that its upstream end lies near this upstream end position, and the partial image inside it is captured as the new recognition area image. If the length Ln of the recognition area 13 in the transfer direction is set to at least the discrimination reference length L4 used as the basis for posture determination (four or more pitches of the round holes 101), the component 100′ is always present in the re-recognition area 14 over a length equal to or greater than the discrimination reference length L4. This allows the posture to be determined with high accuracy.
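
The shift to the re-recognition area 14 could be sketched as follows, reusing the hypothetical constants of the earlier sketches (REGION_Y0/Y1 and the dark-part contrast) and assuming that the transfer direction runs from right to left in the image as in FIG. 2; the pixel values of L4 and the window coordinates are illustrative assumptions.

```python
import numpy as np

L4 = 60                          # assumed discrimination reference length (four hole pitches) in pixels
RECOG_X0, RECOG_X1 = 410, 490    # assumed initial recognition area 13, upstream (right) of the detection area

def locate_upstream_edge(recog_image: np.ndarray, dark_thresh: int = 80):
    """X coordinate, inside the recognition image, of the upstream (right-hand) end
    of the downstream component, taken as the right-most mostly-dark column."""
    dark_cols = np.mean(recog_image < dark_thresh, axis=0) > 0.5
    idx = np.flatnonzero(dark_cols)
    return int(idx[-1]) if idx.size else None

def recognition_window(frame: np.ndarray) -> np.ndarray:
    """Return the window used for posture determination, sliding to the
    re-recognition area 14 when the visible part length is below L4."""
    recog = frame[REGION_Y0:REGION_Y1, RECOG_X0:RECOG_X1]
    edge = locate_upstream_edge(recog)
    if edge is not None and edge < L4:
        # The part occupies too little of the initial window: shift the window
        # downstream so that its upstream edge sits near the part's upstream end.
        shift = (RECOG_X1 - RECOG_X0) - edge
        recog = frame[REGION_Y0:REGION_Y1, RECOG_X0 - shift:RECOG_X1 - shift]
    return recog
```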

The re-recognition area 14 may be defined, and its recognition area image captured, from the captured image 11 taken in the immediately preceding detection process, in which case the determination process can be performed correspondingly quickly. Also, as shown in FIG. 15, even when two components 100′ are being transferred close to each other in the transfer direction, the preceding component 100′ has always cleared the detection area 12 by the time the posture is determined from the recognition area image of the re-recognition area 14, so returning the recognition area 13 to its original position does not affect the detection and determination processes for the following component 100′. Furthermore, even when two components 100′ lined up in the transfer direction are in the contiguous transfer state as shown in FIGS. 16 and 17, an appropriate re-recognition area 14 can be defined for each component 100′, so the detection and determination processes for each component 100′ can be performed normally.

<Control flow>
The control procedure executed by the CPU (corresponding to the computing device; not shown) of the image processing device 7 to realize the functions described above will now be explained step by step with reference to FIG. 18. The processing shown in this flow starts when the parts feeder 1 begins operating.

First, in step S105, the CPU of the image processing device 7 acquires a captured image 11 taken by the camera 4.

Next, in step S110, the CPU of the image processing device 7 captures the detection area image from the captured image 11 acquired in step S105.

Next, in step S115, the CPU of the image processing device 7 performs image recognition to determine whether a component 100 is present in the detection area image captured in step S110.

Next, in step S120, the CPU of the image processing device 7 determines whether the image recognition in step S115 has recognized that a component 100 is present in the detection area image. If no component 100 is present in the detection area image, the condition is not satisfied, and the procedure returns to step S105 to repeat the same steps.

If, on the other hand, a component 100 is present in the detection area image, the condition is satisfied and the procedure moves to step S125.

In step S125, the CPU of the image processing device 7 captures the recognition area image from the captured image 11 acquired in step S105.

Next, in step S130, the CPU of the image processing device 7 performs image recognition to locate the upstream end of the component 100 within the recognition area image captured in step S125.

Next, in step S135, the CPU of the image processing device 7 calculates the length of the component within the recognition area image.

Next, in step S140, the CPU of the image processing device 7 determines whether the component length calculated in step S135 is equal to or greater than the discrimination reference length (four pitches of the round holes 101 in this example). If the component length within the recognition area image is less than the discrimination reference length, the condition is not satisfied and the procedure moves to step S145.

In step S145, the CPU of the image processing device 7 defines a re-recognition area 14 shifted downstream in the transfer direction from the current recognition area 13. The shift amount may be set, for example, so that the upstream end of the re-recognition area 14 lies near the position of the upstream end recognized in step S130 (see FIGS. 14 and 17 above).

Next, in step S150, the CPU of the image processing device 7 captures the partial image of the re-recognition area 14 set in step S145 as the recognition area image. The procedure then moves to step S155.

If, on the other hand, the determination in step S140 finds that the component length within the recognition area image is equal to or greater than the discrimination reference length, the condition is satisfied and the procedure moves to step S155.

In step S155, the CPU of the image processing device 7 determines the posture of the component 100 within the recognition area image by image recognition (see FIGS. 4 and 5 above).

Next, in step S160, the CPU of the image processing device 7 determines whether the posture of the component 100 was found to be normal in the posture determination of step S155. If the posture determination found that the posture of the component 100 is not normal (i.e., it is inverted), the condition is not satisfied and the procedure moves to step S165.

In step S165, the CPU of the image processing device 7 opens the electromagnetic valve 6 at an appropriate timing, causing compressed air to be ejected from the air ejection nozzle 5 and removing the component 100 whose posture was determined from the transfer path 2a (see FIG. 3 above). The procedure then moves to step S170.

If, on the other hand, the determination in step S160 finds that the posture of the component 100 is normal, the condition is satisfied and the procedure moves to step S170.

In step S170, the CPU of the image processing device 7 determines whether an operation to end the operation of the parts feeder 1 has been input by the operator via the operation interface 9. If no end operation has been input, the condition is not satisfied, and the procedure returns to step S105 to repeat the same steps. If an end operation has been input, the condition is satisfied and this flow ends.

In the above, the procedure of step S110 corresponds to the detection-image capturing means recited in the claims, the procedures of steps S125, S145, and S150 correspond to the recognition-image capturing means recited in the claims, the procedure of step S115 corresponds to the detection means recited in the claims, and the procedures of steps S130, S135, and S155 correspond to the determination means recited in the claims.
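
Tying the steps together, a hedged sketch of the whole flow of FIG. 18 (steps S105 to S170) using the hypothetical helpers from the earlier sketches might read as follows; `stop_requested` stands in for the operator's end operation.

```python
def parts_feeder_flow(camera, valve, stop_requested):
    """Rough sketch of the control procedure of FIG. 18 (steps S105 to S170)."""
    while not stop_requested():                               # S170: operator end operation
        frame = capture_frame(camera)                          # S105: acquire captured image 11
        det = grab_detection_image(frame)                      # S110: capture detection area image
        if not part_present(det):                              # S115, S120: no part at the detection position
            continue
        recog = frame[REGION_Y0:REGION_Y1, RECOG_X0:RECOG_X1]  # S125: capture recognition area image
        edge = locate_upstream_edge(recog)                     # S130: upstream end of the part
        part_len = edge if edge is not None else 0             # S135: part length in the window
        if part_len < L4:                                      # S140: shorter than the reference length?
            recog = recognition_window(frame)                  # S145, S150: re-recognition area 14
        if judge_posture(recog) != "normal":                   # S155, S160: posture determination
            valve.blow()                                       # S165: blow the part off the path
```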

<Effects of the present embodiment>
As described above, according to the parts feeder 1 of the present embodiment, the image processing device 7 does not recognize the entire captured image 11 in step S115 but instead detects the arrival of a component 100 at the detection position in a short processing time, based on the detection image captured from a part of it. The processing cycle T of the detection process, which repeats the imaging step of step S105, the detection-area-image capturing step of step S110, and the detection step of step S115, can therefore be made sufficiently short (see FIG. 11 above). As a result, the arrival of each component 100 at the detection position can be detected almost without fail, even when the components 100 are transferred asynchronously with this processing cycle T (out of phase with it) at indefinite intervals and their length in the transfer direction is relatively short. In this case, arrival of a component 100 at the detection position can be detected with a simple configuration that uses only the imaging means such as the camera 4, without separately providing another sensor such as a photosensor.

Because the recognition area 13 is adjacent to the detection area 12 on the upstream side in the transfer direction, at least part of a component 100 is also present in the recognition area image at the moment the detection process (with its short processing cycle T) detects the arrival of that component 100 at the detection position. Accordingly, the posture of each component 100 can be determined individually based on this recognition area image in the procedures of steps S130, S135, and S155. Moreover, by performing this determination process only when the detection process has detected the arrival of a component 100 at the detection position, the processing cycle T of the detection process can be kept sufficiently short, and the arrival of each component 100 at the detection position can be detected almost without fail.

As a result, according to the image processing device 7 of the present embodiment, the posture of each component 100 can be determined individually even when multiple components 100 with short transfer-direction lengths are transferred asynchronously and at indefinite intervals.

In the present embodiment in particular, the determination process captures the recognition area image from the same captured image 11 as that used in the immediately preceding detection process; this omits the time needed to capture a new image, shortening the processing cycle T, while still acquiring a recognition area image in which at least part of the component 100 is reliably present for posture determination.

Also in the present embodiment in particular, the determination process performs image recognition on the upstream end (in the transfer direction) of the component 100 located on the downstream side in the transfer direction within the recognition area image (step S130). As a result, even when two components 100 are transferred contiguously in the transfer direction, their boundary (the upstream end of the downstream component 100) can be recognized and the posture of the downstream component 100 alone can be determined individually.

In this embodiment in particular, when the determination process determines that the transfer-direction length of the component 100 located on the downstream side in the recognition area image is shorter than a predetermined length, it newly captures a recognition area image of a re-recognition area 14 partitioned at a position shifted downstream in the transfer direction from the recognition area 13. This allows a greater transfer-direction length of the component 100 to be present in the recognition area image, so that the posture determination of the component 100 in the determination process can be performed reliably.

In this embodiment in particular, the above predetermined length is a determination reference length that serves as the reference for determining the posture of the component 100 in the determination process, and the determination process partitions the re-recognition area 14 at a position where the transfer-direction length of the component 100 located on the downstream side in the recognition area image becomes equal to or greater than the determination reference length. This ensures that the component 100 occupies at least the determination reference length of the recognition area image in the transfer direction, so that its posture can be determined reliably.
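
A minimal sketch of the re-recognition step, under the same assumed leftward transfer direction: when the measured transfer-direction length of the downstream part is below the determination reference length, the recognition window is shifted downstream by the missing amount. The clamping against the detection-area edge is an added assumption, not something stated in the text.

```python
def rerecognition_window(part_length_px: int, ref_length_px: int,
                         rec_x0: int, rec_x1: int, det_x0: int):
    """Return the (x0, x1) span of the recognition window to use next.

    If the downstream part already spans at least the reference length, the
    original recognition area is kept; otherwise the window is shifted toward
    the detection position (smaller x) so that at least ref_length_px of the
    part falls inside the re-recognition area 14.
    """
    if part_length_px >= ref_length_px:
        return rec_x0, rec_x1                      # no re-recognition needed
    shift = ref_length_px - part_length_px
    shift = min(shift, rec_x0 - det_x0)            # assumed clamp at the detection area
    return rec_x0 - shift, rec_x1 - shift
```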

In this embodiment in particular, the determination process recognizes the round holes 101 of the component 100 as shape feature portions and determines the transfer-direction posture, the vertical posture, and the front/back posture of the component 100 based on the presence and positions of these shape feature portions, and the determination reference length is the length of the region of the component 100 that can contain a predetermined number of round holes 101 (four in this example). This makes the posture determination of the component 100 in the determination process even more reliable.
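
To make the hole-based determination concrete, here is a rough sketch (not the patented algorithm, and covering only the vertical aspect as one example): round holes are detected with an OpenCV Hough transform and the posture is classified from their mean height. All parameters and the normal/inverted decision rule are illustrative assumptions.

```python
import cv2
import numpy as np

def find_round_holes(rec_roi_gray: np.ndarray):
    """Detect round holes (the shape feature portions) in an 8-bit grayscale ROI.
    The Hough parameters are placeholders that would need tuning to the real part."""
    circles = cv2.HoughCircles(rec_roi_gray, cv2.HOUGH_GRADIENT,
                               dp=1.0, minDist=10,
                               param1=100, param2=20, minRadius=3, maxRadius=10)
    if circles is None:
        return []
    return [(int(x), int(y)) for x, y, _r in circles[0]]

def vertical_posture_from_holes(holes, roi_height: int, expected_count: int = 4) -> str:
    """Crude stand-in for the centroid-versus-hole-height comparison: require the
    expected number of holes, then compare their mean height with the ROI middle."""
    if len(holes) < expected_count:
        return "undetermined"          # e.g. fall back to the re-recognition area
    mean_y = sum(y for _x, y in holes) / len(holes)
    return "normal" if mean_y < roi_height / 2 else "inverted"
```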

<Application examples to other part shapes>
The parts feeder 1 of the above embodiment can also be applied to parts of various other shapes.

<1: When there is a posture distinction in both the vertical and front/back directions>
For example, the parts feeder 1 of the above embodiment can also be applied to a component 200 shaped as shown in FIG. 19 (a front view) and FIG. 20 (a cross-sectional view taken along arrows XX-XX). This component 200 is the component 100 used in the above embodiment (see FIGS. 4 and 5) with an embossed notch 202 additionally formed only on the upper edge of one of the two faces with respect to the camera imaging direction. A component 200 of this shape can take four postures: the posture shown in FIG. 19, which is normal in both the front/back and vertical directions; the posture shown on the left in FIG. 21, which is reversed front-to-back (the notch 202 is exposed toward the viewer) but vertically normal; the posture shown in the center of FIG. 21, which is normal front-to-back but vertically inverted; and the posture shown on the right in FIG. 21, which is reversed both front-to-back and vertically.

For a component 200 of this shape, it suffices to measure, by image recognition in the determination process, the separation distance between each round hole 201 and the edge closest to it. If the separation distances are all uniformly the same, as shown in FIG. 22, the component 200 can be determined to be normal in the front/back direction. If, as shown in FIG. 23, only the round hole 201 near the notch 202 is measured with a shorter separation distance (that is, if the separation distances differ or vary), the component 200 can be determined to be reversed in the front/back direction. As for the vertical posture, the separation distances from the round holes 201 to the upper edge and to the lower edge may be measured and compared, or, as in the above embodiment, the height of the center of gravity of the entire component 200 (not shown) may be compared with the heights of the round holes 201.
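
A sketch of the FIG. 22 / FIG. 23 measurement under stated assumptions (a binary ROI with True where the part is present, hole centres from a detector such as the one sketched earlier, and an arbitrary tolerance): uniform hole-to-upper-edge distances are read as a normal front/back posture, a deviating distance as a reversed one.

```python
import numpy as np

def hole_to_upper_edge_distances(roi_binary: np.ndarray, holes):
    """For each hole centre (x, y), distance to the uppermost part pixel in that column."""
    distances = []
    for x, y in holes:
        part_rows = np.flatnonzero(roi_binary[:, x])
        if part_rows.size:
            distances.append(int(y) - int(part_rows[0]))
    return distances

def front_back_posture(distances, tol_px: int = 2) -> str:
    """Uniform distances -> normal; one shorter (near the notch 202) -> reversed."""
    if not distances:
        return "undetermined"
    return "normal" if max(distances) - min(distances) <= tol_px else "reversed"
```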

<2: When there is a posture distinction in the transfer direction>
As shown in FIG. 24, the parts feeder 1 of the above embodiment can also be applied to a component 300 whose shape has a posture distinction in the transfer direction. This component 300 is formed by joining a large-diameter long portion 301 and a small-diameter short portion 302 into a single body, arranged side by side in the transfer direction with their central axes aligned. A component 300 of this shape can take two postures: the normal transfer-direction posture with the small-diameter short portion 302 positioned on the downstream side, as shown on the left in FIG. 24, and the reversed transfer-direction posture with the small-diameter short portion 302 positioned on the upstream side, as shown on the right in FIG. 24.

For a component 300 of this shape, the image recognition in the determination process first detects a position that matches a shape feature portion. In the example shown in FIG. 25, the upper corner of the downstream end of the large-diameter long portion 301 is used as the shape feature portion, and the corresponding comparison pattern 303 is a pattern in which part of the component 300 is present only in the lower upstream part of a roughly rectangular small region. The transfer-direction position in the recognition area image that matches this comparison pattern 303 corresponds to the position of the downstream end of the large-diameter long portion 301. The height of the range over which the comparison pattern 303 is scanned and compared is limited so that the downstream end of the small-diameter short portion 302 is not detected. The lengths over which the component 300 is present in the Y direction are then detected at two locations, one on each side of the detected downstream end of the large-diameter long portion 301 in the transfer direction, and compared, which makes it possible to determine the transfer-direction posture of the component 300. In particular, the transfer-direction posture can be determined from whether the small-diameter short portion 302 is present downstream, in the transfer direction, of the downstream end of the large-diameter long portion 301.
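
The fragment below sketches this procedure under the same assumptions as before (transfer toward smaller x, dark parts on a light background, OpenCV template matching with an arbitrary score threshold): the corner pattern 303 is matched only inside a limited height band, and the part's vertical extent just downstream and just upstream of the matched position is compared.

```python
import cv2
import numpy as np

def transfer_direction_posture_300(rec_roi_gray: np.ndarray, pattern_303: np.ndarray,
                                   band_y0: int, band_y1: int,
                                   match_thresh: float = 0.8) -> str:
    """Rough sketch of the FIG. 25 check; thresholds and window widths are assumptions."""
    band = rec_roi_gray[band_y0:band_y1, :]      # limit height so portion 302's end is ignored
    result = cv2.matchTemplate(band, pattern_303, cv2.TM_CCOEFF_NORMED)
    _min_v, max_v, _min_loc, max_loc = cv2.minMaxLoc(result)
    if max_v < match_thresh:
        return "feature not found"               # e.g. try the re-recognition area
    end_x = max_loc[0]                           # downstream end of portion 301

    part = rec_roi_gray < 128                    # assumed binarisation
    y_len_downstream = part[:, max(end_x - 10, 0):end_x].sum(axis=0).max(initial=0)
    y_len_upstream = part[:, end_x:end_x + 10].sum(axis=0).max(initial=0)
    # Normal posture: the thin portion 302 is present downstream of the detected end.
    return "normal" if 0 < y_len_downstream < y_len_upstream else "reversed"
```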

When the transfer-direction posture of the component 300 is normal, the position at which the comparison pattern 303 of the shape feature portion matches is limited to the downstream side of the recognition area 13, as shown in FIG. 25. Therefore, by limiting the transfer-direction range over which the comparison pattern 303 is scanned and compared to the downstream side of the recognition area 13, the re-recognition area 14 can be set correctly and the posture determined even after the shape feature portion of the component 300 has passed through the recognition area 13, as shown in FIG. 26. In this case, even if two components 300 are transferred in contact with each other as illustrated, the posture of only the downstream component 300 can be determined individually without detecting the shape feature portion of the upstream component 300. Moreover, as shown in FIG. 27, even when two components 300 transferred in contact are both in the transfer-direction-reversed posture, the transfer-direction posture of each component 300 can be determined reliably, in order from the downstream side.

<3: When there is a posture distinction in the vertical direction>
As shown in FIG. 28, the parts feeder 1 of the above embodiment can also be applied to a component 400 whose shape has a posture distinction in the vertical direction. This component 400 is formed by joining a large-diameter long portion 401, a small-diameter portion 403, and a large-diameter short portion 402 into a single body, with the small-diameter portion 403 sandwiched between the other two and all three stacked vertically with their central axes aligned. A component 400 of this shape can take two postures: the normal vertical posture with the large-diameter long portion 401 positioned on the lower side, as shown on the left in FIG. 28, and the vertically inverted posture with the large-diameter short portion 402 positioned on the lower side, as shown on the right in FIG. 28.

For a component 400 of this shape as well, the image recognition in the determination process detects a location that matches a shape feature portion. In the example shown in FIG. 29, the shape feature portion is the part that includes the entire vertical extent of the large-diameter short portion 402 and the upper corner of the downstream end of the small-diameter portion 403. The corresponding comparison pattern 404 is a pattern in which part of the component 400 (part of the large-diameter short portion 402) is present across the full transfer-direction width of a roughly rectangular small region at its central height, and part of the component 400 (part of the small-diameter portion 403) is also present in the lower upstream part. This comparison pattern 404 cannot be detected unless the entire component 400 is in the normal vertical posture. Therefore, if a shape feature portion matching the comparison pattern 404 is detected, the component 400 can be determined to be in the normal vertical posture; if it is not detected, the component 400 can be determined to be in the vertically inverted posture.
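
A minimal sketch of this presence/absence check, assuming an OpenCV normalized template match and an arbitrary score threshold:

```python
import cv2

def vertical_posture_400(rec_roi_gray, pattern_404, match_thresh: float = 0.8) -> str:
    """Pattern 404 can only match when the whole part 400 is upright, so a confident
    match is read as 'normal' and the absence of one as 'inverted' (sketch only)."""
    result = cv2.matchTemplate(rec_roi_gray, pattern_404, cv2.TM_CCOEFF_NORMED)
    _min_v, max_v, _min_loc, _max_loc = cv2.minMaxLoc(result)
    return "normal" if max_v >= match_thresh else "inverted"
```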

When the vertical posture of the component 400 is normal, the position at which the comparison pattern 404 of the shape feature portion matches is limited to the downstream side of the recognition area 13, as shown in FIG. 29. Therefore, by limiting the transfer-direction range over which the comparison pattern 404 is scanned and compared to the downstream side of the recognition area 13, the re-recognition area 14 can be set correctly and the posture determined even after the shape feature portion of the component 400 has passed through the recognition area 13, as shown in FIG. 30. In this case, even if two components 400 are transferred in contact with each other as illustrated, the posture of only the downstream component 400 can be determined individually without detecting the shape feature portion of the upstream component 400. Moreover, as shown in FIG. 31, even when either one of two components 400 transferred in contact is in the vertically inverted posture, the vertical posture of each component 400 can be determined reliably, in order from the downstream side.

<User interface and setting items>
The user interface elements and setting items that should be provided in order to realize the functions of the above embodiment are listed below.

First, there is a setting item for the component transfer direction. This item sets the direction in which the components 100 to 400 are transferred within the camera's field of view; it determines the arrangement of the detection area 12 and the recognition area 13 in the captured image 11 and the direction in which the re-recognition area 14 is shifted.

Second, there is a setting item for the re-recognition area shift amount. This item specifies how far downstream in the transfer direction from the initial recognition area 13 the re-recognition area 14 is partitioned when the arrival of a component 100 to 400 at the detection position has been detected but its posture cannot be determined from the recognition area image. An appropriate value is set according to the shape of the component 100 to 400 to be discriminated, its transfer-direction length, the arrangement of its shape feature portions, and so on. The value is basically specified in pixels or in actual dimensions, but it may also be specified in a component-specific unit (for example, the number of shape feature portions such as round holes).

Third, there is a dimension measurement menu in the user interface. This operation menu is used to obtain the dimensions of the components 100 to 400 in each direction in pixels, and is operated when the re-recognition area shift amount is specified in pixels or in a component-specific unit.

Fourth, there is a pixel size calibration menu in the user interface. This operation menu is used to obtain the relationship between pixel size and actual dimensions, and is operated when the re-recognition area shift amount is specified in actual dimensions.

Fifth, there is a window frame drawing function in the user interface. This operation menu draws window frame lines around the partitioned ranges corresponding to the detection area 12 and the recognition area 13 when the captured image 11 from the camera 4 is displayed on the image display device 8. The frame lines are drawn in each area so that the operator can visually confirm that the detection process and the determination process are being executed at appropriate timings while the parts feeder 1 is operating.
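
As an illustration of this drawing function only (the rectangle coordinates and colours are arbitrary, and OpenCV is merely one possible way to render the overlay):

```python
import cv2

def draw_region_frames(display_image, det_rect, rec_rect):
    """Draw the detection-area and recognition-area window frames on the image shown
    to the operator. Rectangles are given as (x0, y0, x1, y1)."""
    for (x0, y0, x1, y1), colour in ((det_rect, (0, 0, 255)),    # detection area 12
                                     (rec_rect, (0, 255, 0))):   # recognition area 13
        cv2.rectangle(display_image, (x0, y0), (x1, y1), color=colour, thickness=2)
    return display_image
```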

By operating the user interface described above and entering the setting items via the operation interface 9 while checking on the image display device 8 how the components 100 to 400 are transferred and how the detection and determination processes proceed, the operator can have appropriate posture determination performed even for components 100 to 400 of various shapes.
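
To tie the setting items above together, here is a hypothetical configuration sketch (field names, enums, and defaults are all illustrative, not taken from the actual product) showing how the transfer direction and the re-recognition shift amount might be stored, and how the shift could be converted to pixels using the dimension-measurement and pixel-size-calibration results:

```python
from dataclasses import dataclass
from enum import Enum

class TransferDirection(Enum):
    LEFTWARD = "leftward"     # parts move toward smaller x in the camera image
    RIGHTWARD = "rightward"   # parts move toward larger x

class ShiftUnit(Enum):
    PIXELS = "pixels"
    MILLIMETRES = "mm"
    FEATURES = "features"     # component-specific unit, e.g. number of round holes

@dataclass
class FeederVisionSettings:
    transfer_direction: TransferDirection = TransferDirection.LEFTWARD
    rerecognition_shift: float = 40.0
    rerecognition_shift_unit: ShiftUnit = ShiftUnit.PIXELS

def shift_in_pixels(settings: FeederVisionSettings,
                    mm_per_pixel: float, feature_pitch_px: float) -> int:
    """Convert the configured shift to pixels using either the pixel-size calibration
    (mm_per_pixel) or a feature pitch measured via the dimension-measurement menu."""
    if settings.rerecognition_shift_unit is ShiftUnit.PIXELS:
        return round(settings.rerecognition_shift)
    if settings.rerecognition_shift_unit is ShiftUnit.MILLIMETRES:
        return round(settings.rerecognition_shift / mm_per_pixel)
    return round(settings.rerecognition_shift * feature_pitch_px)
```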

In the above embodiment, the posture determination technique based on image recognition has been described for the parts feeder 1 using the vibration feeder 2, in which the transfer speed and transfer interval of the components 100 to 400 are uncertain; however, the technique is not limited to this. The same effect can also be obtained by applying the posture determination technique based on image recognition described above to a component supply device in which the transfer speed or transfer interval is stable, for example one using a belt conveyor.

Note that "equal" in the above description is not meant in a strict sense; that is, design and manufacturing tolerances and errors are permitted, and "equal" means "substantially equal."

Likewise, "perpendicular" and "orthogonal" in the above description are not meant in a strict sense; design and manufacturing tolerances and errors are permitted, and they mean "substantially perpendicular" and "substantially orthogonal."

Similarly, "parallel" in the above description is not meant in a strict sense; design and manufacturing tolerances and errors are permitted, and it means "substantially parallel."

In addition to what has already been described above, the techniques of the above embodiment and the modifications may be used in appropriate combination.

Although not illustrated individually, the above embodiment and modifications may be implemented with various changes without departing from their gist.

1 Parts feeder (component supply device)
2 Vibration feeder (transfer means)
2a Transfer path
2b Reservoir
2c Discharge port
3 Illumination
4 Camera (imaging means)
5 Air ejection nozzle (exclusion means)
6 Solenoid valve
7 Image processing device
8 Image display device
9 Operation interface
11 Captured image
12 Detection area
13 Recognition area
14 Re-recognition area
100 Component (transferred object)
100′ Component (transferred object)
101 Round hole (shape feature portion)
102 Center-of-gravity position
200 Component (transferred object)
201 Round hole (shape feature portion)
202 Notch (shape feature portion)
300 Component (transferred object)
301 Large-diameter long portion
302 Small-diameter short portion
303 Comparison pattern corresponding to shape feature portion
400 Component (transferred object)
401 Large-diameter long portion
402 Large-diameter short portion
403 Small-diameter portion
404 Comparison pattern corresponding to shape feature portion

Claims (9)

1. An image processing device that determines the posture of a transferred object by performing image recognition on a captured image of the transferred object being transferred in a predetermined transfer direction, the image processing device comprising:
a detection image capturing means for capturing, from the captured image, a detection area image of a detection area partitioned at a predetermined position on a transfer path of the transferred object;
a recognition image capturing means for capturing, from the captured image, a recognition area image of a recognition area partitioned at a position adjacent to the detection area on the upstream side in the transfer direction;
a detection means for detecting, based on the detection area image captured by the detection image capturing means, whether the transferred object is present at the predetermined position; and
a determination means for determining, when the detection means detects the presence of the transferred object at the predetermined position, the posture of the transferred object based on the recognition area image captured by the recognition image capturing means.

2. The image processing device according to claim 1, wherein the recognition image capturing means captures the recognition area image from, among captured images captured at a predetermined cycle, the captured image in which the detection means has detected the presence of the transferred object at the predetermined position.

3. The image processing device according to claim 1 or 2, wherein the determination means performs image recognition on the upstream end, in the transfer direction, of the transferred object located on the downstream side in the transfer direction in the recognition area image.

4. The image processing device according to claim 3, wherein, when the determination means determines that the transfer-direction length of the transferred object located on the downstream side in the transfer direction in the recognition area image is shorter than a predetermined length, the recognition image capturing means newly captures a recognition area image of a re-recognition area partitioned at a position shifted from the recognition area toward the downstream side in the transfer direction.

5. The image processing device according to claim 4, wherein the predetermined length is a determination reference length that serves as a reference for the determination means to determine the posture of the transferred object, and the recognition image capturing means partitions the re-recognition area at a position where the transfer-direction length of the transferred object located on the downstream side in the transfer direction in the recognition area image is equal to or greater than the determination reference length.

6. The image processing device according to claim 5, wherein the determination means performs image recognition on a shape feature portion of the transferred object and determines the transfer-direction posture, the vertical posture, and the front/back posture of the transferred object based on the presence and position of the shape feature portion, and the determination reference length is the length of a region of the transferred object that can contain a predetermined number of the shape feature portions.

7. An image processing method for determining the posture of a transferred object by performing image recognition on a captured image of the transferred object being transferred in a predetermined transfer direction, the method comprising:
capturing, from the captured image, a detection area image of a detection area partitioned at a predetermined position on a transfer path of the transferred object;
capturing, from the captured image, a recognition area image of a recognition area partitioned at a position adjacent to the detection area on the upstream side in the transfer direction;
detecting, based on the captured detection area image, whether the transferred object is present at the predetermined position; and
determining, when the presence of the transferred object at the predetermined position is detected, the posture of the transferred object based on the captured recognition area image.

8. An image processing program to be executed by an arithmetic device of an image processing device that determines the posture of a transferred object by performing image recognition on a captured image of the transferred object being transferred in a predetermined transfer direction, the program causing the arithmetic device to execute:
capturing, from the captured image, a detection area image of a detection area partitioned at a predetermined position on a transfer path of the transferred object;
capturing, from the captured image, a recognition area image of a recognition area partitioned at a position adjacent to the detection area on the upstream side in the transfer direction;
detecting, based on the captured detection area image, whether the transferred object is present at the predetermined position; and
determining, when the presence of the transferred object at the predetermined position is detected, the posture of the transferred object based on the captured recognition area image.

9. A component supply device comprising:
a transfer means for transferring a plurality of components stored therein in a line along a transfer path;
an imaging means for imaging the transfer path;
an image processing device that determines the posture of a component by performing image recognition on a captured image of the component captured by the imaging means; and
an exclusion means for removing from the transfer path a component determined by the image processing device not to be in a predetermined posture,
wherein the image processing device comprises:
a detection image capturing means for capturing, from the captured image, a detection area image of a detection area partitioned at a predetermined position on the transfer path;
a recognition image capturing means for capturing, from the captured image, a recognition area image of a recognition area partitioned at a position adjacent to the detection area on the upstream side in the transfer direction of the components;
a detection means for detecting, based on the detection area image captured by the detection image capturing means, whether a component is present at the predetermined position; and
a determination means for determining, when the detection means detects the presence of the component at the predetermined position, the posture of the component based on the recognition area image captured by the recognition image capturing means.
JP2014252498A 2014-12-12 2014-12-12 Image processing apparatus, image processing method, and component supply apparatus Active JP6288517B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2014252498A JP6288517B2 (en) 2014-12-12 2014-12-12 Image processing apparatus, image processing method, and component supply apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2014252498A JP6288517B2 (en) 2014-12-12 2014-12-12 Image processing apparatus, image processing method, and component supply apparatus

Publications (2)

Publication Number Publication Date
JP2016114433A true JP2016114433A (en) 2016-06-23
JP6288517B2 JP6288517B2 (en) 2018-03-07

Family

ID=56141395

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2014252498A Active JP6288517B2 (en) 2014-12-12 2014-12-12 Image processing apparatus, image processing method, and component supply apparatus

Country Status (1)

Country Link
JP (1) JP6288517B2 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019169010A (en) * 2018-03-24 2019-10-03 株式会社ダイシン Conveyed object determination device and conveyance system using the same
WO2023026452A1 (en) * 2021-08-27 2023-03-02 ファナック株式会社 Three-dimensional data acquisition device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005233730A (en) * 2004-02-18 2005-09-02 Teitsu Engineering Co Ltd Component inspection device
US20060182610A1 (en) * 2004-10-25 2006-08-17 Sala Jaime M Article positioning machine

Also Published As

Publication number Publication date
JP6288517B2 (en) 2018-03-07

Similar Documents

Publication Publication Date Title
US9665946B2 (en) Article conveyor system
US20120296469A1 (en) Sucking-conveying device having vision sensor and suction unit
JP6462000B2 (en) Component mounter
JP6545936B2 (en) Parts transfer system and attitude adjustment device
JP2011183537A (en) Robot system, robot device and work taking-out method
CN104338684B (en) Feed appliance speed detector and feed appliance
JP6288517B2 (en) Image processing apparatus, image processing method, and component supply apparatus
JP2016196077A (en) Information processor, information processing method, and program
US20190031452A1 (en) Article transport system and transport system controller
US20190126473A1 (en) Information processing apparatus and robot arm control system
CN110228693B (en) Feeding device
TWI631063B (en) Image processing device for feeder and feeder
WO2019123527A1 (en) Pick-and-place device, detection device, and detection method
EP3328180B1 (en) Component-mounting machine
JP7283881B2 (en) work system
JP2018015430A (en) Tablet printing device, tablet, and tablet manufacturing method
KR101993262B1 (en) Apparatus for micro ball mounting work
JP2017036104A (en) Part feeder
JP2007311472A (en) Image acquisition method for component-recognition-data preparation, and component mounting machine
KR101479573B1 (en) Bowl feeder for parts sorting
TWI664129B (en) Parts supply system and method
KR20150015377A (en) Parts feeder and image processing apparatus therefor
CN108029235A (en) Control device
JP6707403B2 (en) Mounting related processing equipment
JP6183280B2 (en) Positioning device

Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20161228

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20171020

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20171101

A521 Request for written amendment filed

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20171206

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20180111

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20180124

R150 Certificate of patent or registration of utility model

Ref document number: 6288517

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150