WO2019235210A1 - Two-dimensional position/attitude deduction apparatus and two-dimensional position/attitude deduction method - Google Patents
Two-dimensional position/attitude deduction apparatus and two-dimensional position/attitude deduction method
- Publication number
- WO2019235210A1 (PCT/JP2019/020071, JP2019020071W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- image
- unit
- orientation estimation
- dimensional position
- Prior art date
Links
Images
Classifications
-
- B—PERFORMING OPERATIONS; TRANSPORTING
- B25—HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
- B25J—MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
- B25J13/00—Controls for manipulators
- B25J13/08—Controls for manipulators by means of sensing devices, e.g. viewing or touching devices
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
Definitions
- the present invention relates to a two-dimensional position / orientation estimation apparatus and a two-dimensional position / orientation estimation method for estimating the position and orientation of an object on an image.
- a two-dimensional position / orientation estimation apparatus that estimates a position / orientation of a target object from an image using a template matching technique (see, for example, Patent Document 1).
- the position and orientation of the object include the position (x, y) and orientation ( ⁇ ) of the object.
- an object model (template) is registered in advance, and the degree of coincidence of image features between the image and the template is evaluated to estimate the position and orientation of the object on the image.
- the present invention has been made to solve the above-described problems, and an object thereof is to provide a two-dimensional position/orientation estimation apparatus that can detect the position/orientation of an object on an image without using a template.
- the two-dimensional position and orientation estimation apparatus includes: an image acquisition unit that acquires an image; an area extraction unit that extracts an object region, which is a region indicating a target object, from the image acquired by the image acquisition unit; and a position/orientation estimation unit that estimates, from an area based on the object region extracted by the area extraction unit, the position of the object based on a reproducible unique point and the posture of the object based on a reproducible unique direction.
- the position and orientation of an object on an image can be detected without using a template.
- FIG. 1 is a diagram showing a configuration example of a robot picking system provided with a two-dimensional position / orientation estimation apparatus 3 according to Embodiment 1 of the present invention.
- the two-dimensional position / orientation estimation apparatus 3 is applicable to, for example, a positioning guide for object picking operation or robot teaching work in FA (factory automation) or the like.
- the two-dimensional position/orientation estimation device 3 is applied to a robot picking system, and the robot picking system picks the object 11 on the basis of the position/orientation of the target object 11 estimated by the two-dimensional position/orientation estimation device 3.
- the robot picking system is, for example, arranged on a production line of a factory and picks the object 11 arranged on the work surface 12.
- the number of objects 11 arranged on the work surface 12 is not limited to one and may be plural; when there are a plurality of objects 11, each object 11 is arranged on the work surface 12 without overlapping the others.
- this robot picking system includes a robot 1, a camera (imaging device) 2, a two-dimensional position / orientation estimation device 3, and a robot controller 4.
- the robot 1 has an end effector 101 (a hand in FIG. 1) at the tip of an arm, and picks the object 11 using the end effector 101.
- the camera 2 is attached to the end effector 101 and images an imaging area to obtain an image.
- Data (image data) indicating an image obtained by the camera 2 is output to the two-dimensional position / orientation estimation device 3.
- the image obtained by the camera 2 may be of any format as long as, when the object 11 exists in the imaging region, the region indicating the object 11 (object region) can be distinguished from the other region (background region).
- the camera 2 may obtain a two-dimensional image, for example.
- the two-dimensional image is generally a color or monochrome visible image, but is not limited to this; when the above-described discrimination is not easy with a visible image, a special two-dimensional image such as a near-infrared image or a multispectral image may be used.
- the camera 2 may obtain a distance image when the object 11 has a certain depth.
- the two-dimensional position / orientation estimation apparatus 3 estimates the position / orientation of the object 11 on the image based on the image obtained by the camera 2.
- the position / orientation of the object 11 means the position (x, y) and orientation ( ⁇ ) of the object 11.
- Data (estimation result data) indicating an estimation result by the two-dimensional position and orientation estimation device 3 is output to the robot controller 4.
- a configuration example of the two-dimensional position / orientation estimation apparatus 3 will be described later.
- the robot controller 4 calculates the movement amount of the arm of the robot 1 based on the estimation result by the two-dimensional position / orientation estimation device 3 and controls the movement of the robot 1 so as to pick the object 11.
- the two-dimensional position / orientation estimation apparatus 3 includes an image acquisition unit 301, a region extraction unit 302, a position estimation unit 303, an orientation estimation unit 304, and an output unit 305.
- the two-dimensional position / orientation estimation apparatus 3 is realized by a processing circuit such as a system LSI (Large Scale Integration), or a CPU (Central Processing Unit) that executes a program stored in a memory or the like.
- the image acquisition unit 301 acquires an image obtained by the camera 2.
- the region extraction unit 302 extracts an object region from the image acquired by the image acquisition unit 301 by image processing. That is, the area extraction unit 302 extracts an object area from the image based on the degree of contrast change.
- as a method for extracting the object region by the region extraction unit 302, for example, threshold processing or the background subtraction method is conceivable; however, there is no particular limitation as long as the method can extract the object region by distinguishing it from the background region.
- the position estimation unit 303 estimates the position of the object 11 based on a reproducible unique point from the region based on the object region extracted by the region extraction unit 302.
- the region based on the object region extracted by the region extraction unit 302 includes one or more of: the object region; the convex hull of the object region; a region approximating the object region with a geometric shape; the minimum region containing the object region with a geometric shape; or the minimum region circumscribing the object region with a geometric shape. Examples of the geometric shape include a rectangle and an ellipse.
- a reproducible unique point is a point that is uniquely determined without depending on the posture of the object 11.
- the position estimation unit 303 uses, for example, the centroid of the region based on the object region extracted by the region extraction unit 302 as a reproducible unique point, and estimates the position of the object 11 based on the relative relationship between the centroid and an arbitrarily set reference point on the image.
- An example of the reference point on the image is the center point of the image.
- the posture estimation unit 304 estimates the posture of the object 11 based on a reproducible unique direction from the region based on the object region extracted by the region extraction unit 302.
- the posture estimation unit 304 uses, for example, the major-axis direction of the region based on the object region extracted by the region extraction unit 302 as a reproducible unique direction, and estimates the posture of the object 11 based on the relative relationship between the major-axis direction and an arbitrarily set reference direction.
- Examples of the reference direction include the horizontal direction of the image.
- the output unit 305 outputs data indicating the estimation result by the position estimation unit 303 and the estimation result by the posture estimation unit 304 to the robot controller 4.
- the position estimation unit 303 and the posture estimation unit 304 constitute a "position/orientation estimation unit that estimates, from the region based on the object region extracted by the region extraction unit 302, the position of the object 11 based on a reproducible unique point and the posture of the object 11 based on a reproducible unique direction".
- the posture estimation unit 304 can express the posture of the object 11 without using a reference template by deriving a unique direction with reproducibility from the region based on the object region.
- the posture estimation unit 304 represents the posture of the object 11 by using the long axis direction.
- the position estimation unit 303 derives a reproducible unique point from the area based on the object area, so that the position of the object 11 can be expressed without using a reference template; since the centroid of the region based on the object region is a uniquely determined feature point, the position estimation unit 303 represents the position of the object 11 using the centroid in the following.
- the image acquisition unit 301 acquires an image obtained by the camera 2 (step ST1).
- the region extraction unit 302 extracts an object region from the image acquired by the image acquisition unit 301 by image processing (step ST2).
- the region extraction unit 302 extracts an object region from an image by threshold processing.
- the image acquisition unit 301 acquires an image as shown in FIG. 4. This image includes the object 11, the work surface 12, and other background.
- in the two-dimensional position/orientation estimation apparatus 3 according to Embodiment 1, since no template is prepared, it is not easy to directly extract an object region from the image. Therefore, in the two-dimensional position/orientation estimation apparatus 3 according to Embodiment 1, first, as shown in FIG. 5A, a region 501 indicating the work surface 12 (gray region shown in FIG. 5A) is extracted from the image by threshold processing.
- the region extraction unit 302 can extract the object region (gray region shown in FIG. 5C) 503 by taking the difference between the region 501 indicating the work surface 12 and a region (gray region shown in FIG. 5B) 502 obtained by filling the holes of that region by a closing process or the like.
- the position estimating unit 303 estimates the position of the object 11 based on a reproducible unique point from the region based on the object region extracted by the region extracting unit 302 (step ST3).
- the position estimation unit 303 uses the centroid of the region as a reproducible unique point, and estimates the position of the object 11 based on the relative relationship between the centroid and an arbitrarily set reference point on the image.
- the position estimation unit 303 first derives the centroid of the region based on the object region extracted by the region extraction unit 302 as a reproducible unique point.
- when the coordinates of the points constituting the region are represented by (u_i, v_i) (i = 1, ..., N), the position estimation unit 303 derives the centroid of the region from equations (1) to (4) using the 0th-order moment and the 1st-order moment of the region, where N is the number of points constituting the region, m00 represents the 0th-order moment of the region, (m10, m01) represents the 1st-order moment of the region, and (ū, v̄) represents the centroid of the region.
- the position estimation unit 303 estimates the position of the object 11 from equation (5) based on the derived centroid and the reference point, where (u0, v0) represents the reference point and (x, y) represents the position of the object 11.
- the posture estimation unit 304 estimates the posture of the object 11 based on a reproducible unique direction from the region based on the object region extracted by the region extraction unit 302 (step ST4).
- the posture estimation unit 304 uses the major-axis direction of the region as a reproducible unique direction, and estimates the posture of the object 11 based on the relative relationship between the major-axis direction and an arbitrarily set reference direction.
- when the reference direction is the horizontal direction of the image, the posture estimation unit 304 estimates the posture of the object 11 by deriving the rotation angle of the major-axis direction from the horizontal direction using equations (6) to (9), based on the 2nd-order moments of the region and the centroid of the region, where (m20, m02, m11) represents the 2nd-order moments of the region and θ represents the rotation angle.
- the region extraction unit 302 has extracted an object region 503 as shown in FIG. 5C. Further, it is assumed that the region used by the position estimation unit 303 and the posture estimation unit 304 is a minimum rectangular region 601 including the object region 503 as illustrated in FIG. 6A. In this case, the centroid 602 derived by the position estimation unit 303 and the long axis direction 603 derived by the posture estimation unit 304 are as shown in FIG. 6B.
- the position estimation unit 303 may estimate the position of the object 11 for each of a plurality of regions, such as the object region 503 shown in FIG. 5C and the rectangular region 601 shown in FIG. 6A, and then estimate the position of the object 11 based on those estimation results; this allows the position estimation unit 303 to improve the accuracy of position estimation. The same applies to the posture estimation unit 304.
- the position estimation unit 303 derives the centroid of the region.
- the position estimation unit 303 may estimate the position of the object 11 using, for example, feature points of the outline of the region (for example, a right-angle portion 504 of the object region shown in FIG. 5C).
- the posture estimation unit 304 has derived the major axis direction of the region.
- the present invention is not limited to this, and the posture estimation unit 304 may estimate the posture of the object 11 using, for example, a line segment that connects the centroid of the region and the feature points of the outline of the region.
- the output unit 305 outputs data indicating the estimation result by the position estimation unit 303 and the estimation result by the posture estimation unit 304 to the robot controller 4 (step ST5).
- the robot controller 4 controls the operation of the robot 1 to pick the object 11 based on the estimation result by the two-dimensional position / orientation estimation apparatus 3.
- the robot picking system can pick the object 11 even when the position and orientation of the object 11 are indefinite.
- the two-dimensional position/orientation estimation apparatus 3 includes: the image acquisition unit 301 that acquires an image; the region extraction unit 302 that extracts, from the image acquired by the image acquisition unit 301, an object region that is a region indicating the target object 11; and a position/orientation estimation unit that estimates, from a region based on the extracted object region, the position of the object 11 based on a reproducible unique point and the posture of the object 11 based on a reproducible unique direction.
- any component of the embodiment can be modified or any component of the embodiment can be omitted within the scope of the invention.
- the two-dimensional position/orientation estimation apparatus and the two-dimensional position/orientation estimation method according to the present invention acquire an image, extract from the acquired image an object region that is a region indicating a target object, and estimate, from a region based on the extracted object region, the position of the object based on a reproducible unique point and the posture of the object based on a reproducible unique direction; since the position and orientation of the object on the image can thus be detected, they are suitable for use in positioning guides for object picking operations or robot teaching work in FA and the like.
- 1 Robot; 2 Camera (imaging device); 3 Two-dimensional position and orientation estimation device; 4 Robot controller; 11 Object; 12 Work surface; 101 End effector; 301 Image acquisition unit; 302 Area extraction unit; 303 Position estimation unit; 304 Posture estimation unit; 305 Output unit
Landscapes
- Engineering & Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Human Computer Interaction (AREA)
- Robotics (AREA)
- Mechanical Engineering (AREA)
- Image Analysis (AREA)
- Manipulator (AREA)
- Length Measuring Devices By Optical Means (AREA)
Abstract
The present invention is provided with: an image acquisition unit (301) that acquires an image; a region extraction unit (302) that, from the image acquired by the image acquisition unit (301), extracts an object region that is a region indicating an object of interest; and a position/attitude deduction unit that, from a region based on the object region extracted by the region extraction unit (302), deduces the position of the object based on a reproducible unique point and the attitude of the object based on a reproducible unique direction.
Description
The present invention relates to a two-dimensional position and orientation estimation apparatus and a two-dimensional position and orientation estimation method for estimating the position and orientation of an object on an image.
Conventionally, a two-dimensional position and orientation estimation apparatus is known that estimates the position and orientation of a target object from an image using a template matching technique (see, for example, Patent Document 1). Here, the position and orientation of the object consist of the position (x, y) and the orientation (θ) of the object. In this conventional two-dimensional position and orientation estimation apparatus, a model (template) of the object is registered in advance, and the position and orientation of the object on the image are estimated by evaluating the degree of coincidence of image features between the image and the template.
However, because the conventional two-dimensional position and orientation estimation apparatus estimates the position and orientation of an object by template matching, an enormous number of templates must be prepared for each object. Moreover, since the conventional apparatus performs brute-force matching, the computational cost and processing time increase.
A technique has therefore been disclosed that performs template matching efficiently by hierarchically grouping templates with high similarity and thereby reducing the number of templates. Even with this technique, however, templates still need to be prepared in advance; the effort (labor and time) of preparing templates is not reduced, and there is room for improvement.
The present invention has been made to solve the above problems, and an object thereof is to provide a two-dimensional position and orientation estimation apparatus capable of detecting the position and orientation of an object on an image without using a template.
A two-dimensional position and orientation estimation apparatus according to the present invention includes: an image acquisition unit that acquires an image; a region extraction unit that extracts, from the image acquired by the image acquisition unit, an object region that is a region indicating a target object; and a position and orientation estimation unit that estimates, from a region based on the object region extracted by the region extraction unit, the position of the object based on a reproducible unique point and the orientation of the object based on a reproducible unique direction.
According to the present invention, with the above configuration, the position and orientation of an object on an image can be detected without using a template.
Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
Embodiment 1.
FIG. 1 is a diagram showing a configuration example of a robot picking system provided with a two-dimensional position and orientation estimation apparatus 3 according to Embodiment 1 of the present invention.
The two-dimensional position and orientation estimation apparatus 3 is applicable, for example, to positioning guidance for object picking operations or robot teaching work in FA (factory automation) and the like. In the following, as an example, the two-dimensional position and orientation estimation apparatus 3 is applied to a robot picking system, and the robot picking system picks the target object 11 on the basis of the position and orientation of the object 11 estimated by the two-dimensional position and orientation estimation apparatus 3.
The robot picking system is arranged, for example, on a factory production line and picks an object 11 placed on a work surface 12. The number of objects 11 placed on the work surface 12 is not limited to one and may be plural; when there are a plurality of objects 11, it is assumed that the objects 11 are placed on the work surface 12 without overlapping one another. As shown in FIG. 1, the robot picking system includes a robot 1, a camera (imaging device) 2, the two-dimensional position and orientation estimation apparatus 3, and a robot controller 4.
The robot 1 has an end effector 101 (a hand in FIG. 1) at the tip of its arm and picks the object 11 using the end effector 101.
The camera 2 is attached to the end effector 101 and captures an image of an imaging area. Data (image data) representing the image obtained by the camera 2 is output to the two-dimensional position and orientation estimation apparatus 3.
The image obtained by the camera 2 may be of any format as long as, when the object 11 exists in the imaging area, the region indicating the object 11 (object region) can be distinguished from the remaining region (background region). The camera 2 may obtain, for example, a two-dimensional image. The two-dimensional image is typically a color or monochrome visible image, but is not limited thereto; when the above discrimination is not easy with a visible image, a special two-dimensional image such as a near-infrared image or a multispectral image may be used. The camera 2 may also obtain a distance image when the object 11 has a certain depth.
The two-dimensional position and orientation estimation apparatus 3 estimates the position and orientation of the object 11 on the image on the basis of the image obtained by the camera 2. The position and orientation of the object 11 mean the position (x, y) and the orientation (θ) of the object 11. Data (estimation result data) representing the estimation result of the two-dimensional position and orientation estimation apparatus 3 is output to the robot controller 4. A configuration example of the apparatus 3 will be described later.
The robot controller 4 calculates the amount of arm movement of the robot 1 on the basis of the estimation result of the two-dimensional position and orientation estimation apparatus 3 and controls the operation of the robot 1 so as to pick the object 11.
Next, a configuration example of the two-dimensional position and orientation estimation apparatus 3 will be described with reference to FIG. 2.
As illustrated in FIG. 2, the two-dimensional position and orientation estimation apparatus 3 includes an image acquisition unit 301, a region extraction unit 302, a position estimation unit 303, an orientation estimation unit 304, and an output unit 305. The apparatus 3 is realized by a processing circuit such as a system LSI (Large Scale Integration), or by a CPU (Central Processing Unit) or the like that executes a program stored in a memory or the like.
The image acquisition unit 301 acquires the image obtained by the camera 2.
The region extraction unit 302 extracts the object region from the image acquired by the image acquisition unit 301 by image processing. That is, the region extraction unit 302 extracts the object region from the image on the basis of the degree of change in contrast. As the extraction method, for example, threshold processing or the background subtraction method is conceivable, but any method that can distinguish the object region from the background region and extract the object region may be used.
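As a rough illustration of the two extraction methods mentioned above, the following Python sketch (the use of OpenCV, the function names, and the threshold values are assumptions for illustration, not part of the publication) produces a binary object mask either by simple thresholding or by differencing against a previously captured background image.

```python
import cv2

def extract_by_threshold(gray_image, thresh=128):
    # Pixels darker than the threshold are treated as the object region;
    # the polarity and the threshold value depend on the actual scene.
    _, mask = cv2.threshold(gray_image, thresh, 255, cv2.THRESH_BINARY_INV)
    return mask

def extract_by_background_subtraction(gray_image, background, diff_thresh=30):
    # Pixels that differ sufficiently from the stored background image are
    # treated as the object region (background subtraction method).
    diff = cv2.absdiff(gray_image, background)
    _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
    return mask
```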
The position estimation unit 303 estimates the position of the object 11 on the basis of a reproducible unique point from the region based on the object region extracted by the region extraction unit 302. The region based on the extracted object region includes one or more of: the object region itself; the convex hull of the object region; a region approximating the object region with a geometric shape; the minimum region containing the object region with a geometric shape; or the minimum region circumscribing the object region with a geometric shape. Examples of the geometric shape include a rectangle and an ellipse. A reproducible unique point is a point that is uniquely determined regardless of the orientation of the object 11.
The position estimation unit 303 uses, for example, the centroid of the region based on the extracted object region as the reproducible unique point, and estimates the position of the object 11 on the basis of the relative relationship between the centroid and an arbitrarily set reference point on the image. An example of the reference point on the image is the center point of the image.
The orientation estimation unit 304 estimates the orientation of the object 11 on the basis of a reproducible unique direction from the region based on the object region extracted by the region extraction unit 302.
The orientation estimation unit 304 uses, for example, the major-axis direction of the region based on the extracted object region as the reproducible unique direction, and estimates the orientation of the object 11 on the basis of the relative relationship between the major-axis direction and an arbitrarily set reference direction. An example of the reference direction is the horizontal direction of the image.
The output unit 305 outputs data indicating the estimation result of the position estimation unit 303 and the estimation result of the orientation estimation unit 304 to the robot controller 4.
The position estimation unit 303 and the orientation estimation unit 304 together constitute a "position and orientation estimation unit that estimates, from the region based on the object region extracted by the region extraction unit 302, the position of the object 11 based on a reproducible unique point and the orientation of the object 11 based on a reproducible unique direction".
Next, an operation example of the two-dimensional position and orientation estimation apparatus 3 shown in FIG. 2 will be described with reference to FIG. 3.
In general, the orientation of the object 11 is expressed as an amount of rotation (angle) from some reference, as typified by Euler angles (α, β, γ) and quaternions (w, x, y, z). Since the object 11 itself carries no such reference, the user must set a reference arbitrarily, and there is no particular restriction on how the reference is set. Accordingly, by deriving a reproducible unique direction from the region based on the object region, the orientation estimation unit 304 can express the orientation of the object 11 without using a reference template.
When the orientation of the object 11 is expressed on the image, it is usually expressed mainly as an amount of rotation about the Z axis (the optical-axis direction of the camera 2). Expressing the orientation of an object 11 whose shape is rotationally symmetric about the Z axis is therefore impossible (and unnecessary). For this reason, objects 11 requiring orientation estimation on the image are limited to those having some geometric feature in their shape. When the object 11 is assumed to have such a geometric feature, the major-axis direction of the region based on its object region is uniquely determined. In the following, the orientation estimation unit 304 therefore expresses the orientation of the object 11 using the major-axis direction.
The same applies to the position of the object 11: by deriving a reproducible unique point from the region based on the object region, the position estimation unit 303 can express the position of the object 11 without using a reference template. Since the centroid of the region based on the object region is a uniquely determined feature point, the position estimation unit 303 expresses the position of the object 11 using the centroid in the following.
In the operation example of the two-dimensional position and orientation estimation apparatus 3 according to Embodiment 1, as shown in FIG. 3, the image acquisition unit 301 first acquires the image obtained by the camera 2 (step ST1).
Next, the region extraction unit 302 extracts the object region from the image acquired by the image acquisition unit 301 by image processing (step ST2). Here, the region extraction unit 302 is assumed to extract the object region from the image by threshold processing. Suppose the image acquisition unit 301 acquires an image such as that shown in FIG. 4; this image contains the object 11, the work surface 12, and other background. Because the two-dimensional position and orientation estimation apparatus 3 according to Embodiment 1 does not prepare a template, it is not easy to extract the object region directly from the image. Therefore, in the apparatus 3 according to Embodiment 1, it is desirable first to extract a region 501 indicating the work surface 12 (the gray region shown in FIG. 5A) from the image by threshold processing. In general, on a production line, the object 11 is almost always placed on the work surface 12. Accordingly, the region extraction unit 302 in Embodiment 1 can extract the object region 503 (the gray region shown in FIG. 5C) by taking the difference between the region 501 indicating the work surface 12 and a region 502 (the gray region shown in FIG. 5B) obtained by filling the holes of that region by a closing process or the like.
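A minimal sketch of this work-surface-based extraction, assuming OpenCV, a grayscale input, and a roughly uniform work-surface brightness (the brightness range and kernel size below are illustrative assumptions):

```python
import cv2
import numpy as np

def extract_object_region(gray_image, surface_lo=80, surface_hi=180, kernel_size=25):
    # Region 501: pixels whose brightness falls within the assumed work-surface range.
    surface = cv2.inRange(gray_image, surface_lo, surface_hi)

    # Region 502: the same region with its holes filled by a morphological closing,
    # so that the object lying on the surface is absorbed into the surface region.
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    surface_filled = cv2.morphologyEx(surface, cv2.MORPH_CLOSE, kernel)

    # Region 503: the difference between the filled region and the original surface
    # region corresponds to the object region.
    return cv2.subtract(surface_filled, surface)
```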
Next, the position estimation unit 303 estimates the position of the object 11 on the basis of a reproducible unique point from the region based on the object region extracted by the region extraction unit 302 (step ST3). Here, the position estimation unit 303 uses the centroid of the region as the reproducible unique point and estimates the position of the object 11 on the basis of the relative relationship between the centroid and an arbitrarily set reference point on the image.
Specifically, the position estimation unit 303 first derives the centroid of the region based on the object region extracted by the region extraction unit 302 as the reproducible unique point. When the coordinates of the points constituting the region are denoted (u_i, v_i) (i = 1, ..., N), the position estimation unit 303 derives the centroid of the region from equations (1) to (4) below using the 0th-order and 1st-order moments of the region, where N is the number of points constituting the region, m00 represents the 0th-order moment of the region, (m10, m01) represents the 1st-order moment of the region, and (ū, v̄) represents the centroid of the region.
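The equation images of the original publication are not reproduced in this text; the standard image-moment relations that equations (1) to (4) appear to express are, as a reconstruction rather than a verbatim quotation:

$$m_{00} = \sum_{i=1}^{N} 1 = N \qquad (1)$$
$$m_{10} = \sum_{i=1}^{N} u_i \qquad (2)$$
$$m_{01} = \sum_{i=1}^{N} v_i \qquad (3)$$
$$(\bar{u},\ \bar{v}) = \left( \frac{m_{10}}{m_{00}},\ \frac{m_{01}}{m_{00}} \right) \qquad (4)$$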
The position estimation unit 303 then estimates the position of the object 11 from equation (5) below, on the basis of the derived centroid and the reference point. In equation (5), (u_0, v_0) denotes the reference point and (x, y) denotes the position of the object 11.
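Equation (5) is likewise not reproduced here; one plausible form consistent with the description, in which the position is the centroid expressed relative to the reference point, is:

$$(x,\ y) = (\bar{u} - u_0,\ \bar{v} - v_0) \qquad (5)$$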
The orientation estimation unit 304 also estimates the orientation of the object 11 on the basis of a reproducible unique direction from the region based on the object region extracted by the region extraction unit 302 (step ST4). Here, the orientation estimation unit 304 uses the major-axis direction of the region as the reproducible unique direction and estimates the orientation of the object 11 on the basis of the relative relationship between the major-axis direction and an arbitrarily set reference direction.
Specifically, when the reference direction is the horizontal direction of the image, the orientation estimation unit 304 estimates the orientation of the object 11 by deriving the rotation angle of the major-axis direction from the horizontal direction using equations (6) to (9) below, on the basis of the 2nd-order moments of the region and the centroid of the region. In equations (6) to (9), (m20, m02, m11) denotes the 2nd-order moments of the region and θ denotes the rotation angle.
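Equations (6) to (9) are also present only as images in the original; the standard central second-moment relations and major-axis angle that the description appears to refer to are (reconstruction, not verbatim):

$$\mu_{20} = \frac{m_{20}}{m_{00}} - \bar{u}^{2} \qquad (6)$$
$$\mu_{02} = \frac{m_{02}}{m_{00}} - \bar{v}^{2} \qquad (7)$$
$$\mu_{11} = \frac{m_{11}}{m_{00}} - \bar{u}\,\bar{v} \qquad (8)$$
$$\theta = \frac{1}{2} \arctan\!\left( \frac{2\mu_{11}}{\mu_{20} - \mu_{02}} \right) \qquad (9)$$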
For example, suppose the region extraction unit 302 has extracted an object region 503 as shown in FIG. 5C, and that the region used by the position estimation unit 303 and the orientation estimation unit 304 is the minimum rectangular region 601 containing the object region 503, as illustrated in FIG. 6A. In this case, the centroid 602 derived by the position estimation unit 303 and the major-axis direction 603 derived by the orientation estimation unit 304 are as shown in FIG. 6B.
The position estimation unit 303 may also estimate the position of the object 11 for each of a plurality of regions, such as the object region 503 shown in FIG. 5C and the rectangular region 601 shown in FIG. 6A, and then estimate the position of the object 11 on the basis of those individual estimation results. In this way, the position estimation unit 303 can improve the accuracy of position estimation. The same applies to the orientation estimation unit 304.
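One conceivable way to combine the individual results obtained from several regions, such as the object region 503 and the bounding rectangle 601, is a simple average; the publication does not specify the fusion rule, so the following is an assumption:

```python
def fuse_position_estimates(estimates):
    # estimates: list of (x, y) results obtained from different regions,
    # e.g. from the object region itself and from its bounding rectangle.
    # Averaging is only one possible fusion rule (an assumption).
    n = len(estimates)
    x = sum(e[0] for e in estimates) / n
    y = sum(e[1] for e in estimates) / n
    return x, y
```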
In the above description, the position estimation unit 303 derives the centroid of the region. However, the invention is not limited to this; the position estimation unit 303 may estimate the position of the object 11 using, for example, a feature point of the outline of the region (for example, the right-angle portion 504 of the object region shown in FIG. 5C).
Likewise, in the above description, the orientation estimation unit 304 derives the major-axis direction of the region. However, the invention is not limited to this; the orientation estimation unit 304 may estimate the orientation of the object 11 using, for example, the line segment connecting the centroid of the region and a feature point of the outline of the region.
Next, the output unit 305 outputs data indicating the estimation result of the position estimation unit 303 and the estimation result of the orientation estimation unit 304 to the robot controller 4 (step ST5). The robot controller 4 then controls the operation of the robot 1 so as to pick the object 11 on the basis of the estimation result of the two-dimensional position and orientation estimation apparatus 3. As a result, the robot picking system can pick the object 11 even when its position and orientation are unknown in advance.
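Pulling steps ST3 and ST4 together, the following self-contained sketch operates on a binary object-region mask (nonzero pixels = object). The moment formulas follow the standard relations assumed above, and the way (x, y) is referred to the reference point is likewise an assumption:

```python
import numpy as np

def estimate_pose_from_region(region_mask, reference_point=(0.0, 0.0)):
    # Coordinates of the points constituting the region: each row is (v, u).
    points = np.argwhere(region_mask > 0)
    v = points[:, 0].astype(np.float64)
    u = points[:, 1].astype(np.float64)

    # Step ST3: 0th- and 1st-order moments give the centroid; the position is
    # expressed relative to the reference point (u0, v0), e.g. the image centre.
    m00 = float(len(u))
    u_bar, v_bar = u.sum() / m00, v.sum() / m00
    u0, v0 = reference_point
    x, y = u_bar - u0, v_bar - v0

    # Step ST4: central 2nd-order moments give the rotation angle theta of the
    # major-axis direction measured from the horizontal direction of the image.
    mu20 = ((u - u_bar) ** 2).mean()
    mu02 = ((v - v_bar) ** 2).mean()
    mu11 = ((u - u_bar) * (v - v_bar)).mean()
    theta = 0.5 * np.arctan2(2.0 * mu11, mu20 - mu02)

    return x, y, theta
```

The returned (x, y, θ) would roughly correspond to the estimation result data that the output unit 305 passes to the robot controller 4 in step ST5.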
As described above, according to Embodiment 1, the two-dimensional position and orientation estimation apparatus 3 includes: the image acquisition unit 301 that acquires an image; the region extraction unit 302 that extracts, from the image acquired by the image acquisition unit 301, an object region that is a region indicating the target object 11; and the position and orientation estimation unit that estimates, from a region based on the object region extracted by the region extraction unit 302, the position of the object 11 based on a reproducible unique point and the orientation of the object 11 based on a reproducible unique direction. As a result, the apparatus 3 according to Embodiment 1 can detect the position and orientation of various objects 11 on an image without using a template.
Within the scope of the invention, any component of the embodiment may be modified, or any component of the embodiment may be omitted.
The two-dimensional position and orientation estimation apparatus and two-dimensional position and orientation estimation method according to the present invention acquire an image, extract from the acquired image an object region that is a region indicating a target object, and estimate, from a region based on the extracted object region, the position of the object based on a reproducible unique point and the orientation of the object based on a reproducible unique direction, so that the position and orientation of the object on an image can be detected. They are therefore suitable for use, for example, in positioning guidance for object picking operations or robot teaching work in FA and the like.
1 Robot
2 Camera (imaging device)
3 Two-dimensional position and orientation estimation apparatus
4 Robot controller
11 Object
12 Work surface
101 End effector
301 Image acquisition unit
302 Region extraction unit
303 Position estimation unit
304 Orientation estimation unit
305 Output unit
Claims (7)
- A two-dimensional position and orientation estimation apparatus comprising: an image acquisition unit that acquires an image; a region extraction unit that extracts, from the image acquired by the image acquisition unit, an object region that is a region indicating a target object; and a position and orientation estimation unit that estimates, from a region based on the object region extracted by the region extraction unit, a position of the object based on a reproducible unique point and an orientation of the object based on a reproducible unique direction.
- The two-dimensional position and orientation estimation apparatus according to claim 1, wherein the image acquired by the image acquisition unit is a two-dimensional image or a distance image.
- The two-dimensional position and orientation estimation apparatus according to claim 1 or claim 2, wherein the region extraction unit extracts the object region from the image acquired by the image acquisition unit on the basis of a degree of change in contrast.
- The two-dimensional position and orientation estimation apparatus according to any one of claims 1 to 3, wherein the position and orientation estimation unit uses, as the region based on the object region extracted by the region extraction unit, one or more of: the object region; a convex hull of the object region; a region approximating the object region with a geometric shape; a minimum region containing the object region with a geometric shape; or a minimum region circumscribing the object region with a geometric shape.
- The two-dimensional position and orientation estimation apparatus according to any one of claims 1 to 4, wherein the position and orientation estimation unit uses, as the reproducible unique point, a centroid of the region based on the object region extracted by the region extraction unit, and estimates the position of the object on the basis of a relative relationship between the centroid and a reference point.
- The two-dimensional position and orientation estimation apparatus according to any one of claims 1 to 4, wherein the position and orientation estimation unit uses, as the reproducible unique direction, a major-axis direction of the region based on the object region extracted by the region extraction unit, and estimates the orientation of the object on the basis of a relative relationship between the major-axis direction and a reference direction.
- A two-dimensional position and orientation estimation method comprising: acquiring, by an image acquisition unit, an image; extracting, by a region extraction unit, an object region that is a region indicating a target object from the image acquired by the image acquisition unit; and estimating, by a position and orientation estimation unit, from a region based on the object region extracted by the region extraction unit, a position of the object using a reproducible unique point and an orientation of the object using a reproducible unique direction.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2018-106983 | 2018-06-04 | ||
JP2018106983A JP7178802B2 (en) | 2018-06-04 | 2018-06-04 | 2D Position and Posture Estimation Apparatus and 2D Position and Posture Estimation Method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019235210A1 true WO2019235210A1 (en) | 2019-12-12 |
Family
ID=68770903
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/020071 WO2019235210A1 (en) | 2018-06-04 | 2019-05-21 | Two-dimensional position/attitude deduction apparatus and two-dimensional position/attitude deduction method |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP7178802B2 (en) |
WO (1) | WO2019235210A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6015780A (en) * | 1983-07-08 | 1985-01-26 | Hitachi Ltd | Robot |
JP2006260315A (en) * | 2005-03-18 | 2006-09-28 | Juki Corp | Method and device for detecting component position |
-
2018
- 2018-06-04 JP JP2018106983A patent/JP7178802B2/en active Active
-
2019
- 2019-05-21 WO PCT/JP2019/020071 patent/WO2019235210A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6015780A (en) * | 1983-07-08 | 1985-01-26 | Hitachi Ltd | Robot |
JP2006260315A (en) * | 2005-03-18 | 2006-09-28 | Juki Corp | Method and device for detecting component position |
Also Published As
Publication number | Publication date |
---|---|
JP7178802B2 (en) | 2022-11-28 |
JP2019211968A (en) | 2019-12-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6331517B2 (en) | Image processing apparatus, system, image processing method, and image processing program | |
WO2019114339A1 (en) | Method and device for correcting motion of robotic arm | |
JP6348093B2 (en) | Image processing apparatus and method for detecting image of detection object from input data | |
US10262417B2 (en) | Tooth axis estimation program, tooth axis estimation device and method of the same, tooth profile data creation program, tooth profile data creation device and method of the same | |
JP6415066B2 (en) | Information processing apparatus, information processing method, position and orientation estimation apparatus, robot system | |
US7460687B2 (en) | Watermarking scheme for digital video | |
JP6889865B2 (en) | Template creation device, object recognition processing device, template creation method and program | |
US10043279B1 (en) | Robust detection and classification of body parts in a depth map | |
CN110926330B (en) | Image processing apparatus, image processing method, and program | |
JP2007004767A (en) | Image recognition apparatus, method and program | |
CN109886124B (en) | Non-texture metal part grabbing method based on wire harness description subimage matching | |
JP2018119833A (en) | Information processing device, system, estimation method, computer program, and storage medium | |
JP2016170050A (en) | Position attitude measurement device, position attitude measurement method and computer program | |
CN116249607A (en) | Method and device for robotically gripping three-dimensional objects | |
US20150356346A1 (en) | Feature point position detecting appararus, feature point position detecting method and feature point position detecting program | |
JP6075888B2 (en) | Image processing method, robot control method | |
JP6936974B2 (en) | Position / orientation estimation device, position / orientation estimation method and program | |
JP6410231B2 (en) | Alignment apparatus, alignment method, and computer program for alignment | |
JP6922348B2 (en) | Information processing equipment, methods, and programs | |
JP2010184300A (en) | Attitude changing device and attitude changing method | |
KR20090090983A (en) | Method for extracting spacial coordimates using multiple cameras image | |
JP2020042575A (en) | Information processing apparatus, positioning method, and program | |
WO2019235210A1 (en) | Two-dimensional position/attitude deduction apparatus and two-dimensional position/attitude deduction method | |
JP6606340B2 (en) | Image detection apparatus, image detection method, and program | |
JP7117878B2 (en) | processor and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19816062 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19816062 Country of ref document: EP Kind code of ref document: A1 |