JPS61262986A - Producing device for seeming dictionary of cube - Google Patents

Producing device for seeming dictionary of cube

Info

Publication number
JPS61262986A
JPS61262986A, JP60105355A, JP10535585A
Authority
JP
Japan
Prior art keywords
visible
dictionary
cube
appearance
triangle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP60105355A
Other languages
Japanese (ja)
Other versions
JPH0232669B2 (en)
Inventor
Tomomitsu Murano
朋光 村野
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fujitsu Ltd
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujitsu Ltd filed Critical Fujitsu Ltd
Priority to JP60105355A priority Critical patent/JPS61262986A/en
Publication of JPS61262986A publication Critical patent/JPS61262986A/en
Publication of JPH0232669B2 publication Critical patent/JPH0232669B2/ja
Granted legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

PURPOSE: To quickly determine the type, or candidate types, of the three-dimensional shape of an object to be recognized by collating the extracted appearance features of the solid with the contents of a dictionary. CONSTITUTION: The number of visible faces obtained when a solid is observed from a prescribed direction, the shape type of each visible face, and the connection relations among the faces are written into a visible-face table 6 for each prescribed direction, while the different appearances of the solid are stored in a dictionary storage part 7 as connection relations of polygons. The contents of the table 6 are collated 8 with those of the storage part 7, and based on the result of this collation 8 the combination of items concerning the visible faces described in the table 6 is processed 9 and written into the storage part 7. A triangular prism, for example, is thus stored as one triangle, one quadrilateral, a triangle and a quadrilateral sharing a side, two quadrilaterals sharing a side, and a triangle and two quadrilaterals each sharing a side with the others.

Description

[Detailed Description of the Invention]

[Summary]

An arrangement that makes it possible to create, reliably and without omissions, an appearance dictionary of solids which is provided in a three-dimensional recognition device such as a robot eye (robot vision) and used to identify the type of a solid to be recognized (cube, pentagonal pyramid, truncated quadrangular pyramid, and so on).

[Industrial Field of Application]

The present invention relates to a three-dimensional recognition device, and more particularly to a device for creating an appearance dictionary of solids, used when recognizing, from the way a solid appears, whether it is, for example, a cube or a pyramid.

For example, some robots used in assembly plants and the like detect a desired part from among parts conveyed on a conveyor or the like, grip it, and move it to a designated location.

In such a case, the robot must first identify the type of the object quickly and reliably, whatever the orientation of the object may be.

[Conventional Technology]

FIG. 4 is a block diagram of a three-dimensional recognition device used for robot vision and the like, in which:

1 is an observation unit which observes the object to be recognized and converts it into two-dimensional image data, for example by applying an industrial television (ITV) camera;

2 is a feature extraction unit which extracts appearance features of the object to be recognized from the two-dimensional image data obtained by the observation unit 1;

3 is a feature dictionary which stores, for each type of solid to be recognized, solid data describing the shape of that solid;

4 is a conversion unit which generates, from the solid data stored in the feature dictionary 3, the appearance features expected to be extracted by the feature extraction unit 2; and

5 is an identification unit which identifies the type of the solid that is the object to be recognized, by collating the appearance features of the object extracted by the feature extraction unit 2 with the appearance features generated by the conversion unit 4.

For example, when the vertices of a cube are labeled P1, P2, ..., P8 as shown in FIG. 5(a), the cube can be completely represented, as shown in FIG. 5(b), by solid data describing the coordinate values of the vertices P1 to P8 and their connection relations (vertices adjacent to one another via an edge); the feature dictionary 3 stores solid data described in this form for each type of solid.
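
As a concrete illustration of this vertex-plus-adjacency description, the short Python sketch below encodes a cube by the coordinates of P1 to P8 and the edges between them. The names (Solid, CUBE) and the exact layout are assumptions made for illustration only; the patent does not prescribe a data format.

```python
# Minimal sketch of solid data in the style of FIG. 5: vertex coordinates
# plus adjacency via edges. A unit cube is assumed for the coordinates.
from dataclasses import dataclass

@dataclass
class Solid:
    name: str
    vertices: dict[str, tuple[float, float, float]]   # vertex label -> (x, y, z)
    edges: list[tuple[str, str]]                       # pairs of vertices joined by an edge

CUBE = Solid(
    name="cube",
    vertices={"P1": (0, 0, 0), "P2": (1, 0, 0), "P3": (1, 1, 0), "P4": (0, 1, 0),
              "P5": (0, 0, 1), "P6": (1, 0, 1), "P7": (1, 1, 1), "P8": (0, 1, 1)},
    edges=[("P1", "P2"), ("P2", "P3"), ("P3", "P4"), ("P4", "P1"),   # bottom face
           ("P5", "P6"), ("P6", "P7"), ("P7", "P8"), ("P8", "P5"),   # top face
           ("P1", "P5"), ("P2", "P6"), ("P3", "P7"), ("P4", "P8")],  # vertical edges
)
```

Any solid handled by the feature dictionary 3 could be written down in the same way, one record per solid type.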

On the other hand, the appearance feature obtained by the feature extraction unit 2 for a cube is, as shown in FIGS. 6(a) to (c), one of the following: a single quadrilateral, two quadrilaterals sharing a side, or three quadrilaterals each sharing a side with the others.

Similarly, the appearance feature in the case of a triangular prism is one of the five types shown in FIGS. 7(a) to (e).

Therefore, if the appearance feature extracted by the feature extraction unit 2 is, for example, a single quadrilateral, the solid must be a cube, a triangular prism, or some other solid such as a quadrangular pyramid or a truncated quadrangular pyramid.

For this reason, the conversion unit 4 generates, from the solid data of each solid stored in the feature dictionary, the various appearance features expected for that solid, and the appearance features of the object to be recognized, extracted by the feature extraction unit 2, are collated against them. In this way the type of the solid is narrowed down to a small number of candidates; if necessary, the type of the solid to be recognized is then identified by other means, and the result is supplied to the robot control unit.
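
To make the bottleneck of this prior-art flow concrete, the sketch below (in the same illustrative Python style, reusing the hypothetical Solid record above) shows that the expensive feature-generation step runs once per solid type at recognition time. The function generate_expected_features is only a stub; the actual projection of solid data into appearance features is not given in the patent text.

```python
# Sketch of the prior-art flow of FIG. 4: appearance features are derived
# from the solid data while recognition is in progress.

def generate_expected_features(solid):
    """Stand-in for conversion unit 4: derive all appearance features predicted
    for one solid from its vertex/edge data. The real projection and
    hidden-surface computation is omitted in this sketch."""
    raise NotImplementedError("projection of solid data not shown here")

def identify_prior_art(extracted_feature, feature_dictionary):
    """Collate the extracted feature against features generated on the fly;
    one full generation pass is needed for every solid type, which is slow."""
    candidates = []
    for solid in feature_dictionary:
        if extracted_feature in generate_expected_features(solid):
            candidates.append(solid.name)
    return candidates
```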

[Problems to Be Solved by the Invention]

In the three-dimensional recognition device of the above configuration, it takes a long time to generate, from the solid data stored in the feature dictionary 3, the appearance features expected to be extracted by the feature extraction unit 2. As a result, there is the problem that recognition of the solid to be recognized also takes a long time.

An object of the present invention is therefore to provide a device for creating an appearance dictionary of solids which, when provided in a three-dimensional recognition device, can help improve the recognition speed.

[Means for Solving the Problems]

FIG. 1 is a block diagram showing the principle of the present invention, in which:

6 is a visible-face table in which the number of visible faces obtained when a solid is observed from prescribed observation directions, the type of shape of each visible face, and the connection relations among the visible faces are described for each of the prescribed directions;

7 is a dictionary storage unit which stores the appearance of solids as connection relations of polygons;

8 is a collation unit which collates the contents described in the visible-face table 6 for each direction with the contents of the dictionary storage unit 7 and checks whether the combination of the number of visible faces, the type of each visible face, and the connection relations among the visible faces described in the visible-face table 6 is already stored in the dictionary storage unit 7; and

9 is a write processing unit which, when the collation by the collation unit 8 shows that the combination of the number of visible faces, the type of each visible face, and the connection relations among the visible faces described in the visible-face table 6 is not stored in the dictionary storage unit 7, performs processing for writing this combination into the dictionary storage unit 7.
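
One way blocks 6 to 9 could cooperate is sketched below in Python. Everything here is an assumption for illustration: the patent defines the blocks functionally and does not fix how an appearance is encoded. In this sketch an appearance is reduced to the multiset of visible face shapes plus the set of shape pairs that share a side, and an entry is written only when collation fails to find it.

```python
# Sketch of the dictionary-building loop of FIG. 1 (blocks 6-9).
from collections import Counter

def canonical_appearance(visible_faces, face_shapes, adjacency):
    """Reduce one row of the visible-face table (6) to a hashable appearance description."""
    shapes = tuple(sorted(Counter(face_shapes[f] for f in visible_faces).items()))
    shared_shapes = frozenset(
        tuple(sorted((face_shapes[a], face_shapes[b])))
        for a, b in adjacency
        if a in visible_faces and b in visible_faces
    )
    # Adjacency is kept only up to shape type, so different directions that show
    # the same arrangement of shapes collapse to a single entry.
    return (shapes, shared_shapes)

def build_dictionary(visible_face_table, face_shapes, adjacency):
    """Collation unit (8) plus write processing unit (9): store each appearance once."""
    dictionary = set()                                    # dictionary storage unit (7)
    for visible_faces in visible_face_table:              # one row per observation direction
        entry = canonical_appearance(visible_faces, face_shapes, adjacency)
        if entry not in dictionary:                       # collation (8)
            dictionary.add(entry)                         # write processing (9)
    return dictionary
```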

[Operation]

That is, consider, for example, a triangular prism whose top and bottom faces are parallel. As shown in FIG. 2(a), the faces are labeled f1 to f5 and are all regarded as independent. When the prism is observed, there are 5 directions from which only one face is visible, 9 directions from which only two faces are visible, and 6 directions from which three faces are visible, giving 20 ways of appearing in total. These are described in the visible-face table 6 in the form shown in FIG. 2(b), where a circle indicates a visible face and a blank indicates an invisible one.

In the visible-face table 6, each face f1 to f5 is regarded as independent, so there are 20 ways of appearing. However, the appearance feature obtained when observing from direction 1 (one triangle) is the same as that obtained from direction 5 (one triangle), and there are several other duplicates, such as the appearance features obtained from directions 2, 3, and 4 (one quadrilateral each).

By merging these duplicates, only the appearance features that can actually be extracted from the two-dimensional image data obtained by observation are stored in the dictionary storage unit 7 as the appearance dictionary of the solid.

As a result, the appearance of each solid is stored in the dictionary storage unit 7 as connection relations of polygons. For a triangular prism, for example, the stored entries are: one triangle; one quadrilateral; a triangle and a quadrilateral sharing a side; two quadrilaterals sharing a side; and a triangle and two quadrilaterals each sharing a side with the others (see FIG. 7).
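
Reusing canonical_appearance and build_dictionary from the sketch given after the FIG. 1 description (all names still illustrative assumptions), the triangular prism of FIG. 2 can be run through that loop as follows; the 20 table rows collapse to the five entries listed above.

```python
# Usage sketch for the triangular prism of FIG. 2: faces f1 (top triangle),
# f5 (bottom triangle) and f2-f4 (side quadrilaterals).
FACE_SHAPES = {"f1": "triangle", "f5": "triangle",
               "f2": "quadrilateral", "f3": "quadrilateral", "f4": "quadrilateral"}

# Face pairs of the prism that share an edge (the two triangles f1 and f5 never do).
ADJACENCY = [("f1", "f2"), ("f1", "f3"), ("f1", "f4"),
             ("f5", "f2"), ("f5", "f3"), ("f5", "f4"),
             ("f2", "f3"), ("f3", "f4"), ("f4", "f2")]

def prism_view_table():
    """Rows of the visible-face table (6): 5 one-face, 9 two-face and 6 three-face views."""
    one_face = [frozenset([f]) for f in FACE_SHAPES]                        # 5 rows
    two_face = [frozenset(pair) for pair in ADJACENCY]                      # 9 rows
    corners = [("f2", "f3"), ("f3", "f4"), ("f4", "f2")]
    three_face = [frozenset({cap, a, b})                                    # 6 rows (corners)
                  for cap in ("f1", "f5") for a, b in corners]
    return one_face + two_face + three_face                                 # 20 rows in total

table = prism_view_table()
entries = build_dictionary(table, FACE_SHAPES, ADJACENCY)
print(len(table), "views ->", len(entries), "dictionary entries")           # 20 views -> 5 entries
```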

[Embodiment]

FIG. 3 is a block diagram of an embodiment, in which:

10 is an appearance dictionary which stores the appearances of the various solids, constructed as described above; and

11 is an identification unit which narrows the candidate types of the solid down to a small number of types by collating the appearance feature extracted by the feature extraction unit 2 with the stored contents of the appearance dictionary 10.

That is, by collating the appearance features of the object to be recognized, extracted by the feature extraction unit 2, with the stored contents of the appearance dictionary 10, the type of the object is either determined or limited to a small number of candidates.

Further, if necessary, the type of the object to be recognized is identified by another recognition unit (not shown), and the recognition result is supplied to the robot control unit.
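
An identification step of this kind might look like the following sketch, which reuses the dictionary entries built in the triangular-prism example above; the lookup structure is an assumption, since the patent leaves the internal indexing of the appearance dictionary 10 open.

```python
# Sketch of identification unit 11 of FIG. 3: the appearance dictionary (10)
# maps each solid type to its precomputed appearance entries, so recognition
# is a lookup rather than an on-line projection of solid data.

def identify_candidates(extracted_entry, appearance_dictionary):
    """Return the solid types whose stored appearances contain the extracted entry."""
    return [solid_name
            for solid_name, stored_entries in appearance_dictionary.items()
            if extracted_entry in stored_entries]

# Example: store the prism entries from the earlier sketch under one key.
# A single observed triangle is consistent with the prism, so it stays a
# candidate; a richer dictionary would return every solid that can look like one.
appearance_dictionary = {"triangular prism": entries}      # `entries` from the FIG. 2 sketch
one_triangle = ((("triangle", 1),), frozenset())           # canonical form of a lone triangle
print(identify_candidates(one_triangle, appearance_dictionary))   # -> ['triangular prism']
```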

[Effects of the Invention]

As described above, according to the present invention, the type of the three-dimensional shape of an object to be recognized, or a set of candidates for it, can be determined in a short time, so that the response speed can be improved when the invention is applied, for example, to robot vision.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing the principle of the present invention; FIGS. 2(a) and (b) are explanatory diagrams of the operation; FIG. 3 is a block diagram of the embodiment; FIG. 4 is a block diagram of a three-dimensional recognition device; and FIGS. 5(a)-(b), 6(a)-(c), and 7(a)-(e) are explanatory diagrams of the conventional example.

In the figures: 1 is an observation unit, 2 is a feature extraction unit, 6 is a visible-face table, 7 is a dictionary storage unit, 8 is a collation unit, and 9 is a write processing unit.

Claims (1)

[Scope of Claims]
A device for creating an appearance dictionary of solids, characterized by comprising:
a visible-face table (6) in which the number of visible faces obtained when a solid is observed from prescribed observation directions, the type of shape of each visible face, and the connection relations among the visible faces are described for each of the prescribed directions;
a dictionary storage unit (7) which stores the appearance of solids as connection relations of polygons;
a collation unit (8) which collates the contents described in the visible-face table (6) for each of the observation directions with the contents of the dictionary storage unit (7); and
a write processing unit (9) which, in accordance with the result of collation by the collation unit (8), writes into the dictionary storage unit (7) the combination of the number of visible faces, the type of each visible face, and the connection relations among the visible faces described in the visible-face table (6).
JP60105355A 1985-05-17 1985-05-17 Producing device for seeming dictionary of cube Granted JPS61262986A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP60105355A JPS61262986A (en) 1985-05-17 1985-05-17 Producing device for seeming dictionary of cube

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP60105355A JPS61262986A (en) 1985-05-17 1985-05-17 Producing device for seeming dictionary of cube

Publications (2)

Publication Number Publication Date
JPS61262986A true JPS61262986A (en) 1986-11-20
JPH0232669B2 JPH0232669B2 (en) 1990-07-23

Family

ID=14405417

Family Applications (1)

Application Number Title Priority Date Filing Date
JP60105355A Granted JPS61262986A (en) 1985-05-17 1985-05-17 Producing device for seeming dictionary of cube

Country Status (1)

Country Link
JP (1) JPS61262986A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS6431188A (en) * 1987-07-28 1989-02-01 Agency Ind Science Techn Image recognition equipment for mobile robot
WO2004095374A1 (en) * 2003-04-21 2004-11-04 Nec Corporation Video object recognition device and recognition method, video annotation giving device and giving method, and program
CN100371952C (en) * 2003-04-21 2008-02-27 日本电气株式会社 Video object recognition device and recognition method, video annotation giving device and giving method, and program
JP2015169515A (en) * 2014-03-06 2015-09-28 株式会社メガチップス Posture estimation system, program and posture estimation method

Also Published As

Publication number Publication date
JPH0232669B2 (en) 1990-07-23

Similar Documents

Publication Publication Date Title
US9327406B1 (en) Object segmentation based on detected object-specific visual cues
Mason et al. An object-based semantic world model for long-term change detection and semantic querying
Mundy Object recognition in the geometric era: A retrospective
Kragic et al. Vision for robotics
Crapo et al. Spaces of stresses, projections and parallel drawings for spherical polyhedra
He et al. Advances in sensing and processing methods for three-dimensional robot vision
JPS61262986A (en) Producing device for seeming dictionary of cube
Castore Solid modeling, aspect graphs, and robot vision
Ballard et al. Transformational Form Perception in 3D: Constraints, Algorithms, Implementation.
Welke et al. Active multi-view object search on a humanoid head
Nakano Stereo vision based single-shot 6d object pose estimation for bin-picking by a robot manipulator
Ponce et al. On image contours of projective shapes
Yoon et al. Human Recognition and Tracking in Narrow Indoor Environment using 3D Lidar Sensor
JP2021140429A (en) Three-dimentional model generation method
Gemme et al. 3D reconstruction of environments for planetary exploration
Huber et al. Using a hybrid of silhouette and range templates for real-time pose estimation
Ikeuchi Generating an interpretation tree from a CAD model to represent object configurations for bin-picking tasks
Luke et al. Linguistic spatial relations of three dimensional scenes using SIFT keypoints
Kornuta et al. Basic 3D solid recognition in RGB-D images
Zheng et al. Multi-sensor fusion based pose estimation for unmanned aerial vehicles on ships
Zhu et al. Geometrical modeling and real-time vision applications of a panoramic annular lens (PAL) camera system
Eich et al. Reasoning about geometry: An approach using spatial-descriptive ontologies
Bassmann et al. On the Recognition-by-Components Approach Applied to Computer Vision
Richtsfeld et al. Anytime perceptual grouping of 2D features into 3D basic shapes
Parodi et al. A linear complexity procedure for labelling line drawings of polyhedral scenes using vanishing points