JPS61114381A - Stereoscopic viewing device - Google Patents

Stereoscopic viewing device

Info

Publication number
JPS61114381A
Authority
JP
Japan
Prior art keywords
image
point
feature points
feature
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP59235839A
Other languages
Japanese (ja)
Inventor
Atsushi Kuno
敦司 久野
Toshimichi Masaki
俊道 政木
Kazuhiko Saka
坂 和彦
Nobuo Nakatsuka
中塚 信雄
Mitsutaka Kato
加藤 充孝
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Omron Corp
Original Assignee
Omron Tateisi Electronics Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Omron Tateisi Electronics Co filed Critical Omron Tateisi Electronics Co
Priority to JP59235839A priority Critical patent/JPS61114381A/en
Publication of JPS61114381A publication Critical patent/JPS61114381A/en
Pending legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/147Details of sensors, e.g. sensor lenses

Abstract

PURPOSE: To recognize an object at high speed by extracting feature points from each of the binary images obtained by three or more two-dimensional image pickup means, establishing correspondence between the feature points, and calculating the three-dimensional coordinates of the corresponding points on the object. CONSTITUTION: Feature points PA, PB, and PC of a point P on an object appear in images HA, HB, and HC obtained through TV cameras 1A, 1B, and 1C, respectively. The epipolar line l2, the image of the straight line connecting the focal point of camera 1A and the feature point PA, is set on the second image HB. The feature point PB of image HB lies on line l2, and the feature point PC of the third image HC lies at the intersection of epipolar lines l3 and m3. From this it is understood that the feature points PA-PC correspond to one another as images of point P. The three-dimensional coordinates of point P are therefore obtained as the intersection of the straight lines FAPA, FBPB, and FCPC.

Description

【発明の詳細な説明】 〈発明の技術分野〉 この発明は、3次元物体の形状を認識したり、或いは3
次元物体の位置や姿勢を計測するのに用いられる立体視
装置に関連し、殊にこの発明は、3次元物体上の点の3
次元座標を高速に計測する新規装置を提供する。
[Detailed Description of the Invention] <Technical Field of the Invention> This invention relates to a stereoscopic viewing device used to recognize the shape of a three-dimensional object or to measure its position and orientation; in particular, it provides a novel device for measuring the three-dimensional coordinates of points on a three-dimensional object at high speed.

〈発明の概要〉 この発明は、3台以上の2次元撮像手段を用いて各方向
より見た3次元物体の濃淡画像を求め、夫々濃淡画像を
2値化して2値画像を生成した後、各2値画像につき特
徴点を抽出し且つ特徴点間の対応付けを行なって、対応
する物体上の点の3次元座標を算出するよう構成したも
のであり、これにより立体視装置における物体認識処理
の高速化等をはかっている。
<Summary of the Invention> This invention obtains grayscale images of a three-dimensional object viewed from each of several directions using three or more two-dimensional imaging means, binarizes each grayscale image to generate binary images, extracts feature points from each binary image, and establishes correspondence between the feature points to calculate the three-dimensional coordinates of the corresponding points on the object, thereby speeding up object recognition processing in a stereoscopic viewing device.

〈発明の背景〉 従来の立体視装置は、2次元撮像装置を用いて3次元物
体の濃淡画像を求め、この濃淡画像につきエツジ強調処
理を施こした後、エツジを構成する点列に直線をあては
めて線画化し、この線画を解析することにより3次元物
体の形状認識等を行なっている。ところがこの方式の場
合、エツジ強調処理における演算コストが著しく高価に
つき、而も画像上のノイズの影響を受け易いため、常に
正確な形状認識を行なうことが困難である。
<Background of the Invention> Conventional stereoscopic viewing devices use a two-dimensional imaging device to obtain a grayscale image of a three-dimensional object, apply edge enhancement processing to this image, fit straight lines to the point sequences that make up the edges to produce a line drawing, and recognize the shape of the three-dimensional object by analyzing this line drawing. With this method, however, the computational cost of the edge enhancement processing is extremely high, and the processing is easily affected by image noise, so it is difficult to always perform accurate shape recognition.

また上記方式の他、3次元物体にスリット光を照射し、
このスリット光を走査しつつスリット光像を2次元撮像
装置で観測する方式のものも実施されているが、この方
式の場合、スリット光の走査を伴なうため、観測時間が
長くかかり、物体認識処理の高速化が望めないという欠
点がある。
In addition to the above method, another implemented approach irradiates a three-dimensional object with slit light and observes the slit-light image with a two-dimensional imaging device while scanning the light. However, because this method involves scanning the slit light, observation takes a long time, and the object recognition process cannot be made faster.

〈発明の目的〉 この発明は、上記従来方式の欠点を解消するためのもの
で、3次元物体上の点の3次元座標を正確且つ高速に計
測し得る立体視装置を提供することを目的とする。
<Purpose of the Invention> The present invention is intended to eliminate the drawbacks of the above conventional methods, and its purpose is to provide a stereoscopic viewing device that can measure the three-dimensional coordinates of points on a three-dimensional object accurately and at high speed.

〈発明の構成および効果〉 上記目的を達成するため、この発明では、物体を少なく
とも3方向から撮像する3台以上の2次元撮像手段と、
各撮像手段からのビデオ信号を受信して各方向より見た
物体の濃淡画像を生成する手段と、夫々濃淡画像のデー
タを2値化して2値画像を生成する手段と、夫々2値画
像につき特徴点を抽出する手段と、各2値画像における
特徴点を対応付けて対応する物体上の点の3次元座標を
算出する手段とで立体視装置を構成するようにした。
<Configuration and Effects of the Invention> To achieve the above object, the stereoscopic viewing device of this invention comprises three or more two-dimensional imaging means for imaging an object from at least three directions; means for receiving the video signals from the imaging means and generating grayscale images of the object viewed from the respective directions; means for binarizing the data of each grayscale image to generate binary images; means for extracting feature points from each binary image; and means for associating the feature points of the binary images with one another and calculating the three-dimensional coordinates of the corresponding points on the object.

この発明の立体視装置によれば、エツジ強調処理やスリ
ット光の走査のように複雑ないしは時間のかかる処理を
必要としないから、処理の高速化や演算コストの低減を
実現し得ると共に、ノイズの影響を受けにくく、物体認
識の正確性を向上し得る等、発明目的を達成した顕著な
効果を奏する。
According to the stereoscopic viewing device of this invention, no complicated or time-consuming processing such as edge enhancement or slit-light scanning is required, so processing can be made faster and computational cost lower; the device is also less susceptible to noise, improving the accuracy of object recognition. The invention thus achieves its purpose to a remarkable degree.

〈実施例の説明〉 第1図は3台のテレビカメラ1A、1B、1Cを用いた本発明の立体視装置を示す。前記の各テレビカメラ1A、1B、1Cは、3次元物体を異なる3方向から撮像するためのものであり、例えば2次元固体撮像素子で構成したもの等を用いる。各テレビカメラ1A、1B、1Cの出力側には、同期信号生成回路やビデオ信号増幅回路を有する画像入力部2が接続されており、この画像入力部2は各テレビカメラ1A、1B、1Cからのビデオ信号を受信して、各方向から見た物体の濃淡画像を生成する。第7図に、ある一方向から見た濃淡画像Gを一例として示してある。
<Description of Embodiments> FIG. 1 shows a stereoscopic viewing apparatus of the present invention using three television cameras 1A, 1B, and 1C. The cameras 1A, 1B, and 1C capture a three-dimensional object from three different directions and are constituted, for example, by two-dimensional solid-state image sensors. An image input unit 2, which has a synchronization signal generation circuit and a video signal amplification circuit, is connected to the output side of the cameras 1A, 1B, and 1C; it receives the video signals from the cameras and generates grayscale images of the object viewed from the respective directions. FIG. 7 shows an example of a grayscale image G viewed from one direction.

夫々の濃淡画像は、例えば浮動2値化回路より成る2値
化部3に入力され、この2値化部3において、夫々濃淡
画像のデータが2値化されて、2値画像が生成される。
Each grayscale image is input to a binarization unit 3 consisting, for example, of a floating binarization circuit, in which the data of each grayscale image are binarized to generate binary images.
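As an illustrative sketch (not taken from the patent), the binarization step can be written with a fixed global threshold; the patent's "floating binarization circuit" would instead adapt the threshold, which is omitted here for brevity:

```python
# Hedged sketch: fixed-threshold binarization of a grayscale image.
# The patent describes a "floating" (adaptive) binarization circuit;
# a fixed global threshold is used here only for illustration.
def binarize(gray, threshold=128):
    """Map a grayscale image (rows of 0-255 intensities) to a
    binary image of 0/1 pixels."""
    return [[1 if px >= threshold else 0 for px in row] for row in gray]

if __name__ == "__main__":
    img = [[10, 200, 130],
           [90, 255, 40]]
    print(binarize(img))  # [[0, 1, 1], [0, 1, 0]]
```

The function name `binarize` and the list-of-rows image representation are assumptions made for this sketch.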

そしてこれら各2値画像は特徴点抽出部4に入力され、
各画像毎に、画像上の物体の角部分に相当する点が特徴
列データを得る輪郭線追跡部41と、輪郭点系列データ
に基つき2値画像を線画化する線画化処理部42とを含
んでおり、線画を構成する各直線の交点等より前記特
徴点を抽出して、そのデータを出力する。
Each of these binary images is then input to the feature point extraction unit 4, which, for each image, extracts as feature points the points corresponding to corner portions of the object in the image. The unit includes a contour tracing section 41, which obtains contour-point sequence data, and a line-drawing processing section 42, which converts the binary image into a line drawing based on that data; the feature points are extracted from, for example, the intersections of the straight lines making up the line drawing, and their data are output.

第8図(1)(2)(3)は、線画化された3方向の各画像HA、HB、HCを夫々示すものであり、例えば各画像中、点PA、PB、PCは物体上の同一物点にかかる特徴点を示している。
FIG. 8 (1)-(3) shows the line-drawn images HA, HB, and HC in the three directions; for example, the points PA, PB, and PC in the respective images are feature points corresponding to the same object point on the object.

かくて各特徴点抽出部4からは各画像についての特徴点データが、また制御部6からは各テレビカメラ1A、1B、1Cの位置や姿勢、更にはレンズの結像距離等のカメラパラメータが夫々立体視部5に取り込まれるもので、例えば第3図に示す立体視部5においては、対応点決定部51が特徴点間の対応付けを行ない、つぎの三角測量部52が対応付けられた物体上の点の3次元座標を公知の三角測量法を用いて算出する。
Thus the feature point extraction unit 4 supplies the feature point data for each image, and the control unit 6 supplies camera parameters such as the positions and orientations of the television cameras 1A, 1B, and 1C and the imaging distances of the lenses, to the stereoscopic vision unit 5. In the stereoscopic vision unit 5 shown in FIG. 3, for example, a corresponding point determination section 51 establishes correspondence between the feature points, and a subsequent triangulation section 52 calculates the three-dimensional coordinates of the corresponding points on the object using a known triangulation method.

第9図は前記対応点決定部51における特徴点間の対応付け動作を示す原理図であり、各テレビカメラ1A、1B、1Cにかかる画像HA、HB、
FIG. 9 is a principle diagram illustrating the correspondence operation between feature points in the corresponding point determination section 51. The images HA, HB, and

HC(以下、第1画像HA、第2画像HB、第3画像HCという)上に物体上の物点Pについての特徴点PA、PB、PCがあらわれている。また第2画像HB上には、第1カメラ1Aの焦点FAと特徴点PAとを結ぶ直線FAPAの像(この直線像をエピポーララインという)l2が設定され、同様に第3画像HC上には、直線FAPAおよび直線FBPBの各エピポーラライン l3、m3が設定してある。
HC from the television cameras 1A, 1B, and 1C (hereinafter referred to as the first image HA, the second image HB, and the third image HC) contain the feature points PA, PB, and PC of an object point P. On the second image HB, the image l2 of the straight line FAPA connecting the focal point FA of the first camera 1A with the feature point PA (such a line image is called an epipolar line) is set; similarly, on the third image HC, the epipolar lines l3 and m3 of the straight lines FAPA and FBPB are set.

第10図(1)(2)(3)は上記各画像HA、HB、HCを示す。同図によれば、第2画像HBにおける特徴点PBはエピポーラライン l2上に位置し、第3画像HCにおける特徴点PCはエピポーラライン l3、m3の交点上に位置する。このことから特徴点PA、PB、PCは物点Pの画像として相互に対応する点であることが理解され、従って物点Pの三次元座標は直線FAPA、FBPB、FCPC
FIG. 10 (1)-(3) shows the images HA, HB, and HC. As shown, the feature point PB in the second image HB lies on the epipolar line l2, and the feature point PC in the third image HC lies at the intersection of the epipolar lines l3 and m3. From this it is understood that the feature points PA, PB, and PC correspond to one another as images of the object point P; the three-dimensional coordinates of the object point P are therefore determined by the straight lines FAPA, FBPB, and FCPC.

の交点として求めることができる。尚第10図(2)(3)は、第9図の直線FAPAの延長線上に位置する他の物点の特徴点RB、RCを併せて示しており、この場合特徴点RCはエピポーラライン l3、m3の交点上に位置しない。
上に位置しない。
They can be found as the intersection of these three lines. Note that FIG. 10 (2), (3) also shows the feature points RB and RC of another object point located on the extension of the straight line FAPA in FIG. 9; in this case the feature point RC does not lie at the intersection of the epipolar lines l3 and m3.
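The correspondence test described above — a candidate feature point in the third image must lie on both epipolar lines l3 and m3 — amounts to point-to-line distance checks in the image plane. A minimal sketch, assuming lines are given in ax + by + c = 0 form (a representation chosen here, not stated in the patent):

```python
def on_epipolar(pt, line, tol=0.5):
    """True if the 2D image point pt lies within tol pixels of the
    line a*x + b*y + c = 0 (the epipolar consistency test)."""
    a, b, c = line
    return abs(a * pt[0] + b * pt[1] + c) / (a * a + b * b) ** 0.5 <= tol

def consistent(pt, l3, m3, tol=0.5):
    """A candidate in the third image is accepted only if it lies
    on BOTH epipolar lines l3 and m3."""
    return on_epipolar(pt, l3, tol) and on_epipolar(pt, m3, tol)

# The true correspondence sits at the intersection of the two lines;
# a point off either line (like RC in FIG. 10) is rejected.
l3 = (1.0, -1.0, 0.0)   # the line y = x
m3 = (1.0, 1.0, -4.0)   # the line y = 4 - x; intersection at (2, 2)
print(consistent((2, 2), l3, m3))   # True
print(consistent((3, 3), l3, m3))   # False: on l3 but off m3
```

This double-line check is what lets a third camera disambiguate points such as RC that lie on one epipolar line by coincidence.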

第4図および第5図は前記特徴点抽出部4の他の実施例を示す。第4図に示す実施例は、2値画像を構成する各行につき白画素および黒画素の開始点やその連続量をコード化するランレングスコード化部43と、各行のランレングスコードを解析して特徴点を抽出するランレングスコード解析部44とを具備して成る。
FIGS. 4 and 5 show other embodiments of the feature point extraction unit 4. The embodiment shown in FIG. 4 comprises a run-length encoding section 43, which encodes, for each row of the binary image, the starting points and run lengths of the white and black pixels, and a run-length code analysis section 44, which analyzes the run-length code of each row to extract the feature points.
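The row-wise run-length code produced by section 43 can be sketched as follows; the (value, start, length) triple format is an assumption made for illustration, not the patent's encoding:

```python
def run_lengths(row):
    """Run-length encode one row of a binary image as
    (pixel_value, start_index, run_length) triples."""
    runs, start = [], 0
    for i in range(1, len(row) + 1):
        # Close the current run at end-of-row or on a value change.
        if i == len(row) or row[i] != row[start]:
            runs.append((row[start], start, i - start))
            start = i
    return runs

row = [0, 0, 1, 1, 1, 0]
print(run_lengths(row))  # [(0, 0, 2), (1, 2, 3), (0, 5, 1)]
```

Comparing where runs start and end between adjacent rows is one simple way such a code can expose corner-like feature points without touching individual pixels again.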

また第5図に示す実施例は、2値画像を所定サイズのマ
スクで走査し、このマスク内の部分パターンを部分パタ
ーン選択部46がメモリより読み出した基準の部分パタ
ーンと比較照合部45にて順次照合して、特徴点を抽出
するものである。
In the embodiment shown in FIG. 5, the binary image is scanned with a mask of a predetermined size, and a comparison section 45 sequentially matches the partial pattern inside the mask against reference partial patterns that a partial pattern selection section 46 reads out of memory, thereby extracting the feature points.
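The mask-scanning comparison of FIG. 5 is essentially exact template matching of small binary windows against stored reference patterns. A sketch under that assumption (function name and pattern contents are illustrative):

```python
def find_pattern(img, pattern):
    """Slide a mask over a binary image and return (row, col)
    positions where the window exactly equals the reference
    partial pattern."""
    ph, pw = len(pattern), len(pattern[0])
    hits = []
    for y in range(len(img) - ph + 1):
        for x in range(len(img[0]) - pw + 1):
            if all(img[y + dy][x + dx] == pattern[dy][dx]
                   for dy in range(ph) for dx in range(pw)):
                hits.append((y, x))
    return hits

# A 2x2 reference pattern that fires on the top-left corner of a blob:
img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
corner = [[0, 0],
          [0, 1]]
print(find_pattern(img, corner))  # [(0, 0)]
```

In hardware, section 46 would cycle through a small library of such corner patterns while the mask scans the image once.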

上記画像入力部2、2値化部3、特徴点抽出部4および
立体視部5の各動作は、制御部6が発する制御情報に基
づき一連に制御されるもので、特に第1図に示す実施例
では、3台のテレビカメラIA、IB、ICからのビデ
オ信号を順次切換える等して、夫々単一の構成各部にて
対応する処理が実行される。
The operations of the image input unit 2, binarization unit 3, feature point extraction unit 4, and stereoscopic vision unit 5 are sequentially controlled based on control information issued by the control unit 6. In particular, in the embodiment shown in FIG. 1, the video signals from the three television cameras 1A, 1B, and 1C are switched in sequence, so that the corresponding processing for all three is carried out by a single set of components.

第6図は、各テレビカメラ1A、1B、1Cに対応して夫々3個の画像入力部2A、2B、2C、2値化部3A、3B、3C、特徴点抽出部4A、4
B、4Cを設けた他の実施例を示すもので、この実施例
では回路構成は複雑となるが、処理が平行して進むため
、処理の高速化を促進できる。
FIG. 6 shows another embodiment provided with three image input units 2A, 2B, 2C, binarization units 3A, 3B, 3C, and feature point extraction units 4A, 4B, 4C, one set for each television camera 1A, 1B, 1C. Although this embodiment has a more complex circuit configuration, the processing proceeds in parallel, so higher processing speed can be achieved.

【図面の簡単な説明】[Brief explanation of the drawing]

第1図はこの発明にかかる立体視装置のブロック図、第
2図は特徴点抽出部の構成例を示すブロック図、第3図
は立体視部の構成例を示すブロック図、第4図および第
5図は特徴点抽出部の他の実施例を示すブロック図、第
6図はこの発明にかかる他の装置例を示すブロック図、
第7図は濃淡画像の一例を示す図、第8図(1)(2)(3)は線画化された画像を示す図、第9図および第10図は特徴点間の対応付け方法の原理を説明するための図である。 1A、1B、1C・・・テレビカメラ 2・・・画像入力部 3・・・2値化部 4・・・特徴点抽出部 5・・・立体視部 特許出願人 立石電機株式会社
FIG. 1 is a block diagram of a stereoscopic viewing device according to the present invention; FIG. 2 is a block diagram showing an example configuration of the feature point extraction unit; FIG. 3 is a block diagram showing an example configuration of the stereoscopic vision unit; FIGS. 4 and 5 are block diagrams showing other embodiments of the feature point extraction unit; FIG. 6 is a block diagram showing another example of a device according to the present invention; FIG. 7 is a diagram showing an example of a grayscale image; FIG. 8 (1)-(3) shows line-drawn images; and FIGS. 9 and 10 are diagrams for explaining the principle of the method of establishing correspondence between feature points. 1A, 1B, 1C: television cameras; 2: image input unit; 3: binarization unit; 4: feature point extraction unit; 5: stereoscopic vision unit. Patent applicant: Tateisi Electric Co., Ltd.

Claims (1)

【特許請求の範囲】[Claims] 物体を少なくとも3方向から撮像する3台以上の2次元
撮像手段と、各撮像手段からのビデオ信号を受信して各
方向より見た物体の濃淡画像を生成する手段と、夫々濃
淡画像のデータを2値化して2値画像を生成する手段と
、夫々2値画像につき特徴点を抽出する手段と、各2値
画像における特徴点を対応付けて対応する物体上の点の
3次元座標を算出する手段とを具備して成る立体視装置
A stereoscopic viewing device comprising: three or more two-dimensional imaging means for capturing images of an object from at least three directions; means for receiving the video signals from the imaging means and generating grayscale images of the object viewed from the respective directions; means for binarizing the data of each grayscale image to generate binary images; means for extracting feature points from each binary image; and means for associating the feature points of the binary images with one another and calculating the three-dimensional coordinates of the corresponding points on the object.
JP59235839A 1984-11-07 1984-11-07 Stereoscopic viewing device Pending JPS61114381A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP59235839A JPS61114381A (en) 1984-11-07 1984-11-07 Stereoscopic viewing device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP59235839A JPS61114381A (en) 1984-11-07 1984-11-07 Stereoscopic viewing device

Publications (1)

Publication Number Publication Date
JPS61114381A true JPS61114381A (en) 1986-06-02

Family

ID=16992026

Family Applications (1)

Application Number Title Priority Date Filing Date
JP59235839A Pending JPS61114381A (en) 1984-11-07 1984-11-07 Stereoscopic viewing device

Country Status (1)

Country Link
JP (1) JPS61114381A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS61214888A (en) * 1985-03-20 1986-09-24 Toshiba Corp Positioning device for picture
JPS6310280A (en) * 1986-07-01 1988-01-16 Omron Tateisi Electronics Co 3-eye stereoscopic device
JPS6310279A (en) * 1986-07-01 1988-01-16 Omron Tateisi Electronics Co Multi-eye stereoscopic device
JPH01140396A (en) * 1987-11-27 1989-06-01 Hitachi Ltd Security supervising device

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS5890268A (en) * 1981-11-24 1983-05-28 Agency Of Ind Science & Technol Detector of 3-dimensional object



Similar Documents

Publication Publication Date Title
KR101121034B1 (en) System and method for obtaining camera parameters from multiple images and computer program products thereof
JPH0685183B2 (en) Identification method of 3D object by 2D image
CN109544628B (en) Accurate reading identification system and method for pointer instrument
CN107370950B (en) Focusing process method, apparatus and mobile terminal
US7280685B2 (en) Object segmentation from images acquired by handheld cameras
CN115761126A (en) Three-dimensional reconstruction method and device based on structured light, electronic equipment and storage medium
JPH04198741A (en) Shape defect detecting device
JP3862402B2 (en) 3D model generation apparatus and computer-readable recording medium on which 3D model generation program is recorded
JP3516118B2 (en) Object recognition method and object recognition device
JPS61114381A (en) Stereoscopic viewing device
JPH08210847A (en) Image processing method
JP2000185060A (en) Method for extracting margin line of tooth
JP2009211561A (en) Depth data generator and depth data generation method, and program thereof
Furukawa et al. Simultaneous shape registration and active stereo shape reconstruction using modified bundle adjustment
CN114120362A (en) Gesture collection method and device, electronic equipment and readable storage medium
Mikrut et al. Integration of image and laser scanning data based on selected example
JP2739319B2 (en) 3D shape reproduction device
CN113269207B (en) Image feature point extraction method for grid structure light vision measurement
Kawasaki et al. Registration and entire shape acquisition for grid based active one-shot scanning techniques
JPH0534117A (en) Image processing method
Wang et al. Research of depth information acquisition with two stage structured light method
JP2004177295A (en) Distance information selection means and distance information selection device
JP2966711B2 (en) Image contour extraction method and apparatus
CN111860544A (en) Projection-assisted clothes feature extraction method and system
JPH047805B2 (en)