JP2010256296A - Omnidirectional three-dimensional space recognition input apparatus - Google Patents


Info

Publication number
JP2010256296A
JP2010256296A (application JP2009109678A)
Authority
JP
Japan
Prior art keywords
omnidirectional
image
camera
imaging
dimensional
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2009109678A
Other languages
Japanese (ja)
Inventor
Masahiro Nagata
真啓 永田
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
NIPPON COMPUTER KK
Original Assignee
NIPPON COMPUTER KK
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by NIPPON COMPUTER KK filed Critical NIPPON COMPUTER KK
Priority to JP2009109678A priority Critical patent/JP2010256296A/en
Publication of JP2010256296A publication Critical patent/JP2010256296A/en
Pending legal-status Critical Current

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To obtain omnidirectional, high-resolution input with no blind spots at all times in a three-dimensional space recognition input apparatus that acquires three-dimensional information such as stereoscopic images, distances, and shapes.

SOLUTION: The apparatus extracts stereoscopic images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space using two or more imaging means, each of which obtains an omnidirectional image through an optical system composed of mirror parts 3 and 4 and camera parts 3c and 4c. By arranging the two or more omnidirectional imaging means so that their fields of view optically overlap in the high-resolution region and so that the blind spots caused by mutual reflection are eliminated, the apparatus can at all times capture high-quality stereoscopic images and measure highly accurate three-dimensional position coordinates.

COPYRIGHT: (C) 2011, JPO & INPIT

Description

The present invention relates to a three-dimensional space recognition input device that uses an omnidirectional camera, which captures the entire surrounding space in a single shot, to acquire omnidirectional stereo images and three-dimensional spatial coordinate and shape information from high-resolution omnidirectional images with no blind spots.

Conventionally, the viewing angle of cameras used for stereo imaging is limited. When acquiring three-dimensional information such as stereo images, distances, and shapes, the measurable field of view is therefore narrow, and large or moving subjects fall outside the frame. FIG. 1 shows an example of imaging with ordinary cameras in the stereo method. The viewing angles of the left camera (1) (base camera) and the right camera (2) (reference camera) are (1a) and (2a), respectively; three-dimensional information can be acquired by the stereo method only within the narrow intersection region Q bounded by the view-angle lines (11) and (22).
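The extent of the usable intersection region Q follows directly from the camera geometry. A minimal sketch in Python, with illustrative numbers (the patent gives no specific baseline or viewing angle), computes the nearest depth at which the fields of view of two parallel cameras begin to overlap:

```python
import math

def overlap_start_depth(baseline, fov_deg):
    """Nearest depth z at which the inner field-of-view edges of two
    parallel cameras, separated laterally by `baseline`, cross:
    z = baseline / (2 * tan(fov / 2)).  Only beyond this depth does the
    stereo-usable region Q exist."""
    return baseline / (2.0 * math.tan(math.radians(fov_deg) / 2.0))

# Example: 0.5 m baseline, 60-degree horizontal field of view.
print(overlap_start_depth(0.5, 60))  # ≈ 0.433 m
```

Anything closer than this depth is seen by at most one camera, which is why the region Q in FIG. 1 is so restricted.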

Even in a stereo method using omnidirectional cameras capable of acquiring omnidirectional images, each camera appears in the field of view of the other, producing blind-spot regions. FIG. 2 shows an example of imaging with omnidirectional cameras in the stereo method. In the fields of view of the hyperboloid mirror part (3), which projects all directions into the optical system of the left camera (base camera), and the hyperboloid mirror part (4) of the right camera (reference camera), the angles of view (3b) and (4b), each bounded by the tangent lines (31) and (41) drawn from the respective hyperboloid focal points (32) and (42) to the opposing omnidirectional camera, become blind spots and obstruct the acquisition of three-dimensional information in all directions. The field of view of the imaging system in this figure is therefore limited to the regions (3a) and (4a) out of the full surround.
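The width of each blind angle (3b), (4b) follows from the tangent-line construction just described. A minimal sketch, assuming the opposing camera body can be modelled as a disc of known radius (the patent gives no dimensions; the numbers are illustrative):

```python
import math

def mutual_blind_angle(separation, body_radius):
    """Full angular width of the blind spot that one omnidirectional
    camera creates in the other's view: the cone of tangent rays from
    the mirror's focal point to the opposing camera body, modelled as a
    disc of radius `body_radius` at distance `separation`."""
    return 2.0 * math.asin(body_radius / separation)

# Example: cameras 1.0 m apart, opposing body modelled as a 5 cm disc.
print(math.degrees(mutual_blind_angle(1.0, 0.05)))  # ≈ 5.73 degrees
```

The blind angle shrinks as the cameras move apart and grows as the camera bodies get larger, which is the obstacle the multi-camera arrangements below are designed to remove.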

Patent Document 1 describes a method in which a single omnidirectional camera is moved to acquire a time series of images, and the images obtained at two positions and times are used for stereo vision or for measuring the three-dimensional coordinates of shape and structural target points in three-dimensional space. This method produces no camera blind spots, but it is intended for static scenes such as streetscapes and is difficult to apply to moving objects; moreover, measuring a target point in space inherently requires a fixed travel time.
Japanese Unexamined Patent Application Publication No. 2002-183714 (JP 2002-183714 A)

The object of the present invention is, for a three-dimensional space recognition input device that acquires stereo images and three-dimensional information such as distance and shape, to combine omnidirectional cameras so as to cover the full omnidirectional field of view, which is difficult with the limited viewing angle of ordinary cameras, and thereby to construct a device that can at all times acquire omnidirectional, high-resolution input with no blind spots.

The three-dimensional space recognition input device of claim 1 uses two imaging means, each of which captures an omnidirectional image through an optical system composed of a mirror part and a camera part, to extract stereo images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space. By arranging the two omnidirectional imaging means so that their high-resolution field-of-view regions optically overlap, the device is characterized by means for accurately measuring the position coordinates of corresponding target points between images, using the high-resolution imaging regions of the omnidirectional images that each capture the entire surrounding space.

The three-dimensional space recognition input device of claim 2 uses three or more imaging means, each of which captures an omnidirectional image through an optical system composed of a mirror part and a camera part, to extract stereo images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space. The device is characterized by means for eliminating, by using three or more imaging means, the blind spots caused by the imaging means themselves appearing in the all-around imaging space.

In an omnidirectional camera, when the omnidirectional space projected onto the mirror part is photographed by the camera part, the resolution of the camera's image sensor is uniform, so the closer a region lies to the centre of the mirror, the smaller the sensor area into which it is projected compared with the periphery of the mirror's outer edge, and the lower its effective resolution. It is therefore effective to design the optical system of the three-dimensional space recognition input device so that the periphery of the mirror's outer edge, where the highest resolution is preserved, forms the imaging field of view.
According to the invention of claim 1, two imaging means each capture an omnidirectional image through an optical system composed of a mirror part and a camera part, and are optically arranged so that the high-resolution field-of-view region of each imaging means becomes the working region for stereo vision or triangulation (the stereo method). From the acquired omnidirectional images of the entire surrounding space, the position coordinates of corresponding target points on the base-camera and reference-camera images can then be measured accurately, whether for stereo images or for triangulation between the captured images.
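The resolution falloff toward the mirror centre can be illustrated with a uniform sensor and a simple equiangular projection model (an assumption made purely for illustration; the actual mapping of a hyperboloid mirror differs, but the trend is the same). Equal elevation bands then map to equal-width image rings, and a ring's pixel budget grows with its radius:

```python
import math

def band_pixel_area(rho_inner, rho_outer):
    """Image-plane area (proportional to pixel count on a uniform
    sensor) of the annulus between two normalised image radii."""
    return math.pi * (rho_outer ** 2 - rho_inner ** 2)

# Nine equal elevation bands, mapped to nine equal-width rings under
# the assumed equiangular model: the outermost ring (mirror edge)
# receives by far the most pixels, the innermost the fewest.
for k in range(1, 10):
    inner, outer = (k - 1) / 9, k / 9
    print(f"band {k}: relative pixel area {band_pixel_area(inner, outer):.3f}")
```

This is the quantitative reason the high-resolution stereo region is placed at the periphery of the mirror's outer edge.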

According to the invention of claim 2, three or more imaging means each capture an omnidirectional image through an optical system composed of a mirror part and a camera part, and the device extracts stereo images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space. Because using three or more imaging means eliminates the blind spots caused by the other imaging means appearing in the all-around imaging space, the subject can at all times be imaged and measured as a continuous, omnidirectional surrounding space with no blind spots.

FIG. 1 shows an example of a three-dimensional space recognition input device using ordinary cameras.
FIG. 2 shows an example of a three-dimensional space recognition input device using omnidirectional cameras.
FIG. 3 illustrates the principle of the three-dimensional space recognition input device according to the first embodiment of the present invention.
FIG. 4 illustrates the configuration of the three-dimensional space recognition input device according to the second embodiment of the present invention.
FIG. 5 illustrates the principle of the three-dimensional space recognition input device according to the second embodiment of the present invention.
FIG. 6 illustrates another configuration of the three-dimensional space recognition input device according to the second embodiment of the present invention.

Embodiments of the present invention will now be described with reference to the drawings.

FIG. 3 illustrates the principle of the three-dimensional space recognition input device according to the first embodiment of the present invention.

In FIG. 3, the three-dimensional space recognition input device uses a first imaging means (base camera), which captures an omnidirectional image through an optical system composed of mirror part (3) and camera part (3c), and a second imaging means (reference camera), which captures an omnidirectional image through an optical system composed of mirror part (4) and camera part (4c), to photograph the omnidirectional space around the Z axis and extract stereo images and three-dimensional coordinate values representing the shape and structure of a subject. In an omnidirectional camera, however, when the camera part photographs the omnidirectional space projected onto the mirror part, the uniform resolution of the image sensor means that regions near the mirror's centre are projected onto progressively smaller sensor areas than regions near the mirror's outer edge, lowering their effective resolution. The optical system must therefore be designed so that the periphery of the mirror's outer edge, where the highest resolution is preserved, forms the imaging field of view for stereo vision or triangulation (the stereo method).

In the figure, the Z-axis viewing angle of the first imaging means (3 and 3c) is divided into a region (3e) in which high-resolution imaging is possible and a region (3f) of comparatively low resolution. Similarly, the Z-axis viewing angle of the second imaging means (4 and 4c) is divided into a high-resolution region (4e) and a low-resolution region (4f). Although the figure illustrates the division between the high- and low-resolution regions up to the viewing angle perpendicular to the Z axis, this boundary depends on the optical design of the imaging means composed of the mirror part and the camera part and is not limited to this example. The present three-dimensional space recognition input device arranges the two omnidirectional imaging means so that their high-resolution viewing angles (3e) and (4e) optically overlap in region (3g); it images the entire surrounding space there and can accurately measure the position coordinates of target points on an object (3o) lying in the high-resolution imaging region of each omnidirectional image. Moreover, since in this configuration neither imaging means appears in the other's image, no blind-spot region exists anywhere in the captured omnidirectional field of view.

FIG. 4 illustrates the configuration of the three-dimensional space recognition input device according to the second embodiment of the present invention. FIG. 5 illustrates the principle of the same device. FIG. 6 illustrates another configuration of the same device.

In FIG. 4, the three-dimensional space recognition input device comprises a first imaging means that captures an omnidirectional image through an optical system built around mirror part (5), a second imaging means built around mirror part (6), and a third imaging means built around mirror part (7), arranged as an equilateral triangle. With this configuration the device can photograph the omnidirectional space and extract stereo images and three-dimensional coordinate values representing the shape and structure of a subject from omnidirectional images with no blind spots. The arrangement need not be an equilateral triangle; any arrangement that eliminates the blind spots between the imaging means is acceptable, with the difference absorbed in the calculation of the extracted images and three-dimensional coordinate values.
The viewing angle (51) shared by the first imaging means of mirror part (5) and the second imaging means of mirror part (6) is the interior angle, at the XY coordinate origin O, between the segments joining O to the hyperboloid focal point (52) of mirror part (5) and to the hyperboloid focal point (62) of mirror part (6); this pair images the azimuth sector in the direction F1. The viewing angle (51) lies within the blind-spot-free viewing angles (5b) and (6a) of the first and second imaging means. Likewise, the viewing angle (61) shared by the second imaging means of mirror part (6) and the third imaging means of mirror part (7) is the interior angle at O between the segments to the focal points (62) and (72); this pair images the azimuth sector in the direction F2, and (61) lies within the blind-spot-free viewing angles (6b) and (7a). The viewing angle (71) shared by the first imaging means of mirror part (5) and the third imaging means of mirror part (7) is the interior angle at O between the segments to the focal points (52) and (72); this pair images the azimuth sector in the direction F3, and (71) lies within the blind-spot-free viewing angles (7b) and (5a).
Thus, by using three or more imaging means, the blind spots caused by the imaging means appearing in each other's all-around view are eliminated, and continuous stereo images and three-dimensional coordinate values representing the shape and structure of the subject can be extracted in all directions.

FIG. 5 explains how an object (7o) is detected in a three-dimensional space recognition input device having three or more imaging means. Among the first imaging means built around mirror part (5), the second built around mirror part (6), and the third built around mirror part (7), the stereo image of the object (7o) and the three-dimensional coordinate values representing its shape and structure are obtained using region A of the viewing angle (5b) of the first imaging means and region B of the viewing angle (6a) of the second imaging means. A feature point, such as an edge of the object (7o), is captured by the first imaging means as the azimuth (5c) of its incident optical path (5l), and by the second imaging means as the azimuth (6c) of its incident optical path (6l); its three-dimensional coordinates can then be derived from the known distance N between the two imaging means. Likewise, the pairing of the second and third imaging means, and of the first and third imaging means, yields stereo images with no blind spots in any direction and three-dimensional coordinate values representing the shape and structure of the subject.
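Deriving coordinates from the two azimuths (5c) and (6c) and the known baseline N is ordinary triangulation: intersect the two bearing rays. A minimal planar sketch in Python (the viewpoint positions and bearing values are illustrative, not taken from the figure):

```python
import math

def triangulate(p1, b1, p2, b2):
    """Intersect the ray from point p1 with world-frame bearing b1
    (radians) and the ray from p2 with bearing b2; return the point of
    intersection.  Solves p1 + t1*d1 = p2 + t2*d2 as a 2x2 system."""
    d1 = (math.cos(b1), math.sin(b1))
    d2 = (math.cos(b2), math.sin(b2))
    denom = d1[0] * (-d2[1]) - d1[1] * (-d2[0])
    if abs(denom) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    t1 = (rx * (-d2[1]) - ry * (-d2[0])) / denom
    return (p1[0] + t1 * d1[0], p1[1] + t1 * d1[1])

# Two viewpoints a known baseline N apart, each reporting a bearing to
# the same feature point (example values).
N = 2.0
target = triangulate((0.0, 0.0), math.radians(45),
                     (N, 0.0), math.radians(135))
print(target)  # ≈ (1.0, 1.0)
```

In the device, each azimuth comes from the pixel position of the feature point in one omnidirectional image; the third dimension follows from applying the same construction in the elevation plane.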

Furthermore, to identify feature points such as the edges of the object (7o) seen by the first imaging means of mirror part (5) and the second imaging means of mirror part (6), the image captured along the incident optical path (7l) in the otherwise unused viewing region C of the third imaging means of mirror part (7) is added to the processing. Because the image captured in region C of the third imaging means views the object from a direction intermediate between those of the first and second imaging means, it can serve as a reference image; identifying feature points such as the edges of the object (7o) thus becomes easier and more accurate than deriving them from the images of the first and second imaging means alone.

FIG. 6 explains the configuration of a three-dimensional space recognition input device having four imaging means. A first imaging means that captures an omnidirectional image through an optical system built around mirror part (8), a second built around mirror part (9), a third built around mirror part (10), and a fourth built around mirror part (11) are arranged as a square; the device can then photograph the omnidirectional space and extract stereo images and three-dimensional coordinate values representing the shape and structure of a subject from omnidirectional images with no blind spots. The arrangement need not be a square; any arrangement that eliminates the blind spots between the imaging means is acceptable, with the difference absorbed in the calculation of the extracted images and three-dimensional coordinate values.
The viewing angle (81) shared by the first imaging means of mirror part (8) and the second imaging means of mirror part (9) is the interior angle, at the XY coordinate origin O, between the segments joining O to the hyperboloid focal points of mirror parts (8) and (9); it lies within the blind-spot-free viewing angles (8b) and (9a) of the first and second imaging means. The viewing angle (91) shared by the second imaging means of mirror part (9) and the third imaging means of mirror part (10) is the interior angle at O between the segments to the focal points of mirror parts (9) and (10); it lies within the blind-spot-free viewing angles (9b) and (10a). The viewing angle (101) shared by the third imaging means of mirror part (10) and the fourth imaging means of mirror part (11) is the interior angle at O between the segments to the focal points of mirror parts (10) and (11); it lies within the blind-spot-free viewing angles (10b) and (11a). The viewing angle (111) shared by the first imaging means of mirror part (8) and the fourth imaging means of mirror part (11) is the interior angle at O between the segments to the focal points of mirror parts (8) and (11); it lies within the blind-spot-free viewing angles (11b) and (8a).
Thus, by using four imaging means, the blind spots caused by the imaging means appearing in each other's all-around view are eliminated, and continuous stereo images and three-dimensional coordinate values representing the shape and structure of the subject can be extracted in all directions.
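The blind-spot-elimination property of these multi-camera layouts can be checked numerically. In the sketch below, each camera body is modelled as a disc (an illustrative assumption, as are all radii), three cameras sit on an equilateral triangle as in FIG. 4, and every azimuth is sampled to count how many cameras have an unobstructed line of sight; the same check applies to the four-camera layout of FIG. 6.

```python
import math

def seg_point_dist(a, b, p):
    """Shortest distance from point p to the segment a--b."""
    ax, ay = a; bx, by = b; px, py = p
    vx, vy = bx - ax, by - ay
    L2 = vx * vx + vy * vy
    t = 0.0 if L2 == 0 else max(0.0, min(1.0, ((px - ax) * vx + (py - ay) * vy) / L2))
    return math.hypot(px - (ax + t * vx), py - (ay + t * vy))

def cameras_seeing(point, cams, body_radius):
    """Indices of cameras whose line of sight to `point` is not blocked
    by another camera body (modelled as a disc of `body_radius`)."""
    seen = []
    for i, cam in enumerate(cams):
        blocked = any(seg_point_dist(cam, point, other) < body_radius
                      for j, other in enumerate(cams) if j != i)
        if not blocked:
            seen.append(i)
    return seen

# Three cameras on an equilateral triangle (circumradius 1, body radius
# 0.1 -- illustrative values), target points sampled on a circle of
# radius 10 all around the rig.
cams = [(math.cos(a), math.sin(a))
        for a in (math.pi / 2, math.pi / 2 + 2 * math.pi / 3, math.pi / 2 - 2 * math.pi / 3)]
worst = min(len(cameras_seeing((10 * math.cos(t), 10 * math.sin(t)), cams, 0.1))
            for t in [k * 2 * math.pi / 360 for k in range(360)])
print("minimum cameras seeing any azimuth:", worst)  # at least 2: a stereo pair covers every direction
```

With only two cameras the same check reports azimuths seen by a single camera (the mutual-reflection blind spots of FIG. 2); with three or more, every sampled direction retains a usable stereo pair.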

According to the three-dimensional space recognition input device of the embodiments of the present invention, an omnidirectional camera that captures the entire surrounding space in a single shot makes it possible to acquire stereo video and three-dimensional spatial coordinate and shape information from high-resolution omnidirectional images with no blind spots.

1 Left ordinary camera for stereo imaging (base camera)
2 Right ordinary camera for stereo imaging (reference camera)
3 Omnidirectional base camera for stereo imaging
4 Omnidirectional reference camera for stereo imaging
5 Omnidirectional camera (mirror part) for stereo imaging, 1 of 3
6 Omnidirectional camera (mirror part) for stereo imaging, 2 of 3
7 Omnidirectional camera (mirror part) for stereo imaging, 3 of 3
8 Omnidirectional camera (mirror part) for stereo imaging, 1 of 4
9 Omnidirectional camera (mirror part) for stereo imaging, 2 of 4
10 Omnidirectional camera (mirror part) for stereo imaging, 3 of 4
11 Omnidirectional camera (mirror part) for stereo imaging, 4 of 4


Claims (2)

1. A three-dimensional space recognition input device that extracts, using two imaging means each of which captures an omnidirectional image through an optical system composed of a mirror part and a camera part, stereo images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space, the device comprising means for accurately measuring the position coordinates of corresponding target points between images by arranging the two omnidirectional imaging means so that their high-resolution field-of-view regions optically overlap, and by using the high-resolution imaging regions of the omnidirectional images that each capture the entire surrounding space.

2. A three-dimensional space recognition input device that extracts, using three or more imaging means each of which captures an omnidirectional image through an optical system composed of a mirror part and a camera part, stereo images and three-dimensional coordinate values representing the shape and structure of a subject in the imaging space, the device comprising means for eliminating, by using three or more imaging means, the blind spots caused by the imaging means appearing in each other's all-around imaging space.




JP2009109678A 2009-04-28 2009-04-28 Omnidirectional three-dimensional space recognition input apparatus Pending JP2010256296A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2009109678A JP2010256296A (en) 2009-04-28 2009-04-28 Omnidirectional three-dimensional space recognition input apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2009109678A JP2010256296A (en) 2009-04-28 2009-04-28 Omnidirectional three-dimensional space recognition input apparatus

Publications (1)

Publication Number Publication Date
JP2010256296A true JP2010256296A (en) 2010-11-11

Family

ID=43317361

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2009109678A Pending JP2010256296A (en) 2009-04-28 2009-04-28 Omnidirectional three-dimensional space recognition input apparatus

Country Status (1)

Country Link
JP (1) JP2010256296A (en)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011182003A (en) * 2010-02-26 2011-09-15 Let's Corporation Panorama camera and 360-degree panorama stereoscopic video system
WO2012124275A1 (en) * 2011-03-11 2012-09-20 Sony Corporation Image processing apparatus, image processing method, and program
JP2012190299A (en) * 2011-03-11 2012-10-04 Sony Corp Image processing system and method, and program
EP2671045A1 (en) * 2011-03-11 2013-12-11 Sony Corporation Image processing apparatus, image processing method, and program
CN103443582A (en) * 2011-03-11 2013-12-11 索尼公司 Image processing apparatus, image processing method, and program
US20130335532A1 (en) * 2011-03-11 2013-12-19 Sony Corporation Image processing apparatus, image processing method, and program
EP2671045A4 (en) * 2011-03-11 2014-10-08 Sony Corp Image processing apparatus, image processing method, and program
US10291845B2 (en) 2015-08-17 2019-05-14 Nokia Technologies Oy Method, apparatus, and computer program product for personalized depth of field omnidirectional video
EP3133557A1 (en) * 2015-08-17 2017-02-22 Nokia Technologies Oy Method, apparatus, and computer program product for personalized depth of field omnidirectional video
JP2019527495A (en) * 2016-07-01 2019-09-26 フェイスブック,インク. Stereo image capture
JP7133478B2 (en) 2016-07-01 2022-09-08 メタ プラットフォームズ, インク. Stereoscopic image capture
US11004218B2 (en) 2018-03-15 2021-05-11 Hitachi, Ltd. Three-dimensional image processing device and three-dimensional image processing method for object recognition from a vehicle
CN112219086A (en) * 2018-09-18 2021-01-12 株式会社日立制作所 Stereo camera, vehicle-mounted lamp assembly and stereo camera system
US11290703B2 (en) 2018-09-18 2022-03-29 Hitachi, Ltd. Stereo camera, onboard lighting unit, and stereo camera system
CN112219086B (en) * 2018-09-18 2022-05-06 株式会社日立制作所 Stereo camera, vehicle-mounted lamp assembly and stereo camera system
JP2021012075A (en) * 2019-07-05 2021-02-04 株式会社日立製作所 Stereo camera
JP7134925B2 (en) 2019-07-05 2022-09-12 株式会社日立製作所 stereo camera

Similar Documents

Publication Publication Date Title
US9625258B2 (en) Dual-resolution 3D scanner
US9843788B2 (en) RGB-D imaging system and method using ultrasonic depth sensing
JP6223169B2 (en) Information processing apparatus, information processing method, and program
US20140285638A1 (en) Reference image techniques for three-dimensional sensing
JP2010256296A (en) Omnidirectional three-dimensional space recognition input apparatus
KR101737085B1 (en) 3D camera
JPWO2008053649A1 (en) Wide-angle image acquisition method and wide-angle stereo camera device
JP2008096162A (en) Three-dimensional distance measuring sensor and three-dimensional distance measuring method
JP2017175507A5 (en)
JP2010276433A (en) Imaging device, image processor, and distance measuring device
US20210150744A1 (en) System and method for hybrid depth estimation
KR20200124271A (en) Imaging device, image processing device, and image processing method
JP2011149931A (en) Distance image acquisition device
CN108805921A (en) Image-taking system and method
JP2015188251A (en) Image processing system, imaging apparatus, image processing method, and program
JP2010049152A (en) Focus information detecting device
US8908012B2 (en) Electronic device and method for creating three-dimensional image
JP7300895B2 (en) Image processing device, image processing method, program, and storage medium
WO2014171438A1 (en) Three-dimensional shape measurement device, three-dimensional shape measurement method, and three-dimensional shape measurement program
JP2004364212A (en) Object photographing apparatus, object photographing method and object photographing program
JP2008241609A (en) Distance measuring system and distance measuring method
JP2005275789A (en) Three-dimensional structure extraction method
CN206573465U (en) Imaging device on the outside of a kind of ring
KR101857977B1 (en) Image apparatus for combining plenoptic camera and depth camera, and image processing method
WO2018110264A1 (en) Imaging device and imaging method