JPH11120361A - Three-dimensional shape restoring device and restoring method - Google Patents

Three-dimensional shape restoring device and restoring method

Info

Publication number
JPH11120361A
JPH11120361A
Authority
JP
Japan
Prior art keywords
image
information
image input
feature point
viewpoint
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP9303373A
Other languages
Japanese (ja)
Inventor
Norihiko Murata
Takashi Kitaguchi
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ricoh Co Ltd
Original Assignee
Ricoh Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ricoh Co Ltd filed Critical Ricoh Co Ltd
Priority to JP9303373A
Publication of JPH11120361A
Legal status: Pending

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Measurement Of Optical Distance (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To restore a three-dimensional shape with high accuracy and low calculation cost under arbitrary imaging conditions.

SOLUTION: A feature point loss detection means 5 uses the posture information of an image input means 2 calculated by a posture detection means 3 and the translational motion component of the image input means 2 calculated by a translation component detection means 4 to determine which of the feature points detected in the image at a first viewpoint are likely to disappear from the image at a second viewpoint. A correspondence detection means 6 extracts a plurality of feature points from the image captured at the first viewpoint and matches feature points between the first- and second-viewpoint images, excluding the feature points identified by the feature point loss detection means 5. A three-dimensional calculation means 7 computes and restores the three-dimensional shape of the subject from the posture information, the translational motion component of the image input means 2, and the feature point correspondences established by the correspondence detection means 6.

Description

DETAILED DESCRIPTION OF THE INVENTION

[0001]

[Field of the Invention] The present invention relates to a three-dimensional shape restoring device and a restoring method for restoring, from a plurality of successive images captured by, for example, an image measuring device, a digital camera, a video camera, a portable information device, the vision system of a mobile robot, an autonomous vehicle, or other imaging equipment, the position and orientation of the camera at the time of shooting and the three-dimensional shape of the photographed object.

[0002]

[Description of the Related Art] Research on restoring the three-dimensional shape of an object is being pursued in a variety of fields, beginning with vision for autonomous mobile robots. In recent years in particular, computers and electronic devices have spread rapidly thanks to dramatic advances in electronic technology, making stereoscopic display of three-dimensional information readily available. Accordingly, the development of technology for inputting three-dimensional information about real-world objects and scenes is anticipated.

[0003] Methods of measuring the distance to an object and its shape divide into active methods, which irradiate the object with light waves or ultrasonic waves, and passive methods, typified by stereo imaging. Active methods include techniques that irradiate the object with light, radio, or sound waves and determine the distance from the propagation time of the wave reflected back from the object, and light-projection techniques, which project patterned light such as slit light or spot light from a source whose positional relationship to the camera is known and determine the shape of the object by observing the distortion of the pattern. Active methods are generally difficult to miniaturize, but they can measure distance quickly and with high accuracy. Passive methods, on the other hand, are broadly classified into multi-view stereo and motion stereo. Multi-view stereo photographs the object with a plurality of cameras whose relative positions and orientations are known, matches feature points or regions between the images, and computes the three-dimensional shape of the object by the principle of triangulation. Because the relative position and orientation of the cameras are known, a fundamental constraint called the epipolar constraint can be used, reducing the amount of computation required for matching. However, matching errors caused by superimposed noise and the like, or insufficient parallax, readily produce large distance measurement errors. Motion stereo photographs an object while moving a single camera, matches successive images, and computes the position and orientation of the camera and the three-dimensional shape of the object. Besides sharing the problems of multi-view stereo, motion stereo cannot apply the epipolar constraint, because unlike multi-view stereo the position and orientation of the camera between images are unknown, so the amount of computation required for matching increases. Furthermore, determining the relative position and orientation of the camera generally requires solving complicated nonlinear equations iteratively. As a result, the computation required for three-dimensional reconstruction is enormous and the solution tends to be unstable.

[0004] To address these problems of the passive methods, devices that combine images with non-image sensors measuring distance, acceleration, angular velocity, magnetism, and the like to restore a three-dimensional shape at low calculation cost are disclosed, for example, in JP-A-5-196437, JP-A-7-181024, and JP-A-9-81790.

[0005] In the method disclosed in JP-A-5-196437, measurement points on a subject are photographed with a camera under an orthographic projection model, the attitude of the camera at that time is obtained with a three-axis gyro fixed to the camera, and three-dimensional information about the subject is extracted by a voting method. In the method disclosed in JP-A-7-181024, a movement detection unit such as an acceleration sensor or an angular velocity sensor is attached to the image input means, the camera movement obtained by the movement detection unit is decomposed into a translational component (baseline length) and a rotational component, and the three-dimensional shape of the object is restored from the results of a corresponding-point search. In the method disclosed in JP-A-9-81790, the motion of the photographing device is detected with an angular velocity sensor and an acceleration sensor, and the optical axis direction is corrected so that the optical axes from different viewpoints intersect at an arbitrary point, allowing fast three-dimensional reconstruction while preventing the disappearance of feature points.

[0006]

[Problems to Be Solved by the Invention] The method of JP-A-5-196437 can reconstruct an object in three dimensions by tracking only a single feature point, but because it assumes orthographic projection, its accuracy is insufficient for extracting three-dimensional information from images formed by a central-projection camera. The method of JP-A-7-181024 achieves higher speed and a smaller device by determining the relative position and orientation of the camera between different viewpoints with a movement detection unit, but because sensor signals from devices such as mechanical angular velocity sensors must be integrated to compute the movement, error components accumulate in the movement estimate over time. Moreover, in motion stereo there is a strong possibility of matching errors because, depending on the camera motion, points used for matching in the image at one time (hereinafter referred to as feature points) disappear from the image at the next time; the two methods above take no account of this at all.

[0007] In contrast, the method disclosed in JP-A-9-81790 detects the object by comparing motion vectors estimated from the sensor information and a preset object-to-camera distance with motion vectors obtained by image processing. Because the distance between the object and the camera is set in advance, however, the three-dimensional shape can be restored only under specific shooting conditions. Furthermore, a drive mechanism is needed to change the direction of the optical axis, complicating the structure of the device.

[0008] An object of the present invention is to solve these problems and to provide a three-dimensional shape restoring device and restoring method capable of restoring a three-dimensional shape with high accuracy and low calculation cost under arbitrary shooting conditions.

[0009]

[Means for Solving the Problems] A three-dimensional shape restoring device according to the present invention inputs images of a subject from a plurality of viewpoints with an image input means, calculates the posture information and translational motion component of the image input means at each viewpoint, matches feature points between the images using the input image information and the calculated posture information and translational motion component, and computes and restores the three-dimensional shape of the subject from the image information, the posture information and translational motion component of the image input means, and the feature point correspondences. The device has a feature point loss detection means that detects, from the posture information and translational motion component of the image input means, feature points that disappear from the image plane photographed at a different viewpoint, and the feature points of the images are matched after the disappearing feature points detected by the feature point loss detection means are removed.

[0010] A second three-dimensional shape restoring device according to the present invention inputs images of a subject from a plurality of viewpoints with an image input means, calculates the posture information of the image input means at each viewpoint, calculates the distance to a point in space corresponding to a point projected on the image plane of each viewpoint, calculates the translational motion component of the image input means from the posture information and the distance information to the point in space, matches feature points between the images using the input image information and the calculated posture information and translational motion component, and computes and restores the three-dimensional shape of the subject from the image information, the posture information and translational motion component, and the feature point correspondences. The device has a feature point loss detection means that detects, from the posture information of the image input means and the distance information to the point in space, feature points that disappear from the image plane photographed at a different viewpoint, and the feature points of the images are matched after the disappearing feature points detected by the feature point loss detection means are removed.

[0011] A three-dimensional shape restoring method according to the present invention inputs images of a subject from a plurality of viewpoints with an image input means, calculates the posture information and translational motion component of the image input means at each viewpoint, matches feature points between the images using the input image information and the calculated posture information and translational motion component, and computes and restores the three-dimensional shape of the subject from the image information, the posture information and translational motion component, and the feature point correspondences. In the method, feature points that disappear from the image plane photographed at a different viewpoint are detected from the posture information and translational motion component of the image input means, and the feature points of the images are matched after the disappearing feature points are removed.

[0012] A second three-dimensional shape restoring method according to the present invention inputs images of a subject from a plurality of viewpoints with an image input means, calculates the posture information of the image input means at each viewpoint, calculates the distance to a point in space corresponding to a point projected on the image plane of each viewpoint, calculates the translational motion component of the image input means from the posture information and the distance information to the point in space, matches feature points between the images using the input image information and the calculated posture information and translational motion component, and computes and restores the three-dimensional shape of the subject from the image information, the posture information and translational motion component, and the feature point correspondences. In the method, feature points that disappear from the image plane photographed at a different viewpoint are detected from the posture information of the image input means and the distance information to the point in space, and the feature points of the images are matched after the detected disappearing feature points are removed.

[0013] Preferably, the spatial existence range of a feature point whose disappearance has been detected and its image features are stored, and it is detected when the stored feature point is projected again onto another image plane.

[0014]

[Embodiments of the Invention] A three-dimensional shape restoring device of the present invention has an image input means, a posture detection means, a translation component detection means, a feature point loss detection means, a correspondence detection means, and a three-dimensional calculation means. When this device photographs the same object from a first viewpoint and a second viewpoint and restores its three-dimensional shape, the image input means photographs an image of the measurement object at the first viewpoint while the posture detection means calculates the posture of the image input means and sends it to the feature point loss detection means and the correspondence detection means. The image input means is then moved to the second viewpoint and photographs an image of the measurement object; the posture detection means calculates the posture of the image input means, and the translation component detection means calculates the translational motion component of the image input means and sends it to the feature point loss detection means and the correspondence detection means. From the posture information calculated by the posture detection means and the translational motion component calculated by the translation component detection means, the feature point loss detection means determines which of the feature points detected in the first-viewpoint image are likely to disappear from the second-viewpoint image. The correspondence detection means extracts a plurality of feature points from the image captured at the first viewpoint and matches feature points between the first- and second-viewpoint images, excluding the feature points designated by the feature point loss detection means. The three-dimensional calculation means computes and restores the three-dimensional shape of the subject from the posture information and translational motion component of the image input means and the feature point correspondences established by the correspondence detection means.

[0015] Because the disappearance of feature points is detected in this way from the posture information and translational motion component of the image input means, wasted matching computation and false correspondences for those feature points are avoided, and a highly accurate three-dimensional shape can be restored at low calculation cost.

[0016]

[Embodiment] FIG. 1 is a block diagram showing the configuration of one embodiment of the present invention. As shown in the figure, a three-dimensional shape restoring device 1 has an image input means 2, a posture detection means 3, a translation component detection means 4, a feature point loss detection means 5, a correspondence detection means 6, and a three-dimensional calculation means 7. As shown for example in FIG. 2, the image input means 2 photographs a measurement object M from a first viewpoint A1 and a second viewpoint A2 and inputs the images formed on the image plane I1 of the first viewpoint A1 and the image plane I2 of the second viewpoint A2. The posture detection means 3 detects the magnetic or gravitational field acting on the image input means 2, or the angular velocity accompanying its motion, and calculates the posture information (rotation component) R of the image input means 2 at the first viewpoint A1 and the second viewpoint A2. The translation component detection means 4 calculates the translational motion component D of the image input means 2 by detecting the inertial force produced by its motion from the first viewpoint A1 to the second viewpoint A2, or by using a navigation system such as GPS (Global Positioning System). From the posture information R detected by the posture detection means 3 and the translational motion component D detected by the translation component detection means 4, the feature point loss detection means 5 determines which of the feature points detected in the image at the first viewpoint A1 are likely to disappear from the image at the second viewpoint A2. The correspondence detection means 6 extracts a plurality of feature points from the image captured at the first viewpoint A1 and matches feature points between the images at the first viewpoint A1 and the second viewpoint A2, excluding the feature points designated by the feature point loss detection means. The three-dimensional calculation means 7 computes the three-dimensional shape of the subject from the posture information R and translational motion component D of the image input means 2 and the feature point correspondences established by the correspondence detection means 6.

[0017] The operation of the three-dimensional shape restoring device 1 configured as described above when photographing the same object M from the first viewpoint A1 and the second viewpoint A2, as shown in FIG. 2, and restoring its three-dimensional shape will be described with reference to the flowchart of FIG. 3.

[0018] The image input means 2 photographs an image of the measurement object M at the first viewpoint A1, and at the same time the posture detection means 3 calculates the posture of the image input means 2 and sends it to the feature point loss detection means 5 and the correspondence detection means 6 (step S1). Next, the image input means 2 is moved to the second viewpoint A2 and photographs an image of the measurement object M; the posture detection means 3 calculates the posture of the image input means 2, and the translation component detection means 4 calculates the translational motion component of the image input means 2 and sends it to the feature point loss detection means 5 and the correspondence detection means 6 (step S2). By photographing the measurement object M at the first viewpoint A1 and the second viewpoint A2, as shown in FIG. 4, a plurality of predetermined target points Pn (n = 1 to n) on the measurement object M are detected as feature points P1n on the image plane I1 of the first viewpoint A1 and as feature points P2n on the image plane I2 of the second viewpoint A2. From the relative posture information R and translational motion component D between the first viewpoint A1 and the second viewpoint A2, the feature point loss detection means 5 determines which of the feature points P1n detected at the first viewpoint A1 are likely to disappear at the second viewpoint A2 (step S3).

[0019] The processing by which the feature point loss detection means 5 finds, among the feature points P1n detected at the first viewpoint A1, those likely to disappear at the second viewpoint A2 is as follows. As shown in FIG. 4, the image input means 2 is assumed to be a central-projection model with a known focal length f, whose camera coordinate system is an xyz coordinate system with the optical center at the origin, the x and y axes taken in mutually orthogonal directions on the image planes I1 and I2, and the z axis along the optical axis. The posture information between the first viewpoint A1 and the second viewpoint A2 is expressed by the rotation matrix R of the image input means 2 between the first viewpoint A1 and the second viewpoint A2, referenced to the first viewpoint A1, shown in equation (1) below.

[0020]

(Equation 1)
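The equation itself survives only as an image placeholder. From the surrounding text, equation (1) is presumably just the general 3-by-3 rotation matrix between the two camera coordinate systems (the entry names below are placeholders):

```latex
R =
\begin{pmatrix}
r_{11} & r_{12} & r_{13}\\
r_{21} & r_{22} & r_{23}\\
r_{31} & r_{32} & r_{33}
\end{pmatrix}
```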

[0021] The translational motion component D is expressed, with reference to the camera coordinate system at the first viewpoint A1, by the column vector D shown in equation (2) below.

[0022]

(Equation 2)
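Equation (2) is likewise only an image placeholder; given the text it is presumably a three-component column vector expressed in the first camera frame:

```latex
D =
\begin{pmatrix}
d_{x}\\ d_{y}\\ d_{z}
\end{pmatrix}
```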

[0023] The position at which a point P1n(x, y, f) on the image plane I1 of the first viewpoint A1 is projected on the image plane I2 of the second viewpoint A2 is explained here with reference to FIG. 5. The target point Pn in space indicated by the point P1n(x, y, f) on the image plane I1 of the first viewpoint A1 is expressed, with reference to the camera coordinate system at the first viewpoint A1, by the vector P1 shown in equation (3) below, where Ln is the distance from the optical center of the first viewpoint A1 to the target point Pn.

[0024]

(Equation 3)
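Equation (3) is an image placeholder; from the definitions above (image point P1n(x, y, f), distance Ln from the optical center), it is presumably the image ray scaled to length Ln:

```latex
P_{1} = \frac{L_{n}}{\sqrt{x^{2}+y^{2}+f^{2}}}
\begin{pmatrix} x\\ y\\ f \end{pmatrix}
```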

[0025] In the camera coordinate system at the second viewpoint A2, on the other hand, the target point Pn is expressed by the vector P2 shown in equation (4) below.

[0026]

(Equation 4)

[0027] Therefore, the point P1n on the image plane I1 of the first viewpoint A1 is projected, in the image plane I2 photographed at the second viewpoint A2, onto the point P2n indicated by the vector P2 in equation (5) below.

[0028]

(Equation 5)
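Equations (4) and (5) are also image placeholders. Assuming that equation (1)'s R maps first-viewpoint coordinates into the second-viewpoint frame (under the opposite convention R would be replaced by its transpose), a reconstruction consistent with the text is the rigid transform followed by central projection:

```latex
P_{2} = R\,(P_{1} - D) \qquad\text{(4)}\\
P_{2n} = \frac{f}{\hat{z}^{\top} P_{2}}\, P_{2} \qquad\text{(5)}
```

Here \(\hat{z} = (0, 0, 1)^{\top}\), so \(\hat{z}^{\top}P_{2}\) is the depth of the point in the second camera frame, and equation (5) is the central projection back onto the image plane I2 at z = f.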

[0029] The relationship between the point P1n on the image plane I1 of the first viewpoint A1 and the point P2n on the image plane I2 of the second viewpoint A2 can thus be expressed, as in equation (6) below, as a function of the posture information R, the translational motion component D, the focal length f, and the distance Ln.

[0030]

(Equation 6)
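Paragraph [0029] states the content of the image-only equation (6) directly; it presumably just names the functional dependence:

```latex
P_{2n} = F\!\left(P_{1n};\, R,\, D,\, f,\, L_{n}\right)
```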

[0031] Here the posture information R and the translational motion component D are detected and the focal length f is known, so the function F of equation (6) depends only on the distance Ln. To know whether a given feature point P1n disappears, the distance Ln would therefore have to be known; however, if, for example, the three-dimensional shape restoring device 1 is set to restore the shape of objects whose distance Ln lies between Lmin and Lmax, the range over which the line through the target point Pn and the optical center of the second viewpoint A2 can exist is limited to the straight line PminPmax in FIG. 5. As a result, the range over which the feature point P1n on the image plane I1 of the first viewpoint A1 can be projected onto the image plane I2 of the second viewpoint A2 is also limited. The feature point loss detection means 5 therefore reports feature points likely to disappear from the image plane I2 to the correspondence detection means 6 so that they are not used for matching. For example, in FIG. 5, if Lin is the on-screen length of the portion of the straight line PminPmax that is projected onto the image plane I2 at the second viewpoint A2 and Lout is the on-screen length of the portion that is not, a feature point with [Lout/(Lin + Lout)] ≥ 0.8 is judged to disappear.
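As an illustration only, here is a minimal numpy sketch of this visibility test. The rigid-transform convention (R_1to2 mapping first-frame coordinates into the second frame), the centered image bounds, and the sampling-based estimate of Lin and Lout are assumptions; the patent specifies only the final ratio test.

```python
import numpy as np

def disappearance_ratio(p1n, f, R_1to2, D, L_min, L_max,
                        width, height, samples=64):
    """Estimate Lout/(Lin+Lout) for a first-view feature point p1n = (x, y).

    The segment PminPmax is sampled along the viewing ray; each sample is
    mapped into the second camera frame and centrally projected, and the
    on-screen polyline length inside the image bounds approximates Lin,
    the remainder Lout.
    """
    ray = np.array([p1n[0], p1n[1], f], dtype=float)
    ray /= np.linalg.norm(ray)                    # unit ray from A1, cf. eq. (3)
    L_in = L_out = 0.0
    prev = None
    for L in np.linspace(L_min, L_max, samples):
        P2 = R_1to2 @ (L * ray - D)               # cf. eq. (4), assumed convention
        if P2[2] > 0:
            proj = (f / P2[2]) * P2[:2]           # cf. eq. (5), central projection
            vis = abs(proj[0]) <= width / 2 and abs(proj[1]) <= height / 2
        else:
            proj, vis = None, False               # behind the camera
        if prev is not None and proj is not None and prev[0] is not None:
            seg = np.linalg.norm(proj - prev[0])  # on-screen arc length
            if vis and prev[1]:
                L_in += seg
            else:
                L_out += seg
        prev = (proj, vis)
    total = L_in + L_out
    return L_out / total if total > 0 else 1.0

# A feature point is excluded from matching when
# disappearance_ratio(...) >= 0.8, as in the paragraph above.
```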

[0032] The correspondence detection means 6 matches feature points between the images of the first viewpoint A1 and the second viewpoint A2, excluding the feature points designated by the feature point loss detection means 5 from the predetermined group of feature points P1n (step S4). Because the posture information R and the translational motion component D have been measured, the epipolar constraint is available when matching the feature points. The matching itself is performed with standard techniques, such as methods using local image features (the correlation method, feature matching, coarse-to-fine methods, and so on) or methods that compute moving regions with the spatio-temporal gradient method. For example, when the correlation method is used, the correspondence between the i-th point of interest P1i(xi0, yi0) on the image plane I1 of the first viewpoint A1 and the point (xi0 + dx, yi0 + dy) on the image plane I2 of the second viewpoint A2 is established by block matching with a (2N+1) by (2P+1) correlation window 61, as shown in FIG. 6, and the point that maximizes the cross-correlation value Si computed by equation (7) below is selected as the corresponding point.

[0033]

(Equation 7)
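Equation (7) is an image placeholder, but the variable definitions in paragraph [0034] pin down a standard windowed cross-correlation; a presumable reconstruction is:

```latex
S_{i} = \frac{1}{K}\sum_{m=-N}^{N}\sum_{n=-P}^{P}
\bigl[I_{1}(x_{i0}+m,\,y_{i0}+n) - MI_{1}(x_{i0},y_{i0})\bigr]\,
\bigl[I_{2}(x_{i0}+dx+m,\,y_{i0}+dy+n) - MI_{2}(x_{i0}+dx,\,y_{i0}+dy)\bigr]
```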

[0034] In equation (7), I1(x, y) is the intensity at the point (x, y) on the image plane I1 of the first viewpoint A1, I2(x, y) is the intensity at the point (x, y) on the image plane I2 of the second viewpoint A2, MI1(x, y) is the mean intensity in the (2N+1) by (2P+1) correlation window 61 centered on the point (x, y) of the image plane I1, MI2(x, y) is the mean intensity in the (2N+1) by (2P+1) correlation window 61 centered on the point (x, y) of the image plane I2, and K is a constant.

[0035] From the posture information R, the translational motion component D, and the matching results obtained as described above, the three-dimensional calculation means 7 computes the three-dimensional structure of the subject by the principle of triangulation (step S5). The position and posture information, the three-dimensional information, and the images obtained in this way are stored in a storage means (not shown) as necessary (steps S6 and S7).
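The patent invokes only "the principle of triangulation"; one standard instance is midpoint triangulation of the two viewing rays, sketched here under the same assumed frame convention as above:

```python
import numpy as np

def triangulate(p1n, p2n, f, R_1to2, D):
    """Midpoint triangulation of matched image points p1n, p2n.

    Rays: r1 = (x1, y1, f) from the A1 optical center at the origin, and
    r2 = R_1to2.T @ (x2, y2, f) from the A2 optical center at D, both
    expressed in the first camera frame.
    """
    r1 = np.array([p1n[0], p1n[1], f], dtype=float)
    r2 = R_1to2.T @ np.array([p2n[0], p2n[1], f], dtype=float)
    r1 /= np.linalg.norm(r1)
    r2 /= np.linalg.norm(r2)
    # Closest points s*r1 and D + t*r2 of the two rays, by least squares.
    A = np.column_stack([r1, -r2])
    s, t = np.linalg.lstsq(A, D, rcond=None)[0]
    return 0.5 * (s * r1 + D + t * r2)            # midpoint of shortest segment
```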

[0036] Because the disappearance of feature points is detected in this way from the posture information R and the translational motion component D of the image input means 2, wasted matching computation and false correspondences for those feature points are avoided, and a highly accurate three-dimensional shape can be restored at low calculation cost.

[0037] The above embodiment was described for the case where the translation component detection means 4 calculates the translational motion component of the image input means 2 by detecting the inertial force produced by its motion from the first viewpoint A1 to the second viewpoint A2, or by using a navigation system such as GPS. However, the translational motion component of the image input means 2 may instead be calculated from distance information to a point in the space shown in the image and the posture information R of the image input means 2.

[0038] As shown in the block diagram of FIG. 7, a three-dimensional shape restoring device 1a according to a second embodiment, which calculates the translational motion component of the image input means 2 from the distance information to a point in space and the posture information R of the image input means 2, has a distance detection means 8 that detects the distance to a point in space, and a translation component detection means 4a that calculates the translational motion component D of the image input means 2 from the distance information detected by the distance detection means 8 and the posture information R of the image input means 2 calculated by the posture detection means 3.

[0039] When this three-dimensional shape restoring device 1a photographs the same object M from the first viewpoint A1 and the second viewpoint A2, as shown in FIG. 2, and restores its three-dimensional shape, the user first determines, before image capture, a point of gaze at which the distance to the measurement object M is to be measured, as shown in the flowchart of FIG. 8 (step S11). Various techniques can be used to determine this point of gaze, such as automatically selecting a small region with a characteristic intensity distribution. Then the image input means 2 photographs an image of the measurement object M at the first viewpoint A1, the posture detection means 3 calculates the posture of the image input means 2, and the distance detection means 8 measures the distance to the point of gaze on the measurement object M; the image information, posture information, and measured distance information are sent to the translation component detection means 4a, and the image information and posture information are sent to the feature point loss detection means 5 (step S12). When the distance detection means 8 measures the distance to the point of gaze on the measurement object M, it can use, for example, an infrared stereo method based on the principle of triangulation, a method that projects a wave such as an ultrasonic wave and measures the distance from the propagation time of the reflection from the object, or a method that obtains distance information at the time of focusing from an encoder installed in the optical system. Next, the image input means 2 is moved to the second viewpoint A2 and photographs an image of the measurement object M; the posture detection means 3 calculates the posture of the image input means 2, and the distance detection means 8 measures the distance to the point of gaze on the measurement object M; the image information, posture information, and measured distance information are sent to the translation component detection means 4a, and the image information and posture information are sent to the feature point loss detection means 5 (step S13). The translation component detection means 4a calculates the translational motion component D of the image input means 2 from the image information, posture information, and distance information at the first viewpoint A1 and the second viewpoint A2 (step S14).

[0040] When the translation component detection means 4a calculates the translational motion component D of the image input means 2, as shown in FIG. 9, let L1 and L2 be the distances from the first viewpoint A1 and the second viewpoint A2 to the point of gaze Q, and let Q1(x1, y1, f) and Q2(x2, y2, f) be the image coordinates of the point of gaze Q on the image planes I1 and I2 photographed at the first viewpoint A1 and the second viewpoint A2. The posture information R is detected by the posture detection means 3 and the distances L1 and L2 are detected by the distance detection means 8, so, taking the camera coordinate system at the first viewpoint A1 as the reference, the unit sight-line vector p1 from the first viewpoint A1 to the point of gaze Q and the unit sight-line vector p2 from the second viewpoint A2 to the point of gaze Q (carried into the reference frame as Rp2) can each be expressed by equation (8) below.

[0041]

(Equation 8)
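Equation (8) survives only as an image; from the image coordinates just defined, the unit sight-line vectors presumably are:

```latex
p_{1} = \frac{1}{\sqrt{x_{1}^{2}+y_{1}^{2}+f^{2}}}
\begin{pmatrix} x_{1}\\ y_{1}\\ f \end{pmatrix},
\qquad
p_{2} = \frac{1}{\sqrt{x_{2}^{2}+y_{2}^{2}+f^{2}}}
\begin{pmatrix} x_{2}\\ y_{2}\\ f \end{pmatrix}
```

with Rp2 expressing the second sight line in the first camera frame.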

[0042] Therefore, by detecting the distances L1 and L2 from the first viewpoint A1 and the second viewpoint A2 to the point of gaze Q and the posture information R, the translational motion component D can be calculated by equation (9) below.

[0043]

(Equation 9)
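Equation (9) is likewise image-only. Writing the point of gaze Q two ways in the first camera frame, Q = L1 p1 and Q = D + L2 Rp2, gives the presumable form:

```latex
D = L_{1}\,p_{1} - L_{2}\,R\,p_{2}
```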

[0044] Using the calculated posture information R, translational motion component D, and focal length f, the feature point loss detection means 5 judges, for each feature point of the image at the first viewpoint A1, whether it may disappear from the image photographed at the second viewpoint A2 (step S15). The correspondence detection means 6 matches feature points between the images of the first viewpoint A1 and the second viewpoint A2, excluding the feature points designated by the feature point loss detection means 5 from the preset group of feature points (step S16). The three-dimensional calculation means 7 computes the three-dimensional structure of the subject by the principle of triangulation from the posture information R, the translational motion component D, and the matching results (step S17). The position and posture information, the three-dimensional information, and the images obtained in this way are stored in the storage means as necessary (steps S18 and S19).

[0045] Because the translational motion component D is obtained in this way from the distance information to a point in the space shown in the image and the posture information R of the image input means 2, a highly accurate translational motion component can be obtained and the accuracy of the three-dimensional reconstruction can be improved further.
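A short numpy sketch of step S14 under the reading above, with R_2to1 denoting the rotation that carries second-viewpoint coordinates into the first camera frame; the reconstruction of equations (8) and (9) is an assumption:

```python
import numpy as np

def translation_from_gaze(q1, q2, f, L1, L2, R_2to1):
    """Translational motion component D from gaze-point distances L1, L2,
    gaze-point image coordinates q1, q2, and rotation R (eqs. (8)-(9))."""
    p1 = np.array([q1[0], q1[1], f], dtype=float)
    p2 = np.array([q2[0], q2[1], f], dtype=float)
    p1 /= np.linalg.norm(p1)                 # unit sight line from A1, eq. (8)
    p2 /= np.linalg.norm(p2)                 # unit sight line from A2, eq. (8)
    return L1 * p1 - L2 * (R_2to1 @ p2)      # eq. (9): D = L1 p1 - L2 R p2
```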

[0046] Alternatively, by storing the sight-line direction and the image features of the feature points designated by the feature point loss detection means 5 and excluded by the correspondence detection means 6, it is possible to promptly detect that a feature point that may have disappeared has reappeared within the image plane and to reset it as a feature point. As shown in the block diagram of FIG. 10, a three-dimensional shape restoring device 1b according to a third embodiment, which stores the feature points that may disappear, has the same configuration as the three-dimensional shape restoring device 1 shown in FIG. 1 except that a feature point storage means 9 and a feature point resetting means 10 are provided.

[0047] When this three-dimensional shape restoring device 1b is set, for example, to restore the shape of objects whose distance Ln lies between Lmin and Lmax, the image input means 2 photographs an image of the measurement object M at the first viewpoint A1 and at the same time the posture detection means 3 calculates the posture of the image input means 2 and sends it to the feature point loss detection means 5 and the correspondence detection means 6, as shown in the flowchart of FIG. 11 (step S21). Next, the image input means 2 is moved to the second viewpoint A2 and photographs an image of the measurement object M; the posture detection means 3 calculates the posture of the image input means 2, and the translation component detection means 4 calculates the translational motion component of the image input means 2 and sends it to the feature point loss detection means 5 and the correspondence detection means 6 (step S22). From the relative posture information and translational motion component between the first viewpoint A1 and the second viewpoint A2, the feature point loss detection means 5 determines which of the feature points P1n detected at the first viewpoint A1 are likely to disappear at the second viewpoint A2 (step S23). The feature point storage means 9 stores the spatial existence range and the image features of the feature points judged likely to disappear (step S24). When storing a feature point judged likely to disappear, the existence range of the corresponding point P1 in space, namely the straight line PminPmax, is obtained from the fact that the distance Ln of the object lies between Lmin and Lmax. Then, as shown in FIG. 12, using the posture information R12 and the translational motion component D12 between the first viewpoint A1 and the second viewpoint A2, the spatial coordinates of the points Pmin and Pmax referenced to the camera coordinate system of the second viewpoint A2 are calculated from equations (3) to (5) and stored together with the image features of the feature point judged to disappear, for example its intensity pattern or edge shape. The correspondence detection means 6 matches feature points between the images of the first viewpoint A1 and the second viewpoint A2, excluding the feature points designated by the feature point loss detection means 5 from the predetermined group of feature points P1n (step S25).

[0048] Meanwhile, when the image input means 2 moves from the second viewpoint A2 to a third viewpoint A3, the feature point resetting means 10 calculates the spatial coordinates of the points Pmin and Pmax referenced to the camera coordinate system of the third viewpoint A3 from the posture information R23 and the translational motion component D23 between the second viewpoint A2 and the third viewpoint A3. It then calculates the image coordinates of the points Pmin and Pmax at the third viewpoint A3 from equations (3) to (5) and judges whether the feature point has reappeared (steps S26 and S27). For example, in FIG. 12, if Lin is the on-screen length of the portion of the straight line PminPmax that is projected onto the image plane at the third viewpoint A3 and Lout is the on-screen length of the portion that is not, the feature point that once disappeared is judged to be projected again at the third viewpoint A3 when [Lin/(Lin + Lout)] ≥ 0.8. When the reappearance of a feature point is detected in this way, the stored image features of the feature point and the epipolar constraint are used to determine where in the image of the third viewpoint A3 the stored feature point has been reprojected, and the feature point that once disappeared is reset as a feature point (step S28). After this processing has been repeated for all the disappeared feature points, the three-dimensional shape of the subject is restored from the posture information R, the translational motion component D, and the feature point correspondences (step S29). The position and posture information, the three-dimensional information, and the images obtained in this way are stored in the storage means as necessary (steps S30 and S31).

[0049] By thus storing the sight-line direction and the image features of a disappeared feature point, its reappearance within the image plane can be detected promptly and it can be reset as a feature point, so the three-dimensional shape of the same object can easily be restored using a large number of images, and a highly accurate three-dimensional shape can be restored.

[0050] The three-dimensional shape restoring device 1b was described above for the case where the translational motion component D of the image input means 2 is calculated by detecting the inertial force produced by the motion of the image input means 2 or by using a navigation system such as GPS, but the device can be applied in the same way when the translational motion component D of the image input means 2 is calculated from the distance information detected by the distance detection means 8 and the posture information R of the image input means 2 calculated by the posture detection means 3.

[0051] Each of the above embodiments was described for images photographed from two or three viewpoints, but the invention can be applied in the same way when images are photographed from a larger number of viewpoints. The above processing also need not be performed in real time: the images, distance information, posture information, and so on may be stored together in the storage means and processed offline afterwards. Furthermore, the various detected information may be transferred over a network or the like before being processed.

[0052]

[Effects of the Invention] As described above, according to the present invention the disappearance of feature points is detected from the posture information and translational motion component of the image input means, so wasted matching computation and false correspondences for those feature points are avoided, and a highly accurate three-dimensional shape can be restored at low calculation cost.

[0053] Also, by calculating the translational motion component of the image input means from the posture information of the image input means and the distance information to a point in the space shown in the image, and restoring the three-dimensional shape from it, the three-dimensional shape can be restored with still higher accuracy.

[0054] Furthermore, by storing the sight-line direction and the image features of a disappeared feature point, its reappearance within the image plane can be detected promptly and it can be reset as a feature point, so the three-dimensional shape of the same object can easily be restored using a large number of images, and a highly accurate three-dimensional shape can be restored.

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing the configuration of an embodiment of the present invention.

FIG. 2 is an explanatory diagram showing the same object being photographed from two viewpoints.

FIG. 3 is a flowchart showing the operation of the embodiment.

FIG. 4 is an arrangement diagram showing the optical system of the image input means.

FIG. 5 is an explanatory diagram showing the detection of disappearing feature points.

FIG. 6 is an explanatory diagram showing the matching of feature points.

FIG. 7 is a block diagram showing the configuration of the second embodiment.

FIG. 8 is a flowchart showing the operation of the second embodiment.

FIG. 9 is an explanatory diagram showing the calculation of the translational motion component from distance information.

FIG. 10 is a block diagram showing the configuration of the third embodiment.

FIG. 11 is a flowchart showing the operation of the third embodiment.

FIG. 12 is an explanatory diagram showing the storage of disappearing feature points and the reappearance processing.

[Explanation of symbols]

1 Three-dimensional shape restoration device
2 Image input means
3 Attitude detection means
4 Translation component detection means
5 Feature point disappearance detection means
6 Correspondence detection means
7 Three-dimensional calculation means
8 Distance detection means
9 Feature point storage means
10 Feature point resetting means
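Read together with the claims that follow, these reference signs describe a processing pipeline. The sketch below shows one hypothetical way the means 2 through 7 could be wired together (all class and method names are invented for illustration and each collaborator is assumed to expose only the single method used here):

```python
class ShapeRestorer:
    """Hypothetical wiring of the claimed components (reference signs 2-7)."""

    def __init__(self, camera, attitude_det, translation_det,
                 disappearance_det, correspondence_det, triangulator):
        self.camera = camera                          # image input means (2)
        self.attitude_det = attitude_det              # attitude detection (3)
        self.translation_det = translation_det        # translation component (4)
        self.disappearance_det = disappearance_det    # feature point loss (5)
        self.correspondence_det = correspondence_det  # correspondence (6)
        self.triangulator = triangulator              # 3D calculation (7)

    def restore(self):
        # Capture images at two viewpoints.
        img1, img2 = self.camera.capture(), self.camera.capture()
        R = self.attitude_det.attitude()              # rotation between views
        t = self.translation_det.translation(R)       # translational component
        # Exclude features predicted to vanish before matching (claim 1).
        lost = self.disappearance_det.lost_features(img1, R, t)
        matches = self.correspondence_det.match(img1, img2, exclude=lost)
        return self.triangulator.shape(matches, R, t)
```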

Claims (6)

[Claims]

1. A three-dimensional shape restoration apparatus which inputs images of a subject from a plurality of viewpoints by an image input means, calculates attitude information and a translational motion component of the image input means at each viewpoint, associates feature points of the respective images from the input image information and the calculated attitude information and translational motion component of the image input means, and calculates and restores the three-dimensional shape of the subject from the image information, the attitude information and translational motion component of the image input means, and the result of the feature point association, the apparatus comprising: feature point disappearance detection means for detecting, from the attitude information and the translational motion component of the image input means, feature points that disappear from an image plane photographed from a different viewpoint, wherein the disappearing feature points detected by the feature point disappearance detection means are removed before the feature points of the respective images are associated.
2. A three-dimensional shape restoration apparatus which inputs images of a subject from a plurality of viewpoints by an image input means, calculates attitude information of the image input means at each viewpoint, calculates the distance to a point in space corresponding to a point projected on the image plane of each viewpoint, calculates a translational motion component of the image input means from the attitude information of the image input means and the distance information to the point in space, associates feature points of the respective images from the input image information and the calculated attitude information and translational motion component of the image input means, and calculates and restores the three-dimensional shape of the subject from the image information, the attitude information and translational motion component of the image input means, and the result of the feature point association, the apparatus comprising: feature point disappearance detection means for detecting, from the attitude information and the translational motion component of the image input means, feature points that disappear from an image plane photographed from a different viewpoint, wherein the disappearing feature points detected by the feature point disappearance detection means are removed before the feature points of the respective images are associated.
3. The three-dimensional shape restoration apparatus according to claim 1 or 2, further comprising: feature point storage means for storing the spatial existence range and the image feature amount of a feature point whose disappearance has been detected by the feature point disappearance detection means; and feature point resetting means for detecting that a stored feature point has been projected onto another image plane again.
4. A three-dimensional shape restoration method which inputs images of a subject from a plurality of viewpoints by an image input means, calculates attitude information and a translational motion component of the image input means at each viewpoint, associates feature points of the respective images from the input image information and the calculated attitude information and translational motion component of the image input means, and calculates and restores the three-dimensional shape of the subject from the image information, the attitude information and translational motion component of the image input means, and the result of the feature point association, the method comprising: detecting, from the attitude information and the translational motion component of the image input means, feature points that disappear from an image plane photographed from a different viewpoint; and removing the disappearing feature points before associating the feature points of the respective images.
5. A three-dimensional shape restoration method which inputs images of a subject from a plurality of viewpoints by an image input means, calculates attitude information of the image input means at each viewpoint, calculates the distance to a point in space corresponding to a point projected on the image plane of each viewpoint, calculates a translational motion component of the image input means from the attitude information of the image input means and the distance information to the point in space, associates feature points of the respective images from the input image information and the calculated attitude information and translational motion component of the image input means, and calculates and restores the three-dimensional shape of the subject from the image information, the attitude information and translational motion component of the image input means, and the result of the feature point association, the method comprising: detecting, from the attitude information of the image input means and the distance information to the point in space, feature points that disappear from an image plane photographed from a different viewpoint; and removing the detected disappearing feature points before associating the feature points of the respective images.
6. The three-dimensional shape restoration method according to claim 4 or 5, wherein the spatial existence range and the image feature amount of a feature point whose disappearance has been detected are stored, and it is detected that the stored feature point has been projected onto another image plane again.
JP9303373A 1997-10-20 1997-10-20 Three-dimensional shape restoring device and restoring method Pending JPH11120361A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP9303373A JPH11120361A (en) 1997-10-20 1997-10-20 Three-dimensional shape restoring device and restoring method

Publications (1)

Publication Number Publication Date
JPH11120361A true JPH11120361A (en) 1999-04-30

Family

ID=17920228

Family Applications (1)

Application Number Title Priority Date Filing Date
JP9303373A Pending JPH11120361A (en) 1997-10-20 1997-10-20 Three-dimensional shape restoring device and restoring method

Country Status (1)

Country Link
JP (1) JPH11120361A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005024442A (en) * 2003-07-04 2005-01-27 Meidensha Corp Stereoscopic image photographing device
JP2011027718A (en) * 2009-06-16 2011-02-10 Intel Corp Derivation of 3-dimensional information from single camera and movement sensor
JP2019178886A (en) * 2018-03-30 2019-10-17 キヤノン電子株式会社 Distance measurement device

Similar Documents

Publication Publication Date Title
US10948297B2 (en) Simultaneous location and mapping (SLAM) using dual event cameras
CN112785702B (en) SLAM method based on tight coupling of 2D laser radar and binocular camera
CN106643699B (en) Space positioning device and positioning method in virtual reality system
JP3732335B2 (en) Image input apparatus and image input method
Chai et al. Three-dimensional motion and structure estimation using inertial sensors and computer vision for augmented reality
EP2959315B1 (en) Generation of 3d models of an environment
JP3833786B2 (en) 3D self-position recognition device for moving objects
WO2018142496A1 (en) Three-dimensional measuring device
JP2004198211A (en) Apparatus for monitoring vicinity of mobile object
JP2004198212A (en) Apparatus for monitoring vicinity of mobile object
EP4155873A1 (en) Multi-sensor handle controller hybrid tracking method and device
US8150143B2 (en) Dynamic calibration method for single and multiple video capture devices
JP2010014450A (en) Position measurement method, position measurement device, and program
CN115371673A (en) Binocular camera target positioning method based on Bundle Adjustment in unknown environment
JP2559939B2 (en) Three-dimensional information input device
JP2001317915A (en) Three-dimensional measurement apparatus
JP3655065B2 (en) Position / attitude detection device, position / attitude detection method, three-dimensional shape restoration device, and three-dimensional shape restoration method
JP3221384B2 (en) 3D coordinate measuring device
JPH11120361A (en) Three-dimensional shape restoring device and restoring method
JP3712847B2 (en) Three-dimensional shape measurement method, three-dimensional shape measurement device, and posture detection device for imaging means
JP3512894B2 (en) Relative moving amount calculating apparatus and relative moving amount calculating method
KR20050013000A (en) Apparatus and method for guiding route of vehicle using three-dimensional information
JP3512919B2 (en) Apparatus and method for restoring object shape / camera viewpoint movement
JP5409451B2 (en) 3D change detector
Fabian et al. One-point visual odometry using a RGB-depth camera pair