JP2013069026A - Device, method, and program for restoring three-dimensional shape of subject - Google Patents


Info

Publication number
JP2013069026A
Authority
JP
Japan
Prior art keywords
subject
tangent
contact
camera
point
Prior art date
Legal status: Granted
Application number
JP2011205735A
Other languages
Japanese (ja)
Other versions
JP5736285B2 (en)
Inventor
Akio Ishikawa
彰夫 石川
Hitoshi Naito
整 内藤
Current Assignee
KDDI Corp
Original Assignee
KDDI Corp
Priority date
Filing date
Publication date
Application filed by KDDI Corp filed Critical KDDI Corp
Priority to JP2011205735A
Publication of JP2013069026A
Application granted
Publication of JP5736285B2
Status: Expired - Fee Related

Landscapes

  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

PROBLEM TO BE SOLVED: To provide a device that restores a three-dimensional shape with high accuracy by computing many points on the surface of a subject from a multi-viewpoint image.

SOLUTION: A device for restoring the three-dimensional shape of a subject, from a multi-viewpoint image obtained by photographing the subject with a plurality of cameras and from the camera parameters of each camera, extracts the occluding contour of the subject in the multi-viewpoint image, calculates from the camera parameters the tangent corresponding to each point constituting the occluding contour, estimates the position of the tangent point with the subject surface on each calculated tangent, and restores the shape of the subject surface using the tangent points as feature points on the subject surface.

Description

The present invention relates to an apparatus, a method, and a program for obtaining many points on the surface of a subject from a multi-viewpoint image and restoring its three-dimensional shape with high accuracy.

Various techniques have conventionally been proposed for restoring the three-dimensional shape of a subject from multi-viewpoint images captured by a plurality of cameras arranged so as to surround it.

Conventional three-dimensional shape restoration methods take points and lines that are highly likely to lie on the surface of the subject as clues, and restore a plausible surface passing through them. Typical clues are markers pasted on the surface of the subject, characteristic points and lines on the surface, and frontier points (Non-Patent Document 1).

Tomiyama, Katayama, Orihara, Iwadate, "3D shape reconstruction constrained by local shape features and its real-time video display," Journal of the Institute of Image Information and Television Engineers, vol. 61, no. 4, pp. 471-481 (2007).

In general, however, only a small number of such points and lines on the subject surface can be obtained with the known techniques described above, so the accuracy of the restored three-dimensional shape is low.

The object of the present invention is therefore to provide an apparatus, a method, and a program that obtain many points on the surface of a subject from a multi-viewpoint image and restore its three-dimensional shape with high accuracy.

To achieve this object, the apparatus according to the present invention for restoring the three-dimensional shape of a subject from multi-viewpoint images obtained by photographing the subject with a plurality of cameras and from the camera parameters of each camera comprises: tangent calculation means for extracting the occluding contour of the subject in the multi-viewpoint images and obtaining, from the camera parameters, the tangent corresponding to each point constituting the occluding contour; tangent-point estimation means for estimating the position of the tangent point with the subject surface on each calculated tangent; and surface restoration means for restoring the shape of the subject surface using the tangent points as feature points on the subject surface.

Preferably, the tangent-point estimation means sets a tangent plane that passes through the tangent calculated by the tangent calculation means and touches the subject surface, and estimates the position of the tangent point while translating the tangent plane along the tangent.

Preferably, the tangent-point estimation means projects the pixels on the tangent plane onto each multi-viewpoint image, sets a function that evaluates the degree of coincidence of the corresponding pixel values in the images, and takes the point at which the function is maximized as the tangent point.

Preferably, the function evaluating the degree of coincidence decreases monotonically with the sum of squared differences of the corresponding pixel values.

Preferably, the function evaluating the degree of coincidence decreases monotonically with the variance or the standard deviation of the corresponding pixel values.

Preferably, when the function evaluating the degree of coincidence is computed, weighting is performed according to the position of the camera that captured each multi-viewpoint image.

Preferably, this weighting increases monotonically with the inner product of the normal vector of the tangent plane and the optical-axis vector of the camera that captured the multi-viewpoint image.

Preferably, this weighting decreases monotonically with the angle between the normal vector of the tangent plane and the optical-axis vector of the camera that captured the multi-viewpoint image.

Preferably, a frustum surface that passes through the tangent calculated by the tangent calculation means and touches the subject surface is used instead of the tangent plane.

To achieve the above object, the method according to the present invention for restoring the three-dimensional shape of a subject from multi-viewpoint images obtained by photographing the subject with a plurality of cameras and from the camera parameters of each camera comprises: a tangent calculation step of extracting the occluding contour of the subject in the multi-viewpoint images and obtaining, from the camera parameters, the tangent corresponding to each point constituting the occluding contour; a tangent-point estimation step of estimating the position of the tangent point with the subject surface on each calculated tangent; and a surface restoration step of restoring the shape of the subject surface using the tangent points as feature points on the subject surface.

To achieve the above object, a program according to the present invention causes a computer to function as the apparatus described above.

According to the tangent-point estimation means of the present invention, the geometric condition that at least one tangent point (a point on the surface of the subject) always exists on the tangent corresponding to each point of the occluding contour in the multi-viewpoint images restricts the search for surface points to the tangent itself, so that the position of the tangent point on the subject surface can be estimated. Restoring the three-dimensional shape of the subject surface with the set of tangent points as a constraint therefore makes the restored shape more accurate.

FIG. 1 shows the system configuration of the three-dimensional shape restoration apparatus according to the present invention.
FIG. 2 shows the calculation of a tangent drawn to the surface of the subject.
FIG. 3 shows tangent-point estimation using a tangent plane according to the first embodiment.
FIG. 4 shows tangent-point estimation using a frustum surface according to the second embodiment.
FIG. 5 shows the restoration of the three-dimensional shape of the subject surface.

Embodiments of the present invention are described in detail below with reference to the drawings. FIG. 1 is a system configuration diagram of the three-dimensional shape restoration apparatus according to the present invention. As shown in FIG. 1, the three-dimensional shape restoration apparatus 1 comprises a multi-viewpoint image input unit 11, a tangent calculation unit 12, a tangent-point estimation unit 13, and a surface restoration unit 14.

The multi-viewpoint image input unit 11 receives image data (multi-viewpoint images) obtained by synchronously photographing a subject with a plurality of cameras arranged so as to surround it; the camera parameters are assumed known. In each multi-viewpoint image, the subject is separated from the background and its silhouette is extracted. Any known separation technique may be used, for example chroma keying, background subtraction, or manual extraction. From the extracted silhouette, the occluding contour, that is, the outline of the subject's silhouette in the multi-viewpoint image, is detected, again by any known technique; for example, the contour can be approximated by a polygonal line using the Teh-Chin chain approximation algorithm, such as via OpenCV's cvFindContours function. Applying the volume intersection method to the occluding contours yields the Visual Hull of the subject. In general, a Visual Hull obtained in this way contains the true three-dimensional shape of the subject and circumscribes it.
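The silhouette-extraction and occluding-contour step above can be sketched as follows. This is a minimal illustration under assumptions not stated in the text: a simple background-difference threshold and a 4-neighbour boundary test stand in for the chroma-key, background-subtraction, and Teh-Chin polygonal-approximation options the description names; all function names are illustrative.

```python
import numpy as np

def extract_silhouette(image, background, threshold=30.0):
    """Background subtraction: pixels differing from the background
    by more than `threshold` are treated as subject (silhouette)."""
    diff = np.abs(image.astype(float) - background.astype(float))
    return diff > threshold

def occluding_contour(silhouette):
    """Boundary pixels of the silhouette: subject pixels with at least
    one 4-connected background neighbour."""
    s = silhouette
    interior = np.zeros_like(s)
    interior[1:-1, 1:-1] = (s[1:-1, 1:-1] & s[:-2, 1:-1] & s[2:, 1:-1]
                            & s[1:-1, :-2] & s[1:-1, 2:])
    return s & ~interior

# Toy example: a 7x7 frame with a 3x3 bright square on a dark background.
bg = np.zeros((7, 7))
img = bg.copy()
img[2:5, 2:5] = 255.0
sil = extract_silhouette(img, bg)
contour = occluding_contour(sil)
```

In a real pipeline the boolean contour mask would then be converted to a polygonal line (e.g. with OpenCV's Teh-Chin approximation) before the tangent calculation below.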

The tangent calculation unit 12 calculates, for each of the cameras, the tangents drawn from the camera's principal point (viewpoint) to the surface of the subject. FIG. 2 illustrates this calculation: FIG. 2a is a side view and FIG. 2b a top view. As shown in FIG. 2b, a tangent drawn from the principal point of a camera to the surface of the subject generally projects to a single point on the boundary of the subject's silhouette in that camera's image. Moreover, at the tangent point between such a tangent and the subject, the Visual Hull generally touches the true three-dimensional shape.

The tangent-point estimation unit 13 estimates the three-dimensional position of the tangent point between each tangent and the subject. As FIG. 2 shows, the tangent point necessarily lies on the tangent, but its position along the tangent is unknown. In the first embodiment (FIG. 3), a tangent plane passing through the tangent and touching the subject surface is set, and the position of the tangent point is estimated by translating this plane along the tangent until it is consistent with the images of the other cameras. In the second embodiment (FIG. 4), a frustum surface passing through the tangent and touching the subject surface is used in the same way. Any known technique may be used to establish consistency with the other camera images.

As shown in FIG. 5, the surface restoration unit 14 restores the three-dimensional shape of the subject surface: since the set of tangent points obtained by the tangent-point estimation unit 13 is a set of points on the subject surface, the positions of the other surface points are estimated from them. Known techniques such as depth estimation by graph cuts may be used for the restoration.

The tangent calculation performed by the tangent calculation unit 12 and the tangent-point estimation performed by the tangent-point estimation unit 13 are described in detail below. The following camera parameters of each camera that captured the multi-viewpoint images are known.

With c denoting the camera ID, the intrinsic parameter matrix A_c of camera c, the rotation matrix R_c of camera c, and the position coordinates M_c of camera c are expressed as follows.

[Equation image]

Let M be the coordinates of a point in real space and m_c the coordinates of a point in the camera-c image:

[Equation image]

For each camera, the projective transformation of Equation (1) holds:

[Equation (1) image]

Here m̃_c denotes the homogeneous representation of m_c, and λ is a scalar expressing the scale indeterminacy.
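Since Equation (1) survives only as an image, the following sketch assumes the standard pinhole reading suggested by the symbols, λ·m̃_c = A_c R_c (M − M_c); this reconstruction is an assumption, not a verbatim copy of the published equation, and the camera values below are illustrative.

```python
import numpy as np

def project(A, R, M_cam, M):
    """Assumed Eq. (1): lambda * m~_c = A R (M - M_cam).
    Returns the dehomogenised pixel coordinates m_c."""
    m_h = A @ R @ (M - M_cam)   # homogeneous image point, scale = lambda
    return m_h[:2] / m_h[2]

# Illustrative camera: focal length 800 px, principal point (320, 240),
# identity rotation, centre at the origin.
A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
M_cam = np.zeros(3)
m = project(A, R, M_cam, np.array([0.1, -0.05, 2.0]))
```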

The normal vector and the tangent direction in the image are obtained from the line segments of the polygonal line produced when the occluding contour is detected. Consider a segment of the polygonal occluding contour in the camera-c image with endpoints m_s = (u_s, v_s)^T and m_e = (u_e, v_e)^T. This segment is given by Equation (2) with t (0 ≤ t ≤ 1) as parameter, and the normal vector n(m_c) at a point m_c on the segment is given by Equation (3).

[Equation images for (2) and (3)]
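Equations (2) and (3) are likewise available only as images. Under the natural reading, m(t) = m_s + t·(m_e − m_s) for the segment and a unit vector perpendicular to the segment direction for the normal, they can be sketched as below; the sign convention of the normal is an assumption.

```python
import numpy as np

def segment_point(m_s, m_e, t):
    """Assumed Eq. (2): point on the contour segment,
    m(t) = m_s + t*(m_e - m_s), with 0 <= t <= 1."""
    return m_s + t * (m_e - m_s)

def segment_normal(m_s, m_e):
    """Assumed Eq. (3) up to sign: rotate the segment direction
    by 90 degrees and normalise to unit length."""
    d = m_e - m_s
    n = np.array([d[1], -d[0]])
    return n / np.linalg.norm(n)

m_s = np.array([0.0, 0.0])
m_e = np.array([4.0, 0.0])
mid = segment_point(m_s, m_e, 0.5)   # midpoint of the segment
n = segment_normal(m_s, m_e)         # unit normal, perpendicular to m_e - m_s
```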

Using these parameters, the tangent calculation unit 12 calculates the tangent corresponding to each point of the occluding contour and determines its range of intersection with the Visual Hull, as follows.

The tangent to the subject corresponding to a contour point m_c in the camera-c image is given by Equation (4), with s (s > 0) as parameter.

[Equation (4) image]

Here R_c^T is the transpose of R_c, and A_c^{-1} is the inverse of A_c.
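With the pinhole notation above, Equation (4) reads naturally as the back-projected ray M(s) = M_c + s·R_c^T A_c^{-1} m̃_c. That reconstruction is an assumption (the published equation is an image), and the numbers below are illustrative; the check simply back-projects a known projection and recovers the original point.

```python
import numpy as np

def tangent_ray(A, R, M_cam, m_c, s):
    """Assumed Eq. (4): the 3-D ray through pixel m_c,
    M(s) = M_cam + s * R^T A^{-1} m~_c."""
    m_h = np.array([m_c[0], m_c[1], 1.0])        # homogeneous pixel
    direction = R.T @ np.linalg.inv(A) @ m_h
    return M_cam + s * direction

A = np.array([[800.0,   0.0, 320.0],
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)
M_cam = np.zeros(3)

# Back-projecting the pixel (360, 220) at s = 2 (the depth of the point
# that projects there) recovers the 3-D point (0.1, -0.05, 2.0).
M = tangent_ray(A, R, M_cam, np.array([360.0, 220.0]), 2.0)
```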

To determine the range of intersection with the Visual Hull, that is, the range of the parameter s, the tangent is projected onto every camera image other than c (the camera-c' images). Specifically, substituting Equation (4) into Equation (1') yields Equation (5), the straight line projected into the camera-c' image.

[Equation (5) image]

Here A_c', R_c', and M_c' are the intrinsic parameter matrix, the rotation matrix, and the position coordinates of camera c', and m̃_c' denotes a point in the camera-c' image.

In the camera-c' image, the intersections of this projected line with the segments of the occluding contour are determined. By Equation (2), a contour segment in the camera-c' image with endpoints m_s and m_e and parameter t is given by Equation (2').

[Equation (2') image]

Solving Equations (5) and (2') simultaneously gives the value of s, Equation (6).

[Equation (6) image]

The range of s is calculated for every camera image, and the intersection (common part) of all these ranges is taken; this intersection is the admissible range of s.
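The final step of this paragraph, intersecting the per-camera ranges of s, can be sketched as follows (Equation (6) itself is not reproduced here, and the interval values are hypothetical):

```python
def intersect_ranges(ranges):
    """Intersect closed intervals [lo, hi] of the tangent parameter s,
    one interval per camera image; returns None if empty."""
    lo = max(r[0] for r in ranges)
    hi = min(r[1] for r in ranges)
    return (lo, hi) if lo <= hi else None

# s-ranges obtained from three hypothetical camera images
s_range = intersect_ranges([(0.5, 3.0), (1.0, 2.5), (0.8, 4.0)])
```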

Next, the tangent-point estimation performed by the tangent-point estimation unit 13 is described. For each tangent, the three-dimensional position of the tangent point between the tangent and the subject is estimated. The following describes the method of the first embodiment, in which the subject surface is approximated by a plane through the tangent and the degree of coincidence is evaluated (FIG. 3).

First, a tangent plane is set on the tangent and grid points are placed on it. Consider a point m_c on the extracted contour. The corresponding tangent is given by Equation (4), and the range of s is obtained from Equations (2') and (6). The tangent plane to the subject at a point M on the tangent is then determined, and two-dimensional coordinates are set on this plane to place the grid points. With N the normal vector of the tangent plane, N is given by Equation (7). Taking the tangent direction in the plane as the S axis and the perpendicular in-plane direction as the T axis, the unit vectors S and T along these axes are given by Equations (8) and (9), and the coordinates of the grid points on the tangent plane by Equation (10).

[Equation images for (7)-(10)]

The number of grid points is (2N + 1) × (2N + 1) (N: natural number). The grid points are projected by substituting Equation (10) into Equation (1').
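A plausible sketch of the grid of Equation (10) follows, assuming (since the published expressions are available only as images) two orthonormal in-plane direction vectors S and T and a (2N+1) × (2N+1) grid of equally spaced points around the candidate point M; the spacing parameter is an added illustrative detail.

```python
import numpy as np

def tangent_plane_grid(M, S, T, N, spacing=1.0):
    """Grid points M + i*spacing*S + j*spacing*T for i, j in [-N, N],
    in the spirit of Eq. (10); S and T are assumed orthonormal vectors
    spanning the tangent plane (here N is the grid half-width, distinct
    from the plane normal N in the text)."""
    idx = np.arange(-N, N + 1) * spacing
    return np.array([M + i * S + j * T for i in idx for j in idx])

M = np.array([0.0, 0.0, 2.0])
S = np.array([1.0, 0.0, 0.0])
T = np.array([0.0, 1.0, 0.0])
grid = tangent_plane_grid(M, S, T, N=2, spacing=0.01)   # 5 x 5 = 25 points
```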

Next, while the tangent plane is translated along the tangent, the grid points are projected into each camera image: with s varied within the range obtained via Equation (2'), each grid point of Equation (10) is projected into each camera image by Equation (1'). Any known interpolation technique, for example a Lanczos filter, may be used for resampling. The projected grid points are then matched across the camera images: a function evaluating the degree of coincidence of the pixel values at the grid points is set, and the value of s that maximizes this function is determined. The point corresponding to this value of s is the tangent point between the subject and the tangent in real space.
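The sweep over s and the arg-max step can be sketched abstractly as below; the consistency function passed in is a hypothetical stand-in for the pixel-value matching described above, and the sampled range is illustrative.

```python
import numpy as np

def best_tangent_point(s_values, consistency):
    """Sweep the tangent parameter s over the admissible range and keep
    the value maximising the photo-consistency score of the projected
    grid (the arg-max step of the first embodiment)."""
    scores = [consistency(s) for s in s_values]
    return s_values[int(np.argmax(scores))]

# Hypothetical score peaking at s = 1.7, sampled at steps of 0.1
s_values = np.arange(1.0, 2.5, 0.1)
s_star = best_tangent_point(s_values, lambda s: -(s - 1.7) ** 2)
```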

Two examples of functions for evaluating the degree of coincidence are:
・the sum of squared differences of the pixel values at the pixels corresponding to each grid point; the smaller the sum, the higher the coincidence;
・the variance or standard deviation of the pixel values at the pixels corresponding to each grid point; the smaller the variance or standard deviation, the higher the coincidence.
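The two coincidence measures can be sketched as follows. As an assumption not spelled out in the text, the squared differences here are taken per grid point against the mean across cameras rather than pairwise; this preserves the "smaller is better" behaviour described above, and the negation makes both scores "larger is better" so they fit the maximisation step.

```python
import numpy as np

def coincidence_ssd(samples):
    """Score decreasing monotonically in the sum of squared differences
    of the per-camera samples from their mean (claim-4 style measure).
    `samples` has shape (num_cameras, num_grid_points)."""
    samples = np.asarray(samples, dtype=float)
    ssd = np.sum((samples - samples.mean(axis=0)) ** 2)
    return -ssd                       # larger = better match

def coincidence_std(samples):
    """Score decreasing monotonically in the standard deviation of the
    per-camera samples (claim-5 style measure)."""
    samples = np.asarray(samples, dtype=float)
    return -samples.std(axis=0).mean()

# Grid sampled in three cameras: identical values match perfectly.
same = [[100.0, 120.0], [100.0, 120.0], [100.0, 120.0]]
diff = [[100.0, 120.0], [90.0, 130.0], [110.0, 140.0]]
```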

When the coincidence function is computed, the camera images can also be weighted according to the position of the camera that captured each multi-viewpoint image. Three weighting schemes are available; in all of them, camera images onto which no grid point of the tangent plane projects because of occlusion are excluded. Let V = M_c − M be the viewing vector from the tangent point toward the principal point of the camera.
・Treat all camera images equally.
・Use the inner product (V · N) of the tangent-plane normal and the viewing vector: the larger the inner product, the larger the weight.
・Use the angle between the tangent-plane normal and the viewing vector,

[Equation image]

the larger the angle, the smaller the weight.
In addition, the Dijkstra algorithm may be used to adjust neighboring tangent points so that they form a continuous sequence (FIG. 5).
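The two position-dependent weights can be sketched as below. The clamping of back-facing views to zero weight and the specific linear angle falloff are added assumptions; the text only requires monotonic increase in the inner product and monotonic decrease in the angle.

```python
import numpy as np

def view_weight_dot(normal, V):
    """Weight increasing monotonically with the inner product of the
    tangent-plane normal and the viewing vector V = M_c - M
    (clamped to zero for back-facing views, an added assumption)."""
    n = normal / np.linalg.norm(normal)
    v = V / np.linalg.norm(V)
    return max(float(n @ v), 0.0)

def view_weight_angle(normal, V):
    """Weight decreasing monotonically with the angle between the
    normal and the viewing vector: 1 at 0 rad, 0 at pi rad."""
    n = normal / np.linalg.norm(normal)
    v = V / np.linalg.norm(V)
    angle = np.arccos(np.clip(n @ v, -1.0, 1.0))
    return float(np.pi - angle) / np.pi

Np = np.array([0.0, 0.0, 1.0])                            # plane normal
head_on = view_weight_dot(Np, np.array([0.0, 0.0, 3.0]))  # camera on the normal
oblique = view_weight_dot(Np, np.array([3.0, 0.0, 3.0]))  # 45-degree view
```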

The following describes the method of the second embodiment, in which the subject surface is approximated by a frustum surface through the tangent and the degree of coincidence is evaluated (FIG. 4).

First, a frustum surface is set on the tangent and grid points are placed on it. Consider a point m_c on the extracted contour. The corresponding tangent is given by Equation (4), and the range of s is obtained from Equations (2') and (6). The frustum surface with the camera's principal point as apex and the occluding contour as base is then determined, and two-dimensional coordinates with origin at the point M on the tangent are set on this surface to place the grid points. With the tangent direction on the frustum surface as the S axis, the unit vector S along it is given by Equation (8'). Letting m_c″ = (u_c″, v_c″) be the point a distance T away along the occluding contour in the camera image, the coordinates of the grid points on the frustum surface are given by Equation (10').

[Equation images for (8') and (10')]

The number of grid points is (2N + 1) × (2N + 1) (N: natural number). The grid points are projected by substituting Equation (10') into Equation (1').

Next, while the frustum surface is translated along the tangent, the grid points are projected into each camera image: with s varied within the range obtained via Equation (2'), each grid point of Equation (10') is projected into each camera image. Any known interpolation technique, for example a Lanczos filter, may be used for resampling. The projected grid points are matched across the camera images: a function evaluating the degree of coincidence of the pixel values at the grid points is set, and the value of s that maximizes this function is determined. The point corresponding to this value of s is the tangent point between the subject and the tangent in real space.

Two examples of functions for evaluating the degree of coincidence are:
・the sum of squared differences of the pixel values at the pixels corresponding to each grid point; the smaller the sum, the higher the coincidence;
・the variance or standard deviation of the pixel values at the pixels corresponding to each grid point; the smaller the variance or standard deviation, the higher the coincidence.

When the coincidence function is computed, the camera images can again be weighted according to the position of the camera that captured each multi-viewpoint image. Three weighting schemes are available; in all of them, camera images onto which no grid point of the frustum surface projects because of occlusion are excluded. Let V = M_c − M be the viewing vector from the tangent point toward the principal point of the camera.
・Treat all camera images equally.
・Use the inner product (V · N) of the normal vector and the viewing vector: the larger the inner product, the larger the weight.
・Use the angle between the normal vector and the viewing vector,

[Equation image]

the larger the angle, the smaller the weight.
In addition, the Dijkstra algorithm may be used to adjust neighboring tangent points so that they form a continuous sequence (FIG. 5).

As described above, the present invention obtains a tangent point (a point on the surface of the subject) for every tangent, and therefore yields many points and lines with known three-dimensional position coordinates, which facilitates three-dimensional shape restoration. Used together with the characteristic points and lines employed by known three-dimensional shape restoration methods, it also increases the accuracy of those methods.

All of the embodiments described above illustrate the present invention by way of example and are not restrictive; the present invention can be practiced in various other modified and altered forms. The scope of the present invention is therefore defined only by the appended claims and their equivalents.

DESCRIPTION OF SYMBOLS
1 Three-dimensional shape restoration apparatus
11 Multi-viewpoint image input unit
12 Tangent calculation unit
13 Tangent-point estimation unit
14 Surface restoration unit

Claims (11)

1. An apparatus for restoring the three-dimensional shape of a subject from multi-viewpoint images obtained by photographing the subject with a plurality of cameras and from the camera parameters of each camera, the apparatus comprising:
tangent calculation means for extracting the occluding contour of the subject in the multi-viewpoint images and obtaining, from the camera parameters, the tangent corresponding to each point constituting the occluding contour;
tangent-point estimation means for estimating the position of the tangent point with the subject surface on each calculated tangent; and
surface restoration means for restoring the shape of the subject surface using the tangent points as feature points on the subject surface.
前記接点推定手段において、前記接線算出手段で算出した接線を通り被写体表面に接する接平面を設定して、該接平面を接線に沿って平行移動させながら接点の位置を推定することを特徴とする請求項1に記載の装置。   The contact estimation means sets a tangent plane that contacts the subject surface through the tangent calculated by the tangent calculation means, and estimates the position of the contact while translating the tangent plane along the tangent. The apparatus of claim 1. 前記接点推定手段において、前記接平面上の画素を前記各多視点画像上に射影し、各画像上の該当する画素値の一致度を評価する関数を設定し、その関数が最大となる点を接点とすることを特徴とする請求項2に記載の装置。   In the contact point estimation means, a pixel on the tangent plane is projected onto each multi-viewpoint image, a function for evaluating the degree of coincidence of corresponding pixel values on each image is set, and the point at which the function is maximized is set. The device according to claim 2, wherein the device is a contact. 前記接点推定手段において、前記一致度を評価する関数として、前記該当する画素値の差分の自乗和に対し単調に減少する関数を用いることを特徴とする請求項3に記載の装置。   The apparatus according to claim 3, wherein the contact estimation unit uses a function that monotonously decreases with respect to a sum of squares of the difference between the corresponding pixel values as the function for evaluating the degree of coincidence. 前記接点推定手段において、前記一致度を評価する関数として、前記該当する画素値の分散もしくは標準偏差に対し単調に減少する関数を用いることを特徴とする請求項3に記載の装置。   The apparatus according to claim 3, wherein the contact estimation unit uses a function that monotonously decreases with respect to a variance or a standard deviation of the corresponding pixel value as a function for evaluating the degree of coincidence. 前記接点推定手段において、前記一致度を評価する関数を計算する際に、前記多視点画像を撮影したカメラの位置に応じて重みづけを行うことを特徴とする請求項3から5のいずれか1項に記載の装置。   The weighting according to any one of claims 3 to 5, wherein, in the contact estimation means, when calculating the function for evaluating the degree of coincidence, weighting is performed according to a position of a camera that has captured the multi-viewpoint image. The device according to item. 
前記接点推定手段において、前記重み付けにおいて、前記接平面の法線ベクトルと前記多視点画像を撮影したカメラの光軸ベクトルとの内積に対し単調に増加する重みを設定することを特徴とする請求項6に記載の装置。   The weight of the contact point estimation unit is set to a monotonically increasing weight with respect to an inner product of a normal vector of the tangent plane and an optical axis vector of a camera that has captured the multi-viewpoint image. 6. The apparatus according to 6. 前記接点推定手段において、前記重み付けにおいて、前記接平面の法線ベクトルと前記多視点画像を撮影したカメラの光軸ベクトルとのなす角度に対し単調に減少する重みを設定することを特徴とする請求項6に記載の装置。   The contact estimation means sets a weight that monotonously decreases with respect to an angle formed by a normal vector of the tangent plane and an optical axis vector of a camera that has captured the multi-viewpoint image in the weighting. Item 7. The apparatus according to Item 6. 前記接平面の代わりに、前記接線算出手段で算出した接線を通り被写体表面に接する錐台面を用いることを特徴とする請求項2から8のいずれか1項に記載の装置。   9. The apparatus according to claim 2, wherein instead of the tangent plane, a frustum surface that passes through a tangent calculated by the tangent calculation unit and is in contact with a subject surface is used. 被写体を複数台のカメラで撮影した多視点画像と各カメラのカメラパラメータから、被写体の3次元形状を復元する方法であって、
前記多視点画像における被写体の遮蔽輪郭線を抽出し、前記カメラパラメータから、該遮蔽輪郭線を構成する各点において対応する接線を求める接線算出ステップと、
前記算出した接線上の被写体表面に対する接点の位置を推定する接点推定ステップと、
前記接点を被写体表面上の特徴点として、被写体の表面の形状を復元する表面復元ステップと、
を備えていることを特徴とする被写体の3次元形状を復元する方法。
A method for restoring a three-dimensional shape of a subject from a multi-viewpoint image obtained by photographing the subject with a plurality of cameras and camera parameters of each camera,
Extracting a shielding contour of a subject in the multi-viewpoint image and calculating a tangent corresponding to each point constituting the shielding contour from the camera parameters; and
A contact estimation step for estimating the position of the contact with respect to the subject surface on the calculated tangent line;
A surface restoration step for restoring the shape of the surface of the subject using the contact point as a feature point on the subject surface;
A method for reconstructing a three-dimensional shape of an object.
請求項1から9のいずれか1項に記載の装置としてコンピュータを機能させるプログラム。   A program that causes a computer to function as the apparatus according to claim 1.
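The coincidence evaluation in claims 3 to 5 can be sketched as follows. The specific form 1/(1 + SSD) is a hypothetical choice: any function that decreases monotonically with the sum of squared differences (here, squared deviations from the cross-view mean, which also covers the variance-based variant of claim 5 up to scaling) satisfies the claims.

```python
import numpy as np

def coincidence_score(pixel_values_per_view):
    """Hypothetical coincidence function in the spirit of claims 3-5:
    larger when the pixel values that one tangent-plane point projects
    to in the different views agree. 1/(1 + SSD) is one monotonically
    decreasing function of the sum of squared deviations; the patent
    does not fix a specific form.

    pixel_values_per_view: one pixel-value vector per camera view."""
    v = np.asarray(pixel_values_per_view, dtype=float)
    mean = v.mean(axis=0)
    ssd = ((v - mean) ** 2).sum()  # squared deviation from cross-view mean
    return 1.0 / (1.0 + ssd)
```

The contact point estimation of claim 2 would then slide the tangent plane along the tangent line and keep the position where this score peaks.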
JP2011205735A 2011-09-21 2011-09-21 Apparatus, method, and program for restoring three-dimensional shape of object Expired - Fee Related JP5736285B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2011205735A JP5736285B2 (en) 2011-09-21 2011-09-21 Apparatus, method, and program for restoring three-dimensional shape of object

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2011205735A JP5736285B2 (en) 2011-09-21 2011-09-21 Apparatus, method, and program for restoring three-dimensional shape of object

Publications (2)

Publication Number Publication Date
JP2013069026A true JP2013069026A (en) 2013-04-18
JP5736285B2 JP5736285B2 (en) 2015-06-17

Family

ID=48474692

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2011205735A Expired - Fee Related JP5736285B2 (en) 2011-09-21 2011-09-21 Apparatus, method, and program for restoring three-dimensional shape of object

Country Status (1)

Country Link
JP (1) JP5736285B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2016209399A (en) * 2015-05-12 2016-12-15 国立大学法人京都大学 Image processing device and method, and computer program
US9607439B2 (en) 2014-01-16 2017-03-28 Canon Kabushiki Kaisha Information processing apparatus and information processing method
JP2018133059A (en) * 2017-02-17 2018-08-23 キヤノン株式会社 Information processing apparatus and method of generating three-dimensional model
KR101931564B1 (en) 2017-03-03 2018-12-24 한국과학기술원 Device and method for processing image using image registration

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007025863A (en) * 2005-07-13 2007-02-01 Advanced Telecommunication Research Institute International Photographing system, photographing method, and image processing program


Also Published As

Publication number Publication date
JP5736285B2 (en) 2015-06-17

Similar Documents

Publication Publication Date Title
CN111066065B (en) System and method for hybrid depth regularization
JP7403528B2 (en) Method and system for reconstructing color and depth information of a scene
US10789765B2 (en) Three-dimensional reconstruction method
US8447099B2 (en) Forming 3D models using two images
US8452081B2 (en) Forming 3D models using multiple images
US9767611B2 (en) Information processing apparatus and method for estimating depth values using an approximate plane
JP6760957B2 (en) 3D modeling method and equipment
CN112686877B (en) Binocular camera-based three-dimensional house damage model construction and measurement method and system
US9191650B2 (en) Video object localization method using multiple cameras
WO2012165491A1 (en) Stereo camera device and computer-readable recording medium
CN107274483A (en) A kind of object dimensional model building method
JP5965293B2 (en) Camera pose estimation device and camera pose estimation program
WO2015014111A1 (en) Optical flow tracking method and apparatus
JP2014197314A (en) Image processor and image processing method
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
JP5068732B2 (en) 3D shape generator
JP5366258B2 (en) Virtual viewpoint image generation method and program based on geometric information in large space camera arrangement
US10142613B2 (en) Image processing apparatus, image processing system, and image processing method
Oliveira et al. Selective hole-filling for depth-image based rendering
KR101593316B1 (en) Method and apparatus for recontructing 3-dimension model using stereo camera
JP5736285B2 (en) Apparatus, method, and program for restoring three-dimensional shape of object
JP6196562B2 (en) Subject information superimposing apparatus, subject information superimposing method, and program
US8847954B1 (en) Methods and systems to compute 3D surfaces
Xiong et al. Linearly estimating all parameters of affine motion using radon transform
CN112884817B (en) Dense optical flow calculation method, dense optical flow calculation device, electronic device, and storage medium

Legal Events

Date Code Title Description
RD04 Notification of resignation of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7424

Effective date: 20130408

RD03 Notification of appointment of power of attorney

Free format text: JAPANESE INTERMEDIATE CODE: A7423

Effective date: 20130531

A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20140227

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20141126

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20141210

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20150119

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20150408

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20150420

R150 Certificate of patent or registration of utility model

Ref document number: 5736285

Country of ref document: JP

Free format text: JAPANESE INTERMEDIATE CODE: R150

LAPS Cancellation because of no payment of annual fees