JP2015033047A - Depth estimation device employing plural cameras - Google Patents

Depth estimation device employing plural cameras

Info

Publication number
JP2015033047A
JP2015033047A (application number JP2013162428A)
Authority
JP
Japan
Prior art keywords
depth
camera
image
camera image
projection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP2013162428A
Other languages
Japanese (ja)
Inventor
Hiroshi Sanko (三功 浩嗣)
Hitoshi Naito (内藤 整)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
KDDI Corp
Original Assignee
KDDI Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by KDDI Corp filed Critical KDDI Corp
Priority to JP2013162428A priority Critical patent/JP2015033047A/en
Publication of JP2015033047A publication Critical patent/JP2015033047A/en
Pending legal-status Critical Current


Abstract

PROBLEM TO BE SOLVED: To provide a depth estimation device capable of estimating the depth of an object accurately and quickly using camera images captured by a plurality of cameras. SOLUTION: The depth estimation device comprises: a camera image acquisition part 1 for acquiring camera images from each camera; an object region determination part 6 for determining an object region in a three-dimensional voxel space on the basis of the camera images; an object region projection part 7 for producing a projection image of the object region for each camera image; a corresponding point detection part 61 for detecting corresponding points by performing corresponding-point matching between projection images with the same line used as the search space; and a depth estimation part 8 for estimating the depth of the object region on the basis of the disparity of the corresponding points.

Description

The present invention relates to an apparatus for estimating the depth of an object from images captured by a plurality of cameras, and more particularly to a depth estimation device that can contribute to high-quality motion parallax reproduction of real-space video using a multi-view 3D display or the like.

In 3D modeling of real space and virtual viewpoint rendering between cameras, which are essential to realize the space-sharing communication expected as a next-generation communication service, depth estimation of the three-dimensional distance from each camera to the subject is indispensable.

As a representative method for 3D modeling of real space, Non-Patent Document 1 discloses depth estimation based on corresponding-point matching between cameras, premised on a dense camera arrangement in which the camera spacing is about the width between human eyes, together with a method of generating interpolated images between cameras by 3D warping using the depth information.

Patent Document 1 discloses a method of restoring a three-dimensional shape model by converting a depth map obtained from a range image sensor into polygons based on the per-pixel depth values. If the depth can be estimated accurately, this can contribute to high-quality motion parallax reproduction of real-space video using a multi-view 3D display or the like.

Japanese Patent Application No. 2012-214194

W. R. Mark et al., "Post-Rendering 3D Warping," in Proc. of Symposium on Interactive 3D Graphics, pp. 7-16, 1997.

The method proposed in Non-Patent Document 1 performs per-pixel depth estimation based on corresponding-point matching between cameras for a camera array arranged at intervals comparable to the width between human eyes, and generates interpolated images between cameras by disparity compensation. However, holes inevitably appear in occlusion regions, and the synthesized image quality degrades significantly, particularly as the camera spacing increases.

The method proposed in Patent Document 1 shows that a rough depth estimate of the object surface can be obtained robustly by using a range image sensor. However, the accuracy near edges is insufficient, and it is difficult to distinguish the boundary between the object and the floor surface.

An object of the present invention is to solve the above problems of the prior art and to provide a depth estimation device using a plurality of cameras that can estimate the depth of an object accurately and quickly from camera images captured by the plurality of cameras.

To achieve the above object, the present invention is characterized in that an apparatus for estimating the depth of an object based on camera images of the object captured by a plurality of cameras whose image lines are mutually synchronized comprises: camera image acquisition means for acquiring a camera image from each camera; object region determination means for determining an object region in a three-dimensional voxel space based on each camera image; object region projection means for generating a projection image of the object region for each camera image; corresponding point detection means for detecting corresponding points by performing corresponding-point matching between the projection images with the same line used as the search space; and depth estimation means for estimating the depth of the object region based on the disparity of each pair of corresponding points.

According to the present invention, since the search space for detecting corresponding points is limited to the same line of each camera image, corresponding points can be detected quickly and with high matching accuracy, so that the depth of the object can be estimated quickly and accurately. As a result, highly accurate video rendering at arbitrary viewpoints between the cameras becomes possible.

FIG. 1 is a functional block diagram of a depth estimation device to which the present invention is applied.
FIG. 2 is a diagram showing an example of camera images.
FIG. 3 is a diagram showing an example of empty-stage images.
FIG. 4 is a diagram schematically showing a method of calculating the background likelihood of each voxel.
FIG. 5 is a diagram showing an example of projection images.
FIG. 6 is a diagram showing the corresponding-point matching method.
FIG. 7 is a diagram showing a method of calculating disparity from corresponding points.
FIG. 8 is a diagram showing an example of a depth image obtained by the present invention.

Hereinafter, an embodiment of the present invention will be described in detail with reference to the drawings. FIG. 1 is a functional block diagram showing the configuration of the main part of a depth estimation device according to one embodiment of the present invention; components unnecessary for explaining the present invention are omitted from the figure.

The camera image acquisition unit 1 acquires camera images Ica1, Ica2, Ica3, ... from a plurality of cameras CA1, CA2, CA3, ... that capture the stage space (voxel space) in which an object such as a person to be rendered may exist. All of the cameras CA1, CA2, CA3, ... are adjusted and rectified in advance so that the positions of the horizontal lines in the camera images Ica1, Ica2, Ica3, ... are mutually synchronized.

FIG. 2 shows an example of camera images Ica1, Ica2, Ica3 in which a person is photographed as the object; the same line of the object appears on the corresponding line of each camera image Ica1, Ica2, Ica3. In the following, for ease of explanation, the case of three cameras (CA1, CA2, CA3) is described as an example.

The calibration unit 2 associates each voxel V of the stage space (each small block of the three-dimensional space) with the pixels of the camera images Ica1, Ica2, Ica3 onto which that voxel V is projected based on the central projection matrix, according to the following equation (1).
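Equation (1) itself is not reproduced in this text; as an illustration only, the following minimal Python sketch assumes the standard pinhole (central projection) model x ~ P·X with a 3x4 projection matrix P, which is the usual form of such a voxel-to-pixel correspondence.

```python
import numpy as np

def project_voxel(P, voxel_xyz):
    """Project the center of a voxel V onto image coordinates using a 3x4
    central projection matrix P (pinhole model, x ~ P @ X)."""
    X = np.append(np.asarray(voxel_xyz, dtype=float), 1.0)  # homogeneous coordinates
    u, v, w = P @ X
    return np.array([u / w, v / w])  # pixel position (x, y)

# e.g. associate voxel (0.1, 1.5, 3.0) with a pixel of camera image Ica1,
# given that camera's projection matrix P1 (a 3x4 ndarray):
# x1, y1 = project_voxel(P1, (0.1, 1.5, 3.0))
```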

The empty-stage image storage unit 3 stores in advance empty-stage images Ik1, Ik2, Ik3 obtained by photographing, with the cameras CA1, CA2, CA3, the empty stage in which no object to be rendered exists. FIG. 3 shows the empty-stage images Ik1, Ik2, Ik3 corresponding to the camera images Ica1, Ica2, Ica3 shown in FIG. 2.

In the background likelihood calculation unit 4, the background likelihood function generation unit 41 calculates, for each camera, the mean vector u and the covariance matrix Σ of each pixel value (RGB vector) from a plurality of frames of the empty-stage images Ik1, Ik2, Ik3 captured by the cameras CA1, CA2, CA3 over a predetermined period. The function calculation unit 42 applies each pixel value x of the camera images Ica1, Ica2, Ica3 in which the object appears, together with the mean vector u and the covariance matrix Σ, to the following equation (2), and thereby calculates for each of the camera images Ica1, Ica2, Ica3 the likelihood that each pixel belongs to the background as the background likelihood f(x; u, Σ).
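Equation (2) is likewise not reproduced here; the sketch below assumes it is the usual multivariate Gaussian density of the RGB value, with the per-pixel mean vector u and covariance matrix Σ estimated from the empty-stage frames.

```python
import numpy as np

def fit_background_model(empty_frames):
    """Per-pixel mean vector u and covariance matrix Sigma of the RGB values
    over several empty-stage frames (array of shape [frames, H, W, 3])."""
    stack = np.asarray(empty_frames, dtype=float)
    u = stack.mean(axis=0)                                             # (H, W, 3)
    diff = stack - u
    sigma = np.einsum('fhwi,fhwj->hwij', diff, diff) / stack.shape[0]  # (H, W, 3, 3)
    return u, sigma

def background_likelihood(x, u, sigma):
    """Gaussian density f(x; u, Sigma) for one pixel: x and u are RGB
    3-vectors and sigma is that pixel's 3x3 covariance (regularized)."""
    d = np.asarray(x, dtype=float) - u
    cov = sigma + 1e-6 * np.eye(3)
    norm = np.sqrt(((2.0 * np.pi) ** 3) * np.linalg.det(cov))
    return float(np.exp(-0.5 * d @ np.linalg.inv(cov) @ d) / norm)
```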

The voxel background likelihood calculation unit 5 calculates the background likelihood of each voxel V in the stage space from the background likelihoods of the pixels corresponding to that voxel V in the camera images Ica1, Ica2, Ica3.

FIG. 4 schematically shows how the background likelihood of each voxel V is calculated. Each voxel V of the voxel space 10 set in the stage space and the pixels of the camera images Ica1, Ica2, Ica3 onto which that voxel V is projected based on the central projection matrix have been associated in advance by the calibration unit 2, so the correspondence of equation (1) above is given. In the present embodiment, the background likelihood ρv of each voxel V is calculated by applying the background likelihoods at the corresponding coordinates of the camera images Ica1, Ica2, Ica3 to the following equation (3).
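Equation (3) is not reproduced in this text either; the following sketch therefore uses an assumed combination rule (the mean of the per-camera background likelihoods at the pixels the voxel projects to) purely to illustrate how ρv can be assembled from the per-pixel values of equation (2).

```python
import numpy as np

def voxel_background_likelihood(pixel_coords, likelihood_maps):
    """Background likelihood rho_v of one voxel, combining the per-pixel
    background likelihoods f(x; u, Sigma) at the coordinates the voxel
    projects to in each camera image (correspondence from equation (1)).
    The combination rule used here (mean over cameras, ignoring off-image
    projections) is an assumption, since equation (3) is not reproduced."""
    values = []
    for (x, y), f_map in zip(pixel_coords, likelihood_maps):
        h, w = f_map.shape
        xi, yi = int(round(x)), int(round(y))
        if 0 <= yi < h and 0 <= xi < w:
            values.append(f_map[yi, xi])
    return float(np.mean(values)) if values else 0.0
```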

The object region determination unit 6 introduces an energy function E(v, α) that takes the adjacency of the voxels into account, and binarizes the background likelihood of each voxel V (background or not) so that the value of the energy function E(v, α) is minimized. The following equation (4) is the expression used to binarize the voxel background likelihoods, and consists of the sum of a data term (first term on the right-hand side) and a smoothing term (second term on the right-hand side).

Here, α is a binary variable that takes the value "0" if the voxel V is background and "1" if it is the object (person). The values of both the data term and the smoothing term depend on the choice of α, and the optimal assignment is determined as the choice of α that minimizes the energy over the entire voxel space.

The data term is given by the following equations (5) and (6), and the smoothing term, which depends on the difference in likelihood values between adjacent voxels, is given by the following equations (7) and (8). The symbol N denotes the set of pairs of adjacent voxels in the voxel space 10 (6 patterns for left/right, front/back and up/down, or 26 patterns if diagonals are included), with i and j denoting a pair. The symbol K is a positive constant (parameter).
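Since equations (4) to (8) are not reproduced here, the sketch below only illustrates the general shape of such an energy: a data term that charges each voxel according to its background likelihood and chosen label α, plus a smoothing term over the neighbor set N that penalizes differing labels on adjacent voxels. The exact term definitions are assumptions, and in practice an energy of this form is usually minimized with a graph-cut solver rather than by direct enumeration.

```python
import numpy as np

def total_energy(rho, alpha, neighbors, K=1.0):
    """Energy E(v, alpha): sum of per-voxel data terms and pairwise
    smoothing terms.  rho[i] is the background likelihood of voxel i
    (assumed normalized to [0, 1]), alpha[i] is its label (0 = background,
    1 = object), and `neighbors` is the adjacency pair set N."""
    eps = 1e-9
    rho = np.asarray(rho, dtype=float)
    alpha = np.asarray(alpha)
    # data term: penalize labeling a voxel against its own likelihood
    data = np.where(alpha == 0, -np.log(rho + eps), -np.log(1.0 - rho + eps)).sum()
    # smoothing term: penalize label changes between adjacent voxels,
    # more strongly when their likelihood values are similar
    smooth = sum(K * np.exp(-abs(rho[i] - rho[j]))
                 for i, j in neighbors if alpha[i] != alpha[j])
    return float(data + smooth)
```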

The object region projection unit 7 projects each voxel V of the object region onto the camera images Ica1, Ica2, Ica3 to obtain projection images. In the present embodiment, for each pixel of the camera images Ica1, Ica2, Ica3, the ray R is traced based on the central projection matrix, and the projection image Jca is obtained by extracting the pixels whose ray R intersects an object candidate. FIG. 5 shows an example of the projection images Jca1, Jca2 corresponding to the camera images Ica1, Ica2.
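As an illustration of this ray-based projection, the following sketch marks a pixel of the projection image whenever any sample along its back-projected ray falls inside the binarized object region; the helpers pixel_ray and is_object_voxel are hypothetical and stand in for the central-projection geometry and the voxel labels obtained above.

```python
import numpy as np

def project_object_region(image_shape, pixel_ray, is_object_voxel,
                          t_range=(0.0, 10.0), steps=200):
    """Build a projection image Jca: a pixel is marked when any sample
    along its back-projected ray R falls inside the object region.
    `pixel_ray(x, y)` (returning a ray origin and direction) and
    `is_object_voxel(point)` (a membership test on the labeled voxels)
    are hypothetical helpers, not functions defined in the patent."""
    h, w = image_shape
    mask = np.zeros((h, w), dtype=bool)
    ts = np.linspace(t_range[0], t_range[1], steps)
    for y in range(h):
        for x in range(w):
            origin, direction = pixel_ray(x, y)
            mask[y, x] = any(is_object_voxel(origin + t * direction) for t in ts)
    return mask
```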

The depth estimation unit 8 includes a corresponding point detection unit 61, a depth data calculation unit 62 and a clustering unit 63, and estimates the depth of the voxel corresponding to each pair of corresponding points from the disparity of the corresponding points between the projection images.

The corresponding point detection unit 61 extracts a local feature for each pixel of the projection images Jca1, Jca2, and detects corresponding points by performing corresponding-point matching based on the local features along each identical horizontal line.

FIG. 6 illustrates how the corresponding point detection unit 61 detects corresponding points: between the projection images (here Jca1 and Jca2), corresponding-point matching based on the local features is performed for all pixels located on each horizontal line L. In the illustrated example, the pixel f1 of the projection image Jca1 and the pixel f2 of the projection image Jca2 are detected as corresponding points on the horizontal line Lx.
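The local feature used for matching is not specified in this text; the sketch below assumes a simple sum-of-absolute-differences block descriptor and restricts the search to the same horizontal line, which is the essential point of the corresponding-point detection.

```python
import numpy as np

def match_on_scanline(left, right, y, x_left, half=4):
    """Find, on row y of `right`, the pixel that best matches the block
    around (x_left, y) in `left`, searching only along the same horizontal
    line L.  A sum-of-absolute-differences block descriptor is assumed;
    (x_left, y) is assumed to lie at least `half` pixels from the border."""
    h, w = left.shape
    ref = left[y - half:y + half + 1, x_left - half:x_left + half + 1].astype(float)
    best_x, best_cost = None, np.inf
    for x in range(half, w - half):
        cand = right[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        cost = np.abs(ref - cand).sum()
        if cost < best_cost:
            best_x, best_cost = x, cost
    return best_x  # x-coordinate of the corresponding point f2
```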

The depth data calculation unit 62 calculates the depth of the voxel corresponding to the corresponding points from the distance between the corresponding points f1 and f2, that is, the disparity d, and uses it as depth data. FIG. 7 shows how the disparity d is calculated: the corresponding point f1 is projected onto the projection image Jca2, and the disparity d is calculated as the distance between the corresponding points f1 and f2. In the present embodiment, the depth is estimated by applying the disparity d and the relative positional relationship of the cameras CA to triangulation.
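For rectified cameras, this triangulation reduces to the familiar relation Z = f·B/d between depth Z, focal length f, baseline B and disparity d; the closed form below is that standard assumption, since the patent only states that the disparity and the cameras' relative positions are applied to triangulation.

```python
def depth_from_disparity(d, focal_px, baseline_m):
    """Rectified-stereo triangulation: depth Z = f * B / d, with focal
    length f in pixels, baseline B in meters and disparity d in pixels."""
    if d <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / d

# e.g. f = 1200 px, B = 0.5 m, d = 40 px  ->  Z = 15 m
z = depth_from_disparity(40, focal_px=1200, baseline_m=0.5)
```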

The clustering unit 63 clusters (quantizes) the many depth estimates obtained as described above into discrete values and outputs them. FIG. 8 shows an example of the depth images Dca1, Dca2 expressed with the depth estimates obtained in this way; it can be seen that appropriate depth estimates are obtained for each part of the object.
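The clustering method is not specified; the sketch below assumes uniform quantization of the depth range into a fixed number of levels, replacing each estimate with the center of its bin.

```python
import numpy as np

def quantize_depths(depth_map, levels=16):
    """Cluster continuous depth estimates into a fixed number of discrete
    values (uniform binning between the minimum and maximum depth, each
    estimate replaced by its bin center; the binning rule is an assumption)."""
    d = np.asarray(depth_map, dtype=float)
    lo, hi = float(np.nanmin(d)), float(np.nanmax(d))
    edges = np.linspace(lo, hi, levels + 1)
    idx = np.clip(np.digitize(d, edges) - 1, 0, levels - 1)
    centers = (edges[:-1] + edges[1:]) / 2.0
    return centers[idx]
```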

According to the present embodiment, the search space for detecting corresponding points is limited to the same line of each camera image, which enables fast corresponding-point detection with high matching accuracy, so that the depth of the object can be estimated quickly and accurately. As a result, highly accurate video rendering at arbitrary viewpoints between the cameras becomes possible.

1 ... camera image acquisition unit, 2 ... calibration unit, 3 ... empty-stage image storage unit, 4 ... background likelihood calculation unit, 5 ... voxel background likelihood calculation unit, 6 ... object region determination unit, 7 ... object region projection unit, 8 ... depth estimation unit, 41 ... background likelihood function generation unit, 42 ... function calculation unit, 61 ... corresponding point detection unit, 62 ... depth data calculation unit, 63 ... clustering unit

Claims (4)

1. An apparatus for estimating the depth of an object based on camera images of the object captured by a plurality of cameras whose image lines are mutually synchronized, comprising:
camera image acquisition means for acquiring a camera image from each camera;
object region determination means for determining an object region in a three-dimensional voxel space based on each camera image;
object region projection means for generating a projection image of the object region for each camera image;
corresponding point detection means for detecting corresponding points by performing corresponding-point matching between the projection images with the same line used as the search space; and
depth estimation means for estimating the depth of the object region based on the disparity of each pair of corresponding points.
2. The depth estimation device according to claim 1, further comprising:
means for calculating a background likelihood for each pixel of each camera image; and
means for projecting each voxel of the voxel space onto each camera image based on a central projection matrix,
wherein the object region determination means determines whether each voxel belongs to the object region based on the background likelihood of the image onto which the voxel is projected.
3. The depth estimation device according to claim 1 or 2, wherein the object region projection means searches a ray for each pixel of each camera image based on the central projection matrix, and forms the projection image from the pixels whose rays intersect the object region.
4. The depth estimation device according to any one of claims 1 to 3, wherein the depth estimation means clusters the depth estimates into discrete values and outputs them.
JP2013162428A 2013-08-05 2013-08-05 Depth estimation device employing plural cameras Pending JP2015033047A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2013162428A JP2015033047A (en) 2013-08-05 2013-08-05 Depth estimation device employing plural cameras

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP2013162428A JP2015033047A (en) 2013-08-05 2013-08-05 Depth estimation device employing plural cameras

Publications (1)

Publication Number Publication Date
JP2015033047A true JP2015033047A (en) 2015-02-16

Family

ID=52518014

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2013162428A Pending JP2015033047A (en) 2013-08-05 2013-08-05 Depth estimation device employing plural cameras

Country Status (1)

Country Link
JP (1) JP2015033047A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016354A (en) * 2019-05-30 2020-12-01 中国科学院沈阳自动化研究所 Visual recognition-based grain tank loading state detection method for grain transport vehicle
WO2021221436A1 (en) * 2020-04-28 2021-11-04 삼성전자 주식회사 Device and method for acquiring depth of space by using camera
WO2023095375A1 (en) * 2021-11-29 2023-06-01 パナソニックIpマネジメント株式会社 Three-dimensional model generation method and three-dimensional model generation device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003323631A (en) * 2002-03-25 2003-11-14 Thomson Licensing Sa Modeling method for 3d scene
JP2004032244A (en) * 2002-06-25 2004-01-29 Fuji Heavy Ind Ltd Stereo image processing apparatus and method therefor
JP2004127239A (en) * 2002-04-24 2004-04-22 Mitsubishi Electric Research Laboratories Inc Method and system for calibrating multiple cameras using calibration object
JP2004248213A (en) * 2003-02-17 2004-09-02 Kazunari Era Image processing apparatus, imaging apparatus, and program
JP2004342004A (en) * 2003-05-19 2004-12-02 Minolta Co Ltd Image processing device and program
JP2011113177A (en) * 2009-11-25 2011-06-09 Kddi Corp Method and program for structuring three-dimensional object model
JP2012003372A (en) * 2010-06-15 2012-01-05 Kddi Corp Method and program for constructing three dimensional model for object
JP2012208759A (en) * 2011-03-30 2012-10-25 Kddi Corp Method and program for improving accuracy of three-dimensional shape model

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2003323631A (en) * 2002-03-25 2003-11-14 Thomson Licensing Sa Modeling method for 3d scene
JP2004127239A (en) * 2002-04-24 2004-04-22 Mitsubishi Electric Research Laboratories Inc Method and system for calibrating multiple cameras using calibration object
JP2004032244A (en) * 2002-06-25 2004-01-29 Fuji Heavy Ind Ltd Stereo image processing apparatus and method therefor
JP2004248213A (en) * 2003-02-17 2004-09-02 Kazunari Era Image processing apparatus, imaging apparatus, and program
JP2004342004A (en) * 2003-05-19 2004-12-02 Minolta Co Ltd Image processing device and program
JP2011113177A (en) * 2009-11-25 2011-06-09 Kddi Corp Method and program for structuring three-dimensional object model
JP2012003372A (en) * 2010-06-15 2012-01-05 Kddi Corp Method and program for constructing three dimensional model for object
JP2012208759A (en) * 2011-03-30 2012-10-25 Kddi Corp Method and program for improving accuracy of three-dimensional shape model

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112016354A (en) * 2019-05-30 2020-12-01 中国科学院沈阳自动化研究所 Visual recognition-based grain tank loading state detection method for grain transport vehicle
CN112016354B (en) * 2019-05-30 2024-01-09 中国科学院沈阳自动化研究所 Method for detecting loading state of grain tank of grain transporting vehicle based on visual identification
WO2021221436A1 (en) * 2020-04-28 2021-11-04 삼성전자 주식회사 Device and method for acquiring depth of space by using camera
WO2023095375A1 (en) * 2021-11-29 2023-06-01 パナソニックIpマネジメント株式会社 Three-dimensional model generation method and three-dimensional model generation device

Similar Documents

Publication Publication Date Title
US9426444B2 (en) Depth measurement quality enhancement
EP2992508B1 (en) Diminished and mediated reality effects from reconstruction
JP7403528B2 (en) Method and system for reconstructing color and depth information of a scene
EP2848003B1 (en) Method and apparatus for acquiring geometry of specular object based on depth sensor
JP6351238B2 (en) Image processing apparatus, imaging apparatus, and distance correction method
US20190098277A1 (en) Image processing apparatus, image processing method, image processing system, and storage medium
US11348267B2 (en) Method and apparatus for generating a three-dimensional model
US11328479B2 (en) Reconstruction method, reconstruction device, and generation device
KR100953076B1 (en) Multi-view matching method and device using foreground/background separation
WO2016181687A1 (en) Image processing device, image processing method and program
US9367920B2 (en) Method and apparatus for processing images
JP2018151689A (en) Image processing apparatus, control method thereof, program and storage medium
JP2014112055A (en) Estimation method for camera attitude and estimation system for camera attitude
US20170076459A1 (en) Determining scale of three dimensional information
JP2012209895A (en) Stereo image calibration method, stereo image calibration device and stereo image calibration computer program
Anderson et al. Augmenting depth camera output using photometric stereo.
KR101125061B1 (en) A Method For Transforming 2D Video To 3D Video By Using LDI Method
JP2015033047A (en) Depth estimation device employing plural cameras
JP2014167693A (en) Depth estimation device using plurality of cameras
CN116569214A (en) Apparatus and method for processing depth map
De Sorbier et al. Augmented reality for 3D TV using depth camera input
JP7275583B2 (en) BACKGROUND MODEL GENERATING DEVICE, BACKGROUND MODEL GENERATING METHOD AND BACKGROUND MODEL GENERATING PROGRAM
Zhou et al. New eye contact correction using radial basis function for wide baseline videoconference system
KR20120056668A (en) Apparatus and method for recovering 3 dimensional information
KR20190072987A (en) Stereo Depth Map Post-processing Method with Scene Layout

Legal Events

Date        Code  Title                                              Description
2016-01-28  A621  Written request for application examination        JAPANESE INTERMEDIATE CODE: A621
2016-08-10  RD04  Notification of resignation of power of attorney  JAPANESE INTERMEDIATE CODE: A7424
2016-10-14  A977  Report on retrieval                                JAPANESE INTERMEDIATE CODE: A971007
2016-10-19  A131  Notification of reasons for refusal                JAPANESE INTERMEDIATE CODE: A131
2017-04-12  A02   Decision of refusal                                JAPANESE INTERMEDIATE CODE: A02