JPH0771215B2 - Automatic shooting position determination device - Google Patents

Automatic shooting position determination device

Info

Publication number
JPH0771215B2
Authority
JP
Japan
Prior art keywords
information
unit
amount
latitude
representative
Prior art date
Legal status
Expired - Fee Related
Application number
JP62312162A
Other languages
Japanese (ja)
Other versions
JPH01154676A (en)
Inventor
輝夫 浜野
健司 小倉
聡 石橋
Current Assignee
Nippon Telegraph and Telephone Corp
Original Assignee
Nippon Telegraph and Telephone Corp
Priority date
Filing date
Publication date
Application filed by Nippon Telegraph and Telephone Corp filed Critical Nippon Telegraph and Telephone Corp
Priority to JP62312162A priority Critical patent/JPH0771215B2/en
Publication of JPH01154676A publication Critical patent/JPH01154676A/en
Publication of JPH0771215B2 publication Critical patent/JPH0771215B2/en

Description

DETAILED DESCRIPTION OF THE INVENTION

(Technical Field) The present invention relates to an apparatus for determining the positions from which real objects such as products, works of art, animals, and plants are photographed from a plurality of directions. Image information photographed from the positions determined by the present invention is accumulated in a center device and used, for example, by an image-information providing system that retrieves images and delivers them to terminal devices over a transmission path.

(Prior Art) When a real object such as a product, a work of art, or an animal or plant is photographed from a plurality of directions, one conventional way of determining the photographing positions is to assume a virtual hemisphere OA covering the object O, as shown in FIG. 7, and to place the photographing positions OB at equal angular intervals in the longitude direction ψ (horizontal) and the latitude direction λ (vertical) of this hemisphere.

In this case, however, the density of the photographing positions OB increases as the latitude on the virtual hemisphere OA increases (approaches the pole), and images photographed near the pole OC carry substantially the same information, even though the orientation of the object within the frame changes. It is therefore evident that the distance between photographing positions, measured on the hemisphere, should be kept constant. Yet even if a plurality of photographing positions is chosen in this way, the difference in the information that a viewer actually receives between images photographed from the respective positions still varies greatly with the shape of the object and with the photographing position.

For example, compare the image of the rectangular parallelepiped of FIG. 8 seen from the direction of arrow A (FIG. 9(a)) with the image seen after rotating it by 10 degrees (FIG. 9(b)), and likewise the image seen from the direction of arrow B (FIG. 10(a)) with its 10-degree rotation (FIG. 10(b)). The perceived difference between the two images is clearly larger in the case of the arrow-B direction.

This means that even if the spacing of the photographing positions on the hemisphere is constant, the difference in information between successive images is not. Consequently, when these images are displayed one after another in the order of adjacent photographing positions, the information the viewer actually receives is large in some intervals and small in others. In other words, when such a set of images is stored, the storage efficiency, measured by how much information is actually conveyed to the viewer, is poor.

(Object of the Invention) The object of the present invention is to provide an apparatus that, when a plurality of images is photographed along a virtual hemisphere covering an object, automatically determines photographing positions at which the difference in information between the images becomes constant, thereby eliminating the variation in that difference.

(Structure of the Invention) (Features of the invention and differences from the prior art) The principal feature of the present invention is that, when a plurality of images is photographed along a virtual hemisphere covering an object, the photographing positions are determined so that the surface-information change amount, a quantity corresponding to the difference in information between the photographed images, is the same between every pair of adjacent images. The prior art, in contrast, simply places the photographing positions at equal intervals, so the amount of information about the object conveyed between images is not constant.

The operating principle of the present invention is explained below; the surface-information change amount mentioned above is defined as follows. First, as shown in FIG. 3, the shape of the object to be photographed is approximated by m small planar polygons ωi (i = 0, 1, ..., m−1), such as triangular patches. Let O be the plane-approximated object, and let P be a set of points distributed with uniform density over the surface of O. Each element p ∈ P of this point set carries a region r(p) of radius e centered on p, where e is the largest radius for which the regions r(p) do not overlap one another, and the set of all regions r(p) is denoted R(P).
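In practice such a uniformly distributed point set P can be generated by area-weighted sampling over the facets ωi. The sketch below uses the standard square-root barycentric trick, which the patent does not specify; the function names are illustrative:

```python
import math
import random

def _tri_area(tri):
    """Area of a 3-D triangle given as three (x, y, z) vertices."""
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    ux, uy, uz = bx - ax, by - ay, bz - az
    vx, vy, vz = cx - ax, cy - ay, cz - az
    nx, ny, nz = uy * vz - uz * vy, uz * vx - ux * vz, ux * vy - uy * vx
    return 0.5 * math.sqrt(nx * nx + ny * ny + nz * nz)

def sample_point_set(triangles, n, rng=random):
    """Draw n points uniformly over the mesh surface (the point set P):
    pick a facet with probability proportional to its area, then pick a
    point uniformly inside it using sqrt-barycentric coordinates."""
    areas = [_tri_area(t) for t in triangles]
    points = []
    for _ in range(n):
        a, b, c = rng.choices(triangles, weights=areas, k=1)[0]
        s = math.sqrt(rng.random())
        t = rng.random()
        u, v, w = 1.0 - s, s * (1.0 - t), s * t  # barycentric weights, sum to 1
        points.append(tuple(u * a[i] + v * b[i] + w * c[i] for i in range(3)))
    return points
```

Every sample is a convex combination of the facet vertices, so all points lie on the mesh surface.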

If the cardinality of the point set P is sufficiently large, the set R(P) becomes a collection of very small circular regions distributed with uniform density over the object O. Each r(p) ∈ R(P) can then be regarded as carrying the same amount of information ε about the object O.

Now, as shown in FIG. 3, suppose the object O is first photographed from position u, and the projected area of a region r(p) on the screen is su; when the object is then photographed from position v, let the corresponding projected area be sv. Assuming parallel projection for simplicity, if su < sv, the amount of information about r(p) projected onto the screen increases by

ε⁺ = (sv − su)ε/πe² (1)

and conversely, if su > sv, it decreases by

ε⁻ = (su − sv)ε/πe². (2)

Hence, between the two images photographed from positions u and v, the change in the information carried by the region r(p) on the screen is proportional to

Δε = |su − sv|. (3)

Computing Δε for all regions r(p) ∈ R(P), the sum

E = ΣΔε(p) for all r(p) ∈ R(P) (4)

is proportional to the change in the amount of information about the object O projected onto the screen. Since the surface of O consists of the small polygons ωi (i = 0, 1, ..., m−1) and the regions r(p) are distributed over each ωi with uniform density, let Siu be the projected area of ωi on the screen when O is viewed from position u, and Siv the projected area when viewed from position v. Then

Duv = Σi |Siu − Siv| (i = 0, 1, ..., m−1) (5)

is directly proportional to E in equation (4). This Duv of equation (5) is called the surface-information change amount between the image of the object O photographed from position u and the image photographed from position v.
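Under the parallel-projection assumption, a planar facet with area Ai and outward unit normal ni projects onto a screen perpendicular to the unit view direction d with area Ai·max(0, ni·d), back faces contributing zero. A minimal sketch of equation (5) on that simplification (occlusion between facets is ignored, and the function names and the single-triangle example are illustrative, not from the patent):

```python
import math

def _sub(a, b):
    return (a[0] - b[0], a[1] - b[1], a[2] - b[2])

def _cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def _norm(v):
    return math.sqrt(v[0] ** 2 + v[1] ** 2 + v[2] ** 2)

def projected_area(tri, view_dir):
    """S_i: area of triangle `tri` projected on a screen perpendicular to
    `view_dir` (parallel projection; back faces count as zero)."""
    a, b, c = tri
    cr = _cross(_sub(b, a), _sub(c, a))   # outward normal times twice the area (CCW winding)
    area = 0.5 * _norm(cr)
    n = tuple(x / _norm(cr) for x in cr)
    d = tuple(x / _norm(view_dir) for x in view_dir)
    cos = n[0] * d[0] + n[1] * d[1] + n[2] * d[2]
    return area * max(0.0, cos)

def surface_info_change(triangles, u_dir, v_dir):
    """D_uv = sum_i |S_iu - S_iv|, as in equation (5)."""
    return sum(abs(projected_area(t, u_dir) - projected_area(t, v_dir))
               for t in triangles)
```

For a unit right triangle in the xy-plane, the projected area is 0.5 seen along +z and 0 seen along −z, so D_uv between those two viewpoints is 0.5.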

The present invention exploits the facts that the surface-information change amount is proportional to the change in information between two images and that it can be computed from the shape information of the object alone: by choosing the photographing positions so that the surface-information change amount is approximately constant between adjacent images, the difference in information between the images is made constant.

(Embodiment) FIG. 1 is a block diagram showing the basic configuration of an embodiment of the present invention. In the figure, 1 is a shape-information input unit, 2 is a plane approximation unit, 3 is a surface-information change detection unit, 4 is a representative-position extraction unit, and 5 is the representative photographing-position output.

A concrete configuration is shown in FIG. 2. The plane approximation unit 2 consists of an approximation unit 21 and a polygon-information storage unit 22. The surface-information change detection unit 3 consists of a latitude-direction change measurement unit 31, an averaging unit 32 made up of a storage unit 321 and a computation unit 322, and a longitude-direction change measurement unit 33. The representative-position extraction unit 4 consists of a representative-latitude extraction unit 41, made up of a storage unit 411 and an extraction unit 412, and a representative-longitude extraction unit 42, made up of a storage unit 421 and an extraction unit 422. The representative photographing position 5 is output as a representative latitude 51 and a representative longitude 52.

In operation, the shape information of the object is input to the shape-information input unit 1. In the plane approximation unit 2, the approximation unit 21 first reads this shape information from the shape-information input unit 1, approximates the object shape by m small polygons ωi (i = 0, 1, ..., m−1), and stores the shape information of each ωi together with its outward normal vector in the polygon-information storage unit 22. The latitude-direction change measurement unit 31 of the surface-information change detection unit 3 reads the polygon-approximated shape information from the polygon-information storage unit 22 and, as shown in FIG. 4, fixes the longitude of the photographing position at ψ = 0 while moving the latitude λ in small steps Δλ, computing the surface-information change amount D0(λ) from equation (5), where Δλ = π/(2·C0) (C0 a natural number).

The longitude ψ is then advanced by the small angle Δψ and DΔψ(λ) is obtained in the same way, where Δψ = 2π/C1 (C1 a natural number). This is repeated until ψ = 2π, yielding the latitude-direction surface-information change amounts Dψ(λ) (ψ = Δψ, 2Δψ, ..., 2π; λ = Δλ, ..., π/2), which are stored in the storage unit 321 of the averaging unit 32. The computation unit 322 reads Dψ(λ) from the storage unit 321, computes the average over the longitudes ψ,

Da(λ) = (1/C1) Σψ Dψ(λ),

and stores it in the storage unit 411 of the representative-latitude extraction unit 41.

The extraction unit 412 reads the averages Da(λ) (λ = Δλ, 2Δλ, ..., π/2) from the storage unit 411 and, using a predetermined interval value δ for the surface-information change amount, divides the integral of Da(λ) into equal parts as shown in FIG. 5.
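The division performed by the extraction unit 412 can be sketched as a running-sum pass over the sampled curve: emit a latitude each time the accumulated integral crosses another multiple of δ. The rectangle-rule quadrature below is an assumption, since the patent does not fix an integration scheme:

```python
def divide_integral_equally(samples, step, delta):
    """Return the angles at which the running integral of a sampled
    change-rate curve crosses successive multiples of delta.
    samples[k] is the curve value at angle (k + 1) * step."""
    marks, total, next_mark = [], 0.0, delta
    for k, value in enumerate(samples):
        total += value * step          # rectangle-rule increment
        while total >= next_mark - 1e-12:
            marks.append((k + 1) * step)
            next_mark += delta
    return marks
```

For a constant curve the marks come out evenly spaced; for a curve that peaks in some interval, the marks crowd together there, which is exactly the adaptive spacing the apparatus aims for.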

The x latitudes Λi (i = 0, ..., x−1) at the division points are output as the representative latitudes 51 and are also passed to the longitude-direction change measurement unit 33. The longitude-direction change measurement unit 33 reads the plane-approximated shape information from the polygon-information storage unit 22 and then reads the Λi (i = 0, ..., x−1) output from the extraction unit 412 of the representative-latitude extraction unit 41.

In the same manner as the latitude-direction change measurement unit 31, it measures the longitude-direction surface-information change amounts DΛi(ψ) (i = 0, ..., x−1; ψ = Δψ, 2Δψ, ..., 2π) and stores them in the storage unit 421 of the representative-longitude extraction unit 42.

The extraction unit 422 reads the longitude-direction surface-information change amounts DΛi(ψ) from the storage unit 421 and, in the same manner as the representative-latitude extraction unit 41, divides the integral of DΛi(ψ) into equal parts of δ as shown in FIG. 6. The yi longitudes Ψij (j = 0, ..., yi−1; i = 0, ..., x−1) at the division points are then output from the extraction unit 422 as the representative longitudes 52.
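The two stages nest: representative latitudes are chosen first from the ψ-averaged curve, then representative longitudes are chosen along each chosen latitude circle. A self-contained toy sketch, where `change_lat` and `change_lon` stand in for the measured curves Da(λ) and DΛi(ψ) (with constant stand-ins the resulting grid is uniform; with curves from a real mesh the spacing would adapt):

```python
import math

def _divide(samples, step, delta):
    # Equal-integral division, used for both the latitude and longitude stages.
    marks, total, next_mark = [], 0.0, delta
    for k, value in enumerate(samples):
        total += value * step
        while total >= next_mark - 1e-12:
            marks.append((k + 1) * step)
            next_mark += delta
    return marks

def representative_positions(change_lat, change_lon, d_lat, d_lon, delta):
    """Stage 1: latitudes on [0, pi/2] from the averaged latitude curve.
    Stage 2: longitudes on [0, 2*pi] along each chosen latitude."""
    n_lat = round((math.pi / 2) / d_lat)
    lats = _divide([change_lat((k + 1) * d_lat) for k in range(n_lat)],
                   d_lat, delta)
    n_lon = round(2 * math.pi / d_lon)
    return {lam: _divide([change_lon(lam, (k + 1) * d_lon) for k in range(n_lon)],
                         d_lon, delta)
            for lam in lats}
```

The output maps each representative latitude Λi to its list of representative longitudes Ψij, mirroring the outputs 51 and 52.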

Although the interval value δ of the surface-information change amount is described in this embodiment as preset inside the apparatus, it may instead be supplied from outside.

(Effects of the Invention) Once the representative latitudes and representative longitudes have been determined as described above, images photographed from the positions on the virtual hemisphere defined by them have an approximately constant surface-information change amount between adjacent images. Compared with photographing the object at equal intervals, these images can therefore be displayed more smoothly when shown in sequence. Furthermore, when the images are stored in a database or the like, the storage efficiency, in terms of the amount of information actually conveyed to the viewer, is better than with simple equal-interval photography.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a block diagram showing the basic configuration of an embodiment of the present invention; FIG. 2 is a block diagram of the concrete configuration of FIG. 1; FIG. 3 illustrates the definition of the surface-information change amount of an object O handled by the present invention; FIG. 4 illustrates how the surface-information change amount is obtained by fixing the longitude of the photographing position and varying the latitude in small steps; FIGS. 5 and 6 illustrate dividing the integrals over latitude and longitude, respectively, into equal parts using the interval value of the surface-information change amount; FIGS. 7 to 10 illustrate conventional photography of an object from a plurality of directions, FIG. 7 showing the conventional method of determining photographing positions, FIG. 8 the directions from which the object is photographed, and FIGS. 9 and 10 the differences between images photographed from directions A and B of FIG. 8.

1: shape-information input unit; 2: plane approximation unit; 21: approximation unit; 22: polygon-information storage unit; 3: surface-information change detection unit; 31: latitude-direction change measurement unit; 32: averaging unit; 321: storage unit; 322: computation unit; 33: longitude-direction change measurement unit; 4: representative-position extraction unit; 41: representative-latitude extraction unit; 411: storage unit; 412: extraction unit; 42: representative-longitude extraction unit; 421: storage unit; 422: extraction unit; 5: representative photographing position; 51: representative latitude; 52: representative longitude.

Claims (1)

[Claims]

1. An automatic photographing-position determination apparatus comprising: a shape-information input unit for inputting three-dimensional shape information of an object; a plane approximation unit for approximately dividing the three-dimensional shape information input from the shape-information input unit into a plurality of polygonal planes; first means for obtaining the surface-information change amount produced when the object plane-approximated by the plane approximation unit is virtually photographed from two different viewpoint positions on the surface of a virtual hemisphere covering the object; second means for sequentially obtaining, using the first means, the surface-information change amount between adjacent points as the photographing position is moved in small steps in the longitude and latitude directions on the surface of the virtual hemisphere; and third means for determining the actual photographing positions in the longitude and latitude directions so that the integrated values of the surface-information change amounts between the adjacent points obtained by the second means become uniform.
JP62312162A 1987-12-11 1987-12-11 Automatic shooting position determination device Expired - Fee Related JPH0771215B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP62312162A JPH0771215B2 (en) 1987-12-11 1987-12-11 Automatic shooting position determination device


Publications (2)

Publication Number Publication Date
JPH01154676A JPH01154676A (en) 1989-06-16
JPH0771215B2 true JPH0771215B2 (en) 1995-07-31

Family

ID=18025985

Family Applications (1)

Application Number Title Priority Date Filing Date
JP62312162A Expired - Fee Related JPH0771215B2 (en) 1987-12-11 1987-12-11 Automatic shooting position determination device

Country Status (1)

Country Link
JP (1) JPH0771215B2 (en)

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS60171410A (en) * 1984-02-17 1985-09-04 Toshiba Corp Stereoscopic processor



Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees