JPH0512413A - Image generating device - Google Patents

Image generating device

Info

Publication number
JPH0512413A
Authority
JP
Japan
Prior art keywords
image
data
target object
shape
viewpoint position
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
JP3158974A
Other languages
Japanese (ja)
Inventor
Takahisa Ando
孝久 安東
Isao Imazato
功 今里
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sanyo Electric Co Ltd
Original Assignee
Sanyo Electric Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sanyo Electric Co Ltd filed Critical Sanyo Electric Co Ltd
Priority to JP3158974A priority Critical patent/JPH0512413A/en
Publication of JPH0512413A publication Critical patent/JPH0512413A/en
Pending legal-status Critical Current


Abstract

PURPOSE: To accurately generate, from a small amount of data, reproduced images in which an object is observed from an arbitrary viewpoint position. CONSTITUTION: Image data of the photographed object are stored in an image storage device 14 via an image input device 12. Shape data, such as the vertices of the three-dimensional shape of the object extracted by a three-dimensional shape measuring instrument 16, are stored in a shape data storage device 18. Based on viewpoint data, output from a viewpoint position input device 22, indicating an arbitrary viewpoint position from which the object is observed, an image calculation device 20 processes the image data and the shape data. A reproduced image as observed from the viewpoint position indicated by the viewpoint data is thereby generated and displayed on an image display device 24.

Description

Detailed Description of the Invention

[0001]

FIELD OF THE INVENTION: The present invention relates to an image generating device, and more particularly to an image generating device that can be used when, for example, data actually acquired from a target object are sent from a transmitting side to a receiving side through a communication network, and the receiving side generates, from those data, an image of the target object observed from an arbitrary viewpoint position.

[0002]

DESCRIPTION OF THE RELATED ART: Conventionally, an image of a target object observed from an arbitrary viewpoint position has been generated by actually photographing and recording images from many directions, by measuring the three-dimensional shape of the target object with a three-dimensional shape measuring device, or by the method disclosed in Japanese Examined Patent Publication No. H1-42427, published on September 12, 1989.

[0003]

PROBLEMS TO BE SOLVED BY THE INVENTION: With such conventional techniques, the first method requires an enormous amount of image data; moreover, the observer can see nothing other than the recorded images, so the observer's viewpoint position is also limited. The second method yields only the spatial surface position coordinates of the target object and cannot provide the information perceived by human vision, that is, an image conveying the color, pattern, gloss, and so on of the target object's surface.

[0004] Furthermore, with the third method, disclosed in Japanese Examined Patent Publication No. H1-42427, an accurate image cannot be obtained for portions not in contact with the shape model. It is therefore a principal object of the present invention to provide an image generating device capable of accurately generating, from a small amount of data, an image of a target object observed from an arbitrary viewpoint position.

[0005]

MEANS FOR SOLVING THE PROBLEMS: The present invention is an image generating device comprising: image input means for photographing a target object; image storage means for storing an image of the target object as image data; three-dimensional shape measuring means for measuring a three-dimensional shape of the target object; shape data storage means for storing the three-dimensional shape as shape data; viewpoint position input means for outputting viewpoint data indicating an arbitrary viewpoint position from which the target object is observed; and image calculation means for processing the image data from the image storage means and the shape data from the shape data storage means in accordance with the viewpoint data from the viewpoint position input means and generating an image observed from the viewpoint position.

[0006]

OPERATION: The target object is photographed by the image input means, and the image is stored in the image storage means as image data. The three-dimensional shape measuring means extracts points, such as the vertices of the three-dimensional shape of the target object, from which the target object can be reconstructed by connecting them, and the shape data of those points are stored in the shape data storage means. The image calculation means processes the image data and the shape data in accordance with the viewpoint data from the viewpoint position input means, and an image of the target object observed from the viewpoint position indicated by the viewpoint data is then displayed, for example, on an image display device.

[0007]

EFFECTS OF THE INVENTION: According to the present invention, an image of the target object observed from an arbitrary viewpoint position can be obtained accurately simply by processing, in accordance with the viewpoint data, image data of the target object photographed from a certain direction together with the shape data extracted from the three-dimensional shape of the target object. It is therefore unnecessary to store image data from many directions as in the prior art, so the amount of data can be reduced, and the viewpoint position can be chosen freely. Moreover, because the calculation is based on actually photographed data, the resulting image also reproduces the color, pattern, gloss, and so on of the surface of the target object.

[0008] The above and other objects, features, and advantages of the present invention will become more apparent from the following detailed description of an embodiment given with reference to the drawings.

[0009]

EMBODIMENT: Referring to FIG. 1, an image generating device 10 of this embodiment includes an image input device 12 constituted, for example, by a television camera. The image input device 12 photographs a target object 26 (described later) and stores the photographed image in an image storage device 14 as image data. A three-dimensional shape measuring device 16 extracts points, such as the vertices of the three-dimensional shape of the target object 26, from which the target object 26 can be reconstructed by connecting them, and shape data such as the distances of those points are stored in a shape data storage device 18. The image data and the shape data stored in the image storage device 14 and the shape data storage device 18, respectively, are input to an image calculation device 20. The image calculation device 20 also receives, from a viewpoint position input device 22, viewpoint data indicating an arbitrary viewpoint position from which the target object 26 is observed. In accordance with the viewpoint data, the image calculation device 20 processes the image data and the shape data, generates a reproduced image as seen when the target object is observed from the viewpoint position indicated by the viewpoint data, and displays it on an image display device 24 such as a cathode-ray tube.

[0010] Referring to FIG. 2, the positional relationship between the photographed image and the target object 26 will be described. The photographed image can be regarded as an image center-projected onto a screen 28 erected perpendicular to the gaze line L0, at a position separated from the viewpoint position S0 by a distance d toward the target object 26. Here, the gaze line L0 is the line connecting the viewpoint position S0 at the time of photographing to the gaze point O, which serves as the reference for where the camera is looking. The distance d is set equal to the distance corresponding to the optical characteristics of the image input device 12, that is, the focal length. Central projection is a method of projecting so that the projection lines converge on a single point (in this embodiment, the viewpoint position S0).
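As a concrete illustration of this geometry, the following sketch builds the screen frame from the photographing viewpoint S0, the gaze point O, and the focal distance d. It is only a minimal sketch in Python/NumPy; the function and variable names are chosen here for illustration and do not appear in the patent, and a world "up" direction is assumed in order to fix the orientation of the screen.

```python
import numpy as np

def screen_frame(S0, O, d, world_up=np.array([0.0, 0.0, 1.0])):
    """Return (centre, right, up, gaze) for a screen erected perpendicular to
    the gaze line L0, at distance d from the viewpoint S0 toward the gaze point O."""
    gaze = O - S0
    gaze = gaze / np.linalg.norm(gaze)        # unit vector along the gaze line L0
    centre = S0 + d * gaze                    # centre of screen 28
    right = np.cross(gaze, world_up)
    right = right / np.linalg.norm(right)     # horizontal screen axis (p direction)
    up = np.cross(right, gaze)                # vertical screen axis (q direction)
    return centre, right, up, gaze
```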

[0011] Accordingly, the position of each point on the target object 26 is represented by three-dimensional surface position coordinates T(X, Y, Z), and the corresponding projection point on the screen 28 by two-dimensional coordinates I(p, q). For example, the straight line connecting the viewpoint position S0 and surface position coordinates T0(X0, Y0, Z0) on the target object 26 is the line of sight M0, and the intersection of the line of sight M0 with the screen 28 is the projection point, represented by the two-dimensional coordinates I0(p0, q0). The conversion from the surface position coordinates T(X, Y, Z) to the two-dimensional coordinates I(p, q) is a straightforward coordinate transformation. Denoting this transformation by f, the relationship between the surface position coordinates T and the two-dimensional coordinates I can be expressed by equation (1).
[0012] I(p, q) = f{T(X, Y, Z)}   ... (1)
In this way, the target object 26 is projected onto the screen 28. Corresponding to each projection point, that is, each two-dimensional coordinate I(p, q), the image data are stored pixel by pixel in a two-dimensional array memory (not shown) in the image storage device 14. The image data are given, for each point of the target object 26, as a luminance value (for monochrome) or RGB values (for color), according to the color, pattern, gloss, and so on of the surface of the target object 26.
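A minimal sketch of the transformation f of equation (1) is given below, assuming the screen_frame helper from the previous sketch (an illustrative name, not part of the patent): a surface point T is projected by intersecting the line from S0 through T with the screen plane, and the resulting (p, q) coordinates, scaled to pixel units, index the two-dimensional pixel array of the image storage device.

```python
import numpy as np

def project_to_screen(T, S0, centre, right, up, gaze):
    """Equation (1): I(p, q) = f{T(X, Y, Z)} by central projection toward S0.
    centre, right, up, gaze are the screen frame of the previous sketch;
    T and S0 are 3-vectors in world coordinates."""
    ray = T - S0
    depth = np.dot(ray, gaze)                  # distance of T along the gaze line
    if depth <= 0:
        return None                            # point lies behind the viewpoint
    t = np.dot(centre - S0, gaze) / depth      # = d / depth
    hit = S0 + t * ray                         # intersection with the screen plane
    p = np.dot(hit - centre, right)
    q = np.dot(hit - centre, up)
    return p, q

# The photographed image is then a 2-D array indexed (after scaling p and q to
# pixel units) by these coordinates, holding a luminance value or RGB values.
```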

[0013] On the other hand, the shape data stored in a memory (not shown) of the shape data storage device 18 are data concerning the three-dimensional shape of only those points, extracted from among the surface position coordinates T(X, Y, Z), from which the target object 26 can be reconstructed by connecting them. The position of the shape data on the screen 28 is specified by the two-dimensional coordinates I(p, q) according to equation (1) above.

[0014] When a reproduced image is generated, once the viewpoint position S1 at the time of reproduction is determined, the positional relationship between the reproduced image and the target object 26 can be handled in the same way as the positional relationship between the photographed image and the target object 26 described above. To generate the reproduced image, taking as an example a point represented by two-dimensional coordinates I1(a1, b1) on a screen 30 for the reproduced image, it suffices to find the luminance value or RGB values of the target object 26 corresponding to I1(a1, b1). To that end, the line of sight M1 connecting the reproduction viewpoint position S1 and the two-dimensional coordinates I1(a1, b1) is first extended to find its intersection with the target object 26. Letting the intersection of the line of sight M1 and the target object 26 be the surface position coordinates Tt(Xt, Yt, Zt), the two-dimensional coordinates It(pt, qt) of the projection point at which the line of sight Mt, observing the surface position coordinates Tt(Xt, Yt, Zt) on the target object 26 from the photographing viewpoint position S0, intersects the screen 28 are given by equation (2), in the same manner as equation (1).

[0015] It(pt, qt) = f{Tt(Xt, Yt, Zt)}   ... (2)
Equation (2) gives the two-dimensional coordinates It(pt, qt) on the photographed image corresponding to the two-dimensional coordinates I1(a1, b1) on the reproduced image, so the luminance value or RGB values at I1(a1, b1) can be taken to be the luminance value or RGB values at It(pt, qt). Performing this processing for every two-dimensional coordinate I(a, b) on the reproduced image that can correspond to surface position coordinates T(X, Y, Z) of the target object 26 yields the desired reproduced image.
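The reproduction procedure of paragraphs [0014] and [0015] can be outlined as a per-pixel resampling loop. The sketch below is schematic only: intersect_surface (a ray-surface intersection against the stored shape data), pixel_to_point, and point_to_pixel are hypothetical helpers introduced here, and project_to_screen is the projection sketch given earlier; none of these names come from the patent.

```python
import numpy as np

def reproduce_image(photo, photo_frame, S0, repro_frame, S1, out_shape,
                    pixel_to_point, point_to_pixel, intersect_surface):
    """For each pixel I1(a1, b1) of the reproduced image: cast the line of
    sight M1 from S1 through the pixel, intersect it with the target object
    to obtain Tt, project Tt back into the photographed image with
    equation (2), and copy the luminance/RGB value found there."""
    out = np.zeros(out_shape, dtype=photo.dtype)
    for a1 in range(out_shape[0]):
        for b1 in range(out_shape[1]):
            pixel_world = pixel_to_point(repro_frame, a1, b1)   # point on screen 30
            direction = pixel_world - S1                        # line of sight M1
            Tt = intersect_surface(S1, direction)               # surface point hit, or None
            if Tt is None:
                continue                                        # no surface along this ray
            pq = project_to_screen(Tt, S0, *photo_frame)        # equation (2)
            if pq is None:
                continue
            row, col = point_to_pixel(pq)                       # screen coords -> photo pixel
            out[a1, b1] = photo[row, col]
    return out
```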

[0016] In such an image generating device 10, if only one photographed image is taken by the image input device 12, then, when a reproduced image observed from an arbitrary direction is generated, no reproduced image can be generated for portions that could not be observed in the photographed image. This can be resolved by supplying the image input device 12 with two or more photographed images whose viewpoints are changed according to the shape of the target object 26. To input two or more photographed images from different viewpoints, a plurality of image input devices 12 may be provided, or, even with a single device, the photographing position may be changed. For portions that still cannot be reproduced even when two or more photographed images are input, a display from which the absence of photographed image data can easily be recognized may be applied, for example a wire frame, a high-luminance display in a single color (white or the like), or a blinking display.
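As a rough sketch of this idea, the per-pixel lookup can consult several photographed views in turn and fall back to a conspicuous marker value where no view covers the surface point. visible_in and sample are hypothetical helpers assumed here for illustration; a wire-frame or blinking display could replace the single-color marker.

```python
FALLBACK_RGB = (255, 255, 255)   # e.g. high-luminance white marker for uncovered portions

def sample_with_fallback(Tt, views, visible_in, sample):
    """views: photographed images with their screen frames and viewpoints.
    Return the pixel value of the first view that actually sees Tt,
    or the marker value when no photographed data exist for this portion."""
    for view in views:
        if visible_in(view, Tt):     # Tt unoccluded and inside this view's screen
            return sample(view, Tt)  # equation (2) lookup in that view
    return FALLBACK_RGB
```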

[0017] As the three-dimensional shape measuring device 16, any device that can measure the three-dimensional surface position coordinates of the target object 26 may be used, such as a device that measures by bringing a sensor or the like into direct contact with the target object 26 to be measured, or a device that measures without contact using light, a laser, or the like. Furthermore, a device, such as a tomographic imaging apparatus (a CT apparatus, an MRI apparatus, etc.), that acquires internal information of the target object 26 as voxel data and has means for extracting the contour corresponding to the surface of the target object 26 may also be used.
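When the shape is acquired as voxel data (CT, MRI, etc.), the surface contour can be obtained, for example, by thresholding the volume and keeping object voxels that border the background. The threshold segmentation and the 6-neighbour test below are illustrative assumptions and are not described in the patent.

```python
import numpy as np

def surface_voxels(volume, threshold):
    """Return the (z, y, x) indices of object voxels that have at least one
    6-connected background neighbour, i.e. a crude surface contour."""
    obj = volume > threshold                       # object/background segmentation
    padded = np.pad(obj, 1, constant_values=False)
    interior = np.ones_like(obj)
    for axis in range(3):                          # a voxel is interior only if all
        for shift in (-1, 1):                      # six neighbours are object voxels
            interior &= np.roll(padded, shift, axis=axis)[1:-1, 1:-1, 1:-1]
    return np.argwhere(obj & ~interior)
```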

[Brief Description of the Drawings]

FIG. 1 is a block diagram showing an embodiment of the present invention.

FIG. 2 is an illustrative view showing the relationship between the target object, the photographed image, and the reproduced image.

[Explanation of Symbols]

10 ... image generating device
12 ... image input device
14 ... image storage device
16 ... three-dimensional shape measuring device
18 ... shape data storage device
20 ... image calculation device
22 ... viewpoint position input device
24 ... image display device

Claims (1)

What is claimed is:
1. An image generating device comprising: image input means for photographing a target object; image storage means for storing an image of said target object as image data; three-dimensional shape measuring means for measuring a three-dimensional shape of said target object; shape data storage means for storing said three-dimensional shape as shape data; viewpoint position input means for outputting viewpoint data indicating an arbitrary viewpoint position from which said target object is observed; and image calculation means for processing said image data from said image storage means and said shape data from said shape data storage means in accordance with the viewpoint data from said viewpoint position input means and generating an image observed from said viewpoint position.
JP3158974A 1991-06-28 1991-06-28 Image generating device Pending JPH0512413A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP3158974A JPH0512413A (en) 1991-06-28 1991-06-28 Image generating device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP3158974A JPH0512413A (en) 1991-06-28 1991-06-28 Image generating device

Publications (1)

Publication Number Publication Date
JPH0512413A (en) 1993-01-22

Family

ID=15683445

Family Applications (1)

Application Number Title Priority Date Filing Date
JP3158974A Pending JPH0512413A (en) 1991-06-28 1991-06-28 Image generating device

Country Status (1)

Country Link
JP (1) JPH0512413A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6463176B1 (en) 1994-02-02 2002-10-08 Canon Kabushiki Kaisha Image recognition/reproduction method and apparatus
US6907140B2 (en) 1994-02-02 2005-06-14 Canon Kabushiki Kaisha Image recognition/reproduction method and apparatus

Similar Documents

Publication Publication Date Title
JP3728160B2 (en) Depth image measuring apparatus and method, and mixed reality presentation system
JP2874710B2 (en) 3D position measuring device
JP3347508B2 (en) Captured image processing device and captured image processing method
JP3624353B2 (en) Three-dimensional shape measuring method and apparatus
JP5872923B2 (en) AR image processing apparatus and method
JP2001346226A (en) Image processor, stereoscopic photograph print system, image processing method, stereoscopic photograph print method, and medium recorded with processing program
US9183634B2 (en) Image processing apparatus and image processing method
KR100686952B1 (en) Method, apparatus and writable medium for compositing images, and method, apparatus and writable medium for rendering three-dimensional model
TW201408041A (en) Method and system for converting 2D images to 3D images and computer-readable medium
JP2002071315A (en) Projection planar measuring system
JP3237414B2 (en) Stereo camera calibration device
JP2003346185A (en) Information display system and personal digital assistant
JPH08147497A (en) Picture processing method and device therefor
JP2013231607A (en) Calibration tool display device, calibration tool display method, calibration device, calibration method, calibration system and program
JP2002032744A (en) Device and method for three-dimensional modeling and three-dimensional image generation
JP6073123B2 (en) Stereoscopic display system, stereoscopic image generating apparatus, and stereoscopic image generating program
JPH0512413A (en) Image generating device
JP2006059165A (en) Three-dimensional modeling device, geometric pattern, three-dimensional modeling data generating method, three-dimensional modeling program and recording medium
JP6073121B2 (en) 3D display device and 3D display system
JP2003256804A (en) Visual field video generating device and method, and visual field video generating program and recording medium with its program recorded
JP2002135807A (en) Method and device for calibration for three-dimensional entry
KR20020021700A (en) Device And Method For 3 Dimensional Measurement
JPH09229648A (en) Input/output method and device for image information
CN111489384A (en) Occlusion assessment method, device, equipment, system and medium based on mutual view
JP2001084380A (en) Method and device for estimating light source direction and recording medium

Legal Events

Date Code Title Description
A02 Decision of refusal

Free format text: JAPANESE INTERMEDIATE CODE: A02

Effective date: 20000718