JP2709066B2 - A method of synthesizing a camera image and a computer-generated image - Google Patents

A method of synthesizing a camera image and a computer-generated image

Info

Publication number
JP2709066B2
JP2709066B2
Authority
JP
Japan
Prior art keywords
image
computer
depth
camera
generated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP1952888A
Other languages
Japanese (ja)
Other versions
JPH01196672A (en)
Inventor
良三 武内
浩之 雨川
宗利 鵜沼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hitachi Ltd filed Critical Hitachi Ltd
Priority to JP1952888A priority Critical patent/JP2709066B2/en
Publication of JPH01196672A publication Critical patent/JPH01196672A/en
Application granted granted Critical
Publication of JP2709066B2 publication Critical patent/JP2709066B2/en
Anticipated expiration legal-status Critical
Expired - Fee Related legal-status Critical Current


Landscapes

  • Processing Or Creating Images (AREA)

Description

DETAILED DESCRIPTION OF THE INVENTION

[Field of Industrial Application]

The present invention relates to an image synthesis method, and more particularly to a method of synthesizing a camera-captured image and a computer-generated image that is suitable for generating images of scenes that do not actually exist.

[Prior Art]

Conventional methods of inserting another image into a camera-input image have used wipe-based composition or chroma-key composition. Because these methods composite one two-dimensional image onto another, the composition region must be specified precisely. For this reason, in the chroma-key method, for example, the composition region is specified by a background color. These image composition methods are described, for example, in "Computer Imaging", edited by Masahiko Machida, Corona Publishing Co. (November 1984), pages 147 to 154.

[Problems to be Solved by the Invention]

Because the above prior art specifies the region to be composited, it is difficult to produce compositions in which the source images are intricately interleaved, and no consideration is given to synthesis that accounts for the perspective (depth) of the source images, so a composition that preserves the front-to-back relationship of objects cannot be produced. More recently, computers have made it possible to generate complex images, but compositing such computer-generated images with camera-captured images has likewise relied on specifying a composition region, and suffers from the same problems.

An object of the present invention is to provide a method of synthesizing a camera-captured image and a computer-generated image that allows the two to be combined simply.

[Means for Solving the Problems]

The above object is achieved by a method of synthesizing a camera-captured image and a computer-generated image in which the camera image is captured with depth, as a human would see it, by the two cameras of a stereoscopic camera separated by a prescribed distance; the depth of each point is computed from how far apart the same point appears in the two captured images; the computer-generated image is likewise produced with depth by the so-called depth-buffer method; and, for each pixel of the screen, the depth of the camera-captured image is compared with the depth of the computer-generated image, keeping the image information (color information) of whichever is nearer to generate the composite image.
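The per-pixel selection rule described above can be sketched in a few lines of Python. This is a minimal illustration, not the patent's implementation; all names are invented here. The patent's rule is simply: compare depth values and keep the color of the nearer source (smaller depth = closer).

```python
def composite_pixel(camera_color, camera_depth, cg_color, cg_depth):
    """Keep the color of whichever source is nearer at this pixel.

    A smaller depth value means closer to the viewer, matching the
    patent's depth comparison. All names are illustrative.
    """
    return camera_color if camera_depth < cg_depth else cg_color

# A camera pixel 2.0 m away occludes a CG pixel 3.5 m away:
print(composite_pixel((10, 20, 30), 2.0, (200, 0, 0), 3.5))  # (10, 20, 30)
```

Applying this rule independently at every pixel is what lets the two images interleave freely, with no composition region ever being specified.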

[Operation]

In this method of synthesizing a camera-captured image and a computer-generated image, the distance (depth) to a point is calculated from the displacement of that point's position between the two images captured by the left and right cameras of the stereoscopic camera, which are separated by a prescribed distance. Performing this calculation for every pixel of the image yields color information and depth information for each pixel. The computer-generated image, being rendered from objects described in three dimensions, can likewise be output as per-pixel color and depth information. By comparing the depth information of the camera-captured image and the computer-generated image pixel by pixel at the same position, and keeping the color information of the nearer one, the two images can be combined to easily obtain a natural-looking composite image.
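The patent does not spell out how the displacement maps to distance, but for a parallel stereo rig the standard pinhole-camera relation is Z = f·B/d, where f is the focal length, B is the camera separation (the patent's "prescribed distance"), and d is the disparity — the displacement of the same point between the left and right images. A sketch under that standard assumption, with illustrative parameter names not taken from the patent:

```python
def depth_from_disparity(focal_length_px, baseline_m, disparity_px):
    """Standard parallel-stereo relation: depth Z = f * B / d.

    f is in pixels, B in meters, d (disparity) in pixels, so Z comes
    out in meters. Zero disparity corresponds to a point at infinity.
    """
    if disparity_px <= 0:
        return float("inf")  # no displacement: effectively infinitely far
    return focal_length_px * baseline_m / disparity_px

# f = 800 px, baseline = 0.1 m, disparity = 16 px -> Z = 5.0 m
print(depth_from_disparity(800, 0.1, 16))  # 5.0
```

Note the inverse relation: nearby objects shift more between the two views, so large disparity means small depth.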

[Embodiments]

An embodiment of the present invention will be described below with reference to FIGS. 1 and 2.

FIG. 1 is a flowchart showing one embodiment of the method of synthesizing a camera-captured image and a computer-generated image according to the present invention. In step 1a of FIG. 1, an image is generated by a computer to produce a so-called computer-graphics image, and in step 1b the per-pixel color information and depth information of that image are stored in frame buffer 1 and depth buffer 1, respectively. Meanwhile, in step 1c an image with depth information is captured by a stereoscopic camera, and in step 1d the per-pixel color information and depth information of that image are stored in frame buffer 2 and depth buffer 2, respectively. Next, in step 1e, the depth values at the same pixel index in depth buffer 1 and depth buffer 2 are compared, and the color information of whichever of frame buffer 1 or 2 is nearer is written to frame buffer 3. In this way, a composite image is produced in frame buffer 3 in step 1f.
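Steps 1b through 1f above can be modeled in a few lines. The buffers are plain Python lists here, indexed by pixel number, and the contents are invented purely for illustration:

```python
# Step 1b: computer-generated image -> frame buffer 1 / depth buffer 1
frame1 = ["cg0", "cg1", "cg2"]
depth1 = [5.0, 1.0, 4.0]

# Step 1d: stereoscopic camera image -> frame buffer 2 / depth buffer 2
frame2 = ["cam0", "cam1", "cam2"]
depth2 = [2.0, 3.0, 9.0]

# Steps 1e/1f: compare depths pixel by pixel, keep the nearer color
# in frame buffer 3
frame3 = [c1 if d1 < d2 else c2
          for c1, d1, c2, d2 in zip(frame1, depth1, frame2, depth2)]
print(frame3)  # ['cam0', 'cg1', 'cg2']
```

At pixel 0 the camera surface is nearer; at pixels 1 and 2 the computer-generated surface wins, so the composite freely interleaves the two sources.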

FIG. 2 is an apparatus configuration diagram showing one embodiment of the method of synthesizing a camera-captured image and a computer-generated image according to the present invention. In response to a shooting signal SGN from the shooting switch 1 of FIG. 2, an image is captured by the stereoscopic camera 2, which consists of two cameras placed a prescribed distance apart; the image information captured by the left and right cameras of the stereoscopic camera 2 is photoelectrically converted and output as a left-eye output L-DATA and a right-eye output R-DATA. The left-eye output L-DATA and the right-eye output R-DATA are stored, in synchronization with a synchronization signal C-CLK from the stereoscopic camera 2, in the left-eye memory 3 and the right-eye memory 4, which have been initialized in advance by a reset signal C-RST from the stereoscopic camera 2. Meanwhile, data K-DATA entered from a data input means 6 such as a keyboard, mouse, or joystick is converted into an image by the computer 7, and the image information is output to the frame buffer 8 and the depth buffer 9 as per-pixel color information CLR2 and depth information DPT2. The per-pixel color information CLR2 and depth information DPT2 are stored, in synchronization with a timing signal G-CLK from the computer 7, at the memory locations of the frame buffer 8 and the depth buffer 9 designated by the pixel address ADR2 output simultaneously from the computer 7.

In this state, when the computer 7 outputs an image-synthesis start signal STRT to the per-pixel depth calculation circuit 5, the circuit 5 outputs an edit signal E-CLK to the left-eye memory 3 and the right-eye memory 4, and the left-eye output L-DATA and the right-eye output R-DATA are output from the left-eye memory 3 and the right-eye memory 4, respectively, to the per-pixel depth calculation circuit 5. From the left-eye output L-DATA and the right-eye output R-DATA, the per-pixel depth calculation circuit 5 calculates per-pixel depth information DPT1 and outputs it to the comparison circuit 10; at the same time, the left-eye output L-DATA is output as-is to the frame buffer 8 as per-pixel color information CLR1. (Either the left-eye output L-DATA or the right-eye output R-DATA may be selected as the color information CLR1.) Simultaneously with these outputs, the per-pixel depth calculation circuit 5 outputs the address ADR1 of the output pixel to the frame buffer 8 and the depth buffer 9, and the depth information DPT3 at that same address is output from the depth buffer 9 to the comparison circuit 10 in synchronization with an output-enable signal OE1. The two depth values DPT1 and DPT3 are compared by the comparison circuit 10; when the depth information DPT1 is smaller, that is, when the corresponding pixel of the camera-captured image is nearer than the corresponding pixel of the computer-generated image, a memory rewrite signal CHG is output from the comparison circuit 10 to the AND circuit 11, and, in synchronization with a synchronization signal OE2 output from the depth buffer 9 to the AND circuit 11, the AND circuit 11 outputs a rewrite synchronization signal SLCT to the frame buffer 8, whereby the color information CLR2 of the corresponding pixel in the frame buffer 8 is rewritten to the color information CLR1 of that pixel. By repeating this processing over the entire screen, a composite image OUT of the camera-captured image and the computer-generated image is obtained in the frame buffer 8. Note that in this embodiment a single frame buffer 8 and a single depth buffer 9 suffice.
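The comparator-and-rewrite path of FIG. 2 can be mimicked in software as follows. The signal names follow the patent, but the function itself is a hypothetical sketch: frame buffer 8 starts out holding the computer-generated colors, and a camera color overwrites a pixel only where the camera depth is smaller than the stored depth.

```python
def update_pixel(frame8, depth9, addr, clr1, dpt1):
    """Software model of comparison circuit 10 and AND circuit 11.

    frame8/depth9 hold the computer-generated color CLR2 and depth DPT2;
    when the camera depth DPT1 at address ADR1 is smaller than the stored
    depth DPT3, the rewrite condition (signal CHG) fires and CLR1
    overwrites CLR2 at that address.
    """
    if dpt1 < depth9[addr]:   # comparison circuit 10: DPT1 < DPT3
        frame8[addr] = clr1   # rewrite the stored color with CLR1

frame8 = ["cg"] * 4           # CG colors already written via ADR2
depth9 = [4.0, 4.0, 4.0, 4.0]  # CG depths DPT2
update_pixel(frame8, depth9, 1, "cam", 2.5)  # camera nearer: rewritten
update_pixel(frame8, depth9, 2, "cam", 6.0)  # CG nearer: unchanged
print(frame8)  # ['cg', 'cam', 'cg', 'cg']
```

Unlike the three-buffer flow of FIG. 1, this embodiment composites in place, which is why one frame buffer and one depth buffer are enough.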

[Effects of the Invention]

According to the present invention, scenery or real-object images captured by a camera can be combined simply with a virtual image generated by a computer. This yields two effects: first, images that do not exist in reality, unlike plain camera footage, can be obtained; second, the enormous object-data entry work that would be required to generate the image by computer alone can be reduced by substituting camera-captured images.

BRIEF DESCRIPTION OF THE DRAWINGS

FIG. 1 is a flowchart showing one embodiment of the present invention, and FIG. 2 is an apparatus configuration diagram showing one embodiment of the present invention.

1: shooting switch; 2: stereoscopic camera; 3: left-eye memory; 4: right-eye memory; 5: per-pixel depth calculation circuit; 6: data input means; 7: computer; 8: frame buffer; 9: depth buffer; 10: comparison circuit; 11: AND circuit.

Continuation of front page

(56) References cited: JP-A-59-106071 (JP, A); JP-A-62-223720 (JP, A); JP-A-61-162085 (JP, A)

Claims (1)

(57) [Claims]

Claim 1. A method of synthesizing a camera-captured image and a computer-generated image, characterized in that: image information consisting of per-pixel depth information and color information is calculated from a left-eye image and a right-eye image obtained by shooting with a stereoscopic camera; an image generated by a computer is likewise composed of per-pixel depth information and color information; and the composite image is formed from the color information of whichever of the corresponding pixels of the two images has the smaller depth value.
JP1952888A 1988-02-01 1988-02-01 A method of synthesizing a camera image and a computer-generated image Expired - Fee Related JP2709066B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP1952888A JP2709066B2 (en) 1988-02-01 1988-02-01 A method of synthesizing a camera image and a computer-generated image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
JP1952888A JP2709066B2 (en) 1988-02-01 1988-02-01 A method of synthesizing a camera image and a computer-generated image

Publications (2)

Publication Number Publication Date
JPH01196672A JPH01196672A (en) 1989-08-08
JP2709066B2 true JP2709066B2 (en) 1998-02-04

Family

ID=12001837

Family Applications (1)

Application Number Title Priority Date Filing Date
JP1952888A Expired - Fee Related JP2709066B2 (en) 1988-02-01 1988-02-01 A method of synthesizing a camera image and a computer-generated image

Country Status (1)

Country Link
JP (1) JP2709066B2 (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH03205976A (en) * 1989-10-27 1991-09-09 Nec Corp Picture synthesizing system using special effector
US5077608A (en) * 1990-09-19 1991-12-31 Dubner Computer Systems, Inc. Video effects system able to intersect a 3-D image with a 2-D image
JPH0546161A (en) * 1991-08-12 1993-02-26 Casio Comput Co Ltd Virtual reality display device
JPH05328408A (en) * 1992-05-26 1993-12-10 Olympus Optical Co Ltd Head mounted display device
JP3331752B2 (en) * 1994-07-01 2002-10-07 松下電器産業株式会社 Image synthesis device
JP2000350236A (en) * 2000-01-01 2000-12-15 Casio Comput Co Ltd Image display device
JP4217100B2 (en) 2003-04-17 2009-01-28 本田技研工業株式会社 Image composition method, apparatus, and program, and stereo model rendering method, apparatus, and program

Also Published As

Publication number Publication date
JPH01196672A (en) 1989-08-08

Similar Documents

Publication Publication Date Title
US11076142B2 (en) Real-time aliasing rendering method for 3D VR video and virtual three-dimensional scene
US7295699B2 (en) Image processing system, program, information storage medium, and image processing method
US6747610B1 (en) Stereoscopic image display apparatus capable of selectively displaying desired stereoscopic image
US5077608A (en) Video effects system able to intersect a 3-D image with a 2-D image
US6014163A (en) Multi-camera virtual set system employing still store frame buffers for each camera
JP2003219271A (en) System for synthesizing multipoint virtual studio
JP2003187261A (en) Device and method for generating three-dimensional image, three-dimensional image processing apparatus, three-dimensional image photographing display system, three-dimensional image processing method and storage medium
KR101723210B1 (en) Method For Producting Virtual Stereo Studio Image In Real-Time Virtual Stereo Studio System
US8345085B2 (en) Method and apparatus for generating files for stereographic image display and method and apparatus for controlling stereographic image display
JP4110560B2 (en) Image processing method and apparatus
JP3724117B2 (en) Image generating apparatus and image generating method
JP2709066B2 (en) A method of synthesizing a camera image and a computer-generated image
JP4214529B2 (en) Depth signal generation device, depth signal generation program, pseudo stereoscopic image generation device, and pseudo stereoscopic image generation program
KR20080034419A (en) 3d image generation and display system
US7009606B2 (en) Method and apparatus for generating pseudo-three-dimensional images
JPH1188912A (en) Compound eye camera and display control method for the compound eye camera
JP3091644B2 (en) 3D image conversion method for 2D images
JPH09147134A (en) Animation generating device
WO2021171982A1 (en) Image processing device, three-dimensional model generating method, learning method, and program
JP7011728B2 (en) Image data output device, content creation device, content playback device, image data output method, content creation method, and content playback method
JPH1188910A (en) Three-dimension model generating device, three-dimension model generating method, medium recording three-dimension model generating program three-dimension model reproduction device, three-dimension model reproduction method and medium recording three-dimension model reproduction program
JP4006105B2 (en) Image processing apparatus and method
JP3406781B2 (en) Method and apparatus for forming compressed image for lenticular lens
JPH11150741A (en) Three-dimensional picture displaying method and its device by stereo photographing
JP2002095012A (en) Device and method for generating stereo image

Legal Events

Date Code Title Description
LAPS Cancellation because of no payment of annual fees