WO2008029529A1 - Stereoscopic video image synthesizing device, shape data generating method and its program - Google Patents

Stereoscopic video image synthesizing device, shape data generating method and its program Download PDF

Info

Publication number
WO2008029529A1
WO2008029529A1 (PCT/JP2007/056012)
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
shape
shape data
stereoscopic video
Prior art date
Application number
PCT/JP2007/056012
Other languages
French (fr)
Japanese (ja)
Inventor
Shiro Ozawa
Takao Abe
Noriyuki Naruto
Itaru Kamiya
Original Assignee
Ntt Comware Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ntt Comware Corporation filed Critical Ntt Comware Corporation
Publication of WO2008029529A1

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators

Definitions

  • the present invention relates to a stereoscopic video synthesizing device that, in addition to synthesizing stereoscopic video, outputs shape data to a force/tactile presentation device, and to a shape data generation method and a program therefor.
  • This application claims priority based on Japanese Patent Application No. 2006-244198 filed on Sep. 8, 2006, the contents of which are incorporated herein by reference.
  • a conventional stereoscopic image display device prepares in advance images from two viewpoints corresponding to the left and right eyes and displays them on a three-dimensional display using a barrier method (for example, see Patent Document 1 and Patent Document 2) or a polarized-glasses shutter method, so that the user can perceive the scene in three dimensions.
  • there are also force/tactile presentation devices, such as a force feedback device with a pen-type operation unit that lets the user experience force and touch by operating the pen, and a haptic device worn on the arm that lets the user experience force over the whole arm and the tactile sensation of the hand.
  • Patent Document 1 JP-A-8-248355
  • Patent Document 2: Japanese Translation of PCT Publication No. 2003-521181
  • however, a conventional stereoscopic image display device only presents stereoscopic images; even when an object appears to stand out in relief, there is the problem that it cannot be touched.
  • also, it has been possible to present force and tactile sensations with a force/tactile presentation device while displaying CG (Computer Graphics) based on shape data such as CAD (Computer Aided Design) data. However, shape data matching the video must be prepared in advance, so while this approach can be applied to CG, which requires shape data to generate the video in the first place, there was the problem that it cannot be applied to live-action video shot with a video camera or the like.
  • the present invention has been made in view of such circumstances, and an object thereof is to provide a stereoscopic video synthesizing device that can present the stereoscopic video of live-action footage shot with video cameras or the like on a stereoscopic video display device while outputting shape data that allows a force/tactile presentation device to present force and tactile sensations matching that stereoscopic video.
  • the present invention has been made to solve the above problem. The stereoscopic video synthesizing device according to the present invention synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, and comprises a shape calculation unit that calculates shape data of a subject from the left image and the right image, and a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.
  • as a typical example, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image; for each pixel of the extracted subject image, assigns a coordinate perpendicular to the image according to the distance from the contour of the subject image, thereby generating shape data of the subject; calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device, based on the parallax between the extracted images of the specific subject; and produces shape data in which the generated shape data is placed at the calculated position.
  • as another typical example, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image, and calculates the shape data of the subject by stereo measurement based on the parallax between the extracted images.
  • the shape calculation unit calculates shape data of a subject by stereo measurement based on a parallax between the left image and the right image.
  • the shape data generation method of the present invention is a shape data generation method in a stereoscopic video synthesizing device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, comprising a first step in which the device calculates shape data of a subject from the left image and the right image, and a second step in which the device outputs the shape data calculated in the first step to a force/tactile presentation device.
  • the program of the present invention causes a computer to function as a stereoscopic video synthesizing device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, making the computer function as a shape calculation unit that calculates shape data of a subject from the left image and the right image, and as a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.
  • according to the present invention, by feeding the stereoscopic video synthesizing device live-action video shot from two left and right viewpoints with video cameras or the like, it can generate shape data with which a force/tactile presentation device can provide force and tactile sensations matching the stereoscopic video displayed on the stereoscopic video display device.
  • FIG. 1 is a block diagram showing an outline of an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram showing a configuration of a system using the stereoscopic video image synthesizing apparatus 300 in the same embodiment.
  • FIG. 3 is a schematic block diagram showing a configuration of a stereoscopic video image synthesizing apparatus 300 in the same embodiment.
  • FIG. 4 is a flowchart illustrating a method for generating a subject image in a shape calculation unit 33 in the same embodiment.
  • FIG. 5 is a flowchart for explaining a shape data generation method of a subject based on a left subject image or a right subject image in a shape calculation unit 33 in the same embodiment.
  • FIG. 6 is a diagram illustrating the contents of state S in the same embodiment.
  • FIG. 7 is a diagram illustrating a process of calculating the depth of the subject area A1 by the shape calculation unit 33 in the same embodiment.
  • FIG. 8 is a diagram for explaining the method of calculating the initial value D0 in step Sb1 of FIG. 5 by the shape calculation unit 33 in the same embodiment.
  • as shown in FIG. 1, the stereoscopic video synthesizing device 300 synthesizes the video from the left-eye viewpoint and the video from the right-eye viewpoint, captured by the left video capturing device 100 and the right video capturing device 200 respectively, into a stereoscopic video; when it is displayed on the stereoscopic video display device 400, the device calculates shape data from the captured video and outputs it to the force/tactile presentation device 500. As a result, the user sees the stereoscopic video displayed on the stereoscopic video display device 400 and, at the same time, can obtain through the force/tactile presentation device 500 force and tactile sensations for a shape matching the movement of the displayed stereoscopic video.
  • FIG. 2 is a schematic block diagram showing the configuration of the stereoscopic video image synthesizing apparatus 300 according to the embodiment of the present invention.
  • the left video capturing device 100 is a video camera that captures video from the left-eye viewpoint.
  • the right video capturing device 200 is a video camera installed parallel to and on the right side of the left video capturing device 100, and captures video from the right-eye viewpoint.
  • the stereoscopic video synthesizing device 300 receives the left-eye and right-eye video from the left video capturing device 100 and the right video capturing device 200, synthesizes a stereoscopic video and outputs it to the stereoscopic video display device 400, and also calculates shape data matching the motion of the stereoscopic video and outputs it to the force/tactile presentation device 500.
  • FIG. 3 is a schematic block diagram showing a configuration of the stereoscopic video image synthesizing apparatus 300 according to the present embodiment.
  • Reference numeral 31 denotes a left video data input unit that receives the video input from the left video capturing device 100 and outputs a left image extracted from that video frame by frame. Reference numeral 32 denotes a right video data input unit that receives the video input from the right video capturing device 200 and outputs a right image extracted from that video frame by frame.
  • the shape calculation unit 33 generates a left subject image and a right subject image by extracting only the subject from the left image and the right image received from the left video data input unit 31 and the right video data input unit 32, and calculates the shape data of the subject based on these two images, thereby producing shape data synchronized with the stereoscopic video data generated by the stereoscopic video synthesis unit 35. Details of the subject image generation method and the subject shape data calculation method in the shape calculation unit 33 will be described later.
  • Reference numeral 34 denotes a shape output unit that outputs the shape data calculated by the shape calculation unit 33 to the force/tactile presentation device 500.
  • the stereoscopic video synthesis unit 35 synthesizes the left image and the right image received from the left video data input unit 31 and the right video data input unit 32, and generates stereoscopic video data in a format suited to the stereoscopic video display device 400.
  • Reference numeral 36 denotes a stereoscopic video output unit that outputs the stereoscopic video data generated by the stereoscopic video synthesis unit 35 to the stereoscopic video display device 400.
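  • To make the data flow concrete, the following is a minimal sketch, in Python, of how the units of FIG. 3 could be wired together per frame. All class and method names are illustrative assumptions; the patent specifies only the units and the direction of the data flow.

```python
# Hypothetical per-frame wiring of the units in Fig. 3. The callables passed
# in stand for units 33, 35, 34, and 36; units 31/32 are assumed to have
# already extracted one frame from each camera feed.
class StereoscopicVideoSynthesizer:
    def __init__(self, shape_calc, stereo_composer, shape_out, video_out):
        self.shape_calc = shape_calc            # shape calculation unit 33
        self.stereo_composer = stereo_composer  # stereoscopic video synthesis unit 35
        self.shape_out = shape_out              # shape output unit 34 -> device 500
        self.video_out = video_out              # stereoscopic video output unit 36 -> display 400

    def process_frame(self, left_image, right_image):
        shape_data = self.shape_calc(left_image, right_image)
        stereo_frame = self.stereo_composer(left_image, right_image)
        self.shape_out(shape_data)    # to the force/tactile presentation device
        self.video_out(stereo_frame)  # to the stereoscopic video display device
```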
  • FIG. 4 is a flowchart for explaining a method for generating a subject image in the shape calculation unit 33.
  • the flowchart of FIG. 4 shows the processing for extracting the subject from the left image to generate the left subject image; the shape calculation unit 33 extracts the subject from the right image in the same manner to generate the right subject image.
  • the shape calculation unit 33 stores in advance a left background image, obtained by capturing only the background with the left video capturing device 100 and extracting it through the left video data input unit 31 (Sa1). When it receives a left image from the left video data input unit 31, it retrieves the stored left background image and repeats steps Sa3 to Sa6 for every pixel i, from i = 0 to Imax, constituting the left image (Sa2).
  • first, the shape calculation unit 33 obtains the red, green, and blue component values of pixel i in the left image and of the corresponding pixel in the left background image (Sa3), and determines whether the red, green, and blue components of these two pixels match (Sa4). If it determines in step Sa4 that they match, the process proceeds directly to step Sa6; if it determines that they do not match, the shape calculation unit 33 extracts pixel i of the left image as a pixel of the subject image (sets the pixel's color as part of the subject) (Sa5) and then proceeds to step Sa6.
  • in step Sa6, the shape calculation unit 33 adds 1 to the value of i, and in step Sa7 determines whether i is less than or equal to Imax, that is, whether steps Sa3 to Sa5 have been performed for all pixels i constituting the left image. If i is less than or equal to Imax, the process returns to step Sa3 and the above processing is repeated; otherwise the process ends.
  • the image formed by the pixels extracted in this way is the left subject image. That is, within the left image, the pixels of the subject area (i.e., the left subject image) have their colors set, while the pixels outside the subject area have no color set.
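  • As a hedged illustration of the FIG. 4 loop (steps Sa1 to Sa7), the sketch below keeps as subject pixels those whose red, green, and blue components differ from the stored background image. NumPy arrays and exact equality are assumptions; real footage would normally need a noise threshold rather than an exact match.

```python
# Sketch of Fig. 4 subject extraction: a pixel belongs to the subject when
# any of its RGB components differs from the background image (step Sa4);
# -1 marks "no color set", i.e., outside the subject area.
import numpy as np

def extract_subject(image: np.ndarray, background: np.ndarray) -> np.ndarray:
    """image, background: (H, W, 3) arrays of red/green/blue components."""
    differs = np.any(image != background, axis=2)   # Sa4: any RGB component differs
    # Sa5: keep the pixel's color where it differs; -1 elsewhere
    return np.where(differs[..., None], image.astype(np.int16), -1)

# The same function is applied to the left image (with the left background
# image) and to the right image, yielding the left and right subject images.
```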
  • FIG. 5 is a flowchart explaining, within the method of generating shape data of a subject based on the left subject image or the right subject image in the shape calculation unit 33, the method of calculating the position in the direction perpendicular to the subject image (the depth direction).
  • Fig. 6 shows the possible values of the state S used in the flowchart. State S is either "outside the subject area" or "subject area", and the subject area has the two states "depth assigned" and "depth not assigned". Pixels for which a color is set belong to the subject area (corresponding to the subject image), and pixels for which no color is set are outside the subject area.
  • with this method, the position in the direction perpendicular to the image (the depth direction) is calculated for each pixel of one of the two subject images (its subject area). The position of each pixel along the horizontal and vertical axes of the image (its position in the stereoscopic display space) is obtained by translating the subject area so that its centroid moves to the average of the centroid of the subject area in the left subject image and the centroid of the subject area in the right subject image.
  • before performing this processing, the shape calculation unit 33 calculates the initial depth value D0 using the method described later with reference to FIG. 8 (Sb1).
  • when the processing starts, the shape calculation unit 33 sets the initial value D0 to the variable D, which will be assigned as the depth of each pixel in the later step Sb10 (Sb2). Next, it performs steps Sb4 to Sb10, enclosed between step Sb3 and step Sb11a, for all pixels of the subject image from i = 0 to Imax (Sb3).
  • in step Sb4, the shape calculation unit 33 obtains the state S of pixel i in the right or left image containing the subject image (Sb4). In obtaining state S, whether pixel i is outside the subject area or in the subject area (i.e., the subject image) can be determined by whether a color is set for pixel i.
  • whether depth has been assigned or not is determined by whether a depth has been given to pixel i. The loop from i = 0 to Imax in steps Sb3 to Sb11a is repeated many times, assigning a different depth value on each pass via step Sb13; pixels given a depth during a pass are treated as "depth not assigned" for the remainder of that pass (that is, such pixels are marked "depth assigned" only after the pass ends and before the depth value is changed for the next pass).
  • next, the shape calculation unit 33 determines whether the state S of pixel i obtained in step Sb4 is "subject area" (Sb5). This determination can be made by whether a color is set for the pixel: if a color is set, the pixel is in the subject area; otherwise it is outside the subject area.
  • if it determines in step Sb5 that the pixel is not in the subject area, the process moves to step Sb11a and on to the processing of the next pixel; if a color is set and it determines that the pixel is in the subject area, the process moves to step Sb6.
  • in step Sb6, the shape calculation unit 33 determines whether a depth has not yet been assigned to the pixel (Sb6). If the pixel already has a depth, the process moves to step Sb11a and on to the next pixel; if the pixel has no depth yet, the shape calculation unit 33 obtains the states Sn (n = 1 to 8) of the eight pixels surrounding pixel i (Sb7).
  • the shape calculation unit 33 then determines whether any of the eight pixel states Sn is "outside the subject area" (Sb8). If even one of the eight pixels is outside the subject area, the process moves to step Sb10, where the value of D is assigned as the depth of pixel i (Sb10), and then to step Sb11a for the processing of the next pixel. If it determines in step Sb8 that none of the states Sn is outside the subject area, the process moves to step Sb9, where the shape calculation unit 33 determines whether any of the states Sn is "depth assigned" (Sb9).
  • if it determines that none has been assigned a depth, the process moves to step Sb11a and on to the next pixel.
  • if it determines in step Sb9 that some neighbor has been assigned a depth, the process moves to step Sb10, where the shape calculation unit 33 assigns the value of D as the depth of pixel i (Sb10), and then moves to step Sb11a.
  • in step Sb11a, the shape calculation unit 33 adds 1 to the value of i and, while i is less than or equal to Imax (step Sb11b), returns to step Sb4, performing steps Sb4 to Sb10 for all pixels.
  • when i exceeds Imax, the process moves to step Sb12, where the shape calculation unit 33 determines whether any pixels in the subject area of the subject image still have no depth assigned (Sb12). If none remain unassigned, the shape calculation unit 33 ends the processing; if some remain, the preset increment ΔD is added to the variable D (Sb13), and the process returns to step Sb3 to repeat the processing described above. In this way, the shape calculation unit 33 can calculate a depth position for every pixel in the subject area according to its distance from the contour of the subject area.
  • note that the increment ΔD added to the variable D in step Sb13 may be a constant, or a value that varies with the number of additions performed.
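  • The FIG. 5 procedure can be read as peeling the subject area one layer per pass, as the sketch below illustrates. It is a minimal reading of steps Sb2 to Sb13 under the assumptions that the subject area is given as a boolean mask and that pixels beyond the image border count as outside the subject area.

```python
# Sketch of the Fig. 5 depth-assignment loop: each pass assigns the current
# depth D to subject pixels that touch either the outside of the subject
# area (Sb8) or a pixel that received a depth in an earlier pass (Sb9).
# Depths given within the current pass are deliberately ignored until the
# pass ends, matching the "depth not assigned during the pass" rule.
import numpy as np

NEIGHBORS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
             (0, 1), (1, -1), (1, 0), (1, 1)]    # the 8 pixels around pixel i

def assign_depths(subject_mask: np.ndarray, d0: float, dd: float) -> np.ndarray:
    """subject_mask: (H, W) bool array, True inside the subject area.
    Returns an (H, W) float array of depths (NaN outside the subject)."""
    h, w = subject_mask.shape
    depth = np.full((h, w), np.nan)
    d = d0                                       # Sb2: D <- D0
    while np.isnan(depth[subject_mask]).any():   # Sb12: some subject pixel unassigned
        assigned_before = ~np.isnan(depth)       # depths from *earlier* passes only
        new_layer = []
        for y in range(h):
            for x in range(w):
                if not subject_mask[y, x] or assigned_before[y, x]:
                    continue                     # Sb5/Sb6: skip non-subject or done
                for dy, dx in NEIGHBORS:         # Sb7: states of the 8 neighbors
                    ny, nx = y + dy, x + dx
                    outside = (not (0 <= ny < h and 0 <= nx < w)
                               or not subject_mask[ny, nx])
                    if outside or assigned_before[ny, nx]:   # Sb8/Sb9
                        new_layer.append((y, x))
                        break
        for y, x in new_layer:                   # Sb10: give the whole layer depth D
            depth[y, x] = d
        d += dd                                  # Sb13: D <- D + ΔD
    return depth
```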
  • FIG. 7 is a diagram explaining the process of calculating the depth of the subject area A1 by the method of FIG. 5. Given the subject area A1 shown in FIG. 7(a), the shape calculation unit 33 first assigns depth D0 to the outermost layer A2 shown in FIG. 7(b). Next, it assigns depth D0 + ΔD to the second layer A3 shown in FIG. 7(c). Then it assigns depth D0 + 2·ΔD to the third layer A4 shown in FIG. 7(d). Depth has now been assigned to all pixels of the subject area, so the processing of FIG. 5 ends.
  • FIG. 8 is a diagram explaining the method of calculating the initial value D0 in step Sb1 of FIG. 5.
  • the coordinate XL is the horizontal coordinate of the centroid of the subject area M1 (corresponding to the left subject image) extracted from the left image G1 by the shape calculation unit 33 as described with FIG. 4, with the left edge of the left image G1 as the origin. The centroid position is obtained by averaging the coordinates of all pixels in the subject area M1.
  • the coordinate XR is the horizontal coordinate of the centroid of the subject area M2 (corresponding to the right subject image) extracted from the right image G2, with the left edge of the right image G2 as the origin.
  • for the direction perpendicular to the image, the viewpoint of the user viewing the stereoscopic video displayed by the stereoscopic video display device 400 is taken as the origin, and the shape calculation unit 33 calculates the coordinate Z in that direction, i.e., the initial depth value D0, using equation (1):

    D0 = 1 / (XL - XR) ... (1)
  • for example, when the centroid of the subject area M1 in the left image is at X coordinate XL = 80 and Y coordinate YL = 42, and the centroid of the subject area M2 in the right image is at XR = 50 and YR = 40, the shape calculation unit 33 calculates the Z coordinate in the stereoscopic display space, i.e., the initial depth value D0, using equation (2):

    D0 = 1 / (XL - XR) = 1 / (80 - 50) = 0.033 ... (2)
  • here the value of the Z coordinate is very small compared with the X and Y coordinate values; this is because the Z coordinate obtained from equation (2) is on a different scale from the X and Y coordinates, and it is adjusted by multiplying the Z coordinate by a predetermined constant C. The magnitude of the constant C may also be adjusted so as to emphasize the position in the Z-axis direction.
  • furthermore, the position in the stereoscopic display space of a pixel I in the outermost layer of the subject area M1, at X = 78 and Y = 40, is calculated as follows. Since the pixel is in the outermost layer, its depth is D0 = 0.033. The average (Xm, Ym) of the centroids of the subject areas M1 and M2 is Xm = (XL + XR) / 2 = (80 + 50) / 2 = 65 and Ym = (YL + YR) / 2 = (42 + 40) / 2 = 41. The translation from (XL, YL) to (Xm, Ym) is Xm - XL = 65 - 80 = -15 along the X axis and Ym - YL = 41 - 42 = -1 along the Y axis. Applying this translation to (X = 78, Y = 40) gives an X coordinate of 78 - 15 = 63 and a Y coordinate of 40 - 1 = 39. The position of pixel I in the stereoscopic display space is therefore (63, 39, 0.033).
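  • The following sketch reproduces this positioning step and the worked example above: the initial depth from equation (1), the averaging of the two centroids, and the translation applied to each pixel. The function name and the default scale constant C = 1.0 are assumptions for illustration.

```python
# Sketch of the Fig. 8 positioning step. Centroids are the mean coordinates
# of the subject-area pixels in the left and right images; the initial depth
# is D0 = 1/(XL - XR) per equation (1).
def place_pixel(x, y, left_centroid, right_centroid, c=1.0):
    """Return the (X, Y, Z) display-space position of a left-subject pixel."""
    xl, yl = left_centroid
    xr, yr = right_centroid
    d0 = 1.0 / (xl - xr)                     # equation (1): initial depth
    xm, ym = (xl + xr) / 2, (yl + yr) / 2    # average of the two centroids
    # translate the pixel by the same shift that moves (XL, YL) to (Xm, Ym)
    return (x + (xm - xl), y + (ym - yl), c * d0)

# The numbers from the text: centroids (80, 42) and (50, 40) give
# D0 = 1/30 = 0.033, and the outermost pixel (78, 40) maps to (63, 39, 0.033).
print(place_pixel(78, 40, (80, 42), (50, 40)))  # -> (63.0, 39.0, 0.0333...)
```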
  • in this way, the stereoscopic video synthesizing device 300 of this embodiment calculates and outputs shape data matching the movement of the live-action stereoscopic video, so that when the live-action stereoscopic video is displayed on the stereoscopic video display device 400, the tactile sensation anyone would intuitively expect can be delivered by the force/tactile presentation device 500 that receives the shape data.
  • in addition, since a system using the stereoscopic video synthesizing device 300 of the present invention has a simple configuration, it works easily regardless of installation conditions, and can therefore be used particularly effectively in fields such as education and manual presentation.
  • in this embodiment, the subject image is generated in the shape calculation unit 33 by comparing the left and right images against a left background image and a right background image prepared in advance and extracting pixels whose colors differ. Instead of the background images, chroma-key processing that extracts pixels differing from a predetermined color may be used. This removes the need to shoot the background in advance and to keep the positional relationship between the left and right video capturing devices 100 and 200 and the background fixed, but the background must be of the predetermined color at the time of shooting.
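  • A sketch of this chroma-key variant, under the same assumptions as the background-subtraction sketch above: pixels are kept when they differ from a single predetermined key color instead of from a stored background image. A tolerance threshold would be an obvious practical extension (an assumption, not stated in the patent).

```python
# Chroma-key subject extraction: keep pixels that differ from the key color.
import numpy as np

def extract_subject_chroma(image: np.ndarray, key_color) -> np.ndarray:
    """image: (H, W, 3) RGB array; key_color: 3-element RGB key."""
    differs = np.any(image != np.asarray(key_color), axis=2)
    return np.where(differs[..., None], image.astype(np.int16), -1)
```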
  • in this embodiment, the depth is set according to the distance from the contour of the subject area as the method of calculating the subject's shape data; however, the shape data may instead be calculated by stereo measurement, which computes depth from the parallax between the left subject image and the right subject image (see JP-A-8-254416, JP-A-11-94527, JP-A-2001-241928, etc.).
  • in this case as well, the chroma-key processing described above may be used to generate the subject images.
  • although this requires more computation to calculate the shape data than setting the depth according to the distance from the contour of the subject area, it can produce shape data closer to the actual shape.
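  • The patent cites stereo measurement only in general terms; the sketch below shows one conventional form of it: block matching between the rectified left and right images followed by triangulation with Z = f · B / d (focal length f, baseline B, disparity d). The window size, search range, and sum-of-absolute-differences cost are assumptions, not part of the patent.

```python
# Sketch of depth from stereo measurement: for each left-image pixel, find
# the disparity d that best matches a window in the right image, then
# triangulate the depth as Z = f * B / d.
import numpy as np

def stereo_depth(left, right, f, b, window=5, max_disp=64):
    """left, right: (H, W) grayscale arrays from the parallel camera pair."""
    h, w = left.shape
    r = window // 2
    depth = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r + max_disp, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            costs = [np.abs(patch - right[y - r:y + r + 1,
                                          x - d - r:x - d + r + 1]).sum()
                     for d in range(1, max_disp + 1)]
            d = 1 + int(np.argmin(costs))     # best-matching disparity
            depth[y, x] = f * b / d           # triangulation
    return depth
```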
  • in this embodiment, the shape calculation unit 33 generates the left subject image and the right subject image by extracting the subject from the left and right images and calculates the shape data based on them; however, stereo measurement may also be performed on the parallax between the left and right images directly, without extracting the subject, to calculate the shape data. In this case as well, shape data close to the actual shape can be calculated.
  • here, the input device refers to a device such as a keyboard or a mouse, and the display device refers to a CRT (Cathode Ray Tube), a liquid crystal display device, or the like.
  • a program for realizing the functions of the left video data input unit 31, the right video data input unit 32, the shape calculation unit 33, the shape output unit 34, the stereoscopic video synthesis unit 35, and the stereoscopic video output unit 36 in FIG. 3 may be recorded on a computer-readable recording medium, and the processing of these units may be performed by loading the program recorded on the recording medium into a computer system and executing it.
  • the “computer system” here includes the OS and hardware such as peripheral devices.
  • the "computer system” includes a home page providing environment (or a display environment) if a WWW system is used.
  • the “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or to a storage device such as a hard disk built into the computer system.
  • a “computer-readable recording medium” also includes anything that holds the program dynamically for a short time, such as a communication line when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, as well as anything that holds the program for a certain period, such as the volatile memory inside the computer system serving as the server or the client in that case.
  • the above program may realize only a part of the functions described above, or may realize the functions described above in combination with a program already recorded in the computer system.
  • since the stereoscopic video synthesizing device of the present invention has a simple configuration and does not impose installation conditions, it is suited to, though not limited to, use in fields such as education and manual presentation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • User Interface Of Digital Computer (AREA)
  • Image Analysis (AREA)

Abstract

A stereoscopic video image synthesizing device is provided for synthesizing a left image seen from a view point of the left eye and a right image seen from a view point of the right eye to form a stereoscopic video image. The stereoscopic video image synthesizing device is comprised of a shape calculating unit for calculating shape data from the left and right images, and a shape output unit for providing the shape data calculated by the shape calculating unit to a kinesthetic-and-tactile sense indicating unit. Typically, the shape calculating unit extracts an image of a specific object from each of the left and right images and gives coordinates in a vertical direction to the image in accordance with distance from the outer contour of the object image to generate the shape data of the object.

Description

Specification

Stereoscopic video synthesizing device, shape data generating method, and program therefor

Technical Field

[0001] The present invention relates to a stereoscopic video synthesizing device that outputs shape data to a force/tactile presentation device in addition to synthesizing stereoscopic video, and to a shape data generating method and a program therefor. This application claims priority based on Japanese Patent Application No. 2006-244198, filed on September 8, 2006, the contents of which are incorporated herein by reference.
Background Art

[0002] A conventional stereoscopic image display device prepares in advance images from two viewpoints corresponding to the left and right eyes, and displays them on a three-dimensional display using a barrier method (for example, see Patent Document 1 and Patent Document 2) or a polarized-glasses shutter method, so that the user can perceive the scene in three dimensions.

There are also force/tactile presentation devices, such as a force feedback device with a pen-type operation unit that lets the user experience force and touch by operating the pen, and a haptic device worn on the arm that lets the user experience force over the whole arm and the tactile sensation of the hand.
Patent Document 1: JP-A-8-248355
Patent Document 2: Japanese Translation of PCT Publication No. 2003-521181
Disclosure of the Invention

Problems to Be Solved by the Invention

[0003] However, a conventional stereoscopic image display device only presents stereoscopic images; even when an object appears to stand out in relief, there is the problem that it cannot be touched. Also, it has been possible to present force and tactile sensations with a force/tactile presentation device while displaying CG (Computer Graphics) based on shape data such as CAD (Computer Aided Design) data; however, shape data matching the video must be prepared in advance, so while this can be applied to CG, which requires shape data to generate the video, there was the problem that it cannot be applied to live-action video shot with a video camera or the like.

[0004] The present invention has been made in view of such circumstances, and an object thereof is to provide a stereoscopic video synthesizing device capable of outputting stereoscopic video data for presenting, on a stereoscopic video display device, stereoscopic video of live-action footage shot with video cameras or the like, together with shape data that allows a force/tactile presentation device to present force and tactile sensations matching that stereoscopic video.
Means for Solving the Problem

[0005] The present invention has been made to solve the above problem. The stereoscopic video synthesizing device according to the present invention synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, and comprises a shape calculation unit that calculates shape data of a subject from the left image and the right image, and a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.

[0006] As a typical example, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image; for each pixel of the extracted subject image, assigns a coordinate perpendicular to the image according to the distance from the contour of the subject image, thereby generating shape data of the subject; calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device, based on the parallax between the extracted images of the specific subject; and produces shape data in which the generated shape data is placed at the calculated position.

[0007] As another typical example, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image, and calculates the shape data of the subject by stereo measurement based on the parallax between the extracted images.

[0008] As another typical example, the shape calculation unit calculates the shape data of the subject by stereo measurement based on the parallax between the left image and the right image.

[0009] The shape data generation method of the present invention is a shape data generation method in a stereoscopic video synthesizing device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, comprising a first step in which the stereoscopic video synthesizing device calculates shape data of a subject from the left image and the right image, and a second step in which the device outputs the shape data calculated in the first step to a force/tactile presentation device.

[0010] The program of the present invention causes a computer to function as a stereoscopic video synthesizing device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, making the computer function as a shape calculation unit that calculates shape data of a subject from the left image and the right image, and as a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.
Effects of the Invention

[0011] According to the present invention, by feeding the stereoscopic video synthesizing device live-action video shot from two left and right viewpoints with video cameras or the like, it can generate shape data with which a force/tactile presentation device can provide force and tactile sensations matching the stereoscopic video displayed on a stereoscopic video display device.
Brief Description of the Drawings

[0012]
[FIG. 1] A block diagram showing an outline of an embodiment of the present invention.
[FIG. 2] A schematic block diagram showing the configuration of a system using the stereoscopic video synthesizing device 300 in the same embodiment.
[FIG. 3] A schematic block diagram showing the configuration of the stereoscopic video synthesizing device 300 in the same embodiment.
[FIG. 4] A flowchart explaining the method of generating a subject image in the shape calculation unit 33 in the same embodiment.
[FIG. 5] A flowchart explaining the method of generating shape data of a subject based on the left subject image or the right subject image in the shape calculation unit 33 in the same embodiment.
[FIG. 6] A diagram explaining the contents of the state S in the same embodiment.
[FIG. 7] A diagram explaining the process by which the shape calculation unit 33 calculates the depth of the subject area A1 in the same embodiment.
[FIG. 8] A diagram explaining the method by which the shape calculation unit 33 calculates the initial value D0 in step Sb1 of FIG. 5 in the same embodiment.
Explanation of Reference Numerals

[0013]
100: left video capturing device
200: right video capturing device
300: stereoscopic video synthesizing device
400: stereoscopic video display device
500: force/tactile presentation device
31: left video data input unit
32: right video data input unit
33: shape calculation unit
34: shape data output unit
35: stereoscopic video synthesis unit
36: stereoscopic video output unit
Best Mode for Carrying Out the Invention

[0014] An outline of an embodiment of the present invention will be described. As shown in FIG. 1, the stereoscopic video synthesizing device 300 synthesizes the video from the left-eye viewpoint and the video from the right-eye viewpoint, captured by the left video capturing device 100 and the right video capturing device 200 respectively, into a stereoscopic video; when it is displayed on the stereoscopic video display device 400, the device calculates shape data from the captured video and outputs it to the force/tactile presentation device 500. As a result, the user sees the stereoscopic video displayed on the stereoscopic video display device 400 and, at the same time, can obtain through the force/tactile presentation device 500 force and tactile sensations for a shape matching the movement of the displayed stereoscopic video.
[0015] Embodiments of the present invention will now be described with reference to the drawings. FIG. 2 is a schematic block diagram showing the configuration of the stereoscopic video synthesizing device 300 according to an embodiment of the present invention. The left video capturing device 100 is a video camera that captures video from the left-eye viewpoint. The right video capturing device 200 is a video camera installed parallel to and on the right side of the left video capturing device 100, and captures video from the right-eye viewpoint. The stereoscopic video synthesizing device 300 receives the left-eye and right-eye video from the left video capturing device 100 and the right video capturing device 200, synthesizes a stereoscopic video and outputs it to the stereoscopic video display device 400, and also calculates shape data matching the motion of the stereoscopic video and outputs it to the force/tactile presentation device 500.
[0016] FIG. 3 is a schematic block diagram showing the configuration of the stereoscopic video synthesizing device 300 according to this embodiment. Reference numeral 31 denotes a left video data input unit that receives the video input from the left video capturing device 100 and outputs a left image extracted from that video frame by frame. Reference numeral 32 denotes a right video data input unit that receives the video input from the right video capturing device 200 and outputs a right image extracted from that video frame by frame. Reference numeral 33 denotes a shape calculation unit that generates a left subject image and a right subject image by extracting only the subject from the left image and the right image received from the left video data input unit 31 and the right video data input unit 32, and calculates the shape data of the subject based on these two images, thereby producing shape data synchronized with the stereoscopic video data generated by the stereoscopic video synthesis unit 35. Details of the subject image generation method and the subject shape data calculation method in the shape calculation unit 33 will be described later.

[0017] Reference numeral 34 denotes a shape output unit that outputs the shape data calculated by the shape calculation unit 33 to the force/tactile presentation device 500. The stereoscopic video synthesis unit 35 synthesizes the left image and the right image received from the left video data input unit 31 and the right video data input unit 32, and generates stereoscopic video data in a format suited to the stereoscopic video display device 400. Reference numeral 36 denotes a stereoscopic video output unit that outputs the stereoscopic video data generated by the stereoscopic video synthesis unit 35 to the stereoscopic video display device 400.
[0018] FIG. 4 is a flowchart explaining the method of generating a subject image in the shape calculation unit 33. The flowchart of FIG. 4 shows the processing for extracting the subject from the left image to generate the left subject image; the shape calculation unit 33 extracts the subject from the right image in the same manner to generate the right subject image. The shape calculation unit 33 stores in advance a left background image, obtained by capturing only the background with the left video capturing device 100 and extracting it through the left video data input unit 31 (Sa1). When the shape calculation unit 33 receives a left image from the left video data input unit 31, it retrieves the left background image stored in step Sa1 and repeats steps Sa3 to Sa6 for every pixel i, from i = 0 to Imax, constituting the left image (Sa2).

[0019] First, the shape calculation unit 33 obtains the red, green, and blue component values of pixel i in the left image and of the corresponding pixel in the left background image (Sa3), and determines whether the red, green, and blue components of these two pixels match (Sa4). If it determines in step Sa4 that they match, the process proceeds directly to step Sa6; if it determines that they do not match, the shape calculation unit 33 extracts pixel i of the left image as a pixel of the subject image (sets the pixel's color as part of the subject) (Sa5) and then proceeds to step Sa6. In step Sa6, the shape calculation unit 33 adds 1 to the value of i, and in step Sa7 determines whether i is less than or equal to Imax, that is, whether steps Sa3 to Sa5 have been performed for all pixels i constituting the left image. If i is less than or equal to Imax, the process returns to step Sa3 and the above processing is repeated; otherwise the process ends. The image formed by the pixels extracted in this way is the left subject image. That is, within the left image, the pixels of the subject area (i.e., the left subject image) have their colors set, while the pixels outside the subject area do not.
[0020] FIG. 5 is a flowchart explaining, within the method of generating shape data of a subject based on the left subject image or the right subject image in the shape calculation unit 33, the method of calculating the position in the direction perpendicular to the subject image (the depth direction). FIG. 6 shows the possible values of the state S used in the flowchart: state S is either "outside the subject area" or "subject area", and the subject area has the two states "depth assigned" and "depth not assigned". Pixels for which a color is set belong to the subject area (corresponding to the subject image), and pixels for which no color is set are outside the subject area. With this method, the position in the direction perpendicular to the image (the depth direction) is calculated for each pixel of one of the two subject images (its subject area). The position of each pixel along the horizontal and vertical axes of the image (its position in the stereoscopic display space) is obtained by translating the subject area so that its centroid moves to the average of the centroids of the subject areas in the left and right subject images.
[0021] Before performing this processing, the shape calculation unit 33 calculates the initial depth value D0 using the method described later with reference to FIG. 8 (Sb1). When the processing starts, the shape calculation unit 33 sets the initial value D0 to the variable D, which will be assigned as the depth of each pixel in the later step Sb10 (Sb2). Next, the shape calculation unit 33 performs steps Sb4 to Sb10, enclosed between step Sb3 and step Sb11a, for all pixels of the subject image from i = 0 to Imax (Sb3). In step Sb4, the shape calculation unit 33 obtains the state S of pixel i in the right or left image containing the subject image (Sb4). In obtaining state S, whether pixel i is outside the subject area or in the subject area (i.e., the subject image) can be determined by whether a color is set for the pixel. Whether depth has been assigned or not is determined by whether a depth has been given to pixel i. Here, the loop from i = 0 to Imax in steps Sb3 to Sb11a is repeated many times, assigning a different depth value on each pass via step Sb13; pixels given a depth during a pass are treated as "depth not assigned" for the remainder of that pass (that is, such pixels are marked "depth assigned" only after the pass ends and before the depth value is changed for the next pass).
[0022] Next, the shape calculation unit 33 determines whether the state S of pixel i obtained in step Sb4 is "subject area" (Sb5). This determination can be made by whether a color is set for the pixel: if a color is set, the pixel is in the subject area; otherwise it is outside the subject area. If it determines in step Sb5 that the pixel is not in the subject area, the process moves to step Sb11a and on to the next pixel; if it determines that the pixel is in the subject area, the process moves to step Sb6.

[0023] In step Sb6, the shape calculation unit 33 determines whether a depth has not yet been assigned to the pixel (Sb6). If the pixel already has a depth, the process moves to step Sb11a and on to the next pixel; if the pixel has no depth yet, the shape calculation unit 33 obtains the states Sn (n = 1 to 8) of the eight pixels surrounding pixel i (Sb7). The shape calculation unit 33 then determines whether any of the eight states Sn is "outside the subject area" (Sb8). If even one of the eight pixels is outside the subject area, the process moves to step Sb10, where the value of D is assigned as the depth of pixel i (Sb10), and then to step Sb11a for the next pixel. If none of the states Sn is outside the subject area, the process moves to step Sb9, where the shape calculation unit 33 determines whether any of the states Sn is "depth assigned" (Sb9).

[0024] If none has been assigned a depth, the process moves to step Sb11a and on to the next pixel. If step Sb9 finds a neighbor with depth assigned, the process moves to step Sb10, where the shape calculation unit 33 assigns the value of D as the depth of pixel i (Sb10), and then to step Sb11a. In step Sb11a, the shape calculation unit 33 adds 1 to the value of i and, while i is less than or equal to Imax (step Sb11b), returns to step Sb4, performing steps Sb4 to Sb10 for all pixels. When i exceeds Imax, the process moves to step Sb12, where the shape calculation unit 33 determines whether any pixels in the subject area of the subject image still lack a depth (Sb12). If none remain, the shape calculation unit 33 ends the processing; if some remain, the preset increment ΔD is added to the variable D (Sb13), and the process returns to step Sb3 to repeat the processing above. In this way, the shape calculation unit 33 can calculate a depth position for every pixel in the subject area according to its distance from the contour of the subject area. Note that the increment ΔD added to the variable D in step Sb13 may be a constant, or a value that varies with the number of additions performed.
[0025] FIG. 7 is a diagram explaining the process of calculating the depth of the subject area A1 by the method of FIG. 5. Given the subject area A1 shown in FIG. 7(a), the shape calculation unit 33 first assigns depth D0 to the outermost layer A2 shown in FIG. 7(b). Next, it assigns depth D0 + ΔD to the second layer A3 shown in FIG. 7(c). Then it assigns depth D0 + 2·ΔD to the third layer A4 shown in FIG. 7(d). Depth has now been assigned to all pixels of the subject area, so the processing of FIG. 5 ends.

FIG. 8 is a diagram explaining the method of calculating the initial value D0 in step Sb1 of FIG. 5. The coordinate XL is the horizontal coordinate of the centroid of the subject area M1 (corresponding to the left subject image) extracted from the left image G1 by the shape calculation unit 33 as described with FIG. 4, with the left edge of the left image G1 as the origin. The centroid position is obtained by averaging the coordinates of all pixels in the subject area M1. The coordinate XR is the horizontal coordinate of the centroid of the subject area M2 (corresponding to the right subject image) extracted from the right image G2, with the left edge of the right image G2 as the origin.
For the direction perpendicular to the image, the viewpoint of the user viewing the stereoscopic video displayed by the stereoscopic video display device 400 is taken as the origin, and the shape calculation unit 33 calculates the coordinate Z in that direction, i.e., the initial depth value D0, using equation (1):

D0 = 1 / (XL - XR) ... (1)
[0026] For example, when the centroid of the subject area M1 in the left image calculated by the shape calculation unit 33 is at X coordinate XL = 80 and Y coordinate YL = 42, and the centroid of the subject area M2 in the right image is at X coordinate XR = 50 and Y coordinate YR = 40, the shape calculation unit 33 calculates the Z coordinate in the stereoscopic display space, i.e., the initial depth value D0, using equation (2):

D0 = 1 / (XL - XR) = 1 / (80 - 50) = 0.033 ... (2)

Here the value of the Z coordinate is very small compared with the X and Y coordinate values; this is because the Z coordinate obtained from equation (2) is on a different scale from the X and Y coordinates, and it is adjusted by multiplying the Z coordinate by a predetermined constant C. The magnitude of the constant C may also be adjusted so as to emphasize the position in the Z-axis direction.
[0027] Further, the position in the stereoscopic video display space of the pixel I, which lies in the outermost contour area of the subject area M1 and has X coordinate X = 78 and Y coordinate Y = 40, is calculated as follows.

Since the pixel lies in the outermost contour area, its depth is D0 = 0.033.

The average (Xm, Ym) of the centroid position of the subject area M1 and the centroid position of the subject area M2 is
Xm = (XL + XR) / 2 = (80 + 50) / 2 = 65
Ym = (YL + YR) / 2 = (42 + 40) / 2 = 41.

The translation from (XL, YL) to (Xm, Ym) is Xm − XL = 65 − 80 = −15 in the X-axis direction and Ym − YL = 41 − 42 = −1 in the Y-axis direction.

Applying this translation from (XL, YL) to (Xm, Ym) to (X = 78, Y = 40) gives an X coordinate of 78 − 15 = 63 and a Y coordinate of 40 − 1 = 39.

From these, the position of the pixel I in the stereoscopic video display space is (63, 39, 0.033).
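In other words, each subject pixel is translated by the offset from the left centroid to the midpoint of the two centroids, and its layer depth becomes the Z coordinate. A hedged sketch of this placement, self-contained except for the per-pixel depth map assumed to come from the contour-peeling step; all names are illustrative:

```python
# Minimal sketch of paragraph [0027]: place subject pixels in display space.
import numpy as np

def place_in_display_space(left_mask: np.ndarray, right_mask: np.ndarray,
                           depth: np.ndarray) -> np.ndarray:
    """Returns an (N, 3) array of (X, Y, Z) triples for every subject pixel."""
    lys, lxs = np.nonzero(left_mask)
    rys, rxs = np.nonzero(right_mask)
    dx = (lxs.mean() + rxs.mean()) / 2 - lxs.mean()   # Xm - XL, here -15
    dy = (lys.mean() + rys.mean()) / 2 - lys.mean()   # Ym - YL, here -1
    return np.stack([lxs + dx, lys + dy, depth[lys, lxs]], axis=1)

# Pixel I at (78, 40) maps to (78 - 15, 40 - 1, 0.033) = (63, 39, 0.033).
```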
[0028] In this way, the stereoscopic video composition device 300 of the present embodiment calculates and outputs shape data that matches the motion of a live-action stereoscopic video, so that when the live-action stereoscopic video is displayed on the stereoscopic video display device 400, the tactile sensation that anyone intuitively expects can be provided by the force/tactile presentation device 500 that receives the shape data.
A conventional live-action stereoscopic video could only be viewed as a three-dimensional object; with the sense of touch added, the three-dimensional object can be grasped more reliably, and the possibilities of new media and interfaces broaden.
Also, because a system using the stereoscopic video composition device 300 of the present invention has a simple configuration, it works easily regardless of installation conditions, and can therefore be used particularly effectively in fields such as education and manual presentation.
[0029] In the present embodiment, as the method of generating subject images in the shape calculation unit 33, pixels whose colors differ are extracted by comparing the left image and the right image with a left background image and a right background image prepared in advance; however, instead of background images, chroma-key processing that extracts pixels differing from a predetermined color set in advance may be used. This removes the need to photograph background images in advance and to keep the positional relationship between the left image capturing device 100, the right image capturing device 200, and the background fixed, but requires the background to be of the predetermined color at the time of shooting.
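As an illustration of this alternative, chroma-key extraction reduces to a per-pixel color-distance test against the predetermined color. The following sketch assumes RGB input; the key color and tolerance are invented example values, not values from the patent.

```python
# Hedged sketch of chroma-key subject extraction: pixels far enough from the
# key color are treated as subject. key_color and tol are illustrative.
import numpy as np

def chroma_key_mask(image: np.ndarray, key_color=(0, 255, 0),
                    tol: float = 80.0) -> np.ndarray:
    """image: H x W x 3 RGB array; returns a boolean subject mask."""
    diff = image.astype(float) - np.asarray(key_color, dtype=float)
    distance = np.linalg.norm(diff, axis=2)   # per-pixel color distance
    return distance > tol                     # far from key color => subject
```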
[0030] Also, in the present embodiment, as the method of calculating the shape data of the subject, the depth is set according to the distance from the contour of the subject area; however, the shape data may instead be calculated by stereo measurement, which calculates depth based on the parallax between the left subject image and the right subject image (JP-A-8-254416, JP-A-11-94527, JP-A-2001-241928, etc.).
In this case, the above-described chroma-key processing may be used as the method of generating subject images in the shape calculation unit 33. Compared with setting the depth according to the distance from the contour of the subject area, this increases the amount of computation needed to calculate the shape data, but shape data closer to the actual shape can be calculated.
[0031] Also, in the present embodiment, the shape calculation unit 33 generates a left subject image and a right subject image by extracting the subject from the left image and the right image, and calculates the shape data based on the generated left and right subject images; however, the shape data may instead be calculated by performing stereo measurement based on the parallax between the left image and the right image, without extracting the subject, and calculating the depth of each pixel.

This increases the amount of computation, but removes the need to photograph background images in advance, to keep the positional relationship between the left image capturing device 100, the right image capturing device 200, and the background fixed, or to make the background a predetermined color at the time of shooting, and shape data close to the actual shape can be calculated.
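Dense stereo measurement of this kind is commonly implemented by block matching along rectified scanlines. The sketch below shows one such approach as a slow reference implementation; the block size, search range, and the inverse-disparity depth model (echoing the form of equation (1)) are illustrative assumptions rather than the patent's prescription.

```python
# Hedged sketch of dense stereo measurement: per-pixel disparity by block
# matching on rectified grayscale images, then depth ~ 1/disparity.
import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  block: int = 5, max_disp: int = 64) -> np.ndarray:
    left = left.astype(float)
    right = right.astype(float)
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w))
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.abs(patch - cand).sum()   # sum of absolute differences
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp

def depth_from_disparity(disp: np.ndarray) -> np.ndarray:
    return np.where(disp > 0, 1.0 / disp, 0.0)   # eq. (1) form: Z = 1/(XL - XR)
```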
[0032] It is assumed that peripheral devices such as an input device and a display device (neither shown) are connected to this stereoscopic video composition device 300.

Here, the input device means an input device such as a keyboard or a mouse, and the display device means a CRT (Cathode Ray Tube), a liquid crystal display device, or the like.
[0033] Also, a program for realizing the functions of the left video data input unit 31, the right video data input unit 32, the shape calculation unit 33, the shape output unit 34, the stereoscopic video synthesis unit 35, and the stereoscopic video output unit 36 in FIG. 3 may be recorded on a computer-readable recording medium, and the processing of the left video data input unit 31, the right video data input unit 32, the shape calculation unit 33, the shape output unit 34, the stereoscopic video synthesis unit 35, and the stereoscopic video output unit 36 may be performed by loading the program recorded on the recording medium into a computer system and executing it.

Note that the "computer system" here includes the OS and hardware such as peripheral devices.
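Read as software, the six units of FIG. 3 form a per-frame pipeline from the two camera inputs to the haptic and display outputs. The structural sketch below reuses the helper functions sketched earlier in this section; every class, method, and parameter name (including the delta_d value) is invented for illustration.

```python
# Illustrative wiring of the six units of FIG. 3; names are not from the patent.
class StereoVideoCompositor:
    def __init__(self, shape_out, video_out):
        self.shape_out = shape_out   # shape output unit 34 -> force/tactile device 500
        self.video_out = video_out   # stereoscopic video output unit 36 -> display 400

    def process_frame(self, left_frame, right_frame):
        # left/right video data input units 31 and 32 deliver the frame pair
        left_mask = chroma_key_mask(left_frame)    # shape calculation unit 33:
        right_mask = chroma_key_mask(right_frame)  # extract, then assign depth
        d0 = initial_depth(left_mask, right_mask)
        depth = peel_depth(left_mask, d0, delta_d=0.01)
        self.shape_out(place_in_display_space(left_mask, right_mask, depth))
        # stereoscopic video synthesis unit 35: pair the views for the 3D display
        self.video_out((left_frame, right_frame))
```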
[0034] If a WWW system is being used, the "computer system" also includes the homepage providing environment (or display environment).
The "computer-readable recording medium" means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk built into a computer system.
Furthermore, the "computer-readable recording medium" also includes media that hold a program dynamically for a short time, such as a communication line used when a program is transmitted via a network such as the Internet or a communication line such as a telephone line, and media that hold a program for a certain time, such as the volatile memory inside the computer system serving as the server or client in that case.
The above program may be one for realizing part of the functions described above, and may also be one that realizes the functions described above in combination with a program already recorded in the computer system.
[0035] The embodiment of this invention has been described in detail above with reference to the drawings, but the specific configuration is not limited to this embodiment, and designs and the like within a scope not departing from the gist of this invention are also included.
Industrial applicability

[0036] The stereoscopic video composition device of the present invention has a simple configuration and does not impose installation conditions, so it is suitable for use in education, manual presentation, and the like; however, it is not limited to these uses.

Claims
[1] A stereoscopic video composition device that synthesizes a stereoscopic video from a left image viewed from a left-eye viewpoint and a right image viewed from a right-eye viewpoint, comprising:
a shape calculation unit that calculates shape data of a subject from the left image and the right image; and
a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.
[2] The stereoscopic video composition device according to claim 1, wherein the shape calculation unit extracts an image of a specific subject from each of the left image and the right image; generates shape data of the subject by giving each pixel of the extracted subject image a coordinate in the direction perpendicular to the image according to the distance from the contour of the subject image; calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device itself, based on the parallax between the extracted images of the specific subject; and calculates shape data in which the generated shape data is arranged at the calculated position.
[3] The stereoscopic video composition device according to claim 1, wherein the shape calculation unit extracts an image of a specific subject from each of the left image and the right image, and calculates the shape data of the subject by stereo measurement based on the parallax between the extracted images.
[4] The stereoscopic video composition device according to claim 1, wherein the shape calculation unit calculates the shape data of the subject by stereo measurement based on the parallax between the left image and the right image.
[5] A shape data generation method in a stereoscopic video composition device that synthesizes a stereoscopic video from a left image viewed from a left-eye viewpoint and a right image viewed from a right-eye viewpoint, comprising:
a first step in which the stereoscopic video composition device calculates shape data of a subject from the left image and the right image; and
a second step in which the stereoscopic video composition device outputs the shape data calculated in the first step to a force/tactile presentation device.
[6] A program for causing a computer to function as a stereoscopic video composition device that synthesizes a stereoscopic video from a left image viewed from a left-eye viewpoint and a right image viewed from a right-eye viewpoint, the program causing the computer to function as:
a shape calculation unit that calculates shape data of a subject from the left image and the right image; and
a shape output unit that outputs the shape data calculated by the shape calculation unit to a force/tactile presentation device.
PCT/JP2007/056012 2006-09-08 2007-03-23 Stereoscoptic video image synthesizing device, shape data generating method and its program WO2008029529A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006-244198 2006-09-08
JP2006244198A JP2008067169A (en) 2006-09-08 2006-09-08 Three-dimensional video composing device, shape data generation method and program therefor

Publications (1)

Publication Number Publication Date
WO2008029529A1 true WO2008029529A1 (en) 2008-03-13

Family

ID=39156974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/056012 WO2008029529A1 (en) 2006-09-08 2007-03-23 Stereoscoptic video image synthesizing device, shape data generating method and its program

Country Status (2)

Country Link
JP (1) JP2008067169A (en)
WO (1) WO2008029529A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012022639A (en) 2010-07-16 2012-02-02 Ntt Docomo Inc Display device, image display device, and image display method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11150741A (en) * 1997-11-18 1999-06-02 Asahi Optical Co Ltd Three-dimensional picture displaying method and its device by stereo photographing

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11150741A (en) * 1997-11-18 1999-06-02 Asahi Optical Co Ltd Three-dimensional picture displaying method and its device by stereo photographing

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KOBAYASHI M. ET AL.: "Stereo Chojo Hyoji ni yoru Real Scale Video System", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 40, no. 11, 1999, pages 3834 - 3846 *
OZAWA S. ET AL.: "Jissha 3D Eizo Satsuei Hyoji System", THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN DAI 24 KAI KENKYUKAI KOEN YOKO, 17 March 2006 (2006-03-17), pages 109 - 112 *
TANAKA S. ET AL.: "Haptic Vision ni Motozuku Nodoteki Buttai Juryo Suitei", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 44, no. SIG17, 2003, pages 51 - 60 *

Also Published As

Publication number Publication date
JP2008067169A (en) 2008-03-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 07739453; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 07739453; Country of ref document: EP; Kind code of ref document: A1)