WO2008029529A1 - Stereoscopic video synthesis device, shape data generation method, and program therefor - Google Patents

Stereoscopic video synthesis device, shape data generation method, and program therefor

Info

Publication number
WO2008029529A1
WO2008029529A1 (PCT/JP2007/056012)
Authority
WO
WIPO (PCT)
Prior art keywords
image
subject
shape
shape data
stereoscopic video
Prior art date
Application number
PCT/JP2007/056012
Other languages
English (en)
Japanese (ja)
Inventor
Shiro Ozawa
Takao Abe
Noriyuki Naruto
Itaru Kamiya
Original Assignee
Ntt Comware Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ntt Comware Corporation filed Critical Ntt Comware Corporation
Publication of WO2008029529A1 publication Critical patent/WO2008029529A1/fr

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/50Depth or shape recovery
    • G06T7/55Depth or shape recovery from multiple images
    • G06T7/593Depth or shape recovery from multiple images from stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators

Definitions

  • The present invention relates to a stereoscopic video synthesis device, a shape data generation method, and a program therefor, and in particular to the output of shape data to a haptic/tactile presentation device in addition to the synthesis of a stereoscopic video.
  • This application claims priority based on Japanese Patent Application No. 2006-244198 filed on Sep. 8, 2006, the contents of which are incorporated herein by reference.
  • A conventional stereoscopic image display device prepares images from two viewpoints corresponding to the left and right eyes and displays them on a three-dimensional display using a parallax barrier method (see, for example, Patent Document 1 and Patent Document 2), polarized glasses, or a shutter method, so that the user can perceive the image in three dimensions.
  • There are also haptic/tactile presentation devices, such as a force feedback device with a pen-type operation unit that lets the user experience a tactile sensation by operating the pen, and a device worn on the arm that lets the user experience a tactile sensation with the hand.
  • Patent Document 1 JP-A-8-248355
  • Patent Document 2 Japanese Translation of Special Publication 2003-521181
  • However, the conventional 3D image display device only presents 3D images, and there is a problem that even if an object appears to stand out from the screen, it cannot be touched.
  • Moreover, to present a shape on a haptic/tactile presentation device, shape data such as CAD (Computer Aided Design) data that matches the video must be prepared. Even if this can be done for CG (Computer Graphics), which requires shape data to generate the video in the first place, there is a problem that it cannot be applied to live-action video shot with a video camera or the like.
  • The present invention has been made in view of such circumstances, and an object thereof is to provide a stereoscopic video synthesis device capable of outputting shape data that can be presented by a tactile presentation device while a stereoscopic video display device presents a stereoscopic video of live-action footage shot with video cameras or the like.
  • The present invention has been made to solve the above-described problem. The stereoscopic video synthesis device according to the present invention synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, and includes a shape calculation unit that calculates shape data of a subject from the left image and the right image, and a shape output unit that outputs the shape data calculated by the shape calculation unit to the tactile presentation device.
  • Further, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image and, for each pixel of the extracted subject image, assigns a coordinate in the direction perpendicular to the image according to the distance from the outline of the subject image, thereby generating shape data of the subject; it also calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device based on the parallax of the extracted images of the specific subject, and produces shape data in which the generated shape data is arranged at the calculated position.
  • Alternatively, the shape calculation unit extracts an image of a specific subject from each of the left image and the right image, and calculates the shape data of the subject by stereo measurement based on the parallax of the extracted images.
  • the shape calculation unit calculates shape data of a subject by stereo measurement based on a parallax between the left image and the right image.
  • The shape data generation method of the present invention is a shape data generation method in a stereoscopic video synthesis device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, and includes a step in which a shape calculation unit calculates shape data of a subject from the left image and the right image, and a step in which a shape output unit outputs the calculated shape data to the tactile presentation device.
  • The program of the present invention causes a computer to function as a stereoscopic video synthesis device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, making the computer function as a shape calculation unit that calculates the shape data of the subject from the left image and the right image, and as a shape output unit that outputs the shape data calculated by the shape calculation unit to the haptic presentation device.
  • With this configuration, simply by inputting live-action video shot by video cameras or the like placed at the left and right viewpoints, the stereoscopic video synthesis device can generate shape data that matches the stereoscopic video displayed on the stereoscopic video display device and that can be presented by the tactile presentation device.
  • FIG. 1 is a block diagram showing an outline of an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram showing a configuration of a system using the stereoscopic video image synthesizing apparatus 300 in the same embodiment.
  • FIG. 3 is a schematic block diagram showing a configuration of a stereoscopic video image synthesizing apparatus 300 in the same embodiment.
  • FIG. 4 is a flowchart illustrating a method for generating a subject image in a shape calculation unit 33 in the same embodiment.
  • FIG. 5 is a flowchart for explaining a shape data generation method of a subject based on a left subject image or a right subject image in a shape calculation unit 33 in the same embodiment.
  • FIG. 6 is a diagram illustrating the contents of state S in the same embodiment.
  • FIG. 7 is a diagram illustrating a process of calculating the depth of the subject area A1 by the shape calculation unit 33 in the same embodiment.
  • FIG. 8 is a diagram for explaining a method of calculating the initial value D of the depth in the same embodiment.
  • The 3D video synthesis device 300 combines the video from the left-eye viewpoint and the video from the right-eye viewpoint captured by the left video photography device 100 and the right video photography device 200, respectively. When displaying the result on the stereoscopic video display device 400, it calculates shape data from the captured video and outputs the data to the haptic/tactile presentation device 500. As a result, while watching the 3D image displayed on the 3D image display device 400, the user can at the same time obtain a haptic/tactile sensation for a shape that matches the movement of the 3D image.
  • FIG. 2 is a schematic block diagram showing the configuration of a system using the stereoscopic video synthesis device 300 according to the embodiment of the present invention.
  • The left image capturing device 100 is a video camera that captures images from the left-eye viewpoint.
  • The right image capturing device 200 is a video camera that is installed parallel to and to the right of the left image capturing device 100 and captures images from the right-eye viewpoint.
  • The three-dimensional video synthesis device 300 receives the left-eye and right-eye images from the left video photographing device 100 and the right video photographing device 200, synthesizes and outputs a three-dimensional video to the three-dimensional video display device 400, and calculates shape data that matches the motion of the video, which it outputs to the haptic/tactile presentation device 500.
  • FIG. 3 is a schematic block diagram showing a configuration of the stereoscopic video image synthesizing apparatus 300 according to the present embodiment.
  • Reference numeral 32 denotes a right video data input unit that receives a video input from the right video shooting device 200 and outputs a right image extracted from the video frame by frame.
  • The shape calculation unit 33 generates a left subject image and a right subject image by extracting only the subject from the left image and the right image received from the left video data input unit 31 and the right video data input unit 32.
  • Based on these, the shape calculation unit 33 calculates the shape data of the subject, thereby producing shape data synchronized with the stereoscopic video data generated by the stereoscopic video synthesis unit 35. Details of the subject image generation method and the subject shape data calculation method in the shape calculation unit 33 will be described later.
  • Reference numeral 34 denotes a shape output unit that outputs the shape data calculated by the shape calculation unit 33 to the haptic/tactile presentation device 500.
  • The stereoscopic video synthesis unit 35 synthesizes the left image and the right image received from the left video data input unit 31 and the right video data input unit 32, and generates stereoscopic video data in a format suitable for the stereoscopic video display device 400.
  • a stereoscopic video output unit 36 outputs the stereoscopic video data generated by the stereoscopic video synthesis unit 35 to the stereoscopic video display device 400.
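  • As an illustration of the composition performed by the stereoscopic video synthesis unit 35, the following minimal Python/NumPy sketch packs a left frame and a right frame into a single side-by-side stereo frame; the side-by-side format is an assumption made only for illustration, since the format actually suitable for the stereoscopic video display device 400 is left open here.

```python
import numpy as np

def compose_side_by_side(left: np.ndarray, right: np.ndarray) -> np.ndarray:
    """Pack left/right frames of equal size into one side-by-side stereo frame."""
    return np.concatenate([left, right], axis=1)

# Example: two 480x640 RGB frames become one 480x1280 stereo frame.
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.zeros((480, 640, 3), dtype=np.uint8)
assert compose_side_by_side(left, right).shape == (480, 1280, 3)
```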
  • FIG. 4 is a flowchart for explaining a method for generating a subject image in the shape calculation unit 33.
  • The flowchart in FIG. 4 shows the processing when the subject is extracted from the left image to generate the left subject image; the shape calculation unit 33 extracts the subject from the right image in the same manner to generate the right subject image.
  • First, the shape calculation unit 33 captures only the background with the left image capturing device 100 and stores in advance the left background image received and extracted by the left video data input unit 31 (Sa1).
  • Next, the shape calculation unit 33 initializes the index i of the pixel to be processed (Sa2). It then acquires the red, green, and blue component values of the pixel i in the left image and of the pixel corresponding to pixel i in the left background image (Sa3), and determines whether the red, green, and blue components of these two pixels match (Sa4). If it is determined in step Sa4 that they match, the process proceeds directly to step Sa6; if it is determined that they do not match, the shape calculation unit 33 extracts the pixel i in the left image as a pixel of the subject image (that is, it sets the color of that pixel as the subject) (Sa5), and then proceeds to step Sa6.
  • In step Sa6, the shape calculation unit 33 adds 1 to the value of i, and in step Sa7 it determines whether i is less than or equal to Imax, that is, whether the processing of steps Sa3 to Sa5 has been performed for all the pixels i constituting the left image. If i is less than or equal to Imax, the process returns to step Sa3 and the above processing is repeated; if i exceeds Imax, the process ends.
  • The image composed of the pixels extracted in this way is the left subject image. That is, in the left image, a color is set for the pixels in the subject area (that is, the left subject image), while no color is set for the pixels outside the subject area.
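  • A minimal Python/NumPy sketch of this extraction follows; it mirrors the exact-match comparison of steps Sa3 to Sa5, whereas a practical implementation would likely allow some tolerance, which is not specified here. The function name is illustrative.

```python
import numpy as np

def extract_subject(image: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Return an RGBA subject image: the color is set (alpha=255) where the
    pixel differs from the stored background image, and left unset elsewhere."""
    # A pixel belongs to the subject if any of its red, green, or blue
    # components differs from the corresponding background pixel (Sa4).
    mask = np.any(image != background, axis=-1)
    subject = np.zeros((image.shape[0], image.shape[1], 4), dtype=np.uint8)
    subject[mask, :3] = image[mask]   # set the color as the subject (Sa5)
    subject[mask, 3] = 255            # alpha channel marks "color set"
    return subject
```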
  • FIG. 5 is a flowchart explaining, among the methods of generating the shape data of the subject from the left subject image or the right subject image in the shape calculation unit 33, the method of calculating the coordinate in the direction perpendicular to the subject image (the depth direction).
  • FIG. 6 is a diagram showing the possible values of the state S used in the flowchart.
  • The state S is either "outside the subject area" or "in the subject area", and a pixel in the subject area is further in either the "depth assigned" or the "depth not assigned" state. Pixels for which a color is set are in the subject area (corresponding to the subject image), and pixels for which no color is set are outside the subject area.
  • In this process, the position in the direction perpendicular to the image is calculated for each pixel in the subject area of one of the left subject image and the right subject image.
  • The position of each pixel in the horizontal-axis and vertical-axis directions of the image is obtained by moving the center of gravity of the subject area to the average of the positions of the centers of gravity of the subject areas in the left subject image and the right subject image.
  • First, the shape calculation unit 33 calculates the initial value D of the depth to be assigned, using the method described later with reference to FIG. 8 (Sb1).
  • Next, the shape calculation unit 33 sets this initial value in the variable D that will be assigned as the depth of each pixel in the later step Sb10 (Sb2).
  • In step Sb3, the shape calculation unit 33 initializes the index i of the pixel to be processed, and in step Sb4 it acquires the state S of the pixel i in the right or left image containing the subject image (Sb4).
  • Here, whether the pixel i is outside the subject area or in the subject area (that is, the subject image) can be determined by whether or not a color is set for the pixel i, and whether the depth is assigned or not assigned can be determined by whether or not a depth has been given to the pixel i. Note that pixels to which a depth is assigned during the current pass of the loop are treated as "depth not assigned" until that pass completes (that is, until the loop is exited, the depth value D is updated, and the next pass begins).
  • the shape calculation unit 33 determines whether or not the state S of the pixel i acquired in step Sb4 is the subject area (Sb5).
  • The determination in step Sb5 can be made based on whether or not a color is set for the pixel: if a color is set, the pixel is in the subject area, and if not, it is outside the subject area.
  • If it is determined in step Sb5 that the pixel is not in the subject area, the process moves to step Sb11a to handle the next pixel; if it is determined that the pixel is in the subject area because its color is set, the process moves to step Sb6.
  • In step Sb6, the shape calculation unit 33 determines whether or not a depth has yet to be assigned to the pixel i; if a depth has already been assigned, the process moves to step Sb11a, and otherwise the shape calculation unit 33 acquires the states Sn of the eight pixels neighboring pixel i (Sb7).
  • The shape calculation unit 33 then determines whether any of the eight neighboring states Sn is outside the subject area (Sb8). If at least one of the eight pixels is outside the subject area, the process moves to step Sb10, where the value of the depth D is assigned as the depth value of the pixel i (Sb10), and then to step Sb11a to handle the next pixel. If it is determined in step Sb8 that none of the states Sn is outside the subject area, the process moves to step Sb9, where the shape calculation unit 33 determines whether any of the states Sn is "depth assigned" (Sb9). If it is determined that none of them has a depth assigned, the process moves to step Sb11a and the next pixel is handled; if it is determined in step Sb9 that a depth has been assigned, the process moves to step Sb10, where the shape calculation unit 33 assigns the value of the depth D as the depth value of the pixel i (Sb10), and then moves to step Sb11a.
  • In step Sb11a, the shape calculation unit 33 adds 1 to the value of i, and while i is less than or equal to Imax (step Sb11b) it returns to step Sb4, performing steps Sb4 to Sb10 for all pixels.
  • In step Sb12, the shape calculation unit 33 determines whether or not there remain pixels in the subject area of the subject image to which no depth has been assigned (Sb12). If there are none, the shape calculation unit 33 ends the process; if there are, the process moves to step Sb13, where the preset increment ΔD is added to the variable D (Sb13), and then returns to step Sb3 to repeat the above processing. In this way, the shape calculation unit 33 can calculate the depth position of every pixel in the subject area according to its distance from the outline of the subject area.
  • The increment ΔD added to the variable D in step Sb13 described above may be a constant, or may be a value that varies with the number of additions.
  • FIG. 7 is a diagram for explaining the process of calculating the depth of the subject area A1 by the method of FIG. 5.
  • When the subject area A1 shown in FIG. 7(a) is given, the shape calculation unit 33 first assigns the depth D to the outermost area A2 shown in FIG. 7(b). Next, the shape calculation unit 33 assigns the depth D + ΔD to the second outermost area A3, and then assigns the depth D + 2ΔD to the third outermost area A4 shown in the figure. At this point every pixel in the subject area has been assigned a depth, and the process ends.
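  • Expressed in code, this layer-by-layer assignment of FIG. 5 and FIG. 7 can be sketched as follows; this is a minimal Python/NumPy sketch in which a vectorized per-layer pass replaces the per-pixel loop of steps Sb4 to Sb12 but assigns the same depths, and the function name is illustrative.

```python
import numpy as np

def assign_depth(subject_mask: np.ndarray, d0: float, dd: float) -> np.ndarray:
    """subject_mask: True inside the subject area. Returns a per-pixel depth
    map (NaN outside the subject) that grows with distance from the outline."""
    depth = np.full(subject_mask.shape, np.nan)
    remaining = subject_mask.copy()   # pixels in state "depth not assigned"
    d = d0                            # initial value D from FIG. 8 (Sb1, Sb2)
    while remaining.any():            # one pass = steps Sb4 to Sb12
        # Pad with "outside" so image-border pixels count as on the outline.
        p = np.pad(remaining, 1, constant_values=False)
        on_outline = np.zeros_like(remaining)
        for dy in (-1, 0, 1):         # examine the eight neighbours (Sb7-Sb9)
            for dx in (-1, 0, 1):
                if dy == 0 and dx == 0:
                    continue
                neighbor_out = ~p[1 + dy:p.shape[0] - 1 + dy,
                                  1 + dx:p.shape[1] - 1 + dx]
                on_outline |= neighbor_out
        layer = remaining & on_outline
        depth[layer] = d              # assign depth D to this layer (Sb10)
        remaining &= ~layer           # assignments take effect next pass
        d += dd                       # add the increment (Sb13) and repeat
    return depth
```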
  • FIG. 8 is a diagram for explaining the method of calculating the initial value D in step Sb1 of FIG. 5.
  • The coordinate XL is the horizontal-axis coordinate of the center of gravity of the subject area M1 (corresponding to the left subject image) extracted from the left image G1 by the shape calculation unit 33 as described with FIG. 4, with the left end of the left image G1 as the origin.
  • The position of the center of gravity is obtained by averaging the coordinates of all the pixels in the subject area M1.
  • The coordinate XR is the horizontal-axis coordinate of the center of gravity of the subject area M2 (corresponding to the right subject image) extracted from the right image G2 by the shape calculation unit 33 as described with FIG. 4, with the left end of the right image G2 as the origin.
  • In the display space of the stereoscopic video displayed by the stereoscopic video display device 400, the user's viewpoint is taken as the origin and the direction perpendicular to the image as the Z axis. From the parallax between the coordinates XL and XR (equation (1)), the shape calculation unit 33 calculates the Z coordinate in the stereoscopic image display space, that is, the initial value D of the assigned depth, using equation (2).
  • The value of the Z coordinate obtained in this way is very small compared to the values of the X and Y coordinates, because the Z coordinate given by equation (2) is on a different scale from the X and Y coordinates; this is adjusted by multiplying the Z coordinate by a predetermined constant C. The predetermined constant C may also be adjusted so as to emphasize the position in the Z-axis direction.
  • The position in the 3D image display space is calculated as follows: the average (Xm, Ym) of the center of gravity of the subject area M1 and the center of gravity of the subject area M2 gives the position in the horizontal and vertical directions, and the adjusted depth gives the Z coordinate. For example, the position of the pixel I in the stereoscopic image display space is (63, 39, 0.03).
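  • A minimal Python/NumPy sketch of this placement computation follows. Since equations (1) and (2) are not reproduced above, scaling the centroid parallax XL - XR by the constant C is an assumed form used only for illustration, and the function name is illustrative.

```python
import numpy as np

def place_subject(left_mask: np.ndarray, right_mask: np.ndarray, c: float):
    """Return ((Xm, Ym), D): the averaged centroid of subject areas M1/M2
    used for the X/Y placement, and an initial depth D from their parallax."""
    ys_l, xs_l = np.nonzero(left_mask)         # pixels of subject area M1
    ys_r, xs_r = np.nonzero(right_mask)        # pixels of subject area M2
    xl, yl = xs_l.mean(), ys_l.mean()          # centroid of M1 (left-edge origin)
    xr, yr = xs_r.mean(), ys_r.mean()          # centroid of M2 (left-edge origin)
    xm, ym = (xl + xr) / 2.0, (yl + yr) / 2.0  # average centroid (Xm, Ym)
    d = c * (xl - xr)                          # assumed form: parallax scaled by C
    return (xm, ym), d
```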
  • As described above, the stereoscopic video synthesis device 300 calculates and outputs shape data that matches the movement of a live-action stereoscopic video. By displaying the real-time stereoscopic video on the stereoscopic video display device 400 while the haptic/tactile presentation device 500 receives the shape data, anyone can intuitively obtain the tactile sensation of the displayed subject.
  • Further, since the system using the 3D image synthesizing device 300 of the present invention has a simple configuration, it can be used easily regardless of the installation conditions, and can therefore be used particularly effectively in fields such as education and manual presentation.
  • Although in the present embodiment pixels are compared between the left and right images and the left and right background images prepared in advance, a chroma process that extracts pixels different from a predetermined color set in advance may be used instead.
  • In that case, there is no need to shoot the background image or to keep the positional relationship between the left image capturing device 100, the right image capturing device 200, and the background constant, but the background needs to be of the predetermined color when shooting.
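  • A minimal Python/NumPy sketch of such a chroma process follows; the key color and tolerance are illustrative choices, not values given in this text.

```python
import numpy as np

def chroma_mask(image: np.ndarray, key_color=(0, 0, 255), tol: int = 30) -> np.ndarray:
    """True where a pixel differs from the predetermined background color
    by more than the tolerance in any of its R, G, B components."""
    diff = image.astype(np.int32) - np.asarray(key_color, dtype=np.int32)
    return np.any(np.abs(diff) > tol, axis=-1)
```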
  • In the present embodiment, the depth is set according to the distance from the outline of the subject area as the method of calculating the shape data of the subject; however, the shape data may instead be calculated by stereo measurement, which calculates the depth based on the parallax between the left subject image and the right subject image (see JP-A-8-254416, JP-A-11-94527, JP-A-2001-241928, etc.). In this case as well, the above-described chroma processing may be used to extract the subject. Although the amount of calculation for the shape data is then larger than when setting the depth according to the distance from the outline of the subject area, shape data closer to the actual shape can be calculated.
  • Further, in the present embodiment, the shape calculation unit 33 generates a left subject image and a right subject image by extracting the subject from the left image and the right image, and calculates the shape data based on them; however, the stereo measurement process may instead be performed based on the parallax between the left and right images without extracting the subject, and shape data may be calculated for the whole image. In this case as well, shape data close to the actual shape can be calculated.
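  • A minimal block-matching sketch of such stereo measurement follows (Python/NumPy); the block size, search range, and sum-of-absolute-differences cost are illustrative choices, and the cited publications describe the actual methods.

```python
import numpy as np

def disparity_map(left: np.ndarray, right: np.ndarray,
                  block: int = 8, max_disp: int = 32) -> np.ndarray:
    """Per-block horizontal disparity between rectified grayscale images;
    depth is then inversely proportional to the disparity."""
    h, w = left.shape
    disp = np.zeros((h // block, w // block))
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            ref = left[y:y + block, x:x + block].astype(np.float64)
            best_cost, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # candidate right-image shifts
                cand = right[y:y + block, x - d:x - d + block].astype(np.float64)
                cost = np.abs(ref - cand).sum()     # sum of absolute differences
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disp[by, bx] = best_d
    return disp
```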
  • Here, the input device refers to a device such as a keyboard or a mouse, and the display device refers to a CRT (Cathode Ray Tube), a liquid crystal display device, or the like.
  • The left video data input unit 31, the right video data input unit 32, the shape calculation unit 33, the shape output unit 34, the stereoscopic video synthesis unit 35, and the stereoscopic video output unit 36 in FIG. 3 may be realized by recording a program for implementing their functions on a computer-readable recording medium, reading the program recorded on the recording medium into a computer system, and executing it.
  • the “computer system” here includes the OS and hardware such as peripheral devices.
  • the "computer system” includes a home page providing environment (or a display environment) if a WWW system is used.
  • The “computer-readable recording medium” refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or to a storage device such as a hard disk incorporated in a computer system.
  • Further, the “computer-readable recording medium” includes a medium that dynamically holds the program for a short time, such as a communication line when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and a medium that holds the program for a certain period of time, such as the volatile memory inside the computer system serving as the server or the client in that case.
  • The above program may realize only a part of the functions described above, or may realize the functions described above in combination with a program already recorded in the computer system.
  • As described above, the stereoscopic video synthesis device of the present invention has a simple configuration and can be used regardless of installation conditions, and is therefore suitable for, but not limited to, use in fields such as education and manual presentation.

Landscapes

  • Engineering & Computer Science (AREA)
  • Signal Processing (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Image Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A stereoscopic video synthesis device for synthesizing a left image viewed from a left-eye viewpoint and a right image viewed from a right-eye viewpoint to form a stereoscopic video. The stereoscopic video synthesis device comprises a shape calculation unit for calculating shape data from the left and right images, and a shape output unit for outputting the shape data calculated by the shape calculation unit to a kinesthetic/tactile presentation unit. Typically, the shape calculation unit extracts an image of a specific subject from each of the left and right images and assigns coordinates in a direction perpendicular to the image according to the distance from the outline of the subject image, in order to generate the shape data of the subject.
PCT/JP2007/056012 2006-09-08 2007-03-23 Stereoscopic video synthesis device, shape data generation method, and program therefor WO2008029529A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006244198A JP2008067169A (ja) 2006-09-08 2006-09-08 Stereoscopic video synthesis device, shape data generation method, and program therefor
JP2006-244198 2006-09-08

Publications (1)

Publication Number Publication Date
WO2008029529A1 true WO2008029529A1 (fr) 2008-03-13

Family

ID=39156974

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/056012 WO2008029529A1 (fr) Stereoscopic video synthesis device, shape data generation method, and program therefor

Country Status (2)

Country Link
JP (1) JP2008067169A (fr)
WO (1) WO2008029529A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2012022639A (ja) 2010-07-16 2012-02-02 Ntt Docomo Inc Display device, video display system, and video display method

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11150741A (ja) * 1997-11-18 1999-06-02 Asahi Optical Co Ltd ステレオ写真撮影による3次元画像表示方法および装置

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11150741A (ja) * 1997-11-18 1999-06-02 Asahi Optical Co Ltd ステレオ写真撮影による3次元画像表示方法および装置

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KOBAYASHI M. ET AL.: "Stereo Chojo Hyoji ni yoru Real Scale Video System", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 40, no. 11, 1999, pages 3834 - 3846 *
OZAWA S. ET AL.: "Jissha 3D Eizo Satsuei Hyoji System", THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN DAI 24 KAI KENKYUKAI KOEN YOKO, 17 March 2006 (2006-03-17), pages 109 - 112 *
TANAKA S. ET AL.: "Haptic Vision ni Motozuku Nodoteki Buttai Juryo Suitei", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 44, no. SIG17, 2003, pages 51 - 60 *

Also Published As

Publication number Publication date
JP2008067169A (ja) 2008-03-21

Similar Documents

Publication Publication Date Title
KR102495447B1 (ko) Providing a remote immersive experience using a mirror metaphor
EP0969418A2 Image processing apparatus for displaying three-dimensional images
JP2019125929A (ja) Image processing apparatus, image processing method, and program
US20060126926A1 Horizontal perspective representation
JP2008140271A (ja) Interactive device and method thereof
JPWO2017141511A1 (ja) Information processing apparatus, information processing system, information processing method, and program
JP5809607B2 (ja) Image processing apparatus, image processing method, and image processing program
KR20140121529A (ko) Method and apparatus for generating a light field image
JP2017033294A (ja) Three-dimensional drawing system and three-dimensional drawing program
JP2015231114A (ja) Video display device
JP2022058753A (ja) Information processing apparatus, information processing method, and program
JP2003067784A (ja) Information processing apparatus
KR101212223B1 (ko) Imaging device and method of generating an image including depth information
KR101632514B1 (ko) Method and apparatus for upsampling a depth image
JP6405539B2 (ja) Label information processing apparatus for multi-view images and label information processing method
JP2009212582A (ja) Feedback system for a virtual studio
CN109814704B (zh) Video data processing method and device
WO2008029529A1 (fr) Stereoscopic video synthesis device, shape data generation method, and program therefor
JP5326816B2 (ja) Remote conference system, information processing apparatus, and program
JP2021131490A (ja) Information processing apparatus, information processing method, and program
CN111344744A (zh) Method for presenting a three-dimensional object, and associated computer program product, digital storage medium, and computer system
JP4777193B2 (ja) Stereoscopic video synthesis device, shape data generation method, and program therefor
JP2005011275A (ja) Stereoscopic image display system and stereoscopic image display program
JP7072706B1 (ja) Display control device, display control method, and display control program
JP5520772B2 (ja) Stereoscopic image display system and display method

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07739453

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07739453

Country of ref document: EP

Kind code of ref document: A1