WO2008029531A1 - Stereoscopic video synthesis device, shape data generation method, and program therefor

Stereoscopic video synthesis device, shape data generation method, and program therefor

Info

Publication number
WO2008029531A1
WO2008029531A1 (application PCT/JP2007/056038; JP2007056038W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
stereoscopic video
calculation unit
marker
unit
Prior art date
Application number
PCT/JP2007/056038
Other languages
English (en)
Japanese (ja)
Inventor
Shiro Ozawa
Takao Abe
Noriyuki Naruto
Itaru Kamiya
Original Assignee
Ntt Comware Corporation
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ntt Comware Corporation filed Critical Ntt Comware Corporation
Publication of WO2008029531A1 publication Critical patent/WO2008029531A1/fr


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/275 Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • The present invention relates to a stereoscopic video synthesis device, a shape data generation method, and a program therefor, and in particular to a stereoscopic video synthesis device that, in addition to synthesizing a stereoscopic video, outputs shape data to a haptic presentation device, together with the corresponding shape data generation method and program.
  • A conventional stereoscopic image display device prepares images from two viewpoints corresponding to the left and right eyes and displays them on a three-dimensional display using, for example, a parallax barrier method (see Patent Document 1 and Patent Document 2), polarized glasses, or a shutter method, so that the user can perceive the scene in three dimensions.
  • There are also haptic presentation devices, such as a force feedback device with a pen-type operation unit, which lets the user experience tactile sensation by operating the pen, and a glove-type device worn on the arm, which conveys tactile sensation to the hand.
  • However, a conventional 3D image display device only presents 3D images; even if an object appears to stand out from the screen, it cannot be touched. Separately, a haptic presentation device could present force and tactile sensation while CG (Computer Graphics) generated from shape data such as CAD (Computer Aided Design) data was displayed, but this applies only to CG, for which shape data is required to generate the image, and cannot be applied to live-action video shot with a video camera or the like.
  • Patent Document 1: JP-A-8-248355
  • Patent Document 2: Japanese Translation of PCT Publication No. 2003-521181
  • The present invention has been made in view of the above circumstances, and its purpose is to provide a stereoscopic video synthesis device capable of outputting, to a haptic presentation device, shape data that matches the stereoscopic video displayed on the stereoscopic video display device, even when the input is live-action video shot with a video camera or the like.
  • The present invention has been made to solve the above problems. The stereoscopic video synthesis device according to the present invention synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint, and comprises: an image position calculation unit that calculates the position of a specific subject in each of the left image and the right image; a stereoscopic position calculation unit that calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device itself, based on the positions in the left image and the right image calculated by the image position calculation unit; a shape placement unit that places predetermined shape data at the position in the display space calculated by the stereoscopic position calculation unit; and a shape output unit that outputs the placed shape data to the haptic presentation device.
  • Preferably, in the above stereoscopic video synthesis device, the stereoscopic position calculation unit calculates the difference in the horizontal-axis direction between the position of the subject in the left image and its position in the right image, as calculated by the image position calculation unit, and takes the reciprocal of that difference to obtain the position in the direction perpendicular to the image plane.
  • Preferably, in any of the above stereoscopic video synthesis devices, the image position calculation unit extracts, from each of the left image and the right image, the pixels whose color lies within a predetermined range, collects mutually adjacent extracted pixels into pixel groups, selects in each image the pixel group with the largest number of pixels, and calculates the position of the subject in each of the left image and the right image as the center position of the selected pixel group.
  • Preferably, any of the above stereoscopic video synthesis devices further comprises an image placement unit that, before the device synthesizes the stereoscopic video, inserts predetermined image data at the positions in the left image and the right image calculated by the image position calculation unit.
  • The shape data generation method of the present invention generates shape data in a stereoscopic video synthesis device that synthesizes a stereoscopic video from an input left image viewed from the left-eye viewpoint and an input right image viewed from the right-eye viewpoint. The method comprises: a first step of calculating the position of a specific subject in each of the left image and the right image; a second step of calculating the position of the subject in the display space of the stereoscopic video synthesized by the device, based on the positions calculated in the first step; a third step of placing predetermined shape data at the position in the display space calculated in the second step; and a fourth step of outputting the shape data placed in the third step to the haptic presentation device.
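As a rough illustration, the four steps above can be sketched as a minimal pipeline. This is a sketch under assumptions: all function and class names are hypothetical, the subject detector is a stub, and the depth formula anticipates the reciprocal-of-disparity calculation described later in the embodiment.

```python
# Minimal sketch of the four-step shape data generation method.
# All names are illustrative, not taken from the patent.

def detect_subject(image):
    """Step 1 helper: return the (x, y) position of the specific subject."""
    # In the embodiment this is color-threshold marker detection (see Fig. 4).
    raise NotImplementedError

def shape_data_generation(left_image, right_image, shape_data, haptic_device,
                          detect=detect_subject, C=1.0):
    # Step 1: position of the subject in each of the left and right images.
    xl, yl = detect(left_image)
    xr, yr = detect(right_image)
    # Step 2: position of the subject in the stereoscopic display space.
    x = (xl + xr) / 2.0
    y = (yl + yr) / 2.0
    z = C / (xl - xr)          # depth from the reciprocal of the disparity
    # Step 3: place the prepared shape data at that position (translation).
    placed = shape_data.translated_to((x, y, z))
    # Step 4: output the placed shape data to the haptic presentation device.
    haptic_device.output(placed)
    return (x, y, z)
```

The stereoscopic position arithmetic in step 2 is expanded with a worked example further below, where the depth scale constant C is discussed.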
  • The program of the present invention causes a computer to function as a stereoscopic video synthesis device that synthesizes a stereoscopic video from a left image viewed from the left-eye viewpoint and a right image viewed from the right-eye viewpoint. The program causes the computer to function as: an image position calculation unit that calculates the position of a specific subject in each of the left image and the right image; a stereoscopic position calculation unit that calculates the position of the specific subject in the display space of the stereoscopic video synthesized by the device, based on the positions in the left image and the right image calculated by the image position calculation unit; a shape placement unit that places predetermined shape data at the position in the display space calculated by the stereoscopic position calculation unit; and a shape output unit that outputs the shape data placed by the shape placement unit to the haptic presentation device.
  • With this configuration, a shape to which tactile sensation is to be given is prepared in advance as predetermined shape data for the position of a specific subject; by inputting to the device live-action video of the specific subject shot from two left and right viewpoints with cameras or the like, shape data that matches the stereoscopic video displayed on the stereoscopic video display device can be generated.
  • FIG. 1 is a block diagram showing an outline of one embodiment of the present invention.
  • FIG. 2 is a schematic block diagram showing a configuration of a system using the stereoscopic video image synthesizing apparatus 300 in the same embodiment.
  • FIG. 3 is a schematic block diagram showing the configuration of the stereoscopic video image synthesizing apparatus 300 according to the embodiment.
  • FIG. 4 is a flowchart explaining the operation of the marker image position calculation unit 33 in the same embodiment.
  • FIG. 5 is a diagram for explaining the calculation of the Z coordinate of the marker by the marker three-dimensional position calculation unit 34 in the same embodiment.
  • The stereoscopic video synthesis device 300 of the present embodiment synthesizes a stereoscopic video from the left-eye viewpoint video and the right-eye viewpoint video captured by the left video shooting device 100 and the right video shooting device 200, and displays it on the stereoscopic video display device 400. It also extracts, from the captured images, the three-dimensional position in the stereoscopic video display space of a marker having a predetermined color, places shape data prepared in advance at the extracted three-dimensional position, and outputs the result to the haptic presentation device 500. The user thus sees the stereoscopic video displayed on the stereoscopic video display device 400 and, for the object displayed at the same time, can obtain the force and tactile sensation presented by the haptic presentation device 500.
  • FIG. 2 is a schematic block diagram showing the configuration of the system using the stereoscopic video synthesis device 300 in the present embodiment.
  • the left image capturing device 100 is a video camera that captures an image viewed from the viewpoint of the left eye.
  • the right image capturing apparatus 200 is a video camera that is installed in parallel to the right side of the left image capturing apparatus 100 and captures an image viewed from the viewpoint of the right eye.
  • The stereoscopic video synthesis device 300 receives the left-eye viewpoint video and the right-eye viewpoint video from the left video shooting device 100 and the right video shooting device 200, synthesizes a stereoscopic video and outputs it to the stereoscopic video display device 400, and also places shape data at the three-dimensional position of a specific subject and outputs it to the haptic presentation device 500.
  • FIG. 3 is a schematic block diagram showing a configuration of the stereoscopic video image synthesizing apparatus 300 according to the present embodiment.
  • Reference numeral 31 denotes a left video data input unit that receives the video input from the left video shooting device 100 and outputs a left image extracted from the video frame by frame.
  • Reference numeral 32 denotes a right video data input unit that receives a video input from the right video shooting device 200 and outputs a right image extracted from the video frame by frame.
  • Reference numeral 33 denotes a marker image position calculation unit that calculates the marker position in each of the left image and the right image received from the left video data input unit 31 and the right video data input unit 32. Details of the marker image position calculation unit 33 will be described later.
  • [0017] Reference numeral 34 denotes a marker stereoscopic position calculation unit that calculates the position of the marker in the display space of the stereoscopic video synthesized by the stereoscopic video synthesis unit 40, based on the marker positions in the left image and the right image calculated by the marker image position calculation unit 33. Details of the marker stereoscopic position calculation unit 34 will be described later.
  • Reference numeral 35 denotes a shape placement unit that applies a translation to the shape data 36 stored in the storage unit of the stereoscopic video synthesis device, so that the three-dimensional shape it represents moves to the position calculated by the marker stereoscopic position calculation unit 34. The data format of the shape data depends on the haptic presentation device 500, but is, for example, polygon data representing a three-dimensional shape at a specific position in three-dimensional space.
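As an illustration, if the shape data is taken to be a list of polygon vertices (an assumption; the actual format depends on the haptic presentation device 500), the translation applied by the shape placement unit can be sketched as:

```python
# Sketch of the shape placement step: translate polygon shape data so that
# the three-dimensional shape it represents moves to the calculated position.
# The vertex-list representation is an assumption, not the patent's format.

def place_shape(vertices, marker_pos):
    """Translate each (x, y, z) vertex by the marker position."""
    mx, my, mz = marker_pos
    return [(x + mx, y + my, z + mz) for (x, y, z) in vertices]
```

For example, a shape modeled around the origin is moved so that it sits at the marker's position in the display space before being output to the haptic device.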
  • Reference numeral 37 denotes a shape output unit that outputs the shape data generated by the shape placement unit 35 to the haptic presentation device 500.
  • [0018] Reference numeral 38 denotes an image placement unit that places image data 39, prepared in advance and stored in the storage unit of the stereoscopic video synthesis device, at the marker position in each of the left image and the right image calculated by the marker image position calculation unit 33. In this way, not only a simple subject such as a marker but also an object having a complicated shape or color can be made a target for presenting tactile sensation.
  • the stereoscopic video composition unit 40 synthesizes the left image and the right image in which the image data 39 is arranged by the image arrangement unit 38, and generates stereoscopic video data in a format suitable for the stereoscopic video display device 400.
  • a stereoscopic video output unit 41 outputs the stereoscopic video data generated by the stereoscopic video synthesis unit 40 to the stereoscopic video display device 400.
  • FIG. 4 is a flowchart for explaining a method of calculating the marker position in the left image and the right image in the marker image position calculation unit 33.
  • the marker image position calculation unit 33 calculates the position of the marker by performing the processing of the flowchart shown in FIG. 4 for each of the left image and the right image.
  • In the present embodiment, the color value of each pixel is represented by red, green, and blue component values.
  • The upper limit values (Rmax, Gmax, Bmax) and lower limit values (Rmin, Gmin, Bmin) of the red, green, and blue components of the marker color are set by user operation, and the marker image position calculation unit 33 stores these values in the storage unit (S1). This step S1 is performed in advance, before the left video shooting device 100 and the right video shooting device 200 capture video and the stereoscopic video synthesis device 300 performs the stereoscopic video synthesis.
  • If it is determined in step S4 that the color of the i-th pixel is within the range of the upper and lower limit values, the process proceeds to step S5, in which the marker image position calculation unit 33 stores the position in the image of the i-th pixel, and then proceeds to step S6. If it is determined in step S4 that the color is not within the range, the process proceeds directly to step S6.
  • In step S6, the marker image position calculation unit 33 increments the value of i by 1 and, if i is smaller than its maximum value imax, returns to step S3 and repeats the above processing. In this way, the marker image position calculation unit 33 repeats steps S3 to S6 until i reaches imax, that is, for all pixels of the image.
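The scan of steps S1 through S6 can be sketched as follows, assuming the image is a list of rows of (R, G, B) tuples (a hypothetical representation; the loop setup and pixel fetch of steps S2 and S3 are folded into the Python iteration):

```python
# Sketch of steps S1-S6: scan every pixel and store the positions of those
# whose red, green, and blue components all fall within the stored limits.
# `image` is assumed to be a list of rows of (R, G, B) tuples.

def marker_pixels(image, lower, upper):
    (rmin, gmin, bmin) = lower        # lower limits, set by the user (S1)
    (rmax, gmax, bmax) = upper        # upper limits, set by the user (S1)
    stored = []
    for y, row in enumerate(image):   # the index i runs over all pixels (S2-S6)
        for x, (r, g, b) in enumerate(row):
            if rmin <= r <= rmax and gmin <= g <= gmax and bmin <= b <= bmax:
                stored.append((x, y))  # store the pixel's position (S5)
    return stored
```

The returned positions are the input to the grouping of step S7 described next.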
  • When the loop of steps S3 to S6 has finished, the process proceeds to step S7, in which the marker image position calculation unit 33 collects into groups those pixels, among the pixels whose positions were stored in step S5, whose positions in the image are vertically or horizontally adjacent (S7). The marker image position calculation unit 33 then extracts, from the groups generated in step S7, the group with the largest area, that is, the largest number of pixels, and determines it to be the marker (S8), and calculates and outputs the centroid position of the extracted marker by averaging the coordinates of its constituent pixels (S9).
  • For example, suppose the X and Y coordinates of the pixels whose positions were stored in step S5 are (10, 10), (10, 11), (11, 11), (25, 60), (24, 61), (25, 61), (26, 61), and (25, 62). In step S7, the marker image position calculation unit 33 forms group 1, consisting of the three pixels (10, 10), (10, 11), and (11, 11), and group 2, consisting of the five pixels (25, 60), (24, 61), (25, 61), (26, 61), and (25, 62). In step S8, the marker image position calculation unit 33 compares the pixel counts of group 1 (three pixels) and group 2 (five pixels), determines that group 2, which has more pixels, is the marker, and extracts it.
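The grouping and selection of steps S7 through S9 can be sketched as a flood fill over the stored positions (a sketch only; the patent does not prescribe a particular algorithm for forming the groups):

```python
# Sketch of steps S7-S9: group stored pixels that are vertically or
# horizontally adjacent, pick the group with the most pixels as the marker,
# and return its centroid (the average of the pixel coordinates).

def marker_centroid(pixels):
    pixels = set(pixels)
    groups = []
    while pixels:                     # flood-fill into adjacency groups (S7)
        stack = [pixels.pop()]
        group = []
        while stack:
            x, y = stack.pop()
            group.append((x, y))
            for n in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
                if n in pixels:
                    pixels.remove(n)
                    stack.append(n)
        groups.append(group)
    marker = max(groups, key=len)     # the largest group is the marker (S8)
    xs = [p[0] for p in marker]
    ys = [p[1] for p in marker]
    return (sum(xs) / len(marker), sum(ys) / len(marker))  # centroid (S9)
```

With the coordinates from the worked example, group 2 is selected and the centroid comes out to (25.0, 61.0), the average of its five pixel coordinates.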
  • FIG. 5 is a diagram for explaining a method in which the marker three-dimensional position calculation unit 34 calculates the position of the marker in the Z-axis direction, that is, the direction perpendicular to the image (depth direction) in the stereoscopic video display space.
  • The coordinate XL is the horizontal-axis coordinate of the centroid position of the marker image M1 in the left image G1 calculated by the marker image position calculation unit 33, taking the left end of the left image G1 as the origin. The coordinate XR is the horizontal-axis coordinate of the centroid position of the marker image M2 in the right image G2 calculated by the marker image position calculation unit 33, taking the left end of the right image G2 as the origin.
  • The direction perpendicular to the image (the Z axis) is taken with respect to the viewpoint of the user viewing the stereoscopic video displayed on the stereoscopic video display device 400, and the marker stereoscopic position calculation unit 34 calculates the depth coordinate Z using equation (4), that is, as the reciprocal of the horizontal disparity: Z = 1 / (XL - XR).
  • The marker stereoscopic position calculation unit 34 calculates the average of the centroid positions of the marker images in the left image and the right image calculated by the marker image position calculation unit 33, thereby obtaining the coordinate of the marker in the stereoscopic video display space in the X-axis direction, that is, the horizontal direction of the image, and in the Y-axis direction, that is, the vertical direction of the image. That is, the marker stereoscopic position calculation unit 34 calculates the X coordinate in the stereoscopic video display space by equation (5), the Y coordinate by equation (6), and the Z coordinate by equation (7).
  • As a result, (X, Y, Z) = (65, 41, 0.033).
  • Here, the value of the Z coordinate is very small compared to the values of the X and Y coordinates. This is because the Z coordinate obtained by equation (4) has a different scale from the X and Y coordinates, and it is adjusted by multiplying the Z coordinate by a predetermined constant C. The size of the predetermined constant C may also be adjusted so as to emphasize the position in the Z-axis direction.
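The whole stereoscopic position calculation can be sketched as below, using assumed centroids XL = 80 and XR = 50 (hypothetical values chosen so that the disparity is 30, which reproduces the (65, 41, 0.033) example when C = 1):

```python
# Sketch of the marker stereoscopic position calculation: X and Y are the
# averages of the left and right centroid coordinates, and Z is the
# reciprocal of the horizontal disparity, scaled by a constant C to bring
# it to the same scale as X and Y. The centroid values in the test are
# assumed inputs, not taken from the patent.

def stereo_position(left_centroid, right_centroid, C=1.0):
    xl, yl = left_centroid
    xr, yr = right_centroid
    x = (xl + xr) / 2.0        # equation (5): average of horizontal positions
    y = (yl + yr) / 2.0        # equation (6): average of vertical positions
    z = C / (xl - xr)          # equations (4)/(7): scaled reciprocal disparity
    return (x, y, z)
```

With C = 1 the depth comes out to 1/30, which is about 0.033 as in the example; choosing a larger C stretches the result along the Z axis.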
  • As described above, the stereoscopic video synthesis device 300 of the present embodiment outputs shape data synchronized with live-action stereoscopic video, so that when the live-action video is displayed on the stereoscopic video display device 400, anyone can intuitively obtain the requested tactile sensation through the haptic presentation device 500 that receives the shape data. By adding tactile sensation to a conventional live-action stereoscopic image, which could previously only be viewed, a three-dimensional object can be grasped more reliably, expanding the possibilities of new media and interfaces.
  • Further, since the stereoscopic video synthesis device 300 uses a marker as the specific subject, it can present a stereoscopic video and a shape, that is, visual and tactile sensation, even for an object that is not actually present on the spot, and can therefore be used particularly effectively in fields such as remote work and amusement.
  • The storage unit of the stereoscopic video synthesis device 300 is, for example, a hard disk device, a magneto-optical disk device, a nonvolatile memory such as a flash memory, or a recording medium such as a CD-ROM (Compact Disc-Read Only Memory).
  • An input device, a display device, and the like (neither shown) are connected to the stereoscopic video synthesis device 300 as peripheral devices.
  • the input device refers to an input device such as a keyboard and a mouse.
  • Display devices include CRT (Cathode Ray Tube) and liquid crystal display devices.
  • In the present embodiment, the specific subject is a marker having a predetermined color, and the marker image position calculation unit 33 calculates its position in the left image and the right image by detecting that color; however, a subject having an arbitrary shape and color can also be used as the specific subject. For example, the specific subject may be extracted by chroma-key processing or background-difference processing, and its position then calculated.
  • In chroma-key processing, the background is set to a specific color when shooting the subject, and the video shot by the left video shooting device 100 and the right video shooting device 200 is input to the stereoscopic video synthesis device 300. The specific color used as the background during shooting is stored in advance, and the pixels whose color does not match the specific color are extracted as the specific subject in each of the left image and the right image.
  • In background-difference processing, background images for the left eye and the right eye, shot in advance by the left video shooting device 100 and the right video shooting device 200 without the subject, are stored in the marker image position calculation unit 33. The left image is then compared with the left-eye background image and the right image with the right-eye background image, and the pixels whose colors do not match are extracted as the specific subject.
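The background-difference extraction can be sketched as follows, again assuming images as lists of rows of (R, G, B) tuples; the tolerance parameter is an added practical assumption, since the text describes comparing colors for a match:

```python
# Sketch of the background-difference extraction: pixels whose color differs
# from the stored background image are extracted as the specific subject.
# The (R, G, B) row-list representation and the `tol` parameter are
# assumptions for illustration.

def subject_pixels(image, background, tol=0):
    extracted = []
    for y, (row, bg_row) in enumerate(zip(image, background)):
        for x, (pix, bg) in enumerate(zip(row, bg_row)):
            # A pixel is part of the subject if any color component differs
            # from the background by more than the tolerance.
            if any(abs(a - b) > tol for a, b in zip(pix, bg)):
                extracted.append((x, y))
    return extracted
```

The same routine is applied once with the left image and the left-eye background, and once with the right image and the right-eye background; a nonzero tolerance absorbs camera noise.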
  • Although the present embodiment has been described with a single marker, a plurality of markers of different colors may be photographed. In that case, the marker image position calculation unit 33 prepares upper and lower limit values corresponding to the color of each marker and calculates the position of each marker, the shape placement unit 35 places the shape data 36 for each marker in the display space, and the image placement unit 38 places the image data 39 for each marker on the left image and the right image.
  • The stereoscopic video synthesis device 300 may also be configured without the image placement unit 38 and the image data 39; in that case, the stereoscopic video synthesis unit 40 synthesizes the left image and the right image output from the left video data input unit 31 and the right video data input unit 32 to generate the stereoscopic video data.
  • The processing of the marker image position calculation unit 33, the marker stereoscopic position calculation unit 34, the shape placement unit 35, the shape output unit 37, the image placement unit 38, the stereoscopic video synthesis unit 40, and the stereoscopic video output unit 41 may be performed by recording a program for realizing their functions on a computer-readable recording medium, and causing a computer system to read and execute the program recorded on the recording medium.
  • the “computer system” mentioned here includes OS (Operating System) and hardware such as peripheral devices.
  • the "computer system” includes a homepage providing environment (some! / Is a display environment) in the case of using a WWW (World Wide Web) system.
  • The "computer-readable recording medium" refers to a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, and a storage device such as a hard disk incorporated in a computer system.
  • The "computer-readable recording medium" also includes a medium that dynamically holds the program for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line.
  • The program may realize a part of the functions described above, or may realize the functions described above in combination with a program already recorded in the computer system.
  • The stereoscopic video synthesis device of the present invention is suitable for use in remote work, amusement, and the like, but its use is not limited to these fields.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Processing Or Creating Images (AREA)
  • Length Measuring Devices By Optical Means (AREA)
  • Position Input By Displaying (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

In a stereoscopic video synthesis device that synthesizes a left image seen by the left eye and a right image seen by the right eye into a stereoscopic video, and that presents haptic and tactile sensation matching the stereoscopic video while presenting the stereoscopic image of live-action video: a marker image position calculation unit (33) calculates the position of a marker in each of the left and right images. A marker stereoscopic position calculation unit (34) calculates the position of the marker in the display space of the stereoscopic video synthesized by the device itself, based on the marker position in each of the left and right images. A shape placement unit (35) places predetermined shape data (36) at the marker position in the display space, and a shape output unit (37) outputs the shape data (36) placed by the shape placement unit (35) to a haptic and tactile sensation presentation unit.
PCT/JP2007/056038 2006-09-08 2007-03-23 Stereoscopic video synthesis device, shape data generation method, and program therefor WO2008029531A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2006244197A JP4777193B2 (ja) 2006-09-08 2006-09-08 Stereoscopic video synthesis device, shape data generation method, and program therefor
JP2006-244197 2006-09-08

Publications (1)

Publication Number Publication Date
WO2008029531A1 true WO2008029531A1 (fr) 2008-03-13

Family

ID=39156976

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2007/056038 WO2008029531A1 (fr) 2006-09-08 2007-03-23 Stereoscopic video synthesis device, shape data generation method, and program therefor

Country Status (2)

Country Link
JP (1) JP4777193B2 (fr)
WO (1) WO2008029531A1 (fr)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07218251A (ja) * 1994-02-04 1995-08-18 Matsushita Electric Ind Co Ltd ステレオ画像計測方法およびステレオ画像計測装置


Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
KOBAYASHI M. ET AL.: "Stereo Chojo Hyoji ni yoru Real Scale Video System (Sharing Impression of Size: Stereoscopic Approach for Real Scale Video)", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 40, no. 11, 1999, pages 3834 - 3846, XP003021685 *
OZAWA S. ET AL.: "Jissha 3D Eizo Satsuei Hyoji System (Live 3D Scenography System)", THE INSTITUTE OF IMAGE ELECTRONICS ENGINEERS OF JAPAN DAI 224 KAI KENKYUKAI KOEN YOKO, 17 March 2006 (2006-03-17), pages 109 - 112, XP003021686 *
TANAKA S. ET AL.: "Haptic Vision ni Motozuku Nodoteki Buttai Juryo Suitei (Estimating Mass Based on Haptic Vision)", TRANSACTIONS OF INFORMATION PROCESSING SOCIETY OF JAPAN, vol. 44, no. SIG17, 2003, pages 51 - 60, XP003021687 *

Also Published As

Publication number Publication date
JP4777193B2 (ja) 2011-09-21
JP2008065683A (ja) 2008-03-21

Similar Documents

Publication Publication Date Title
AU2008204084B2 (en) Method and apparatus for generating stereoscopic image from two-dimensional image by using mesh map
US20070291035A1 (en) Horizontal Perspective Representation
US9619105B1 (en) Systems and methods for gesture based interaction with viewpoint dependent user interfaces
US9549174B1 (en) Head tracked stereoscopic display system that uses light field type data
EP2395760A2 (fr) Programme de contrôle d'affichage stéréoscopique, procédé de contrôle d'affichage stéréoscopique, appareil de contrôle d'affichage stéréoscopique, et système de contrôle d'affichage stéréoscopique
TWI547901B (zh) 模擬立體圖像顯示方法及顯示設備
JP2006325165A (ja) テロップ発生装置、テロップ発生プログラム、及びテロップ発生方法
JP2020173529A (ja) 情報処理装置、情報処理方法、及びプログラム
JP2022058753A (ja) 情報処理装置、情報処理方法及びプログラム
KR20070010306A (ko) 촬영장치 및 깊이정보를 포함하는 영상의 생성방법
CN116610213A (zh) 虚拟现实中的交互显示方法、装置、电子设备、存储介质
JP2013257621A (ja) 画像表示システム、パズルゲームシステム、画像表示方法、パズルゲーム方法、画像表示装置、パズルゲーム装置、画像表示プログラム、および、パズルゲームプログラム
KR101850134B1 (ko) 3차원 동작 모델 생성 방법 및 장치
JP4777193B2 (ja) 立体映像合成装置、形状データ生成方法およびそのプログラム
JP2021131490A (ja) 情報処理装置、情報処理方法、プログラム
JP2008065684A (ja) 立体映像合成装置、形状データ生成方法およびそのプログラム
JP2005011275A (ja) 立体画像表示システム及び立体画像表示プログラム
Lubos et al. The interactive spatial surface-blended interaction on a stereoscopic multi-touch surface
WO2008029529A1 (fr) Stereoscopic video synthesis device, shape data generation method, and program therefor
JP2014222848A (ja) 画像処理装置、方法、及びプログラム
US11422670B2 (en) Generating a three-dimensional visualization of a split input device
KR20240093013A (ko) 특정 공간에 대한 동영상과 복수의 이미지들을 이용하여 3차원 공간 모델링 데이터를 생성하는 장치 및 방법과 이를 위한 프로그램
JP2000353252A (ja) 映像重畳方法、映像重畳装置及び映像重畳プログラムを記録した記録媒体
CN118444877A (zh) 一种数据处理方法、装置及电子设备
CN115767068A (zh) 一种信息处理方法、装置和电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 07739479

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 07739479

Country of ref document: EP

Kind code of ref document: A1