WO2012166593A2 - System and method for creating a navigable, panoramic, three-dimensional virtual reality environment having an ultra-wide field of view - Google Patents

System and method for creating a navigable, panoramic, three-dimensional virtual reality environment having an ultra-wide field of view

Info

Publication number
WO2012166593A2
WO2012166593A2 (PCT/US2012/039572)
Authority
WO
WIPO (PCT)
Prior art keywords
video image
image data
video
virtual reality
navigable
Prior art date
Application number
PCT/US2012/039572
Other languages
English (en)
Other versions
WO2012166593A3 (fr)
Inventor
Thomas Seidl
Ron IGRA
Original Assignee
Thomas Seidl
Igra Ron
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomas Seidl, Igra Ron
Publication of WO2012166593A2
Publication of WO2012166593A3
Priority to US14/090,132 (US9007430B2)
Priority to US14/685,234 (US9883174B2)
Priority to US15/849,885 (US20180309982A1)
Priority to US16/653,329 (US20200288113A1)
Priority to US17/159,026 (US11528468B2)
Priority to US17/985,225 (US20230328220A1)

Classifications

    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/017Head mounted
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4038Image mosaicing, e.g. composing plane images from plane sub-images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0101Head-up displays characterised by optical features
    • G02B2027/0138Head-up displays characterised by optical features comprising image capture systems, e.g. camera
    • GPHYSICS
    • G02OPTICS
    • G02BOPTICAL ELEMENTS, SYSTEMS OR APPARATUS
    • G02B27/00Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
    • G02B27/01Head-up displays
    • G02B27/0179Display position adjusting means not related to the information to be displayed
    • G02B2027/0187Display position adjusting means not related to the information to be displayed slaved to motion of at least a part of the body of the user, e.g. head, eye
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/698Control of cameras or camera modules for achieving an enlarged field of view, e.g. panoramic image capture

Definitions

  • the present invention relates generally to systems and methods for creating a navigable, panoramic three-dimensional environment of a real-world scene, and more particularly, but not exclusively, to capturing video images that extend beyond the field of view of a user and manipulating the images to create a navigable, panoramic three-dimensional virtual reality environment in a head mounted display that permits the user to control the direction they look in the recorded real world environment.
  • Existing three-dimensional image capture systems may capture images of limited field of view using two cameras mounted side by side and display these two images separately into each eye of a user by various means. Since the field of view of the originally captured images is limited, such systems do not readily permit the user to turn their head, for example 90° to the left or right, to see what is to the left or right of the originally captured images in the real-world scene.
  • capturing video data of wide field of view images poses a number of difficulties which must be overcome in order to present the images to the user in a manner that accurately reflects the real-world scene and the user's movement relative to the real-world scene.
  • the present invention relates to systems and methods for creating a navigable, panoramic three-dimensional virtual reality environment in which stereoscopic perception of three-dimensional depth to a user is achieved.
  • a "three-dimensional virtual reality environment" is defined to be one in which stereoscopic perception of three-dimensional depth can be perceived by a user, e.g., as in a 3-D TV.
  • the systems and methods may provide for the recording of a real-world scene of up to a full 360° by 360° field of view in three dimensions, manipulating the recorded images, and then displaying the images to a person using a head mounted display with a tracking device.
  • the present invention may provide a system for creating a navigable, panoramic three-dimensional virtual reality environment, comprising a wide field of view optical imaging device having at least two optical imaging elements configured to image at least two different viewpoints of a scene.
  • the optical imaging device may include a fisheye lens and/or a mirror, for example.
  • the optical imaging device may include at least one optical imaging detector configured to record first and second video image data of the different viewpoints of the scene, respectively.
  • the optical imaging device may be configured to alternately display the images of the at least two different viewpoints on the optical imaging detector to provide the first and second video image data.
  • a separate optical imaging detector may be provided for each respective optical imaging element.
  • the system may include a tracking device configured to track the movement of a user, and a position detector disposed in communication with the tracking device to receive tracking data from the tracking device and configured to determine a direction of view of the user.
  • An image renderer may be provided and disposed in communication with the optical imaging device to receive the first and second video image data.
  • the image renderer may also be disposed in communication with the position detector to receive the direction of view, and may be configured to compute respective regions of interest of a portion of the first and second video image data based on the user's direction of view.
  • a head mounted display may be disposed in communication with the image renderer to receive image data associated with the respective regions of interest and may be configured to display the image data associated with the respective regions of interest to the user, whereby a navigable, panoramic three-dimensional virtual reality environment is created.
  • the renderer may include a computer usable medium having a computer readable program code embodied therein.
  • the computer readable program code may be adapted to be executed to implement a method for rendering the first and second video image data to create a navigable, panoramic three-dimensional virtual reality environment.
  • the method may also include the steps of creating first and second wireframe spheres, and transforming each of the first and second video images by wrapping the first and second video data onto the first and second wireframe spheres respectively.
  • the present invention may provide a computer usable medium comprising a computer readable program code embodied therein.
  • the computer readable program code may be adapted to be executed to implement a method for rendering video image data to create a navigable, panoramic three-dimensional virtual reality environment.
  • the method may include the steps of receiving video image data of a scene having a wide field of view comprising first and second video image streams taken from different viewpoints.
  • the first video image stream may have a plurality of first image frames and the second video image stream may have a plurality of second image frames.
  • the method may synchronize the respective pairs of the first and second image frames to create a video output stream of the synchronized image pairs.
  • the synchronization may be performed by merging respective pairs of the first and second image frames to create a video output stream of the merged image pairs.
  • the step of synchronizing respective pairs of the first and second image frames may also include removing distortion from the first and second image frames.
  • the method may also include receiving position data indicating a direction of sight of a user, calculating first and second regions of interest of the respective first and second image frames based on the position data, and displaying the first and second regions of interest on a stereoscopic display, whereby a navigable three-dimensional virtual reality environment of the recorded real-world environment is created.
  • the method may also create first and second wireframe spheres and wrap a respective first and second section of the synchronized or merged image pairs onto the respective first and second wireframe spheres.
  • the method may receive position data indicating a direction of sight of a user and calculate a first and second region of interest of the respective first and second wireframe spheres based on the position data.
  • the first and second regions of interest may be displayed on a stereoscopic display.
  • the step of displaying the first and second regions of interest may include rotating the first and second wireframe spheres in response to the position data.
  • Figure 1 schematically illustrates an exemplary system for creating a navigable, panoramic three-dimensional virtual reality environment in accordance with the present invention
  • Figures 2A, 2B schematically illustrate an exemplary method for creating a navigable, panoramic three-dimensional virtual reality environment in accordance with the present invention.
  • Figures 3A-3D schematically illustrate exemplary configurations for optical imaging devices used in the present invention, with Figs. 3A, 3B showing side elevational and top views, respectively, of a mirror system, Fig. 3C showing a top view of a lens system having forward and rearward pairs of imaging devices, and Fig. 3D showing a top view of a hexagonally configured system having six imaging devices.
  • the present invention relates to systems and methods for creating a navigable, panoramic three-dimensional virtual reality environment having an ultra-wide field of view, Figs. 1, 2A, 2B.
  • the systems and methods provide a navigable, panoramic three-dimensional virtual reality environment by capturing video image data of a scene having a wide field of view.
  • the systems and methods of the present invention can create a navigable, panoramic three-dimensional virtual reality environment that can rapidly render and present to the user video information of new portions of the scene that come into the user's view as the user moves their head.
  • the user gets the feeling of being inside a virtual environment, being able to perceive depth and pan and zoom, due to the manipulation and display of the video image data by the system and method of the present invention.
  • an exemplary system in accordance with the present invention may include a wide field of view optical imaging device 100 which may include at least two optical elements 106, 108 each oriented to image different viewpoints of a scene. Respective optical imaging detectors 102, 104 may be associated with each optical element 106, 108. Alternatively, a single imaging detector may be provided and configured to alternately receive images from each of the optical elements 106, 108.
  • the optical elements 106, 108 may include any suitable optical imaging device capable of imaging a wide field of view, such as fisheye lenses, Fresnel lenses, mirror lenses, and catadioptric systems, for example.
  • the two optical elements 106, 108 may be provided in the form of fisheye lenses which can capture video image data over a field of 180° by 180°.
  • the optical elements may be provided in the form of parabolic reflectors 306, 308 and associated detectors 302, 304, Figs. 3A, 3B.
  • two pairs of optical imaging detectors with associated optical elements may be used to capture video image data having a wider field of view.
  • a pair of forward facing optical imaging detectors 312 and a pair of rearward facing optical imaging detectors 310 may be employed, Fig. 3C.
  • five (or four) optical imaging detectors and optical elements 314 may be spaced equidistantly around a circle, with an optional sixth optical imaging detector and optical element 314 pointing upward out of the plane of the circle, Fig. 3D.
  • the optical imaging detectors 102, 104 may record first and second video image data comprising left and right video image streams 2, 4 and may communicate directly with an image renderer 110 to transmit the respective left and right video image streams 2, 4 acquired by the optical imaging detectors 102, 104 to the image renderer 110.
  • Such a configuration, in which the optical imaging detectors 102, 104 communicate directly with the image renderer 110, may be used for real-time imaging.
  • the optical imaging device 100 may record and store the left and right video image streams 2, 4 for subsequent download to the image renderer 110.
  • the left and right video image streams 2, 4 may be subsequently loaded on to a hard-drive or other suitable storage device to be accessed by the image renderer 110 at a later date.
  • the image renderer 110 may communicate with a head mounted display 130 (iWear® VR920, Vuzix Corporation, Rochester, NY) worn by a user which may include a tracking device 140 configured to track the movement of the user.
  • a position detector 120 may be disposed in communication with the tracking device 140 to receive tracking data from the tracking device 140 to determine the direction of view of the user.
  • although the position detector 120 is shown as a separate element in Fig. 1, the position detector 120 may optionally be provided as an integral part of the tracking device 140.
  • the position detector 120 may be disposed in communication with the image renderer 110 to supply the direction of view data to the image renderer 110.
  • the image renderer 110 may control the rendering and display of selected portions of the left and right video image streams 2, 4 corresponding to the user's direction of view on respective left and right screens 134, 132 of the head mounted display 130 so the user perceives a navigable, panoramic three-dimensional virtual reality environment.
  • the image renderer 110 may take the left and right video image streams 2, 4, and perform a polar-to-rectangular conversion to merge respective temporally matched image frames of the left and right video image streams 2, 4 into a merged image pair 8. That is, for each image frame of the left video image stream 2, the temporally corresponding image frame of the right video image stream 4 is merged thereto to create a series of merged image pairs 8 for all video frames to provide an output stream of the merged image pairs 8. Merging temporally matched image frames from the left and right video image streams 2, 4 ensures that the video image data from each optical imaging detector 102, 104 remains temporally synchronized, and is but one example of how the image streams 2, 4 may be synchronized.
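  • By way of illustration, the following minimal C++ sketch shows one way such a side-by-side merge could be implemented. It is not the patent's code; the Frame type, the 8-bit RGB layout, and the function name are assumptions.

      #include <algorithm>
      #include <cstdint>
      #include <stdexcept>
      #include <vector>

      // Hypothetical container for one decoded video frame (8-bit RGB, row-major).
      struct Frame {
          int width = 0;
          int height = 0;
          std::vector<uint8_t> rgb; // size = width * height * 3
      };

      // Merge one temporally matched pair of frames from the left and right
      // video image streams 2, 4 into a single side-by-side merged image pair 8.
      Frame mergeSideBySide(const Frame& left, const Frame& right) {
          if (left.height != right.height)
              throw std::runtime_error("frames must have matching heights");
          Frame merged;
          merged.width  = left.width + right.width;
          merged.height = left.height;
          merged.rgb.resize(static_cast<size_t>(merged.width) * merged.height * 3);
          for (int y = 0; y < merged.height; ++y) {
              uint8_t* dst = &merged.rgb[static_cast<size_t>(y) * merged.width * 3];
              // Left frame fills the left half of the row, right frame the right half.
              std::copy_n(&left.rgb[static_cast<size_t>(y) * left.width * 3],
                          left.width * 3, dst);
              std::copy_n(&right.rgb[static_cast<size_t>(y) * right.width * 3],
                          right.width * 3, dst + left.width * 3);
          }
          return merged;
      }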
  • the output stream can be processed to play in the head mounted display 130.
  • one optical element 106, 108 may image the other, in which case the image renderer 110 may patch the images of the left and right video image streams 2, 4 to obscure the image of the other optical element.
  • the image renderer 110 may create virtual wireframe spheres 12, 14 (one for each eye) to provide a framework for the virtual reality environment. As each merged image pair 8 is encountered in the output stream, the merged image pair 8 may be split into respective left and right portions that are then wrapped onto the respective left and right wireframe spheres 12, 14. The image renderer 110 may virtually place a camera inside the left sphere 14 to create a feed to the left screen 134 of the head mounted display 130 for display to the left eye of the user. Likewise, the image renderer 110 may virtually place a camera inside the right sphere 12 to create a feed to the right screen 132 of the head mounted display 130 for display to the right eye of the user.
  • a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view may be created using a fisheye stereo side-by-side video publisher for creating the output stream of merged image pairs 8, Fig. 2A, in conjunction with a spherical stereo side-by-side viewer for processing and displaying the output stream to the user, Fig. 2B.
  • Turning first to the publisher, Fig. 2A:
  • the publisher may be implemented using FFmpeg open source libraries for handling multimedia data and Microsoft® DirectX® protocols, though other suitable libraries or algorithms may be used to effect the creation of the output stream of merged image pairs 8.
  • the video publisher may initialize global variables, which may include initializing the bit rate, frame size, audio channel, and audio rate, as well as selecting the output video stream compression and the output frame rate (FPS). (The name of the output video file may also be selected at step 200.)
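  • As a sketch of what such global initialization might look like, gathered into one structure (the field names and default values below are illustrative assumptions; the patent names the parameters but not their values):

      #include <string>

      // Hypothetical bundle of the global settings chosen at step 200.
      struct PublisherConfig {
          int         bitRateKbps   = 8000;    // output video bit rate
          int         frameWidth    = 2560;    // merged side-by-side frame width
          int         frameHeight   = 960;     // frame height
          int         audioChannels = 2;       // audio channel count
          int         audioRateHz   = 44100;   // audio sample rate
          std::string compression   = "h264";  // output video stream compression
          double      outputFps     = 30.0;    // output frame rate (FPS)
          std::string outputFile    = "stereo_output.avi"; // output video file name
      };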
  • the publisher may receive the left and right video image streams 2, 4, with the left video image stream 2 having a plurality of left image frames and the right video image stream 4 having a plurality of right image frames.
  • publication of the merged image pairs 8 may proceed by creating two parallel publication threads for processing the left and right video image streams 2, 4, at steps 210a and 210b, respectively.
  • Initialization of variables for each publication thread may include initialization of the DirectX® variables: FilterGraph, SampleGrabber, GraphBuilder, IMediaControl, IVideoWindow, IBasicAudio, and IMediaSeek, steps 210a, 210b.
  • the rendering modes associated with sharpening, brightness, contrast, and noise median may be set, steps 220a, 220b.
  • the left and right video image streams 2, 4 may be processed to remove distortion introduced by the optical imaging device 100 using the transformation provided in Table 1, steps 230a, 230b.
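  • Table 1 itself is not reproduced in this text. As an illustration of the kind of mapping such a distortion-removal step involves, the sketch below assumes an ideal equidistant fisheye model (r = f·θ) and maps a pixel of the desired undistorted (rectilinear) image back to its source position in the fisheye image; the function and parameter names are assumptions, not the patent's actual transformation.

      #include <cmath>

      // Map an undistorted (rectilinear) pixel, given relative to the image
      // center, back to the corresponding pixel of an equidistant fisheye image.
      void rectilinearToFisheye(double xu, double yu, // undistorted pixel (centered)
                                double fRect,         // rectilinear focal length (pixels)
                                double fFish,         // fisheye focal length (pixels)
                                double& xf, double& yf) {
          double ru    = std::sqrt(xu * xu + yu * yu); // radius in the rectilinear image
          double theta = std::atan2(ru, fRect);        // angle off the optical axis
          double rf    = fFish * theta;                // equidistant model: r = f * theta
          double s     = (ru > 0.0) ? rf / ru : 0.0;   // preserve the azimuthal direction
          xf = xu * s;
          yf = yu * s;
      }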
  • the distortion-free streams may then be rendered to respective left and right bitmap streams, steps 240a, 240b, which may then be merged to create the merged image pairs 8 of the output stream, step 250.
  • the image renderer 110 may proceed with transformation of the output stream for display in the head mounted display 130 to create the navigable, panoramic three-dimensional virtual reality environment.
  • the viewer may be initialized by initializing the DirectX® D3D interfaces (e.g., IDirect3D9, IDirect3DDevice9, IDirect3DVertexBuffer9, and IDirect3DTexture9), Fig. 2B.
  • the system may be configured to initialize the DirectShow interface (e.g., FilterGraph, SampleGrabber, GraphBuilder, IMediaControl, IVideoWindow, IBasicAudio, IMediaSeek) as well as common parameters, such as all parameters for the stereo video stream and look-up parameters for left and right eye image buffers.
  • rendering may proceed by creating two parallel threads for rendering the output video stream of merged image pairs 8 and operating the tracking device 140, at steps 270 and 290, respectively.
  • the process may include the steps for initializing the tracking device 140, step 290, and tracking the user's movement using the tracking device 140, step 292.
  • the 3D environment may be initialized by filling quad vertices (D3D), creating the Direct3D interface, creating a D3D device and setting the DirectX D3D display mode, setting a render state for the D3D device, setting lighting for the D3D device, creating a vertex buffer for the D3D device, preparing backbuffer flipping, and preparing a D3D scene.
  • the frame-by-frame process of rendering the video stream may proceed at step 272. Specifically, the process may retrieve a current video frame containing a merged image pair 8 and may split the merged image pairs 8 of the video output stream into respective left and right images for use in a left eye buffer and right eye buffer, step 272.
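  • A minimal sketch of this splitting step, reusing the hypothetical Frame type (and headers) from the merge sketch above and assuming the same side-by-side layout:

      // Split one merged image pair 8 back into the left and right eye buffers
      // (step 272); the inverse of mergeSideBySide() above.
      void splitSideBySide(const Frame& merged, Frame& left, Frame& right) {
          left.width   = merged.width / 2;
          left.height  = merged.height;
          right.width  = merged.width - left.width;
          right.height = merged.height;
          left.rgb.resize(static_cast<size_t>(left.width) * left.height * 3);
          right.rgb.resize(static_cast<size_t>(right.width) * right.height * 3);
          for (int y = 0; y < merged.height; ++y) {
              const uint8_t* src = &merged.rgb[static_cast<size_t>(y) * merged.width * 3];
              // Left half of each row feeds the left eye buffer, right half the right.
              std::copy_n(src, left.width * 3,
                          &left.rgb[static_cast<size_t>(y) * left.width * 3]);
              std::copy_n(src + left.width * 3, right.width * 3,
                          &right.rgb[static_cast<size_t>(y) * right.width * 3]);
          }
      }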
  • the left and right images generated at step 272 may be wrapped around the virtual spheres 12, 14 using the exemplary code listed in Table 2 in view of the following geometric conventions.
  • the points on the sphere with radius r satisfy x² + y² + z² = r², so that differentiating this constraint gives x dx + y dy + z dz = 0.
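  • As an illustration of wrapping an image onto such a wireframe sphere (the patent's own code in Table 2 survives only as the fragments reproduced at the end of this section), the sketch below tessellates a sphere with a standard spherical parameterization and assigns texture coordinates so one eye's video image covers it; the vertex layout and tessellation counts are assumptions.

      #include <cmath>
      #include <vector>

      struct SphereVertex {
          float x, y, z; // position on the sphere of radius r
          float u, v;    // texture coordinate into one eye's image
      };

      // Build the vertices of a (rings x segments) sphere using
      //   x = r sin(phi) cos(theta), y = r cos(phi), z = r sin(phi) sin(theta),
      // which satisfies x^2 + y^2 + z^2 = r^2 at every vertex.
      std::vector<SphereVertex> buildSphere(float r, int rings, int segments) {
          const float kPi = 3.14159265f;
          std::vector<SphereVertex> verts;
          verts.reserve(static_cast<size_t>(rings + 1) * (segments + 1));
          for (int i = 0; i <= rings; ++i) {
              float v   = float(i) / rings;     // 0 at the top pole, 1 at the bottom
              float phi = v * kPi;              // polar angle
              for (int j = 0; j <= segments; ++j) {
                  float u     = float(j) / segments;
                  float theta = u * 2.0f * kPi; // azimuth
                  verts.push_back({r * std::sin(phi) * std::cos(theta),
                                   r * std::cos(phi),
                                   r * std::sin(phi) * std::sin(theta),
                                   u, v});
              }
          }
          return verts;
      }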
  • Having transformed the left and right images from a rectangular to a spherical form by wrapping the left and right images around the respective spheres 12, 14, further processing can proceed by selecting that portion of the wrapped images for display to each of the left and right eyes of the user, step 276. That is, in response to the user's direction of view as determined at step 292, a region of interest for each of the left and right wrapped images is selected for display in the head mounted display 130, step 276.
  • the process may, for example, execute the steps for D3D device rendering by: preparing the D3D matrix, choosing the left and right regions of interest, and controlling the video stream (play, stop, pause) as well as controlling panning (up, down, left, right, zoom).
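  • A sketch of the direction computation behind preparing that matrix, using the same pan/tilt rotation as the Table 2 fragments reproduced below (the choice of reference forward vector and the helper names are assumptions):

      #include <cmath>

      // Rotate the camera's reference forward vector (assumed here to be the
      // +x axis) by the tracker's pan and tilt readings, mirroring the
      // vx2/vy2/vz2 expressions in the Table 2 fragments below.
      void lookDirection(double panDeg, double tiltDeg,
                         double& vx2, double& vy2, double& vz2) {
          const double kPi = 3.14159265358979;
          double pan  = panDeg * kPi / 180.0;    // degreeToRadians(pan)
          double tilt = -tiltDeg * kPi / 180.0;  // degreeToRadians(-tilt)
          double cosp = std::cos(pan),  sinp = std::sin(pan);
          double cost = std::cos(tilt), sint = std::sin(tilt);
          double vx = 1.0, vy = 0.0, vz = 0.0;   // reference forward vector
          vx2 = vx * cost * cosp - vy * sinp + vz * cosp * sint;
          vy2 = vx * cost * sinp + vy * cosp + vz * sint * sinp;
          vz2 = -vx * sint + vz * cost;          // unit-length look direction
      }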
  • the left and right regions of interest may then be displayed on the respective left and right displays 134, 132 of the head mounted display 130.
  • display may be accomplished by having D3D render left and right image buffers containing the left and right regions of interest to the left and right displays 134, 132 of the head mounted display 130, step 278.
  • the process may be repeated for each video frame by sending a signal to the renderer 110 to return to step 272, step 280, to create a navigable, panoramic three-dimensional virtual reality environment having ultra-wide field of view.
  • pan, tilt - camera direction point (degrees):

      r = (sqrt(w*w + h*h) / 2.0) / tan(degreeToRadians(fov/2.0));
      pan = degreeToRadians(pan);
      tilt = degreeToRadians(-tilt);
      vx2 = vx*cost*cosp - vy*sinp + vz*cosp*sint;
      vy2 = vx*cost*sinp + vy*cosp + vz*sint*sinp;
      vz2 = -vx*sint + vz*cost;
      panDest = radiansToDegree( asin(sinDest) );
      panDest = 180 - panDest;
      panDest = -180 - panDest;
      tiltDest = radiansToDegree( asin(sinDest) );

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Optics & Photonics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Processing Or Creating Images (AREA)

Abstract

A system and method for capturing video of a real-world scene over a field of view that may exceed the field of view of a user, manipulating the captured video, and then stereoscopically displaying the manipulated image to the user in a head mounted display to create a virtual environment having length, width, and depth in the image. By capturing and manipulating the video over a field of view that exceeds the user's field of view, the system and method can rapidly respond to movement by the user and update the display, permitting the user to look and pan around, i.e., navigate, within the three-dimensional virtual environment.
PCT/US2012/039572 2011-05-27 2012-05-25 System and method for creating a navigable, panoramic, three-dimensional virtual reality environment having an ultra-wide field of view WO2012166593A2 (fr)

Priority Applications (6)

Application Number Priority Date Filing Date Title
US14/090,132 US9007430B2 (en) 2011-05-27 2013-11-26 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US14/685,234 US9883174B2 (en) 2011-05-27 2015-04-13 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US15/849,885 US20180309982A1 (en) 2011-05-27 2017-12-21 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US16/653,329 US20200288113A1 (en) 2011-05-27 2019-10-15 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US17/159,026 US11528468B2 (en) 2011-05-27 2021-01-26 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US17/985,225 US20230328220A1 (en) 2011-05-27 2022-11-11 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201161490656P 2011-05-27 2011-05-27
US61/490,656 2011-05-27

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/090,132 Continuation US9007430B2 (en) 2011-05-27 2013-11-26 System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view

Publications (2)

Publication Number Publication Date
WO2012166593A2 true WO2012166593A2 (fr) 2012-12-06
WO2012166593A3 WO2012166593A3 (fr) 2013-04-04

Family

ID=47260214

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2012/039572 2011-05-27 2012-05-25 System and method for creating a navigable, panoramic, three-dimensional virtual reality environment having an ultra-wide field of view WO2012166593A2 (fr)

Country Status (1)

Country Link
WO (1) WO2012166593A2 (fr)

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015142174A1 (fr) 2014-03-18 2015-09-24 Changa Anand Avinash Jayanth Encoding and decoding of three-dimensional image data
GB2526263A (en) * 2014-05-08 2015-11-25 Sony Comp Entertainment Europe Image capture method and apparatus
EP3057316A1 (fr) * 2015-02-10 2016-08-17 DreamWorks Animation LLC Generation of three-dimensional imagery to supplement existing content
EP3070513A1 (fr) * 2015-02-06 2016-09-21 Sony Computer Entertainment Europe Ltd. Head-mountable display system
CN106899841A (zh) * 2017-02-13 2017-06-27 广东欧珀移动通信有限公司 Picture display method and apparatus, and computer device
US9703100B2 (en) 2013-06-11 2017-07-11 Sony Computer Entertainment Europe Limited Change nature of display according to overall motion
US9721385B2 (en) 2015-02-10 2017-08-01 Dreamworks Animation Llc Generation of three-dimensional imagery from a two-dimensional image using a depth map
US9773350B1 (en) 2014-09-16 2017-09-26 SilVR Thread, Inc. Systems and methods for greater than 360 degree capture for virtual reality
CN109300182A (zh) * 2017-07-25 2019-02-01 中国移动通信有限公司研究院 Panoramic image data processing method, processing device, and storage medium
WO2019083266A1 (fr) * 2017-10-24 2019-05-02 엘지전자 주식회사 Method for transmitting/receiving 360-degree video including fisheye video information, and device therefor
EP3462283A4 (fr) * 2016-09-20 2019-08-14 Tencent Technology (Shenzhen) Company Limited Image display method and device used in virtual reality-based apparatus
US10788791B2 (en) 2016-02-22 2020-09-29 Real View Imaging Ltd. Method and system for displaying holographic images within a real object
US10795316B2 (en) 2016-02-22 2020-10-06 Real View Imaging Ltd. Wide field of view hybrid holographic display
US10877437B2 (en) 2016-02-22 2020-12-29 Real View Imaging Ltd. Zero order blocking and diverging for holographic imaging
EP4012482A1 (fr) * 2013-03-25 2022-06-15 Sony Interactive Entertainment Inc. Display
CN115639976A (zh) * 2022-10-28 2023-01-24 深圳市数聚能源科技有限公司 Multi-mode, multi-angle synchronous display method and system for virtual reality content
US11663937B2 (en) 2016-02-22 2023-05-30 Real View Imaging Ltd. Pupil tracking in an image display system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090009522A1 (en) * 2002-01-15 2009-01-08 Canon Kabushiki Kaisha Information processing apparatus and method
US20100259619A1 (en) * 2009-04-10 2010-10-14 Nicholson Timothy J Hmd with elevated camera
JP2010256534A (ja) * 2009-04-23 2010-11-11 Fujifilm Corp Head-mounted display device for omnidirectional image display
US20110112934A1 (en) * 2008-06-10 2011-05-12 Junichi Ishihara Sensory three-dimensional virtual real space system

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090009522A1 (en) * 2002-01-15 2009-01-08 Canon Kabushiki Kaisha Information processing apparatus and method
US20110112934A1 (en) * 2008-06-10 2011-05-12 Junichi Ishihara Sensory three-dimensional virtual real space system
US20100259619A1 (en) * 2009-04-10 2010-10-14 Nicholson Timothy J Hmd with elevated camera
JP2010256534A (ja) * 2009-04-23 2010-11-11 Fujifilm Corp Head-mounted display device for omnidirectional image display

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4012482A1 (fr) * 2013-03-25 2022-06-15 Sony Interactive Entertainment Inc. Display
US9703100B2 (en) 2013-06-11 2017-07-11 Sony Computer Entertainment Europe Limited Change nature of display according to overall motion
NL2012462A (en) * 2014-03-18 2015-12-08 Avinash Jayanth Changa Anand Encoding and decoding of three-dimensional image data.
WO2015142174A1 (fr) 2014-03-18 2015-09-24 Changa Anand Avinash Jayanth Encoding and decoding of three-dimensional image data
GB2526263A (en) * 2014-05-08 2015-11-25 Sony Comp Entertainment Europe Image capture method and apparatus
GB2526263B (en) * 2014-05-08 2019-02-06 Sony Interactive Entertainment Europe Ltd Image capture method and apparatus
US9579574B2 (en) 2014-05-08 2017-02-28 Sony Computer Entertainment Europe Limited Image capture method and apparatus
US9773350B1 (en) 2014-09-16 2017-09-26 SilVR Thread, Inc. Systems and methods for greater than 360 degree capture for virtual reality
US10187633B2 (en) 2015-02-06 2019-01-22 Sony Interactive Entertainment Europe Limited Head-mountable display system
EP3070513A1 (fr) * 2015-02-06 2016-09-21 Sony Computer Entertainment Europe Ltd. Head-mountable display system
GB2534921B (en) * 2015-02-06 2021-11-17 Sony Interactive Entertainment Inc Head-mountable display system
EP3287837A1 (fr) * 2015-02-06 2018-02-28 Sony Interactive Entertainment Europe Limited Head-mounted display
US9721385B2 (en) 2015-02-10 2017-08-01 Dreamworks Animation Llc Generation of three-dimensional imagery from a two-dimensional image using a depth map
US10096157B2 (en) 2015-02-10 2018-10-09 Dreamworks Animation L.L.C. Generation of three-dimensional imagery from a two-dimensional image using a depth map
US9897806B2 (en) 2015-02-10 2018-02-20 Dreamworks Animation L.L.C. Generation of three-dimensional imagery to supplement existing content
EP3057316A1 (fr) * 2015-02-10 2016-08-17 DreamWorks Animation LLC Generation of three-dimensional imagery to supplement existing content
US10877437B2 (en) 2016-02-22 2020-12-29 Real View Imaging Ltd. Zero order blocking and diverging for holographic imaging
US10788791B2 (en) 2016-02-22 2020-09-29 Real View Imaging Ltd. Method and system for displaying holographic images within a real object
US10795316B2 (en) 2016-02-22 2020-10-06 Real View Imaging Ltd. Wide field of view hybrid holographic display
US11543773B2 (en) 2016-02-22 2023-01-03 Real View Imaging Ltd. Wide field of view hybrid holographic display
US11663937B2 (en) 2016-02-22 2023-05-30 Real View Imaging Ltd. Pupil tracking in an image display system
US11754971B2 (en) 2016-02-22 2023-09-12 Real View Imaging Ltd. Method and system for displaying holographic images within a real object
EP3462283A4 (fr) * 2016-09-20 2019-08-14 Tencent Technology (Shenzhen) Company Limited Image display method and device used in virtual reality-based apparatus
US10754420B2 (en) 2016-09-20 2020-08-25 Tencent Technology (Shenzhen) Company Limited Method and device for displaying image based on virtual reality (VR) apparatus
CN106899841A (zh) * 2017-02-13 2017-06-27 广东欧珀移动通信有限公司 图片的显示方法、装置及计算机设备
CN109300182A (zh) * 2017-07-25 2019-02-01 中国移动通信有限公司研究院 全景图像数据处理方法、处理设备及存储介质
WO2019083266A1 (fr) * 2017-10-24 2019-05-02 엘지전자 주식회사 Method for transmitting/receiving 360-degree video including fisheye video information, and device therefor
CN115639976A (zh) * 2022-10-28 2023-01-24 深圳市数聚能源科技有限公司 Multi-mode, multi-angle synchronous display method and system for virtual reality content
CN115639976B (zh) * 2022-10-28 2024-01-30 深圳市数聚能源科技有限公司 Multi-mode, multi-angle synchronous display method and system for virtual reality content

Also Published As

Publication number Publication date
WO2012166593A3 (fr) 2013-04-04

Similar Documents

Publication Publication Date Title
US11528468B2 (en) System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
WO2012166593A2 (fr) System and method for creating a navigable, panoramic, three-dimensional virtual reality environment having an ultra-wide field of view
US11575876B2 (en) Stereo viewing
US9948919B2 (en) Stereoscopic 3D camera for virtual reality experience
CN107636534B (zh) 用于图像处理的方法和系统
US11218683B2 (en) Method and an apparatus and a computer program product for adaptive streaming
US6947059B2 (en) Stereoscopic panoramic image capture device
US20170280133A1 (en) Stereo image recording and playback
CA2927046A1 (fr) Method and system for 360-degree head-mounted display monitoring between software program modules using video or image texture sharing
US11812009B2 (en) Generating virtual reality content via light fields
US10115227B2 (en) Digital video rendering
US20230018560A1 (en) Virtual Reality Systems and Methods
KR20200064998A (ko) Reproduction apparatus and method, and generation apparatus and method
EP3057316B1 (fr) Generation of three-dimensional imagery to supplement existing content

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 12792953

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 07/04/2014)

122 Ep: pct application non-entry in european phase

Ref document number: 12792953

Country of ref document: EP

Kind code of ref document: A2