WO2009109804A1 - Method and apparatus for image processing - Google Patents

Method and apparatus for image processing

Info

Publication number
WO2009109804A1
WO2009109804A1 (PCT/IB2008/001375)
Authority
WO
WIPO (PCT)
Prior art keywords
meshes
image
layers
layer
images
Prior art date
Application number
PCT/IB2008/001375
Other languages
English (en)
Inventor
Stephane Jean Louis Jacob
Original Assignee
Dooworks Fz Co
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dooworks Fz Co filed Critical Dooworks Fz Co
Priority to PCT/IB2008/001375 priority Critical patent/WO2009109804A1/fr
Publication of WO2009109804A1 publication Critical patent/WO2009109804A1/fr

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/261Image signal generators with monoscopic-to-stereoscopic image conversion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals

Definitions

  • the invention relates to image processing and, in particular, to the processing of two dimensional images to provide three dimensional or stereoscopic effects.
  • the two images are projected onto the screen using orthogonal polarising filters and the viewer watches the images through a pair of glasses that have similarly orthogonal polarising filters in each eye.
  • the effect is that each eye sees one of the two images and the brain forms these into a single stereoscopic image.
  • the present invention aims to provide a method and apparatus which improves on the prior art systems described above.
  • the invention processes image data to divide the image into different layers, each of which is processed independently.
  • the user can navigate through the layers giving an illusion of three dimensions.
  • a sequence of images such as a movie is separated into a number of layers which are then texture mapped onto meshes.
  • an area of the meshes must be selected.
  • this is achieved using a virtual camera whose position is moveable around the meshes.
  • the viewing experience will be one of passing into the depth of the film, behind objects or image areas that are in the innermost, or foreground, layer.
  • the illusion of three dimensions is created.
  • n - 1 masks are created corresponding to the image areas or objects selected for that layer.
  • areas or objects assigned to a layer can be tracked from frame to frame, for example by using motion detection techniques.
  • the meshes may be concentric nested meshes, for example spherical or hemispherical.
  • the meshes may be flat.
  • the meshes are parallel and may be equi-spaced.
  • Figure 1 is a top view of an image to be divided into layers and showing the positioning of a pre-rendering virtual camera;
  • Figure 2 is a side view of the image of Figure 1;
  • Figure 3 is an equirectangular view of three layers of the image produced from Figures 1 and 2;
  • Figure 4 is an equirectangular view of a background layer of the image produced from Figures 1 and 2;
  • Figure 5 is an equirectangular view of a mid-ground layer of the image of Figures 1 and 2 together with a mask defined in an alpha channel for forming the layer;
  • Figure 6 is an equirectangular view of a foreground layer of the image of Figures 1 and 2 together with a mask defined in an alpha channel for forming the layer;
  • Figure 7 is a schematic view of a system and process embodying the invention.
  • Figure 8 shows a games controller which may be used to control the position of a virtual camera to determine the output video;
  • Figures 9a to 9c show an example of nested, concentric spherical meshes suitable for a 360° immersive image;
  • Figures 10 and 11 show similar meshes suitable where the input images are flat.
  • Figure 12 is a schematic view of the main components of a system embodying the invention.
  • the embodiment to be described allows stereoscopic illusion and interactivity to be achieved in a 3D computer graphic environment or with a video file which may be flat video or immersive, for example 360° field of view.
  • the source content which may be computer generated or video is converted into multiple layers which are synchronised and which are accompanied by an alpha channel which carries data regarding the layers and which may comprise masks for some or all of the layers.
  • the layered content is fed into a specific player, at which point each layer is played out at the same time.
  • the final displayed output is generated in real time from the output data using an end user input that modifies the position, angle and zoom function of a virtual camera to give an illusion of stereoscopy, a sense of movement by the user through the layers of the content, and interactivity with the content.
  • Figures 1 to 6 show how layering data may be generated in a 360° immersive environment.
  • the embodiments allow a conventional image source to be processed to produce a stereoscopic or three dimensional effect. Where the input images are video, it is convenient to convert them first into CG format and then process them in the same manner as CG images.
  • the image source is not relevant to the invention.
  • Embodiments of the invention could be used with movie film, in which case the film is first converted to video using a telecine or similar device.
  • the data on an image frame is divided into a number of layers n.
  • the system operator assigns objects from the image to a layer. In Figures 1 to 6, the first layer is a foreground layer, the second or middle layer is a mid-ground layer and the third layer is a background layer.
  • objects need only be assigned to n-1 layers as any remaining unassigned objects or image areas must then belong to the remaining layer. In this example, objects are assigned to the first and second layers.
  • Object assignment is performed on a frame-by-frame basis, but objects may be tracked from frame to frame so that, for example, once a particular object is assigned to the foreground layer, it will be tagged as foreground and by using known tracking techniques, such as edge detection and motion vectors, the object may be detected automatically in the next frame.
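  • By way of illustration only (the description refers only generally to motion detection techniques), the sketch below shows one way such frame-to-frame propagation of a layer mask could be done, here using OpenCV's dense Farnebäck optical flow; the function name and parameter values are assumptions, not part of the disclosure.

```python
import cv2
import numpy as np

def propagate_layer_mask(prev_gray, next_gray, prev_mask):
    """Warp a binary layer mask from the previous frame onto the next frame.

    prev_gray, next_gray: uint8 greyscale frames of identical size.
    prev_mask: uint8 mask, 255 where a pixel belongs to the layer.
    """
    # Flow from the *next* frame back to the *previous* frame, so that every
    # pixel of the new mask can be looked up directly in the old mask.
    flow = cv2.calcOpticalFlowFarneback(next_gray, prev_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = prev_mask.shape
    grid_x, grid_y = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (grid_x + flow[..., 0]).astype(np.float32)
    map_y = (grid_y + flow[..., 1]).astype(np.float32)
    return cv2.remap(prev_mask, map_x, map_y, interpolation=cv2.INTER_NEAREST)
```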
  • Assignment of objects in a frame, or areas of an image, to a layer will be performed by an operator working on a display of the image and identifying areas or objects using a pointer or other device to select an image portion.
  • Figures 1 to 6 show, by way of example, the generation of three layers in a 360° immersive environment.
  • Figures 1 and 2 show top and side views, respectively of the image that is being sub-divided into layers. It can be seen that the image includes a number of objects: two trees, a car, a flower, a house, an aeroplane and the sun.
  • the layers are assembled by selecting the objects that form a given layer and generating a mask that makes the remainder of the image opaque.
  • the first layer can then be generated by placing a virtual camera at the centre of the image and sweeping the camera through 360° to render the frame. It follows that objects only need to be assigned to (n - 1) layers as any unassigned objects will form part of the nth layer, in this case the background layer.
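  • As an illustration of the assignment logic just described (not part of the patent text), the following sketch assembles per-layer masks from operator-selected object masks and derives the nth, background layer as the complement of everything assigned; the names are illustrative assumptions.

```python
import numpy as np

def build_layer_masks(object_masks, layer_of_object, n_layers):
    """Assemble per-layer masks from operator-selected object masks.

    object_masks: dict of object name -> boolean array, True where the object is drawn.
    layer_of_object: dict of object name -> layer index 0..n_layers-2; only n-1
                     layers need explicit assignments.
    The nth (background) layer is simply everything that was not assigned.
    """
    shape = next(iter(object_masks.values())).shape
    layers = [np.zeros(shape, dtype=bool) for _ in range(n_layers)]
    for name, layer_idx in layer_of_object.items():
        layers[layer_idx] |= object_masks[name]
    assigned = np.zeros(shape, dtype=bool)
    for layer_mask in layers[:-1]:
        assigned |= layer_mask
    layers[-1] = ~assigned   # background layer: the complement of all assigned areas
    return layers

# Example assignment matching Figures 1 to 6: car and flower -> foreground (0),
# house -> mid-ground (1); the remaining objects fall into the background (2).
```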
  • the layers are produced as equirectangular images and Figure 3 shows the equirectangular view of the three layers placed on top of each other.
  • Figure 4 shows the equirectangular view of the background layer, showing those objects that have not been tagged as belonging to the first or second layers.
  • Figures 5 and 6 show, respectively, the equirectangular views of the mid-ground and foreground layers together with their masks which ensure that only the selected objects are seen by the virtual camera.
  • In Figure 5, only the house has been assigned to the mid-ground layer.
  • In Figure 6, the car and the flower have been assigned to the foreground layer.
  • the mask is the inverse of the image objects or areas.
  • the masks are defined in an alpha channel which, as will be described, is used to control rendering of the layers.
  • alpha compositing is well known and combines an image with a background to create the appearance of partial transparency. It is usual to render image elements in separate passes and then combine the resulting multiple 2D images into a single, final image in a process called compositing. Compositing is used widely when combining computer rendered image elements with live footage.
  • the matte contains the shape of the geometry being drawn, that is the shape of the element, making it possible to distinguish between parts of the image where the geometry was actually drawn, and parts of the image which are empty.
  • each layer has its own alpha channel; however, a single alpha channel could be provided for all the layers.
  • for each pixel, an additional value from 0 to 1 is stored in the alpha channel.
  • a value of 0 means that the pixel does not have any coverage information and is fully transparent, i.e. there was no colour contribution from any geometry because the geometry did not overlap this pixel.
  • a value of 1 means that the pixel is fully opaque because the geometry completely overlapped the pixel.
  • the masks can be defined by setting the alpha channel value to 1 for all pixels not included in a layer.
  • the layer comprises image areas which are pre-selected. These image areas may equate to objects, such as a tree, house etc, but are defined in the alpha channel as pixel references.
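  • For reference, alpha values of the kind described above feed the standard "over" compositing operator; a minimal sketch is shown below, assuming floating point images in [0, 1]. This is generic compositing, not the patent's specific player code.

```python
import numpy as np

def composite_over(layer_rgb, layer_alpha, behind_rgb):
    """Standard alpha 'over' operator: lay one layer over whatever is behind it.

    layer_rgb, behind_rgb: float arrays of shape (H, W, 3) in [0, 1].
    layer_alpha: float array of shape (H, W); 0 = fully transparent (no geometry),
                 1 = fully opaque, exactly the convention described above.
    """
    a = layer_alpha[..., None]
    return layer_rgb * a + behind_rgb * (1.0 - a)

# e.g. laying the mid-ground layer over the background layer:
# combined = composite_over(midground_rgb, midground_alpha, background_rgb)
```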
  • the three layers are produced as three separate video streams that can be processed in parallel and then reassembled to provide a single layered image as shown in Figure 12 and as explained below.
  • the equirectangular images are mapped onto a mesh.
  • Figures 9a to 9c show an example of the meshes that may be used.
  • the preferred mesh is a hemisphere or sphere, but it will be noted that the diameter of the mesh is different for each of the layers. It will be seen that the foreground layer is mapped onto the smallest diameter mesh, followed by the mid-ground layer and then the background layer.
  • the meshes and the mapped images are nested.
  • the nested meshes are preferably concentric but this is not essential.
  • the effect of zooming can be achieved by manipulation of the relative mesh sizes between layers.
  • if the mesh size for the foreground layer were reduced, the effect would be that objects in that layer would appear larger relative to objects in other layers.
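  • A minimal sketch of how nested spherical meshes with equirectangular texture coordinates might be generated, one radius per layer, follows; the function and the radii values are illustrative assumptions, not taken from the patent, although the nesting order (smallest radius for the foreground) is as described.

```python
import numpy as np

def make_sphere_mesh(radius, n_lat=64, n_lon=128):
    """Vertices and UVs for a sphere onto which an equirectangular (2:1) layer
    image can be texture mapped; u follows longitude, v follows latitude."""
    lats = np.linspace(-np.pi / 2, np.pi / 2, n_lat)
    lons = np.linspace(-np.pi, np.pi, n_lon)
    lon, lat = np.meshgrid(lons, lats)
    x = radius * np.cos(lat) * np.cos(lon)
    y = radius * np.sin(lat)
    z = radius * np.cos(lat) * np.sin(lon)
    vertices = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    u = (lon + np.pi) / (2 * np.pi)          # 0..1 across 360 degrees of longitude
    v = (lat + np.pi / 2) / np.pi            # 0..1 across 180 degrees of latitude
    uvs = np.stack([u, v], axis=-1).reshape(-1, 2)
    return vertices, uvs

# Nested, concentric meshes: foreground on the smallest radius, then mid-ground,
# then background; the actual radii here are illustrative only.
foreground_mesh = make_sphere_mesh(radius=1.0)
midground_mesh  = make_sphere_mesh(radius=2.0)
background_mesh = make_sphere_mesh(radius=4.0)
```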
  • the user has control of the positioning of the virtual camera, which enables movement of the camera around the meshes and between meshes, resulting in a sequence of images that give the illusion of a viewing point that moves around the image and into the depth of the image behind the n-1 layers in front of the background layer.
  • Figure 7 shows an embodiment of the invention which incorporates the layer information into a video processing process.
  • the embodiment is based on the system disclosed in our earlier applications GB 0718015 and GB 0723538 the content of which is hereby incorporated by reference.
  • the embodiment described is based on the starting point being computer generated data. However, as mentioned above, it could be flat video or immersive video or have originated on another medium, such as film.
  • a 3D world is created in software at 10 and pre-rendered at 20 using particular parameters. A selected portion of the 3D world is displayed under the control of user defined pan, tilt and zoom parameters input into the system.
  • the process is able to match the final frame rate displayed at 50 to the frame rate created at 10.
  • the 3D world 10 may be created using any known 3D graphic design software.
  • the 3D world 10 is composed with multiple elements to create the illusion of volume and reality.
  • Each object is composed of a plurality of meshes, textures, lights and movements. A virtual camera is then located in that environment.
  • the virtual camera parameters are then set up and sent to the renderer 20.
  • the renderer 20 has parameters set up, such as resolution, frame rate with number of frames, and the type of renderer algorithm. Any suitable algorithm may be used. Known algorithms include Phong, Gouraud, Ray Tracing and Radiosity. For the avoidance of doubt, the term rendering applies to the processing of computer generated images.
  • the renderer operates on either CG images or images acquired from other sources such as a video camera. In this embodiment a radiosity renderer is preferred.
  • in the system of our earlier applications, the renderer 20 rendered a single video file.
  • in this embodiment, the renderer 20 renders the elements of the 3D world separately under the control of the alpha channel to obtain the different layers.
  • once the layers are generated, they are imported into converter 30 where they are combined to form a single synchronised file.
  • This file embeds metadata such as the number of layers, distances between layers, virtual camera position, distance between camera and layers, the area that is displayed, camera aspect ratio and other parameters.
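  • Purely as an illustration of the kind of header such a synchronised file might carry, a sketch follows; the field names and values are assumptions, and only the listed quantities come from the description above.

```python
import json

# Illustrative metadata header for the synchronised multi-layer file.
metadata = {
    "num_layers": 3,
    "layer_radii": [1.0, 2.0, 4.0],        # distances between layers follow from these
    "camera_position": [0.0, 0.0, 0.0],    # initial virtual camera position
    "camera_layer_distances": [1.0, 2.0, 4.0],
    "displayed_area": {"pan": 0.0, "tilt": 0.0, "fov_deg": 90.0},
    "camera_aspect_ratio": 2.0,            # equirectangular source is 2:1
    "projection": "equirectangular",
}
header = json.dumps(metadata).encode("utf-8")
```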
  • the multi-layered data that is output from the converter is played out by the player 40 under the control of a user.
  • the main parameter common to the camera and renderer is that the view projection is set as equirectangular, with an image ratio of 2:1. This parameter may be changed according to user preference.
  • the image sequence is rendered and saved as an image file sequence or a video file for each layer.
  • the resulting file or files are then converted as a texture file or files. This is a pre-rendering step.
  • a virtual camera is then located in the centre of that sphere.
  • the mapping onto the mesh is the key step that saves computation time. This is a rendering step.
  • the virtual camera parameters can be modified in real time. For example, the focal length can be very short, providing a wide angle view.
  • the mesh, whether flat or spherical, the texture and the virtual camera are combined in software, giving the end user control of the pan, tilt and zoom functions of the virtual camera.
  • This combination has a 3D frame rate (rendered in real-time).
  • the texture has a 2D frame rate (pre-rendered).
  • Figure 8 shows a suitable control device, in this case a games controller for a Sony (RTM) PS3 (RTM) games console.
  • the controller may be configured such that, for example, the left joystick controls pan and tilt of the virtual camera, the right joystick controls Y and X positioning of the virtual camera and the top two buttons control zoom and Z axis position in the 3D real-time world.
  • the zoom control may also control focus modification applied to the virtual camera while it is moving.
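  • A hedged sketch of how the described controller mapping might drive the virtual camera each frame is given below; the attribute and key names are illustrative assumptions, not taken from the patent or from any console SDK.

```python
from dataclasses import dataclass

@dataclass
class Camera:
    pan: float = 0.0
    tilt: float = 0.0
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    zoom: float = 1.0
    rotate_speed: float = 1.0
    move_speed: float = 1.0
    zoom_speed: float = 0.5

def update_camera(camera, pad_state, dt):
    """Apply one frame of controller input to the virtual camera.

    pad_state is a dict of axis/button readings in [-1, 1] or {0, 1}.
    Mapping follows the description: left stick -> pan and tilt, right stick ->
    X and Y position, the two top buttons -> zoom and Z-axis position.
    """
    camera.pan  += pad_state["left_stick_x"]  * camera.rotate_speed * dt
    camera.tilt += pad_state["left_stick_y"]  * camera.rotate_speed * dt
    camera.x    += pad_state["right_stick_x"] * camera.move_speed * dt
    camera.y    += pad_state["right_stick_y"] * camera.move_speed * dt
    camera.zoom += pad_state["top_button_1"]  * camera.zoom_speed * dt   # zoom
    camera.z    += pad_state["top_button_2"]  * camera.move_speed * dt   # Z axis
    return camera
```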
  • the 2D frame rate continues throughout the sequence.
  • the 3D frame rate continues until the end user stops the application.
  • the software application can embed normal movie player's functions such as pause, rewind, forward and stop. These functions act on the 2D frame rate.
  • a 3D view of the world is first created by any known 3D graphic techniques and rendered into a 2D equirectangular view comprising several layers.
  • the equirectangular views form part of a sequence of 2D representations with a 2D frame rate, the sequence representing a view as a camera tracks through a scene.
  • the camera is, of course, a virtual camera as the image is computer-generated.
  • the key step in software is then taking the equirectangular image frames of the layers and mapping these onto a mesh, in this case a spherical mesh, in such a way that a virtual camera located at the centre of that mesh views a non-distorted view of the image in any direction.
  • a camera with given pan, tilt and zoom parameters will view a portion of the scene represented on the mesh substantially non-distorted.
  • the user can alter the pan, tilt and zoom parameters of the virtual camera, in any direction, to select a portion of the scene to view.
  • the user also has control of the whole sequence of frames and is able to step forwards or backwards, in the manner of rewinding or playing, as with a virtual video player.
  • Texture mapping techniques are used to map the equirectangular layer images onto the mesh. A portion of the spherical image may be selected for viewing substantially without distortion.
  • the virtual camera is a tool for selecting a defined portion of the image rendered onto the 3D mesh for viewing.
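  • As an illustration of this selection step (standard equirectangular-to-perspective sampling, rather than the patent's own implementation), the sketch below extracts an undistorted rectilinear view of one layer for given pan, tilt and field-of-view values; names and conventions are assumptions.

```python
import numpy as np

def extract_view(equirect, pan, tilt, fov_deg, out_w=640, out_h=360):
    """Sample a rectilinear (perspective) view from an equirectangular layer image.

    equirect: (H, W, 3) array covering 360 x 180 degrees (2:1 aspect).
    pan, tilt: viewing direction in radians; fov_deg: horizontal field of view.
    """
    H, W = equirect.shape[:2]
    f = (out_w / 2) / np.tan(np.radians(fov_deg) / 2)   # focal length in pixels
    xs = np.arange(out_w) - out_w / 2
    ys = np.arange(out_h) - out_h / 2
    px, py = np.meshgrid(xs, ys)
    # Ray direction for every output pixel, before rotation.
    dirs = np.stack([px, py, np.full_like(px, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    # Rotate by tilt (about x) then pan (about y); signs may need flipping
    # depending on the image orientation convention.
    ct, st = np.cos(tilt), np.sin(tilt)
    cp, sp = np.cos(pan), np.sin(pan)
    Rx = np.array([[1, 0, 0], [0, ct, -st], [0, st, ct]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])               # -pi .. pi
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))       # -pi/2 .. pi/2
    # Longitude/latitude -> pixel coordinates in the equirectangular image.
    u = ((lon + np.pi) / (2 * np.pi) * (W - 1)).astype(int)
    v = ((lat + np.pi / 2) / np.pi * (H - 1)).astype(int)
    return equirect[v, u]
```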
  • Figures 9a - 9c show an example of a suitable mesh for a 360° immersive image.
  • the arrangement comprises three nested, concentric meshes 200, 210, 220, each of which is spherical.
  • the virtual camera 230 is shown at the centre, but as explained above can be moved around by the user to select which parts of the images mapped onto the meshes will be displayed.
  • the virtual camera may be moved to any point between the centre and the outermost mesh and may be zoomed, that is, the number of pixels viewed by the camera is reduced.
  • as the camera moves radially outwards from the centre, it will pass first through the innermost mesh and then through the middle mesh as it travels towards the outermost mesh.
  • the displayed video will appear to have depth as the point of view travels through the layers.
  • the user can position the camera between layers and then view layers from behind. This control enables the illusion of stereoscopy and 3D to be created and provides interactivity.
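  • A sketch of how the per-layer views might be combined for display, treating any layer whose mesh the camera has passed through as behind the viewer, is given below; this is an illustrative simplification, not the patent's player code.

```python
def composite_visible_layers(layer_views, layer_radii, camera_radius):
    """Combine per-layer views back to front, skipping layers the camera has passed.

    layer_views: list of (rgb, alpha) pairs ordered foreground -> background, each
                 already projected for the current virtual camera; floats in [0, 1].
    layer_radii: matching mesh radii (smallest = foreground).
    camera_radius: distance of the virtual camera from the common centre.
    """
    out, _ = layer_views[-1]                 # background layer, treated as opaque
    for (rgb, alpha), radius in reversed(list(zip(layer_views[:-1], layer_radii[:-1]))):
        # Simplification: once the camera moves radially outside a mesh, that
        # layer is treated as behind the viewer and is no longer drawn.
        if camera_radius >= radius:
            continue
        a = alpha[..., None]
        out = rgb * a + out * (1.0 - a)      # standard 'over' compositing
    return out
```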
  • the 3D world onto which the equirectangular image is mapped is a sphere. This is a convenient shape and particularly suited to mapping computer generated images. However, the choice of 3D shape is defined by the system provider.
  • where the input is immersive video acquired using a camera and a fish eye lens, it is appropriate to use a mesh which approximates the fish eye lens. Thus, a 180° fish eye will use a hemispherical mesh.
  • the mesh may be adjusted to compensate for optical distortions in the lens.
  • the present invention is not limited to any particular mesh shape, although for a given input, the correct choice of mesh is key to outputting good distortion free images.
  • where the image source is video, the equirectangular images of the computer graphics example are replaced by mapped texture images.
  • Figures 10 and 11 show the use of flat, rather than spherical or hemispherical meshes. Such an arrangement may be appropriate where the source is flat or less than 180 degrees.
  • the images have been acquired from movie film.
  • the meshes are shown arranged parallel to one another and again comprise a foreground layer 320, a mid-layer 310 and a background layer 300.
  • the virtual camera is positioned in front of the meshes and may move in the X, Y and Z directions under the user's control.
  • the field of view of the camera is indicated by the chain dotted line 330 in Figures 10 and 11 and is a cone, thus, as the camera moves towards the meshes, the area displayed is reduced. Again, the camera position can move through the foreground and mid-ground meshes to give the impression of depth.
  • the meshes are all the same size, are equi-spaced and parallel. However, none of these properties is essential.
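  • By way of illustration only, the following sketch computes a per-layer screen offset and scale for the flat, parallel meshes from the camera's X, Y and Z position; nearer layers shift and scale more, which is the parallax that creates the impression of depth. The names and values are assumptions, not part of the disclosure.

```python
def parallax_offsets(layer_depths, camera_x, camera_y, camera_z, focal=1.0):
    """Screen-space offset and scale for each flat layer given the camera position.

    layer_depths: distances of the parallel meshes from the camera's start plane
                  (smallest = foreground layer 320, largest = background layer 300).
    Returns a list of (dx, dy, scale) tuples, one per layer.
    """
    out = []
    for depth in layer_depths:
        d = max(depth - camera_z, 1e-6)   # moving the camera in Z closes the distance
        scale = focal / d                 # perspective: nearer layers appear larger
        dx = -camera_x * scale            # lateral camera motion shifts near layers more
        dy = -camera_y * scale
        out.append((dx, dy, scale))
    return out

# Example: three parallel layers as in Figures 10 and 11 (illustrative depths).
offsets = parallax_offsets([1.0, 2.0, 4.0], camera_x=0.2, camera_y=0.0, camera_z=0.3)
```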
  • FIG 12 shows, schematically, the manner in which the data may be processed.
  • the image source is shown at 400. This may be computer generated, flat or immersive video. It may be captured by a camera and fish eye lens system.
  • the layers are assigned and the data for the alpha channel is derived at 410, and the streams of image data are transmitted to a converter which operates on data from the source, which may be in a variety of formats.
  • the converter may include any suitable digital video codec, but it is presently preferred to use the systems described in our co-pending applications GB 0709711 and GB 0718015.
  • a virtual camera is then located in the centre of that mesh, where the mesh is spherical or hemispherical, or in front of the mesh where it is flat as shown in Figures 9 and 10.
  • the process of playing a sequence of video frames as provided by the pre-rendering process is computationally simple and may be undertaken by a standard graphics card on a PC.
  • a virtual camera is defined and arranged in software to view the pre-rendered scene on the image mesh.
  • User definable parameters, such as pan, tilt and zoom, are received from a games console controller at an input and applied to the virtual camera, so that an appropriate selection of the pixels from the image mesh is made and represented on a 2D screen.
  • the selection of the pixels does not require computationally intensive processes such as interpolation because the pre-rendering process has ensured that the pixel arrangement, as transformed onto the mesh, is such that a simple selection of a portion of the pixels in any direction (pan or tilt) or of any size (zoom) is already appropriate for display on a 2D display.
  • the selection merely involves selecting the appropriate part of the image mapped onto the mesh. This process can be repeated for each frame of an image, thereby creating a video player.
  • the image player takes data mapped on the image mesh and transforms this to a 2D image.
  • This step is computationally simple as it only involves taking the pixels mapped on the mesh, and the pan, tilt and zoom parameters input by the user, to select the pixels to be presented in a 2D image.
  • the virtual camera parameters can be modified in real time.
  • the focal length can be very short, providing a wide angle view.
  • the sphere or other mesh shape, the texture and the virtual camera are combined in software which the end user controls by adjusting the pan, tilt and zoom functions of the virtual camera.
  • the embodiment described enables a computer generated movie, for example, or a conventionally filmed movie converted to CG format or video acquired from another source, such as an immersive camera, to be divided into a number of layers and for a user to be able to navigate through those layers giving an impression of stereoscopy, depth and three-dimensions.
  • This effect may be utilised in a wide range of environments, for example, in the computer games industry which uses the technique to enable users to explore actual movie scenes which have been processed using an embodiment of the invention to form the images into a number of layers.
  • a conventional games controller may be used to navigate around the images and through the layers. It will be appreciated that although the generation of layers, including the assignment of objects to layers, is performed frame-by-frame on a non-real-time basis, the play out of the layered video, and the input of user information, may be performed in real time with a frame rate equal to the frame rate of the original image source.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Processing Or Creating Images (AREA)

Abstract

According to the invention, a stream of images is acquired from a computer generated source or otherwise. Elements of the images are assigned to different layers on a frame-by-frame basis and layering data is held in an associated alpha channel. The layers are then rendered and texture mapped onto a respective one of a plurality of meshes. A portion of the mapped data is selected by a virtual camera which is controlled by a user to move around and through the meshes. The selected data includes data from one or more of the meshes and is displayed to the user.
PCT/IB2008/001375 2008-03-07 2008-03-07 Procédé et appareil de traitement d'image WO2009109804A1 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/IB2008/001375 WO2009109804A1 (fr) 2008-03-07 2008-03-07 Procédé et appareil de traitement d'image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/IB2008/001375 WO2009109804A1 (fr) 2008-03-07 2008-03-07 Procédé et appareil de traitement d'image

Publications (1)

Publication Number Publication Date
WO2009109804A1 true WO2009109804A1 (fr) 2009-09-11

Family

ID=40091889

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IB2008/001375 WO2009109804A1 (fr) 2008-03-07 2008-03-07 Procédé et appareil de traitement d'image

Country Status (1)

Country Link
WO (1) WO2009109804A1 (fr)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6266068B1 (en) * 1998-03-13 2001-07-24 Compaq Computer Corporation Multi-layer image-based rendering for video synthesis
EP1347656A1 (fr) * 2000-12-15 2003-09-24 Sony Corporation Processeur d'images, methode de production d'un signal d'image, support d'enregistrement d'informations, et programme de traitement d'images

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
HORRY Y ET AL: "TOUR INTO THE PICTURE: USING A SPIDERY MESH INTERFACE TO MAKE ANIMATION FROM A SINGLE IMAGE", COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH 97. LOS ANGELES, AUG. 3 - 8, 1997; [COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH], READING, ADDISON WESLEY, US, 3 August 1997 (1997-08-03), pages 225 - 232, XP000765820, ISBN: 978-0-201-32220-0 *
KANG S B: "A Survey of Image-based Rendering Techniques", INTERNET CITATION, August 1997 (1997-08-01), XP002508149, Retrieved from the Internet <URL:http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-97-4.html> [retrieved on 20081211] *
POSE R ED - CALDER P ET AL: "Steerable interactive television: virtual reality technology changes user interfaces of viewers and of program producers", USER INTERFACE CONFERENCE, 2001. AUIC 2001. PROCEEDINGS. SECOND AUSTRA LASIAN GOLD COAST, QLD., AUSTRALIA 29 JAN.-1 FEB. 2001, LOS ALAMITOS, CA, USA,IEEE COMPUT. SOC, US, 29 January 2001 (2001-01-29), pages 77 - 84, XP010534525, ISBN: 978-0-7695-0969-3 *
REHG J M ET AL: "Video Editing Using Figure Tracking and Image-Based Rendering", INTERNET CITATION, December 1999 (1999-12-01), XP002261649, Retrieved from the Internet <URL:http://www.hpl.hp.com/techreports/Compaq-DEC/CRL-99-8.pdf> [retrieved on 20031114] *
SHADE J ET AL: "Layered depth images", COMPUTER GRAPHICS. SIGGRAPH 98 CONFERENCE PROCEEDINGS. ORLANDO, FL, JULY 19- - 24, 1998; [COMPUTER GRAPHICS PROCEEDINGS. SIGGRAPH], NEW YORK, NY : ACM, US, 19 July 1998 (1998-07-19), pages 231 - 242, XP002270434, ISBN: 978-0-89791-999-9 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2413104A1 (fr) * 2010-07-30 2012-02-01 Pantech Co., Ltd. Appareil et procédé pour la fourniture d'une carte routière
CN102420936A (zh) * 2010-07-30 2012-04-18 株式会社泛泰 用于提供道路视图的装置和方法
CN102420936B (zh) * 2010-07-30 2014-10-22 株式会社泛泰 用于提供道路视图的装置和方法
US10216381B2 (en) 2012-12-25 2019-02-26 Nokia Technologies Oy Image capture
US11184599B2 (en) 2017-03-15 2021-11-23 Pcms Holdings, Inc. Enabling motion parallax with multilayer 360-degree video
US11711504B2 (en) 2017-03-15 2023-07-25 Interdigital Vc Holdings, Inc. Enabling motion parallax with multilayer 360-degree video
WO2020008284A1 (fr) * 2018-07-03 2020-01-09 Sony Corporation Génération d'un contenu multimédia de réalité virtuelle dans une structure à couches multiples sur la base d'une profondeur de champ

Similar Documents

Publication Publication Date Title
US11575876B2 (en) Stereo viewing
US20170280133A1 (en) Stereo image recording and playback
CN108141578B (zh) 呈现相机
US10115227B2 (en) Digital video rendering
WO2012140397A2 (fr) Système d&#39;affichage tridimensionnel
WO2009109804A1 (fr) Procédé et appareil de traitement d&#39;image
JP7479386B2 (ja) シーンを表す画像信号
WO2018109265A1 (fr) Procédé et équipement technique de codage de contenu de média
US10110876B1 (en) System and method for displaying images in 3-D stereo
KR20080007451A (ko) 깊이 착시 디지털 이미지화
Cohen et al. A multiuser multiperspective stereographic QTVR browser complemented by java3D visualizer and emulator
WO2017141139A1 (fr) Procédé de transformation d&#39;image
JP2022517499A (ja) 画像特性画素構造の生成および処理
Kuchelmeister Universal capture through stereographic multi-perspective recording and scene reconstruction

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08751068

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08751068

Country of ref document: EP

Kind code of ref document: A1