EP3520417A1 - Methods, devices and stream providing an indication of the mapping of omnidirectional images - Google Patents
Methods, devices and stream providing an indication of the mapping of omnidirectional images
- Publication number
- EP3520417A1 (application EP17777041.9A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- mapping
- video
- indication
- image
- omnidirectional
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/172—Processing image signals image signals comprising non-image signal components, e.g. headers or format information
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/10—Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/12—Panospheric to cylindrical image transformations
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/16—Spatio-temporal transformations, e.g. video cubism
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/161—Encoding, multiplexing or demultiplexing different image signal components
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/194—Transmission of image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/46—Embedding additional information in the video signal during the compression process
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/50—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
- H04N19/597—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N19/00—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
- H04N19/70—Methods or arrangements for coding, decoding, compressing or decompressing digital video signals characterised by syntax aspects related to video coding, e.g. related to compression standards
Definitions
- the present disclosure relates to the domain of encoding immersive videos, for example when such immersive videos are processed in a system for virtual reality, augmented reality or augmented virtuality, and for instance when displayed in a head-mounted display device.
- the purpose of the present disclosure is to overcome the problem of providing the decoding system or the rendering system with a set of information that describes properties of the immersive video.
- the present disclosure relates to signaling syntax and semantics adapted to provide mapping properties of an omnidirectional video into a rectangular two-dimensional frame to the decoding and rendering application.
- a decoding method comprises decoding an image of a video, the video being a 2D video into which an omnidirectional video is mapped; and decoding an indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- the indication is used in decoding of the video image itself or in the immersive rendering of the decoded image.
- the indication is encoded as a supplemental enhancement information message, or as sequence-level header information, or as image-level header information.
- the indication further comprises a second item representative of the orientation of the mapping surface in the 3D space.
- the indication further comprises a third item representative of the density of the pixels mapped on the surface.
- the indication further comprises a fourth item representative of the layout of the mapping surface into the image.
- the indication further comprises a fifth item representative of a generic mapping comprising for each pixel of the video image to encode, spherical coordinates of the corresponding pixel into the omnidirectional video.
- the indication further comprises a sixth item representative of a generic mapping comprising for each sampled pixel of a sphere into the omnidirectional video, 2D coordinates of the pixel on the video image.
- the indication further comprises a seventh item representative of an intermediate sampling space, of a first generic mapping comprising, for each sampled pixel of a sphere into the omnidirectional video, coordinates of the pixel in the intermediate sampling space; and of a second generic mapping comprising, for each sampled pixel in the intermediate space, 2D coordinates of the pixel on the video image.
- a video encoding method comprises encoding an image of a video, the video being a 2D video into which an omnidirectional video is mapped; and encoding an indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- a video transmitting method comprises transmitting an encoded image of a video, the video being a 2D video into which an omnidirectional video is mapped; and transmitting an encoded indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- an apparatus comprises a decoder for decoding an image of a video, the video being a 2D video into which an omnidirectional video is mapped; and for decoding an indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- an apparatus comprises an encoder for encoding an image of a video, the video being a 2D video into which an omnidirectional video is mapped; and encoding an indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- an apparatus comprises an interface for transmitting an encoded image of a video, the video being a 2D video into which an omnidirectional video is mapped; and transmitting an encoded indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- a video signal data comprises an encoded image of a video, the video being a 2D video into which an omnidirectional video is mapped; and an encoded indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- a processor readable medium that has stored therein a video signal data that comprises an encoded image of a video, the video being a 2D video into which an omnidirectional video is mapped; and an encoded indication of the mapping of the omnidirectional video into the 2D video, the indication comprising a first item representative of the type of surface used for the mapping belonging to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- a computer program product comprising program code instructions to execute the steps of any of the disclosed methods (decoding, encoding, rendering or transmitting) when this program is executed on a computer is disclosed.
- a non-transitory program storage device, readable by a computer and tangibly embodying a program of instructions executable by the computer to perform any of the disclosed methods (decoding, encoding, rendering or transmitting), is also disclosed.
- While not explicitly described, the present embodiments and characteristics may be employed in any combination or sub-combination.
- the present principles are not limited to the described mapping syntax elements, and any syntax elements consistent with the disclosed mapping techniques can be used.
- any characteristic or embodiment described for the decoding method is compatible with the other disclosed methods (decoding, encoding, rendering or transmitting), with a device intended to process the disclosed methods and with a computer-readable storage medium storing program instructions.
- FIG. 1 represents a functional overview of an encoding and decoding system according to an example environment of the embodiments of the disclosure
- FIG. 2 to 6 represent a first embodiment of a system according to particular embodiments of the present principles
- Figures 10 to 12 represent a first embodiment of an immersive video rendering device according to particular embodiments of the present principles
- FIG. 13 illustrates an example of mapping an omnidirectional video on a frame according to two different mapping functions of the present disclosure
- Figure 14 illustrates an example of possible layout of the equi-rectangular mapping according to the present disclosure
- Figure 15 illustrates two examples of possible layout of the faces of a cube mapping according to the present disclosure
- Figure 16 illustrates two examples of possible layout of the faces of a pyramidal mapping according to the present disclosure
- Figure 17 illustrates the processing of a point in the frame F to the local rendering frame of P in case of a generic mapping
- Figure 18 illustrates forward and backward transform between the 2D Cartesian coordinate system of the coded frame F and the Polar coordinates system used to parametrize the sphere S in 3D space according to the present principles
- Figure 19 diagrammatically illustrates a method of encoding an image and transmitting an encoded image according to a particular embodiment of the present principles
- Figure 20 diagrammatically illustrates a method of decoding an image according to a particular embodiment of the present principles
- Figure 21 diagrammatically illustrates a method of rendering an image according to a particular embodiment of the present principles
- Figure 22 illustrates a particular embodiment of the data structure of a bit stream 220.
- Figure 23 shows a hardware embodiment of an apparatus configured to implement methods described in relation with figures 19, 20 or 21 according to a particular embodiment of the present principles.
- a large field-of-view content may be, among others, a three-dimension computer graphic imagery scene (3D CGI scene), a point cloud or an immersive video.
- immersive videos are designated by various names such as, for example, Virtual Reality (VR), 360°, panoramic, 4π steradians, immersive, omnidirectional or large field-of-view videos.
- in order to encode such content with a traditional video codec such as HEVC or H.264/AVC, each picture of the omnidirectional video is thus first projected on one or more 2D pictures (two-dimensional arrays of pixels, i.e. of elements of color information), for example one or more rectangular pictures, using a suitable projection function.
- a picture from the omnidirectional video is represented as a 3D surface.
- for the mapping or projection, a convex and simple surface such as a sphere, a cube or a pyramid is usually used.
- the 2D video comprising the projected 2D pictures representative of the omnidirectional video are then coded using a traditional video codec. Such operation resulting in establishing a correspondence between a pixel of the 3D surface and a pixel of the 2D picture is also called mapping of the omnidirectional video to a 2D video.
- the terms mapping and projection, and their derivatives (projection function or mapping function, projection format or mapping surface), are used interchangeably hereafter.
- Figure 13 shows an example of projecting a frame of an omnidirectional video mapped on a surface represented as a sphere (130) onto one rectangular picture (131) using an equirectangular projection, and another example where the surface is represented as a cube (132) mapped onto six faces gathered in another rectangular picture (133).
- the projected rectangular picture of the surface can then be coded using conventional video coding standards such as HEVC, H.264/AVC, etc...
- Pixels may be encoded according to a mapping function in the frame.
- the mapping function may depend on the mapping surface.
- several mapping functions are possible.
- the faces of a cube may be structured according to different layouts within the frame surface.
- a sphere may be mapped according to an equirectangular projection or to a gnomonic projection for example.
- the organization of pixels resulting from the selected projection function modifies or breaks line continuity, the orthonormal local frame and pixel densities, and introduces periodicity in time and space. These are typical features that are used to encode and decode videos. Existing encoding and decoding methods do not take the specificities of immersive videos into account.
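- To make the relation between the sphere and the 2D frame concrete, the following minimal Python sketch implements a plain equirectangular mapping and its inverse; the function names, angle ranges and pixel conventions are illustrative assumptions and not syntax defined by the present disclosure.

```python
import math

def sphere_to_frame(phi, theta, width, height):
    """Equirectangular mapping: longitude phi in [-pi, pi] and latitude
    theta in [-pi/2, pi/2] mapped to pixel coordinates (x, y) in a
    width x height frame."""
    x = (phi + math.pi) / (2.0 * math.pi) * width
    y = (math.pi / 2.0 - theta) / math.pi * height
    return x, y

def frame_to_sphere(x, y, width, height):
    """Inverse (de-mapping): pixel coordinates back to polar coordinates."""
    phi = x / width * 2.0 * math.pi - math.pi
    theta = math.pi / 2.0 - y / height * math.pi
    return phi, theta

# The front direction (phi = 0, theta = 0) lands at the centre of the frame.
print(sphere_to_frame(0.0, 0.0, 3840, 1920))  # (1920.0, 960.0)
```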
- FIG. 1 illustrates a general overview of an encoding and decoding system according to an example embodiment.
- the system of figure 1 is a functional system.
- a pre-processing module 300 may prepare the content for encoding by the encoding device 400.
- the pre-processing module 300 may perform multi-image acquisition, merging of the acquired multiple images in a common space (typically a 3D sphere if we encode the directions), and mapping of the 3D sphere into a 2D frame using, for example, but not limited to, an equirectangular mapping or a cube mapping.
- the pre-processing module 300 may also accept an omnidirectional video in a particular format (for example, equirectangular) as input, and pre-processes the video to change the mapping into a format more suitable for encoding.
- the pre-processing module 300 may perform a mapping space change.
- the encoding device 400 and the encoding method will be described with respect to other figures of the specification.
- the data, which may be encoded immersive video data or 3D CGI encoded data for instance, are sent to a network interface 500, which can typically be implemented in a gateway for instance.
- the data are then transmitted through a communication network, such as internet but any other network can be foreseen.
- Network interface 600 can be implemented in a gateway, in a television, in a set-top box, in a head mounted display device, in an immersive (projective) wall or in any immersive video rendering device.
- the data are sent to a decoding device 700.
- Decoding function is one of the processing functions described in the following figures 2 to 12. Decoded data are then processed by a player 800.
- Player 800 prepares the data for the rendering device 900 and may receive external data from sensors or users input data. More precisely, the player 800 prepares the part of the video content that is going to be displayed by the rendering device 900.
- the decoding device 700 and the player 800 may be integrated in a single device (e.g., a smartphone, a game console, a STB, a tablet, a computer, etc.). In a variant, the player 800 is integrated in the rendering device 900.
- a first system, for processing augmented reality, virtual reality, or augmented virtuality content is illustrated in figures 2 to 6.
- Such a system comprises processing functions, an immersive video rendering device which may be a head-mounted display (HMD), a tablet or a smartphone for example and may comprise sensors.
- the immersive video rendering device may also comprise additional interface modules between the display device and the processing functions.
- the processing functions can be performed by one or several devices. They can be integrated into the immersive video rendering device or they can be integrated into one or several processing devices.
- the processing device comprises one or several processors and a communication interface with the immersive video rendering device, such as a wireless or wired communication interface.
- the processing device can also comprise a second communication interface with a wide access network such as internet and access content located on a cloud, directly or through a network device such as a home or a local gateway.
- the processing device can also access a local storage through a third interface such as a local access network interface of Ethernet type.
- the processing device may be a computer system having one or several processing units.
- it may be a smartphone which can be connected through wired or wireless links to the immersive video rendering device, or which can be inserted in a housing in the immersive video rendering device and communicate with it through a connector or wirelessly.
- Communication interfaces of the processing device are wireline interfaces (for example a bus interface, a wide area network interface, a local area network interface) or wireless interfaces (such as an IEEE 802.11 interface or a Bluetooth® interface).
- the immersive video rendering device can be provided with an interface to a network directly or through a gateway to receive and/or transmit content.
- the system comprises an auxiliary device which communicates with the immersive video rendering device and with the processing device.
- this auxiliary device can contain at least one of the processing functions.
- the immersive video rendering device may comprise one or several displays.
- the device may employ optics such as lenses in front of each of its displays.
- the display can also be a part of the immersive display device like in the case of smartphones or tablets.
- displays and optics may be embedded in a helmet, in glasses, or in a visor that a user can wear.
- the immersive video rendering device may also integrate several sensors, as described later on.
- the immersive video rendering device can also comprise several interfaces or connectors. It might comprise one or several wireless modules in order to communicate with sensors, processing functions, handheld or other body parts related devices or sensors.
- the immersive video rendering device can also comprise processing functions executed by one or several processors and configured to decode content or to process content.
- by processing content, it is understood all functions needed to prepare content that can be displayed. This may comprise, for instance, decoding content, merging content before displaying it, and modifying the content to fit the display device.
- One function of an immersive content rendering device is to control a virtual camera which captures at least a part of the content structured as a virtual volume.
- the system may comprise pose tracking sensors which totally or partially track the user's pose, for example, the pose of the user's head, in order to process the pose of the virtual camera. Some positioning sensors may track the displacement of the user.
- the system may also comprise other sensors related to environment for example to measure lighting, temperature or sound conditions.
- Such sensors may also be related to the users' bodies, for instance, to measure sweating or heart rate. Information acquired through these sensors may be used to process the content.
- the system may also comprise user input devices (e.g. a mouse, a keyboard, a remote control, a joystick). Information from user input devices may be used to process the content, manage user interfaces or to control the pose of the virtual camera. Sensors and user input devices communicate with the processing device and/or with the immersive rendering device through wired or wireless communication interfaces.
- Figure 2 illustrates a particular embodiment of a system configured to decode, process and render immersive videos.
- the system comprises an immersive video rendering device 10, sensors 20, user inputs devices 30, a computer 40 and a gateway 50 (optional).
- the immersive video rendering device 10, illustrated on Figure 10, comprises a display 101.
- the display is, for example of OLED or LCD type.
- the immersive video rendering device 10 is, for instance a HMD, a tablet or a smartphone.
- the device 10 may comprise a touch surface 102 (e.g. a touchpad or a tactile screen), a camera 103, a memory 105 in connection with at least one processor 104 and at least one communication interface 106.
- the at least one processor 104 processes the signals received from the sensors 20. Some of the measurements from sensors are used to compute the pose of the device and to control the virtual camera. Sensors used for pose estimation are, for instance, gyroscopes, accelerometers or compasses. More complex systems, for example using a rig of cameras may also be used.
- the at least one processor performs image processing to estimate the pose of the device 10. Some other measurements are used to process the content according to environment conditions or user's reactions. Sensors used for observing environment and users are, for instance, microphones, light sensor or contact sensors. More complex systems may also be used like, for example, a video camera tracking user's eyes. In this case the at least one processor performs image processing to operate the expected measurement. Data from sensors 20 and user input devices 30 can also be transmitted to the computer 40 which will process the data according to the input of these sensors.
- Memory 105 includes parameters and code program instructions for the processor 104. Memory 105 can also comprise parameters received from the sensors 20 and user input devices 30.
- Communication interface 106 enables the immersive video rendering device to communicate with the computer 40.
- the communication interface 106 of the processing device is a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth® interface).
- Computer 40 sends data and optionally control commands to the immersive video rendering device 10.
- the computer 40 is in charge of processing the data, i.e. prepare them for display by the immersive video rendering device 10. Processing can be done exclusively by the computer 40 or part of the processing can be done by the computer and part by the immersive video rendering device 10.
- the computer 40 is connected to internet, either directly or through a gateway or network interface 50.
- the computer 40 receives data representative of an immersive video from the internet, processes these data (e.g. decodes them and possibly prepares the part of the video content that is going to be displayed by the immersive video rendering device 10) and sends the processed data to the immersive video rendering device 10 for display.
- the system may also comprise local storage (not represented) where the data representative of an immersive video are stored, said local storage can be on the computer 40 or on a local server accessible through a local area network for instance (not represented).
- Figure 3 represents a second embodiment.
- a STB 90 is connected to a network such as internet directly (i.e. the STB 90 comprises a network interface) or via a gateway 50.
- the STB 90 is connected through a wireless interface or through a wired interface to rendering devices such as a television set 100 or an immersive video rendering device 200.
- STB 90 comprises processing functions to process video content for rendering on the television 100 or on any immersive video rendering device 200. These processing functions are the same as the ones that are described for computer 40 and are not described again here.
- Sensors 20 and user input devices 30 are also of the same type as the ones described earlier with regards to Figure 2.
- the STB 90 obtains the data representative of the immersive video from the internet.
- the STB 90 obtains the data representative of the immersive video from a local storage (not represented) where the data representative of the immersive video are stored.
- Figure 4 represents a third embodiment related to the one represented in Figure 2.
- the game console 60 processes the content data.
- Game console 60 sends data and optionally control commands to the immersive video rendering device 10.
- the game console 60 is configured to process data representative of an immersive video and to send the processed data to the immersive video rendering device 10 for display. Processing can be done exclusively by the game console 60 or part of the processing can be done by the immersive video rendering device 10.
- the game console 60 is connected to internet, either directly or through a gateway or network interface 50.
- the game console 60 obtains the data representative of the immersive video from the internet.
- the game console 60 obtains the data representative of the immersive video from a local storage (not represented) where the data representative of the immersive video are stored, said local storage can be on the game console 60 or on a local server accessible through a local area network for instance (not represented).
- the game console 60 receives data representative of an immersive video from the internet, processes these data (e.g. decodes them and possibly prepares the part of the video that is going to be displayed) and sends the processed data to the immersive video rendering device 10 for display.
- the game console 60 may receive data from sensors 20 and user input devices 30 and may use them to process the data representative of an immersive video obtained from the internet or from the local storage.
- Figure 5 represents a fourth embodiment of said first type of system where the immersive video rendering device 70 is formed by a smartphone 701 inserted in a housing 705.
- the smartphone 701 may be connected to internet and thus may obtain data representative of an immersive video from the internet.
- the smartphone 701 obtains data representative of an immersive video from a local storage (not represented) where the data representative of an immersive video are stored, said local storage can be on the smartphone 701 or on a local server accessible through a local area network for instance (not represented).
- Immersive video rendering device 70 is described with reference to Figure 11, which gives a preferred embodiment of immersive video rendering device 70. It optionally comprises at least one network interface 702 and the housing 705 for the smartphone 701.
- the smartphone 701 comprises all functions of a smartphone and a display. The display of the smartphone is used as the immersive video rendering device 70 display. Therefore no display other than the one of the smartphone 701 is included. However, optics 704, such as lenses, are included for seeing the data on the smartphone display.
- the smartphone 701 is configured to process (e.g. decode and prepare for display) data representative of an immersive video possibly according to data received from the sensors 20 and from user input devices 30. Some of the measurements from sensors are used to compute the pose of the device and to control the virtual camera.
- Sensors used for pose estimation are, for instance, gyroscopes, accelerometers or compasses. More complex systems, for example using a rig of cameras may also be used. In this case, the at least one processor performs image processing to estimate the pose of the device 10. Some other measurements are used to process the content according to environment conditions or user's reactions. Sensors used for observing environment and users are, for instance, microphones, light sensor or contact sensors. More complex systems may also be used like, for example, a video camera tracking user's eyes. In this case the at least one processor performs image processing to operate the expected measurement.
- FIG 6 represents a fifth embodiment of said first type of system in which the immersive video rendering device 80 comprises all functionalities for processing and displaying the data content.
- the system comprises an immersive video rendering device 80, sensors 20 and user input devices 30.
- the immersive video rendering device 80 is configured to process (e.g. decode and prepare for display) data representative of an immersive video possibly according to data received from the sensors 20 and from the user input devices 30.
- the immersive video rendering device 80 may be connected to internet and thus may obtain data representative of an immersive video from the internet.
- the immersive video rendering device 80 obtains data representative of an immersive video from a local storage (not represented) where the data representative of an immersive video are stored, said local storage can be on the rendering device 80 or on a local server accessible through a local area network for instance (not represented).
- the immersive video rendering device 80 is illustrated on Figure 12.
- the immersive video rendering device comprises a display 801 .
- the display can be, for example, of OLED or LCD type. The device further comprises a touchpad (optional) 802, a camera (optional) 803, a memory 805 in connection with at least one processor 804 and at least one communication interface 806.
- Memory 805 comprises parameters and code program instructions for the processor 804.
- Memory 805 can also comprise parameters received from the sensors 20 and user input devices 30.
- Memory can also be large enough to store the data representative of the immersive video content. For this several types of memories can exist and memory 805 can be a single memory or can be several types of storage (SD card, hard disk, volatile or non-volatile memory...)
- Communication interface 806 enables the immersive video rendering device to communicate with internet network.
- the processor 804 processes data representative of the video in order to display them on display 801.
- the camera 803 captures images of the environment for an image processing step. Data are extracted from this step in order to control the immersive video rendering device.
- a second system, for processing augmented reality, virtual reality, or augmented virtuality content is illustrated in figures 7 to 9.
- Such a system comprises an immersive wall.
- Figure 7 represents a system of the second type. It comprises a display 1000 which is an immersive (projective) wall which receives data from a computer 4000.
- the computer 4000 may receive immersive video data from the internet.
- the computer 4000 is usually connected to internet, either directly or through a gateway 5000 or network interface.
- the immersive video data are obtained by the computer 4000 from a local storage (not represented) where the data representative of an immersive video are stored, said local storage can be in the computer 4000 or in a local server accessible through a local area network for instance (not represented).
- This system may also comprise sensors 2000 and user input devices 3000.
- the immersive wall 1000 can be of OLED or LCD type. It can be equipped with one or several cameras.
- the immersive wall 1000 may process data received from the sensor 2000 (or the plurality of sensors 2000).
- the data received from the sensors 2000 may be related to lighting conditions, temperature, environment of the user, e.g. position of objects.
- the immersive wall 1000 may also process data received from the user inputs devices 3000.
- the user input devices 3000 send data such as haptic signals in order to give feedback on the user's emotions.
- Examples of user input devices 3000 are handheld devices such as smartphones, remote controls, and devices with gyroscope functions.
- Sensors 2000 and user input devices 3000 data may also be transmitted to the computer 4000.
- the computer 4000 may process the video data (e.g. decoding them and preparing them for display) according to the data received from these sensors/user input devices.
- the sensor signals can be received through a communication interface of the immersive wall.
- This communication interface can be of Bluetooth type, of WIFI type or any other type of connection, preferentially wireless but can also be a wired connection.
- Computer 4000 sends the processed data and optionally control commands to the immersive wall 1000.
- the computer 4000 is configured to process the data, i.e. preparing them for display, to be displayed by the immersive wall 1000. Processing can be done exclusively by the computer 4000 or part of the processing can be done by the computer 4000 and part by the immersive wall 1000.
- Figure 8 represents another system of the second type. It comprises an immersive (projective) wall 6000 which is configured to process (e.g. decode and prepare data for display) and display the video content. It further comprises sensors 2000 and user input devices 3000.
- the immersive wall 6000 receives immersive video data from the internet through a gateway 5000 or directly from internet.
- the immersive video data are obtained by the immersive wall 6000 from a local storage (not represented) where the data representative of an immersive video are stored, said local storage can be in the immersive wall 6000 or in a local server accessible through a local area network for instance (not represented).
- This system may also comprise sensors 2000 and user input devices 3000.
- the immersive wall 6000 can be of OLED or LCD type. It can be equipped with one or several cameras.
- the immersive wall 6000 may process data received from the sensor 2000 (or the plurality of sensors 2000).
- the data received from the sensors 2000 may be related to lighting conditions, temperature, environment of the user, e.g. position of objects.
- the immersive wall 6000 may also process data received from the user inputs devices 3000.
- the user input devices 3000 send data such as haptic signals in order to give feedback on the user's emotions.
- Examples of user input devices 3000 are handheld devices such as smartphones, remote controls, and devices with gyroscope functions.
- the immersive wall 6000 may process the video data (e.g. decoding them and preparing them for display) according to the data received from these sensors/user input devices.
- the sensor signals can be received through a communication interface of the immersive wall.
- This communication interface can be of Bluetooth type, of WIFI type or any other type of connection, preferentially wireless but can also be a wired connection.
- the immersive wall 6000 may comprise at least one communication interface to communicate with the sensors and with internet.
- FIG 9 illustrates a third embodiment where the immersive wall is used for gaming.
- One or several gaming consoles 7000 are connected, preferably through a wireless interface to the immersive wall 6000.
- the immersive wall 6000 receives immersive video data from the internet through a gateway 5000 or directly from internet.
- the immersive video data are obtained by the immersive wall 6000 from a local storage (not represented) where the data representative of an immersive video are stored, said local storage can be in the immersive wall 6000 or in a local server accessible through a local area network for instance (not represented).
- Gaming console 7000 sends instructions and user input parameters to the immersive wall 6000.
- Immersive wall 6000 processes the immersive video content possibly according to input data received from sensors 2000 and user input devices 3000 and gaming consoles 7000 in order to prepare the content for display.
- the immersive wall 6000 may also comprise internal memory to store the content to be displayed.
- methods and devices for decoding video images from a stream, the video being a two-dimensional video (2D video) into which an omnidirectional video (360° video or 3D video) is mapped, are disclosed.
- methods and devices for encoding video images in a stream, the video being a 2D video into which an omnidirectional video is mapped, are also disclosed.
- a stream comprising an indication (syntax elements) describing the mapping of an omnidirectional video into a two-dimensional video is also disclosed.
- Methods and devices for transmitting a stream including such indication are also disclosed.
- 3D-to-2D mapping indication inserted in a bit stream
- a stream comprises encoded data representative of a sequence of images (or video), wherein an image (or frame or picture) is a two-dimensional array of pixels into which an omnidirectional image is mapped.
- the 2D image is associated with an indication representative of the mapping of the omnidirectional video to a two-dimensional video.
- an indication is encoded with the stream. That indication comprises items, also called high-level syntax elements by those skilled in the art of compression, describing the way the coded video has been mapped from the 360° environment to the 2D coding environment. Specific embodiments for such syntax elements are described hereafter.
- the indication comprises a first item representative of the type of surface used for the mapping.
- the mapping belongs to a group comprising at least one of an equirectangular mapping, a cube mapping or a pyramid mapping.
- the indication thus allows both the decoding device and the immersive rendering device to determine a mapping function among a set of default mapping functions or pre-defined mapping functions by using a mapping identifier (mapping-ID).
- both the decoding device and the immersive rendering device know the type of projection used in the omnidirectional-to-2D mapping.
- the equirectangular mapping, the cube mapping and the pyramid mapping are well-known standard mapping functions from the 3D space to a planar space.
- a default mapping function is not limited to those well-known variants.
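- As a hedged illustration of how a decoder or renderer might dispatch on this first item, the sketch below defines a small mapping-ID enumeration; the numeric values are placeholders, since the actual identifiers of Table 2 are not reproduced in this excerpt.

```python
from enum import IntEnum

class MappingId(IntEnum):
    """Placeholder identifiers: the actual numeric values of Table 2 are
    not reproduced in this excerpt."""
    EQUIRECTANGULAR = 0
    CUBE = 1
    PYRAMID = 2

def select_default_demapping(mapping_id):
    """Return a label for the default de-mapping agreed between the encoder
    and the decoding/rendering side for the signalled mapping-ID."""
    defaults = {
        MappingId.EQUIRECTANGULAR: "equirectangular de-mapping",
        MappingId.CUBE: "cube de-mapping with default layout 134",
        MappingId.PYRAMID: "pyramid de-mapping with default layout 135",
    }
    return defaults[MappingId(mapping_id)]
```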
- Figure 13 shows an example of mapping an omnidirectional video on a frame according to two different mapping functions.
- a 3D scene, here a hotel hall, is projected on a spherical mapping surface 130.
- a front direction is selected for mapping the surface on a frame.
- the front direction may correspond to the part of the content displayed in front of the user when rendering on an immersive video rendering device as described on figures 2 to 12.
- the front direction is facing the window with an 'A' printed on it.
- a revolving door with a 'B' printed on it stands on the left of the front direction.
- the pre-processing module of figure 1 performs a mapping of the projection 130 in a frame. Different mapping functions may be used leading to different frames.
- the pre-processing module 300 generates a sequence of frames 131 according to an equirectangular mapping function applied to the sphere 130.
- the pre-processing module 300 performs a mapping space change, transforming the sphere 130 into a cube 132 before mapping the cube 132 on a frame 133 according to a cube layout 134.
- the example cube layout of figure 13 divides the frame into six sections made of two rows of three squares. On the top row lie the left, front and right faces of the cube; on the bottom row lie the top, back and bottom faces of the cube with a 90° rotation. Continuity is ensured in each row. The numbers on the representation of the cube layout 134 represent the cube edges' connections.
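- A minimal sketch of such a 3x2 default layout is given below; the exact face order and the direction of the 90° rotation on the bottom row are assumptions made for illustration, as the figure only fixes them pictorially.

```python
# Default 3x2 cube layout sketch (layout 134). Face order and rotation
# direction are illustrative assumptions.
CUBE_LAYOUT_3X2 = {
    # face: (column, row, rotated_90)
    "left":   (0, 0, False),
    "front":  (1, 0, False),
    "right":  (2, 0, False),
    "top":    (0, 1, True),
    "back":   (1, 1, True),
    "bottom": (2, 1, True),
}

def face_rectangle(face, frame_width, frame_height):
    """Return the pixel rectangle (x0, y0, w, h) occupied by a cube face."""
    col, row, _rotated = CUBE_LAYOUT_3X2[face]
    w, h = frame_width // 3, frame_height // 2
    return col * w, row * h, w, h
```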
- the pre-processing module 300 performs a mapping space change, transforming the sphere 130 into a pyramid before mapping the pyramid on a frame according to a pyramid layout 135.
- different layouts can be used for any of mapping functions as illustrated in figures 14, 15 or 16.
- space associated with a projection surface
- a default mapping includes an indication of both the surface used in the projection and a default layout 134, 135 used by the projection, i.e. any indication needed for mapping the 2D frame back into the 3D space for immersive rendering.
- the respective default mappings presented in figure 13 are non-limiting examples of default mappings. Any mapping, defined as default by a convention between encoding and decoding/rendering, is compatible with the present principles.
- a first item is defined that corresponds to the identifier of the default omnidirectional-to-2D mapping (360_mapping_id) being used to generate the coded data.
- a mapping-ID field is inserted into the stream comprising encoded data representative of the sequence of images in a mapping information message.
- Table 2: exemplary mapping IDs used to identify pre-defined 360° video mapping methods.
- the proposed mapping information message is encoded within a dedicated SEI message.
- the SEI message being a Supplemental Enhancement Information according to ITU-T H.265 TELECOMMUNICATION STANDARDIZATION SECTOR OF ITU (10/2014), SERIES H: AUDIOVISUAL AND MULTIMEDIA SYSTEMS, Infrastructure of audiovisual services - Coding of moving video, High efficiency video coding, Recommendation ITU-T H.265, hereinafter "HEVC".
- This characteristic is well adapted to delivery to an immersive rendering device, wherein the mapping information is used as side information outside the video codec.
- the proposed mapping information message is encoded in a sequence-level header information, like the Sequence Parameter Set specified in HEVC.
- the proposed mapping information message is encoded in a picture-level header information, like the Picture Parameter Set specified in HEVC.
- the second and third characteristics are more adapted to be delivered to decoding device where information is extracted by the decoder from the coded data.
- some normative decoding tool that exploits features (such as geometric distortion, periodicity or discontinuities between 2 adjacent pixels depending on the frame layout) of the considered mapping can be used by the decoder in that case.
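- For illustration only, the toy serialization below packs the first two items of the mapping indication into a byte payload; the field widths and ordering are assumptions and do not reproduce the syntax of Table 3 or the HEVC SEI message format.

```python
import struct

def build_mapping_information_payload(mapping_id, default_mapping_flag):
    """Toy serialization of the first two items of the mapping indication
    (mapping identifier + default-mapping flag). Field widths and ordering
    are illustrative assumptions, not the syntax of Table 3 or of an HEVC
    SEI message."""
    return struct.pack(">BB", mapping_id & 0xFF, 1 if default_mapping_flag else 0)

# The same payload could equally be attached as an SEI message or carried in
# sequence-level (SPS-like) or picture-level (PPS-like) header information.
payload = build_mapping_information_payload(mapping_id=0, default_mapping_flag=True)
```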
- the indication comprises additional items that describe more precisely how the omnidirectional to 2D picture mapping is arranged.
- Those embodiments are particularly well adapted in case where default mappings are not defined or in case the defined default mappings are not used. This may be the case for improved compression efficiency purposes for example.
- the mapping is different from a default mapping because the surface of projection is different, because the front point of projection is different leading to a different orientation in the 3D space, or because the layout on the 2D frame is different.
- the indication further comprises a second item representative of the orientation of the mapping surface in the 3D space.
- the second item consists of two angle parameters (phi_0, theta_0), which are used to specify the 3D space coordinate system in which the mapping surfaces are described.
- the orientation is given with respect to the front point of projection (according to the front direction A of figure 13), corresponding to a point where the projection surface is tangent to the sphere of the 3D space.
- the parameters are used in an immersive rendering system as described with figures 2 to 12.
- the parameters are followed by the identifier (360_mapping_id) of the omnidirectional-to-2D mapping, which indicates which type of 3D to 2D surface is used so as to carry further items representative of different variants of an equirectangular mapping, a cube mapping or a pyramid mapping.
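- The sketch below suggests one way a renderer might apply the orientation parameters before de-mapping; treating phi_0 and theta_0 as simple longitude/latitude offsets is a simplification made for illustration, since a general orientation amounts to a 3D rotation of the sphere's coordinate system.

```python
import math

def apply_orientation(phi, theta, phi_0, theta_0):
    """Shift the front direction of the mapping surface by (phi_0, theta_0).
    Treating the two angles as plain longitude/latitude offsets is an
    illustrative assumption, not the normative interpretation."""
    phi_r = ((phi + phi_0 + math.pi) % (2.0 * math.pi)) - math.pi   # wrap to [-pi, pi)
    theta_r = max(-math.pi / 2.0, min(math.pi / 2.0, theta + theta_0))  # clamp latitude
    return phi_r, theta_r
```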
- the identifier (360_mapping_id) of the omnidirectional-to-2D mapping only specifies the type of surface used in the projection and does not refer to other specificities of the pre-defined default mapping, which then need to be detailed. Indeed, another binary value (default_equirectangular_mapping_flag, or default_cube_mapping_flag) is used to determine whether the mapping is the default one (1) or not (0).
- the indication comprises, in addition to the mapping identifier (360_mapping_id), a binary value (or flag) representative of the usage of the corresponding default mapping.
- a binary value indicates if the default mode is used (1), wherein the default equirectangular mapping is assumed to be the one introduced with respect to figure 13. If so, no further item of mapping indication is provided. If a non-default equirectangular mapping is used (0), then additional items of mapping indication are provided to more fully specify the equirectangular mapping. According to non-limiting variants, a binary value (equator_on_x_axis_flag) indicates whether the equator 136 is parallel to the x-axis of the mapped 2D picture or not.
- the layout of the equirectangular projection can be arranged along any of the 2D frame axis.
- some coordinates along the axis orthogonal to the equator are coded, in order to indicate the position of the poles (top_pole_coordinate_in_2D_picture, bottom_pole_coordinate_in_2D_picture) and of the equator (equator_coordinate_in_2D_picture) on this axis.
- the poles and the equator fall in locations different from those of the default equirectangular mapping.
- the indication further comprises a third item representative of the density of the pixels mapped on the surface (density_infomation_flag).
- the projection from a sphere to a frame results in non-uniform pixel density.
- a pixel in the frame F to encode does not always represent the same area on the surface S (i.e. the same area of the image during rendering). For instance, in the equirectangular mapping the pixel density is quite different between a pole 141 and the equator 142.
- This density information flag indicates if a density lookup-table is encoded in the considered mapping information message.
- this density information flag is followed by a series of coded density values, which respectively indicate the (normalized) pixel density for each line/column parallel to the mapped equator. This density information is helpful to allow the codec of a decoding device to select normative video coding tools adapted to equirectangular-mapped videos.
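- As a minimal sketch of where such per-line values could come from for an equirectangular mapping, the function below derives one value per row from the cosine of the latitude; the normalization convention (sphere area per pixel versus pixels per unit area) is not fixed by this excerpt, so cos(theta) is shown only as one plausible choice.

```python
import math

def equirectangular_row_density(height):
    """One (normalized) density value per frame line parallel to the equator.
    A row at latitude theta covers a circle whose circumference is
    proportional to cos(theta); the exact normalization used by the message
    is a convention of the syntax and is assumed here."""
    densities = []
    for row in range(height):
        theta = math.pi / 2.0 - (row + 0.5) / height * math.pi  # row centre latitude
        densities.append(math.cos(theta))
    return densities
```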
- the indication further comprises a fourth item representative of the layout of the mapping surface into the frame.
- This embodiment is particularly well adapted to cube mapping or pyramid mapping where the different faces of the cube or pyramid can be arranged in the encoded frame in various ways.
- this embodiment is also compatible with the equirectangular mapping in case, for instance, the equator would not be placed at the middle of the frame.
- a syntax element specifying the layout of the cube mapping may be included in the proposed mapping indication as illustrated by Table 3.
- Figure 15 illustrates a first and a second layout for a cube mapping as well as the 3D space representation of the cube used in the projection.
- each vertex's coordinates (coordinate_x, coordinate_y, coordinate_z) are indicated following a pre-fixed ordering (S0 - S7) of the vertices.
- Layout 1 or Layout 2 indicates the arrangement of the cube faces once the 3D surface is put on the same 2D plane.
- a layout identifier (cube_2D_layout_id) is used assuming that each possible layout is pre-defined and that each pre-defined layout is associated with a particular identifier corresponding for instance to layout 1 or layout 2 of figure 15.
- the layout may be explicitly signaled in the proposed mapping information message, as described later on with respect to tables 4 and 5.
- Explicitly signalling the cube mapping layout would consist in an ordered list of cube face identifiers, which describes how the cube's faces are arranged in the target 2D plane. For instance, in the case of layout 1 of figure 15, such an ordered list would take the form (3, 2, front, back, top, left, right, bottom), meaning that the faces are arranged according to a 3x2 array of faces and follow the face order of the ordered list.
- the variant of a binary value (default_cube_mapping_flag) indicating if a default mode with a default layout is used (1), wherein the default layout 134 is assumed to be the one introduced with respect to figure 13, is also compatible with the previous cube mapping embodiment. If so, no further item of mapping indication is provided. Otherwise, the above items explicitly describing the cube layout are included in the mapping indication.
- FIG. 16 illustrates a first and a second layout for pyramidal mapping as well as the 3D space representation of the pyramid.
- the coordinates of the pyramid's vertices in the 3D space are identified so as to indicate how the pyramid is oriented in the 3D space.
- the coordinates of each vertex of the base (base_x, base_y, base_z) are indicated following a pre-fixed ordering of the vertices (B0-B3), as well as the peak's coordinates (peak_x, peak_y, peak_z).
- a pyramid 2D layout identifier indicates the arrangement of the faces once the 3D surface is put on the same 2D plane.
- Two non-limitative typical 2D layouts issued from the pyramid mapping are illustrated on figure 16, and can be referred to through a value of the pyramid_2D_layout_id syntax element of Table 3, respectively associated with each possible 2D layout issued from the sphere-to-pyramid mapping.
- the proposed advanced mapping indication is illustrated by Table 3.
- Fragment of the Table 3 syntax: top_pole_coordinate_in_2D_picture u(v); bottom_pole_coordinate_in_2D_picture u(v); density_infomation_flag u(1); for( i = 0; i < nbDensityValues; i++ ) { ... }
- Table 3: proposed mapping information message with further information specifying how the mapping is performed.
- the layout of the cube mapping or pyramidal mapping is neither defined by default nor selected through its respective identifier; the indication then comprises a fifth item describing the layout of the mapping surface into the frame.
- a syntax element allowing an explicit description of the layout of the 3D-to-2D mapping may be included in the proposed mapping indication, as illustrated by Table 4.
- a binary value indicates if the default cubic layout mode is used (1), wherein the default cubic layouts are assumed to be the ones introduced with respect to figure 15. If so, no further item of mapping indication is provided and the cubic layout identifier may be used. If a non-default cubic layout is used (0), then additional items of mapping indication are provided to fully specify the layout. In a first optional variant, the size of a face (face_width, face_height) in the 2D frame is indicated.
- each face's position (face_pos_x, face_pos_y) is indicated following a pre-fixed ordering of the faces (1-6 as shown in table 5), giving the position of the face in the 2D frame.
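- The sketch below shows one way a renderer might interpret such an explicitly signalled layout; the mapping from the pre-fixed face index to a face name is an assumption for illustration, as Table 5 is not reproduced here.

```python
def read_explicit_cube_layout(face_width, face_height, face_positions):
    """Interpret an explicitly signalled cube layout.
    face_positions is an ordered list of (face_pos_x, face_pos_y) pairs,
    one per face, following the pre-fixed face ordering (1-6). The index
    to face-name correspondence below is an assumption for illustration."""
    assumed_order = ["front", "back", "left", "right", "top", "bottom"]
    layout = {}
    for face, (x, y) in zip(assumed_order, face_positions):
        layout[face] = {"x": x, "y": y, "w": face_width, "h": face_height}
    return layout
```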
- the proposed omnidirectional mapping indication comprises a generic syntax able to indicate any reversible transformation from the 3D sphere to the coded frame F.
- the previous embodiments are directed at handling most common omnidirectional-to-2D mappings wherein the projection uses a sphere, a cube or a pyramid.
- the generic case for omnidirectional video representation consists in establishing a correspondence between the 2D frame F and the 3D space associated with the immersive representation of the considered video data.
- This general concept is shown on figure 17, which illustrates the correspondence between the 2D frame F and a 3D surface S that may be defined in different ways.
- P is a point (x,y) in the coded 2D frame F.
- P' is a point on the 2D surface of acquisition, image of P.
- P' is the point expressed using polar coordinates on the sphere.
- in the case of a cube, 6 local parametrizations (one per face) are used.
- P3d is the point P' in the 3D space, belonging to the 3D surface of acquisition, expressed in a Cartesian coordinate system.
- P" is the point P3d projected on the local plane tangent to the surface at P3d.
- P" is at the center of the frame G.
- the 3D surface S is the sphere of figure 13.
- the sphere is naturally adapted to an omnidirectional content.
- the 3D surface S may be different from the sphere.
- the 3D surface S is the cube of figure 13. This variety of possible surfaces makes it complex to specify a generic, simple mapping representation syntax able to handle any 2D/3D mapping and de-mapping.
- the correspondence between any 2D frame F and the 3D sphere is indicated according to this embodiment so as to benefit from the properties of the 3D sphere.
- the mapping indication comprises a sixth item representative of the forward and backward transform between the 2D frame F (in Cartesian coordinates) and the 3D sphere in polar coordinates. This corresponds to the f and f⁻¹ functions illustrated on figure 18.
- a basic approach to provide this generic mapping item consists in coding a function from the 2D space of the coding frame F towards the 3D sphere.
- mapping and inverse mapping functions both go from a 2D space to another 2D space.
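- As a concrete worked example (the equirectangular mapping is used here purely for illustration and is not mandated by the present principles), both directions indeed map 2D to 2D, from Cartesian (x, y) in F to polar (phi, theta) on the sphere and back:

```python
import math

def forward_f(x, y, W, H):
    """Coding frame F (x, y) -> sphere polar coordinates (phi, theta).
    phi in [-pi, pi] (longitude), theta in [0, pi] (colatitude)."""
    phi = (x / W) * 2.0 * math.pi - math.pi
    theta = (y / H) * math.pi
    return phi, theta

def inverse_f(phi, theta, W, H):
    """Sphere (phi, theta) -> coding frame F (x, y)."""
    x = (phi + math.pi) / (2.0 * math.pi) * W
    y = theta / math.pi * H
    return x, y

# Round trip on one pixel of a 4096x2048 frame
phi, theta = forward_f(1024, 512, 4096, 2048)
print(inverse_f(phi, theta, 4096, 2048))  # (1024.0, 512.0)
```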
- An exemplary syntax specification for specifying such a mapping function is illustrated by Table 6, under the form of two 2D lookup tables. This corresponds to the generic mapping mode shown in Table 6.
- the sampling of the coding picture F used to signal the forward mapping function / consists in a number of picture samples equal to the size (width and height) of coding picture F.
- the sampling of the sphere used to indicate the de-mapping f⁻¹ makes use of a sphere sampling that may depend on the 360° to 2D mapping process, and which is explicitly signalled under the form of the sphereSamplingHeight and sphereSamplingWidth fields.
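- A minimal sketch of the two lookup tables of this generic mapping mode, here populated with the equirectangular mapping purely as an example (sphereSamplingWidth and sphereSamplingHeight are taken from the description above; all other names are hypothetical and any reversible mapping could populate the tables instead):

```python
import math

def build_lookup_tables(W, H, sphereSamplingWidth, sphereSamplingHeight):
    # Forward table: for every sample (x, y) of the coding frame F, the
    # corresponding (phi, theta) on the sphere.
    forward = [[((x / W) * 2 * math.pi - math.pi, (y / H) * math.pi)
                for x in range(W)] for y in range(H)]
    # Inverse (de-mapping) table: for every sample of the signalled sphere
    # sampling grid, the corresponding (x, y) position in F.
    inverse = [[((j / sphereSamplingWidth) * W, (i / sphereSamplingHeight) * H)
                for j in range(sphereSamplingWidth)]
               for i in range(sphereSamplingHeight)]
    return forward, inverse

fwd, inv = build_lookup_tables(W=64, H=32, sphereSamplingWidth=128, sphereSamplingHeight=64)
print(len(fwd), len(fwd[0]), len(inv), len(inv[0]))  # 32 64 64 128
```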
- the proposed omnidirectional mapping indication comprises an even more generic syntax able to handle any 360° to 2D mapping and its reverse 2D to 360° de-mapping, whatever the considered use case.
- the goal is to provide a syntax coding that is able to handle any set of (potentially multiple) parametric surfaces that may be used as an intermediate data representation space, in the transfer from the 2D coding space to the 3D environment, and the reverse.
- the 3D to 2D mapping process is modified as follows.
- an intermediate multi-dimensional space is fully specified through its dimension and its size along each axis. This takes the form of the dim, size_1, ..., size_dim syntax elements.
- the transfer from the 3D sphere (indexed with polar coordinates (φ, θ)) towards this intermediate space is specified through the series of syntax elements (I1[phi][theta], I2[phi][theta], ..., Idim[phi][theta]), which indicate the coordinates in the multi-dimensional intermediate space as a function of each (φ, θ) pair of polar coordinates on the sphere.
- a last transfer function from the dim-dimensional intermediate space towards the 2D codec frame F is specified through the series of syntax elements (x[I1][I2]...[Idim], y[I1][I2]...[Idim]), which indicate the Cartesian coordinates in the frame F that correspond to the coordinates (I1, I2, ..., Idim) in the intermediate space.
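- A hedged sketch of this intermediate-space mechanism with dim = 2; the structures below merely mirror the dim, size_1...size_dim, I1...Idim and x/y elements discussed above (the bit-exact syntax of Table 7 is not reproduced, and the equirectangular-like fill is only an example):

```python
def build_generic_indication(W, H, phi_samples, theta_samples, size_1, size_2):
    dim = 2
    sizes = [size_1, size_2]
    # (phi, theta) sample -> coordinates (I1, I2) in the intermediate space
    I = [[(int(p / phi_samples * size_1), int(t / theta_samples * size_2))
          for t in range(theta_samples)] for p in range(phi_samples)]
    # (I1, I2) -> Cartesian (x, y) in the codec frame F
    xy = [[(int(i1 / size_1 * W), int(i2 / size_2 * H))
           for i2 in range(size_2)] for i1 in range(size_1)]
    return {"dim": dim, "sizes": sizes, "I": I, "xy": xy}

ind = build_generic_indication(W=64, H=32, phi_samples=128, theta_samples=64,
                               size_1=16, size_2=8)
# De-mapping one sphere sample: sphere -> intermediate space -> frame F
i1, i2 = ind["I"][10][5]
print(ind["xy"][i1][i2])
```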
- Table 7 proposed generic 360° mapping indication including an intermediate mapping and de-mapping space between the 3D sphere and the 2D coding frame F.
- Use of the mapping indication in the encoding method, transmitting method, decoding method and rendering method.
- Figure 19 diagrammatically illustrates a method 190 of encoding an image I1 of a sequence of images (or video), the image being a 2D image into which an omnidirectional image is mapped.
- This method is implemented in the encoding module 400 of Figure 1 .
- a mapping indication (MI) is used to select encoding tools adapted to the omnidirectional-to-2D mapping, for instance by exploiting (some of) the properties of the video issued from a 3D-to-2D mapping, in order to provide increased compression efficiency compared to a 3D-unaware encoding.
- this knowledge may help the codec, which knows the shape of the reference spatial area (usually known as the reference block) of a rectangular block in the current picture, to perform motion-compensated temporal prediction of that block by means of its associated motion vector.
- The properties of interest for efficient encoding include strong geometry distortions, non-uniform pixel density, discontinuities, and periodicity in the 2D image.
- the input image I1 is encoded responsive to the mapping information MI and an encoded image I2 is output.
- a step 192 generates a bit stream B carrying data representative of the sequence of encoded images and carrying an indication of the omnidirectional-to-2D mapping encoded within the stream in a lossless manner.
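- A toy sketch of steps 191 and 192 (json and zlib stand in for the real lossless header syntax and video encoder, which are not specified here; an SEI-like message is only one possible carrier for MI):

```python
import json
import zlib

def encode_with_mapping_indication(image_bytes, mapping_indication):
    """Encode the frame with a placeholder 'codec' and write the mapping
    indication losslessly in a header of the bit stream."""
    header = json.dumps({"mapping_indication": mapping_indication}).encode("utf-8")
    payload = zlib.compress(image_bytes)  # placeholder for the video encoder
    return len(header).to_bytes(4, "big") + header + payload

mi = {"mapping_id": "equirectangular", "frame_width": 4096, "frame_height": 2048}
bitstream = encode_with_mapping_indication(b"\x80" * (640 * 480), mi)
print(len(bitstream))
```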
- Figure 19 also diagrammatically illustrates a method 193 of transmitting a bit stream B comprising an encoded image I2 and an indication of the mapping MI of an omnidirectional image into the 2D encoded image. This method is implemented in the transmitting module 500 of Figure 1.
- Figure 20 diagrammatically illustrates a method of decoding an image using an indication on the omnidirectional mapping according to a particular embodiment of the present principles.
- a data source provides a bit stream B encoded according to the method 190 of figure 19.
- the source belongs to a set of sources comprising a local memory (e.g. a video memory, a Random Access Memory, a flash memory, a Read Only Memory, a hard disk, etc.), a storage interface (e.g. an interface with a mass storage, an optical disc or a magnetic support) and a communication interface (e.g. a wireline interface such as a bus interface, a wide area network interface or a local area network interface, or a wireless interface such as an IEEE 802.11 interface or a Bluetooth® interface).
- a coded image I3 is obtained from the stream, the coded image I3 corresponding to a coded 3D image mapped from 3D to 2D space.
- Mapping indication MI is also obtained from the bit stream B.
- a decoded image I4 is generated by a decoding using tools adapted to the omnidirectional-to-2D mapping according to the mapping indication MI.
- a rendering image I5 is generated from the decoded image I4.
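- A decoder-side counterpart of the encoder sketch above, in the same toy container format (not the actual bit stream syntax): the mapping indication MI is first read losslessly, then the payload is decoded by a placeholder decoder so that MI can drive mapping-adapted tools or the later rendering:

```python
import json
import zlib

def decode_with_mapping_indication(bitstream):
    header_len = int.from_bytes(bitstream[:4], "big")
    mi = json.loads(bitstream[4:4 + header_len].decode("utf-8"))["mapping_indication"]
    decoded = zlib.decompress(bitstream[4 + header_len:])  # placeholder video decoder
    return decoded, mi

# Tiny self-contained round trip
header = json.dumps({"mapping_indication": {"mapping_id": "equirectangular"}}).encode()
stream = len(header).to_bytes(4, "big") + header + zlib.compress(b"pixels")
print(decode_with_mapping_indication(stream)[1])
```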
- Figure 21 diagrammatically illustrates a method of rendering an image using an indication on the omnidirectional mapping according to a particular embodiment of the present principles.
- a data source provides a bit stream B encoded according to the method 190 of figure 19.
- the source belongs to a set of sources comprising a local memory (e.g. a video memory, a Random Access Memory, a flash memory, a Read Only Memory, a hard disk, etc.), a storage interface (e.g. an interface with a mass storage, an optical disc or a magnetic support) and a communication interface (e.g. a wireline interface such as a bus interface, a wide area network interface or a local area network interface, or a wireless interface such as an IEEE 802.11 interface or a Bluetooth® interface).
- a coded image I3 is obtained from the stream, the coded image I3 corresponding to a coded 3D image mapped from 3D to 2D space.
- Mapping indication MI is also obtained from the bit stream B.
- a decoded image I4 is generated by a 3D-unaware decoding of the coded image I3.
- a rendering image I5 is generated from the decoded image I4 responsive to the mapping indication MI on the omnidirectional-to-2D mapping used at the generation of the encoded image.
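- A minimal rendering sketch (assuming an equirectangular MI, a pinhole viewport model and nearest-neighbour sampling, none of which are mandated by the present principles): each viewport pixel's viewing direction is converted to sphere coordinates and the decoded image I4 is sampled accordingly to produce the rendering image I5:

```python
import math
import numpy as np

def render_viewport(i4, yaw, pitch, fov_deg=90, out_w=64, out_h=64):
    H, W = i4.shape[:2]
    f = 0.5 * out_w / math.tan(math.radians(fov_deg) / 2)  # focal length in pixels
    out = np.zeros((out_h, out_w) + i4.shape[2:], dtype=i4.dtype)
    for v in range(out_h):
        for u in range(out_w):
            # Ray in camera space, then rotated by pitch (around X) and yaw (around Y)
            x, y, z = (u - out_w / 2), (v - out_h / 2), f
            y, z = (y * math.cos(pitch) - z * math.sin(pitch),
                    y * math.sin(pitch) + z * math.cos(pitch))
            x, z = (x * math.cos(yaw) + z * math.sin(yaw),
                    -x * math.sin(yaw) + z * math.cos(yaw))
            phi = math.atan2(x, z)                                    # longitude
            theta = math.acos(y / math.sqrt(x * x + y * y + z * z))   # colatitude
            px = int((phi + math.pi) / (2 * math.pi) * (W - 1))
            py = int(theta / math.pi * (H - 1))
            out[v, u] = i4[py, px]
    return out

i4 = np.random.randint(0, 255, (128, 256, 3), dtype=np.uint8)  # stand-in for I4
i5 = render_viewport(i4, yaw=0.0, pitch=0.0)
print(i5.shape)  # (64, 64, 3)
```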
- Figure 22 illustrates a particular embodiment of the data structure of a bit stream 220 carrying data representative of a sequence of images encoded according to the method 190 of figure 19.
- the encoded images of the sequence form a first element of syntax of the bit stream 220, which is stored in the payload part 221 of the bit stream.
- the mapping indication is comprised in a second element of syntax of the bit stream, said second element of syntax being comprised in the header part 222 of the bit stream 220.
- the header part 222 is encoded in a lossless manner.
- Figure 23 shows a hardware embodiment of an apparatus 230 configured to implement any of the methods described in relation with figures 19, 20 or 21.
- the device 230 comprises the following elements, connected to each other by a bus 231 of addresses and data that also transports a clock signal:
- a microprocessor 232 (or CPU) which is, for example, a DSP (or Digital Signal Processor);
- a ROM (Read Only Memory) 233;
- a RAM (Random Access Memory) 234;
- an I/O interface 235;
- a graphics card 236 which may embed registers of random access memory;
- a power source 237.
- the power source 237 is external to the device.
- the word « register » used in the specification may correspond to an area of small capacity (a few bits) or to a very large area (e.g. a whole program or a large amount of received or decoded data).
- the ROM 233 comprises at least a program and parameters.
- the ROM 233 may store algorithms and instructions to perform techniques in accordance with present principles. When switched on, the CPU 232 uploads the program into the RAM 234 and executes the corresponding instructions.
- RAM 234 comprises, in a register, the program executed by the CPU 232 and uploaded after switch on of the device 230, input data in a register, intermediate data in different states of the method in a register, and other variables used for the execution of the method in a register.
- the implementations described herein may be implemented in, for example, a module of one of the methods 190, 200 or 210 or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware which may be one of the components of the systems described in figures 2 to 12.
- the methods and their modules may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device.
- processors also include communication devices, such as, for example, computers, cell phones, portable/personal digital assistants ("PDAs"), set-top-boxes and other devices that facilitate communication of information between end-users, for instance the components of the systems described in figures 2 to 12.
- the apparatus 230 disclosed herein performs the encoding (respectively decoding, rendering) of the images according to the H.264/AVC standard or the HEVC video coding standard.
- the present principles can easily be applied to any video coding standard.
- a bit stream representative of a sequence of images is obtained from a source.
- the source belongs to a set comprising:
- a local memory e.g. a video memory or a RAM (or Random Access Memory), a flash memory, a ROM (or Read Only Memory), a hard disk;
- a storage interface e.g. an interface with a mass storage, a RAM, a flash memory, a ROM, an optical disc or a magnetic support;
- a communication interface e.g. a wireline interface (for example a bus interface, a wide area network interface, a local area network interface) or a wireless interface (such as an IEEE 802.11 interface or a Bluetooth® interface).
- the algorithms implementing the steps of the method 190 of encoding an image of a sequence of images using a mapping indication are stored in a memory GRAM of the graphics card 236 associated with the device 230 implementing these steps.
- a part of the RAM (234) is assigned by the CPU (232) for storage of the algorithms.
- a stream representative of a sequence of images and including a mapping indication is obtained from a source.
- the bit stream is read from a local memory, e.g. a video memory (234), a RAM (234), a ROM (233), a flash memory (233) or a hard disk (233).
- the stream is received from a storage interface (235), e.g. an interface with a mass storage, a RAM, a ROM, a flash memory, an optical disc or a magnetic support and/or received from a communication interface (235), e.g. an interface to a point to point link, a bus, a point to multipoint link or a broadcast network.
- the algorithms implementing the steps of a method of decoding an image of a sequence of images responsive to an indication of the omnidirectional-to-2D mapping are stored in a memory GRAM of the graphics card 236 associated with the device 230 implementing these steps.
- a part of the RAM (234) is assigned by the CPU (232) for storage of the algorithms.
- the present disclosure is not limited to methods of encoding and decoding a sequence of images but also extends to any method of displaying the decoded video and to any device implementing this displaying method as, for example, the display devices of figures 2 to 12.
- the implementation of calculations necessary to encode and decode the bit stream is not limited either to an implementation in shader type microprograms but also extends to an implementation in any program type, for example programs that can be executed by a CPU type microprocessor.
- the use of the methods of the present disclosure is not limited to a live utilisation but also extends to any other utilisation, for example for processing known as postproduction processing in a recording studio.
- the implementations described herein may be implemented in, for example, a method or a process, an apparatus, a software program, a data stream, or a signal. Even if only discussed in the context of a single form of implementation (for example, discussed only as a method or a device), the implementation of features discussed may also be implemented in other forms (for example a program).
- An apparatus may be implemented in, for example, appropriate hardware, software, and firmware.
- the methods may be implemented in, for example, an apparatus such as, for example, a processor, which refers to processing devices in general, including, for example, a computer, a microprocessor, an integrated circuit, or a programmable logic device. Processors also include communication devices, such as, for example, Smartphones, tablets, computers, mobile phones, portable/personal digital assistants ("PDAs”), and other devices that facilitate communication of information between end-users.
- Implementations of the various processes and features described herein may be embodied in a variety of different equipment or applications, particularly, for example, equipment or applications associated with data encoding, data decoding, view generation, texture processing, and other processing of images and related texture information and/or depth information.
- Examples of such equipment include an encoder, a decoder, a post-processor processing output from a decoder, a pre-processor providing input to an encoder, a video coder, a video decoder, a video codec, a web server, a set-top box, a laptop, a personal computer, a cell phone, a PDA, and other communication devices.
- the equipment may be mobile and even installed in a mobile vehicle.
- the methods may be implemented by instructions being performed by a processor, and such instructions (and/or data values produced by an implementation) may be stored on a processor-readable medium such as, for example, an integrated circuit, a software carrier or other storage device such as, for example, a hard disk, a compact diskette (“CD"), an optical disc (such as, for example, a DVD, often referred to as a digital versatile disc or a digital video disc), a random access memory (“RAM”), or a read-only memory (“ROM”).
- the instructions may form an application program tangibly embodied on a processor-readable medium. Instructions may be, for example, in hardware, firmware, software, or a combination.
- a processor may be characterized, therefore, as, for example, both a device configured to carry out a process and a device that includes a processor-readable medium (such as a storage device) having instructions for carrying out a process. Further, a processor-readable medium may store, in addition to or in lieu of instructions, data values produced by an implementation.
- implementations may produce a variety of signals formatted to carry information that may be, for example, stored or transmitted.
- the information may include, for example, instructions for performing a method, or data produced by one of the described implementations.
- a signal may be formatted to carry as data the rules for writing or reading the syntax of a described embodiment, or to carry as data the actual syntax values written by a described embodiment.
- Such a signal may be formatted, for example, as an electromagnetic wave (for example, using a radio frequency portion of spectrum) or as a baseband signal.
- the formatting may include, for example, encoding a data stream and modulating a carrier with the encoded data stream.
- the information that the signal carries may be, for example, analog or digital information.
- the signal may be transmitted over a variety of different wired or wireless links, as is known.
- the signal may be stored on a processor-readable medium.