WO2021053269A1 - Six degrees of freedom spatial layout signaling - Google Patents

Six degrees of freedom spatial layout signaling

Info

Publication number
WO2021053269A1
Authority
WO
WIPO (PCT)
Prior art keywords
viewing
sub
media presentation
presentation description
volumes
Prior art date
Application number
PCT/FI2020/050590
Other languages
French (fr)
Inventor
Lauri ILOLA
Jaakko KERÄNEN
Vinod Kumar Malamal Vadakital
Kimmo Roimela
Original Assignee
Nokia Technologies Oy
Priority date
Filing date
Publication date
Application filed by Nokia Technologies Oy filed Critical Nokia Technologies Oy
Priority to EP20864782.6A priority Critical patent/EP4032313A4/en
Publication of WO2021053269A1 publication Critical patent/WO2021053269A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/83 Generation or processing of protective or descriptive data associated with content; Content structuring
    • H04N21/845 Structuring of content, e.g. decomposing content into time segments
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/161 Encoding, multiplexing or demultiplexing different image signal components
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/142 Detection of scene cut or scene change
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/10 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding
    • H04N19/134 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using adaptive coding characterised by the element, parameter or criterion affecting or controlling the adaptive coding
    • H04N19/146 Data rate or code amount at the encoder output
    • H04N19/147 Data rate or code amount at the encoder output according to rate distortion criteria
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21 Server components or server architectures
    • H04N21/218 Source of audio or video content, e.g. local disk arrays
    • H04N21/21805 Source of audio or video content, e.g. local disk arrays enabling multiple viewpoints, e.g. using a plurality of cameras
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/23439 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements for generating different versions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/262 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists
    • H04N21/26258 Content or additional data distribution scheduling, e.g. sending additional data at off-peak times, updating software modules, calculating the carousel transmission frequency, delaying a video stream transmission, generating play-lists for generating a list of items to be played back in a given order, e.g. playlist, or scheduling item distribution according to such list
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/60 Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
    • H04N21/65 Transmission of management data between client and server
    • H04N21/658 Transmission by the client directed to the server
    • H04N21/6587 Control parameters, e.g. trick play commands, viewpoint selection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/80 Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
    • H04N21/81 Monomedia components thereof
    • H04N21/816 Monomedia components thereof involving special video data, e.g 3D video

Definitions

  • the examples and non-limiting embodiments relate generally to multimedia and software, and more particularly, to six degrees of freedom spatial layout signaling.
  • an apparatus includes means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.
  • an apparatus includes means for monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; means for requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; means for selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and means for rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • an apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: segment a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generate viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generate one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and provide one or more of the sub viewing volumes based on a client selection and request.
  • an apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: monitor a viewing position of a client device within a viewing volume level of one or more media presentation description files; request new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; select one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and render a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • a method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
  • a method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
  • a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • FIG. 1 is a block diagram depicting compression and consumption of a volumetric video scene.
  • FIG. 2 is an example block diagram illustrating the semantic mapping between an MPD file and volumetric video.
  • FIG. 3 is an example code excerpt that shows signaling of sub viewing volume to an adaptation set.
  • FIG. 4 is an example code excerpt that shows signaling of the centroid of the sub viewing volume.
  • FIG. 5 is a block diagram illustrating the communication between the client and the server.
  • FIG. 6 is an example code excerpt that shows signaling of the sub viewing volume information on the MPD level.
  • FIG. 7 is an example code excerpt that shows signaling of the sub viewing volume information within Periods.
  • FIG. 8 is a block diagram illustrating signaling between the client and the server.
  • FIG. 9 is a diagram that depicts communication of the user viewing position by the client, and a database on the server that is used to store information about each sub viewing volume and the associated content storage url.
  • FIG. 10 is an example apparatus configured to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
  • FIG. 11 shows an example method to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
  • FIG. 12 shows another example method to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
  • MPEG-2: the second of several standards developed by the Moving Picture Experts Group; mp2t: MPEG-2 TS Byte Stream Format.
  • volumetric video coding where dynamic 3D objects or scenes are coded into video streams for delivery and playback.
  • the MPEG standards PCC (Point Cloud Compression) and MIV (Metadata for Immersive Video) are two examples of such volumetric video compression.
  • MPEG part 12 defines how volumetric video for limited viewing volumes may be encoded, but does not define how larger viewing volumes may be achieved.
  • MPEG DASH defines how video may be adaptively streamed from a server to a client device, but streaming tools for volumetric content do not exist yet.
  • the examples described herein relate to creating signaling between a server and client to enable selective streaming of sub-volumes of volumetric content for large scenes with arbitrary viewing volumes.
  • the examples described herein cover techniques related to the server side content segmentation as well as client side content consumption and the related signaling between the two.
  • the examples described herein cover the following server side techniques: i) centroids of the sub viewing volumes may form a grid or tetrahedral mesh which may be used to select preferred sub viewing volumes for any viewing point within the scene viewing volume; ii) segmentation into short sequences that enable fast switching between viewing volumes (DASH + SegmentTemplates) while keeping MPD-files relatively short; iii) adding viewing volume and view point indications in DASH MPD-files; iv) generation of sub volume scheme URIs for client side instructions to fetch content; v) methods for location based MPD responses (the client requests an MPD-file with a particular viewing position, and the server responds by sending the best fitting MPD file to reduce the amount of information that is streamed to the client in the form of MPD files; this requires overlapping viewing volumes for MPD-files); and vi) methods for dynamic MPD-file generation based on client viewing positions.
  • in the client application, the position of the viewer in the scene is tracked and used to request sub-volumes of volumetric video from the server.
  • the client continuously monitors its position in relation to viewing volumes in the MPD-file and requests only best fitting sub-volume atlas streams that are used to render views.
  • the examples described herein further cover the following topics: i) monitoring viewing position within the MPD-level viewing volume to request new viewing volume data when approaching edges of the MPD-level viewing volume; ii) choosing correct sub-volume atlas streams from the MPD-file based on viewing position; and iii) rendering novel views from one or more volumetric video streams.
  • FIG. 1 is a block diagram 100 depicting compression and consumption of a volumetric video scene, in accordance with the examples described herein. Fundamentally, each frame of the input 3D scene 102 is processed separately. The resulting per-frame atlas and metadata are then stored into video and metadata streams, respectively.
  • the first part converts the input 3D scene 102 into a canonical representation for processing.
  • the input scene 102 may consist of, for example, a procedurally animated 3D scene, animated 3D meshes, or registered camera views with depth maps.
  • the input is sampled at an internal processing frame rate and converted into a collection of 3D samples of the scene geometry at a specified internal processing resolution. Depending on the input, this may involve e.g. voxelizing a mesh model, or down-sampling a high resolution point cloud with very fine details into the processing resolution.
  • the internal representation is finally a collection of scene data points registered in a common 3D scene coordinate system, representing all aspects of the 3D input scene 102.
  • Example aspects may include but are not limited to the color, geometry, and surface normals of the 3D scene 102.
  • the View Optimizer 104 takes the internal representation and creates a segmentation of the scene optimized for a specified viewing constraint (the viewing volume). This involves creating view-tiles that have sufficient coverage and resolution for representing the original input scene at minimal quality degradation within the given viewing constraints.
  • the View Optimizer 104 makes use of at least the 3D position of the data points in the scene, but additional attributes such as surface normals, colors, material attributes, and viewing and capture directions may also be considered.
  • the View Optimizer 104 may divide the view-tiles into several sub viewing volumes if larger movement around the scene is intended and/or the scene is very complex.
  • the sub viewing volumes may be pre-defined by the content author, or automatically generated based on the input scene 102.
  • the size or bitrate of a single sub viewing volume may be constrained, and all view-tiles in a single sub viewing volume are spatially local to each other to enable efficient streaming.
  • the resulting view-tiles for one or more sub viewing volumes may then be pre-rendered in the Cloud Rendering 106 stage. This may involve resampling an input point cloud into 2D tile projections, or calling an external renderer, e.g. a path tracing renderer, to render views of a 3D input scene.
  • virtual views may be synthesized by view interpolation between the original input views, or parts of the input views may be used directly to produce the rendered tiles.
  • the rendered tiles are then input into a Packer 108, which produces optimal 2D layout metadata for the rendered view-tiles; this layout may be used to pack pre-rendered tiles into video frames.
  • the additional metadata that is required to unpack and re-render the packed tiles is also generated by the packer 108.
  • These packed atlases along with the associated metadata are then piped to Video Codecs 110 and IV Encoder 112 for generating compressed representations.
  • bitstreams may be further segmented for DASH compatible playback. This may include creation of different adaptations of the same viewing volume by resolution, bitrate, codec or other variable. Information about different viewing volumes and encoded representations is utilized by the Server Logic 118 or may be stored inside static MPD files.
  • the information about different sub viewing volumes and representations needs to be made available to client devices, such as client 120, which perform view synthesis of novel views based on the user viewport.
  • This information may be signaled using DASH technology.
  • the IV streams may be provided from the IV media database 114 of the cloud 116 to the stream loader 124 of the client 120.
  • Each IV stream contains the necessary information for the view synthesizer to employ rendering methods using the client renderer 122, such as point cloud rendering, mesh rendering, or ray-casting, to reconstruct a view of the scene from any given 3D viewpoint within the IV stream viewing volume (corresponding to a single sub viewing volume).
  • Only relevant IV stream(s) are retrieved from the content server (e.g., cloud 116) to minimize the content streamed to the client device 120, thus enabling larger viewing volumes.
  • These individual IV streams may be combined in the client renderer 122 to achieve a larger local viewing volume which is the union of the currently streamed sub viewing volumes. This way, real-time 3D viewing of the volumetric video for arbitrarily large viewing volumes may be achieved.
  • the metadata may be signaled in several different ways in the DASH manifest, i.e. the MPD-file.
  • a DASH MPD file for a volumetric video may contain one or more Periods (e.g., Periods 204 and 206), which contain several Adaptation Sets (e.g., Adaptation Sets 208 and 210) for different sub viewing volumes (e.g., sub viewing volumes 216 and 218).
  • Each viewing volume may have different resolution and bitrate alternatives (e.g., Bitstream alternatives 220 and 222), which may be represented as different Representations (e.g., 212, 214) in an Adaptation Set (e.g., 208 or 210).
  • FIG. 2 is an example block diagram 200 illustrating the semantic mapping between an MPD file 202 and volumetric video.
  • the structural mapping of sub viewing volumes (e.g., sub viewing volumes 216 and 218) into MPD file structures (such as MPD 202) is an important part of the examples described herein.
  • the scene sequences (224, 226) comprise the sub viewing volumes (216, 218).
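  • As a purely illustrative, non-normative sketch of this mapping, an MPD of the kind shown in FIG. 2 might be skeletonized as follows (identifiers, codecs and bitrates are hypothetical; only the standard DASH elements MPD, Period, AdaptationSet and Representation are taken from the specification):

      <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static"
           profiles="urn:mpeg:dash:profile:isoff-on-demand:2011">
        <!-- one Period per scene sequence (e.g., 224, 226) -->
        <Period id="sequence-1">
          <!-- one Adaptation Set per sub viewing volume (e.g., 216, 218) -->
          <AdaptationSet id="1" mimeType="video/mp4" codecs="hvc1">
            <!-- one Representation per bitstream alternative (e.g., 220, 222) -->
            <Representation id="subvol1-high" bandwidth="20000000" width="4096" height="4096"/>
            <Representation id="subvol1-low"  bandwidth="8000000"  width="2048" height="2048"/>
          </AdaptationSet>
        </Period>
      </MPD>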
  • it is recommended to utilize SegmentTemplate functionality in DASH to define short segments without bloating the overall DASH file with individual Segment lists. Below is a short example of how SegmentTemplate may be used to reconstruct small segments.
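  • A minimal sketch of such a SegmentTemplate, assuming two-second segments and hypothetical file names:

      <AdaptationSet id="1" mimeType="video/mp4">
        <!-- $RepresentationID$ and $Number$ are expanded per segment request,
             so short segments need no explicit per-segment list -->
        <SegmentTemplate timescale="1000" duration="2000" startNumber="1"
                         initialization="subvol1_$RepresentationID$_init.mp4"
                         media="subvol1_$RepresentationID$_$Number$.m4s"/>
        <Representation id="high" bandwidth="20000000"/>
        <Representation id="low"  bandwidth="8000000"/>
      </AdaptationSet>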
  • Each shape may need to contain an id so that the shapes may be combined.
  • ShapeType shall be defined as an abstract root class which is used by the rest of the refined shapes. ShapeType may optionally contain an order id, which is useful when combining different shapes with arithmetic operations.
  • a Cuboid shape may contain two points, minimum and maximum values, with which a 3D cuboid may be represented.
  • the Sphere may contain a single point representing the origin of the sphere and may have radius information which may indicate the size of the sphere around the origin.
  • the Group inherits id from ShapeType, so that later combining different groups becomes available.
  • it may contain an operator attribute, which explains how different primitives are combined.
  • Valid values for the operator may be defined, like + for adding primitives or - for subtracting primitives.
  • the order of the primitives (defined in ShapeType) describes the order in which the operations may be applied to primitives in the group. The operations may be performed first on the two lowest-order primitives and then on the following primitives. Subtraction may take the lowest order primitive and subtract the second lowest order primitive from it, then proceed until all primitives are processed (see the sketch below).
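  • The following sketch illustrates how such shape primitives and a group might be expressed; all element and attribute names here are hypothetical, not part of MPEG DASH:

      <!-- cuboid 1 minus sphere 2, applied in 'order' (lowest first) -->
      <Group id="3" operator="-">
        <Cuboid id="1" order="0">
          <Min x="-2.0" y="0.0" z="-2.0"/>
          <Max x="2.0"  y="2.5" z="2.0"/>
        </Cuboid>
        <Sphere id="2" order="1" radius="0.5">
          <Origin x="0.0" y="1.6" z="0.0"/>
        </Sphere>
      </Group>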
  • an MPD file may contain a list of all available sub viewing volumes, in which case the client device chooses the best fitting Adaptation Sets from the MPD file by itself. This is a simple solution that may work well for fairly small scenes, which do not consist of hundreds or thousands of sub viewing volumes. Because of the design simplicity, this may likely be the most important embodiment. From a signaling perspective, MPEG DASH MPD Adaptation Sets would need information about the sub viewing volume centroids for the client to select the optimal sub viewing volumes.
  • FIG. 3 is an example code excerpt 300 that shows signaling of sub viewing volume centroid to an adaptation set.
  • Reference number 302 of FIG. 3 (code also provided below) highlights changes that may be required to the MPEG DASH specification.
  • MPEG DASH supports the Viewpoint attribute, which could be used to signal the centroid of the sub viewing volume.
  • the syntax of the Viewpoint attribute is not defined, and each Adaptation Set may only contain zero or one ViewingVolumeCentroid.
  • ViewingVolumeCentroid may be defined as three explicit attributes as shown in FIG. 4.
  • FIG. 4 is an example code excerpt 400 that shows signaling of the centroid of the sub viewing volume.
  • Reference number 402 of FIG. 4 (code also provided below) highlights the signaling.
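  • A hedged sketch of both signaling options follows; the schemeIdUri value and the ViewingVolumeCentroid element and attribute names are hypothetical, while the Viewpoint descriptor itself is standard DASH:

      <AdaptationSet id="1" mimeType="video/mp4">
        <!-- FIG. 3 style: centroid carried in a DASH Viewpoint descriptor -->
        <Viewpoint schemeIdUri="urn:example:6dof:centroid" value="1.0,1.6,-2.5"/>
        <!-- FIG. 4 style: centroid as three explicit attributes -->
        <ViewingVolumeCentroid x="1.0" y="1.6" z="-2.5"/>
      </AdaptationSet>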
  • FIG. 5 is a block diagram 500 illustrating the communication between the client 502 and the server 504.
  • the client 502 simply downloads an MPD file from the server 504.
  • the client 502 With the information about the ViewingVolumeCentroids in the MPD file, the client 502 is able to select desired sub viewing volumes to be downloaded.
  • the selection of the correct streams is done by the client application logic (e.g., client logic 126 as shown in FIG. 1) based on the viewing point of the user.
  • FIG. 5 shows content storage 508 providing volumetric video bitstreams to client 502, and receiving segmented volumetric video bitstreams from cloud 504.
  • MPD files may be further divided into smaller MPD files, in which case the collection of MPD files defines the entire viewing volume of the scene. This may require additional signaling on the MPD level so that the server and client are able to select the best fitting collection of sub viewing volumes, without having to read viewing volume related information from individual volumetric video streams.
  • the benefit of this embodiment over the previous one is that not all sub viewing volumes need to be signaled in a single MPD file, and that the selection process on the server side for choosing the best fitting MPD file for the client is very simple. This would reduce the amount of data that would need to be signaled to the client inside MPD files, thus enabling streaming of large scenes which may contain thousands of sub viewing volumes.
  • FIG. 6 is an example code excerpt 600 that shows signaling of the sub viewing volume information on the MPD level.
  • Reference number 602 of FIG. 6 (code also provided below) highlights the signaling.
  • the sub viewing volume information may be signaled within Periods (e.g., Period 204 and/or Period 206), if the sub viewing volume segmentation of the scene is expected to change during playback.
  • FIG. 7 is an example code excerpt 700 that shows signaling of the sub viewing volume information within Periods.
  • Reference number 702 of FIG. 7 (code also provided below) highlights the signaling.
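  • A combined sketch of the two placements, assuming a hypothetical ViewingVolume element (only MPD and Period are standard DASH):

      <MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static">
        <!-- FIG. 6 style: sub viewing volume valid for the whole MPD -->
        <ViewingVolume id="7">
          <Cuboid id="1"><Min x="0" y="0" z="0"/><Max x="4" y="3" z="4"/></Cuboid>
        </ViewingVolume>
        <Period id="sequence-1">
          <!-- FIG. 7 style: per-Period signaling, when the sub viewing volume
               segmentation of the scene changes during playback -->
          <ViewingVolume id="7a">
            <Cuboid id="2"><Min x="0" y="0" z="0"/><Max x="2" y="3" z="2"/></Cuboid>
          </ViewingVolume>
        </Period>
      </MPD>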
  • FIG. 8 is a block diagram 800 illustrating signaling between the client 802 and the server 804.
  • the client 802 requests an MPD file from the server 804 by providing its viewing position to the server 804.
  • the server 804 picks the best partial viewing volume containing MPD file and sends it to the client 802.
  • the application selects the desired sub viewing volumes based on user viewing point and downloads only required streams of volumetric data.
  • FIG. 8 shows content storage 808 providing volumetric video bitstreams to client 802, and receiving segmented volumetric video bitstreams from cloud 804.
  • an index file listing URLs for sub viewing volume MPD files may be used.
  • the index file may be communicated outside of DASH MPD for example as a JSON object via HTTP request.
  • the scene description may be defined as the following kind of JSON file.
  • Viewing volume may be defined using a similar ShapeType construct as described above.
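  • For example, such an index file might look like the following sketch (all field names, shapes and URLs are hypothetical):

      {
        "scene": "example-scene",
        "viewingVolume": { "shape": "cuboid",
                           "min": [-8.0, 0.0, -8.0], "max": [8.0, 3.0, 8.0] },
        "subViewingVolumes": [
          { "id": 1,
            "mpd": "https://server.com/content/scene/mpd/1.mpd",
            "shape": "cuboid",
            "min": [-8.0, 0.0, -8.0], "max": [0.0, 3.0, 0.0] },
          { "id": 2,
            "mpd": "https://server.com/content/scene/mpd/2.mpd",
            "shape": "cuboid",
            "min": [0.0, 0.0, 0.0], "max": [8.0, 3.0, 8.0] }
        ]
      }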
  • URI scheme based scene segmentation: in another embodiment, the scene could be divided into static sub viewing volumes for which a URI scheme may be developed.
  • the entire viewing volume may be divided into a 16x16x16 grid.
  • Each grid item, i.e. sub viewing volume, may have a scheme ID which may describe its relative position in the entire viewing volume.
  • the size of the entire viewing volume may be communicated off-band or as an MPD level viewing volume element.
  • the size of individual sub-viewing volumes may be communicated as Adaptation Set level viewing volume elements.
  • the URI scheme may work as follows: https://server.com/content/scene/:id/:xx/:yy/:zz.
  • (:id) may be used to indicate scene id.
  • (:xx) indicates the sub division index on the x-axis.
  • (:yy) indicates the sub division index on the y-axis.
  • (:zz) indicates the sub division index on the z-axis.
  • the client may know the size of the entire viewing volume and that it is divided into equal static sub viewing volumes e.g. 16x16x16 grid.
  • the client is able to request a specific sub viewing volume for any viewing point within the viewing volume by using the scheme described above, as illustrated below.
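  • For example, with the 16x16x16 grid above, a viewing point falling into grid cell x=3, y=0, z=12 of a scene with id 7 might be requested as (server name and scene id are purely hypothetical):

      https://server.com/content/scene/7/3/0/12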
  • FIG. 9 is a diagram 900 illustrating embodiment 5, depicting communication of the user viewing position by the client 902 and database 906 on the server 904 that is used to store information about each sub viewing volume and associated content storage url.
  • the server 904 constructs MPD files on the fly that contain only relevant sub viewing volume information for each client.
  • each MPD file may contain information about its valid sub viewing volume, so that the client 902 may easily tell when a new MPD may be requested.
  • FIG. 9 shows content storage 908 providing volumetric video bitstreams to client 902, and receiving segmented volumetric video bitstreams from cloud 904.
  • This method requires by far the least amount of information submitted within an MPD file, but requires a client-server architecture, where the server side contains a database and processing capabilities (implemented by viewing volume DB 906) to reconstruct MPD files on the fly for several different clients.
  • Hierarchical arrangement of viewing volumes: in another embodiment, signaling of parent viewing volumes may be desired in order to hierarchically arrange viewing volumes. This may be implemented by adding (e.g., by appending) an optional parent attribute to ShapeType. This signaling may be particularly useful to cater for devices which may be able to process larger viewing volumes locally.
  • the parent attribute of ShapeType indicates the id of the parent sub viewing volume shape. From this information, the client is able to identify which parent viewing volume contains which sub viewing volume.
  • the ShapeType may also contain a list of child nodes, which may each contain a sub volume of the parent shape. With the afore-described signaling, the client is able to navigate deeper into the viewing volume by following the url linking of parent and child ShapeTypes, as sketched below.
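  • A sketch of such parent/child linking; the parent attribute, the Child element, and the urls are hypothetical:

      <Cuboid id="10" url="https://server.com/scene/mpd/10.mpd">
        <Min x="-8" y="0" z="-8"/><Max x="8" y="3" z="8"/>
        <!-- children each cover a sub volume of the parent shape -->
        <Child id="11" url="https://server.com/scene/mpd/11.mpd"/>
        <Child id="12" url="https://server.com/scene/mpd/12.mpd"/>
      </Cuboid>
      <Cuboid id="11" parent="10" url="https://server.com/scene/mpd/11.mpd">
        <Min x="-8" y="0" z="-8"/><Max x="0" y="3" z="0"/>
      </Cuboid>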
  • viewing orientation in addition to viewing volume may be signaled.
  • Viewing direction may be signaled with normalized direction vector with x, y and z components.
  • vertical and horizontal field of view values may be signaled. This may enable more fine-grained viewing volume definition.
  • DirectionType may be added to ShapeType to add directionality information to the viewing volume, as sketched below.
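  • A sketch of such a Direction element attached to a shape; the names are hypothetical, the direction vector is normalized, and the fields of view are given in degrees:

      <Sphere id="2" radius="1.0">
        <Origin x="0.0" y="1.6" z="0.0"/>
        <!-- viewer looks toward -z; content valid for a 120x90 degree window -->
        <Direction x="0.0" y="0.0" z="-1.0" hfov="120.0" vfov="90.0"/>
      </Sphere>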
  • viewing volumes may be linked together by exit directions to provide easy accessible MPD links, when the client is exiting local viewing volume in a specific direction.
  • this linking by exit directions may be performed outside of MPD files to provide additional flexibility, or may be added inside MPD files if needed. With this information available to the client, it becomes less dependent on the server side component.
  • Exit directions may be defined either as geographic compass points, 3D axes, or arbitrary direction vectors.
  • the url in the ShapeType may refer to MPD or content url for the given sub viewing volume, whereas the strings in ExitDirectionTypes may give the associated url for MPD for the surrounding sub viewing volumes.
  • the client is able to request neighboring sub viewing volumes simply by following the directionality. The client may from time to time end up in a situation where it may need to fetch two or more neighboring sub viewing volumes, in which case the client fetches the MPD files for all needed sub viewing volumes, but only downloads the partial sub viewing volume video streams needed.
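  • A sketch of such exit direction linking, using compass points; the ExitDirection element and the urls are hypothetical:

      <Cuboid id="5" url="https://server.com/scene/mpd/5.mpd">
        <Min x="0" y="0" z="0"/><Max x="4" y="3" z="4"/>
        <!-- MPD urls of the neighboring sub viewing volumes per exit direction -->
        <ExitDirection direction="north" url="https://server.com/scene/mpd/6.mpd"/>
        <ExitDirection direction="east"  url="https://server.com/scene/mpd/9.mpd"/>
      </Cuboid>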
  • the main advantage of the examples described herein is enabling consumption of 6DoF content through selective streaming of volumetric sub volumes. Such mechanisms are required due to the size of volumetric scenes, which cannot be stored locally.
  • FIG. 10 is an example apparatus 1000, which may be implemented in hardware, configured to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
  • the apparatus 1000 comprises a processor 1002, at least one non-transitory memory 1004 including computer program code 1005, wherein the at least one memory 1004 and the computer program code 1005 are configured to, with the at least one processor 1002, cause the apparatus to implement circuitry, a process, component, module, or function (collectively 1006) to implement the signaling as described herein.
  • the apparatus 1000 optionally includes a display and/or I/O interface 1008 that may be used to display aspects or a status of the methods described herein (e.g., as the methods are being performed or at a subsequent time).
  • the display and/or I/O interface 1008 may also be configured to receive input such as user input
  • the apparatus 1000 also optionally includes one or more network (NW) interfaces (I/F(s)) 1010.
  • NW I/F(s) 1010 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique.
  • the NW I/F(s) 1010 may comprise one or more transmitters and one or more receivers.
  • the apparatus 1000 may be configured as a server or client based on the signaling aspects described herein (for example, apparatus 1000 may be a remote, virtual or cloud apparatus).
  • references to a 'computer', 'processor', etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry.
  • References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
  • the memory 1004 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory.
  • the memory 1004 may comprise a database for storing data.
  • circuitry refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present.
  • this definition of 'circuitry' applies to all uses of this term in this application.
  • the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware.
  • the term 'circuitry' would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
  • FIG. 11 is an example method 1100 that implements six degrees of freedom spatial layout signaling based on the examples described herein.
  • the method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
  • the method includes generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files.
  • the method includes generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content.
  • the method includes providing one or more of the sub viewing volumes based on a client selection and request.
  • the method 1100 may be performed by a server.
  • FIG. 12 is an example method 1200 that implements six degrees of freedom spatial layout signaling, based on the examples described herein.
  • the method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files.
  • the method includes requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files.
  • the method includes selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device.
  • the method includes rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • the method 1200 may be performed by a client.
  • An example apparatus includes means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.
  • the apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the apparatus may further include means for defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the apparatus may further include means for signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
  • the apparatus may further include means for dividing the media presentation description files into smaller media presentation description files; means for providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; means for receiving a request for a media presentation description file based on a viewing position of a client; means for selecting a partial viewing volume containing the requested media presentation description file; and means for providing the selected partial viewing volume to the client.
  • the apparatus may further include means for implementing an index file listing uniform resource locators for the sub viewing volume of the media presentation description files.
  • the apparatus may further include means for implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
  • the apparatus may further include means for dynamically creating the media presentation description files based on a client device viewing position.
  • the apparatus may further include means for hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
  • the apparatus may further include means for signaling a viewing orientation of a sub viewing volume.
  • the apparatus may further include means for linking sub viewing volumes by exit direction.
  • An example apparatus includes means for monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; means for requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; means for selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and means for rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • the apparatus may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
  • the apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the apparatus may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the apparatus may further include means for downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
  • the apparatus may further include means for requesting the media presentation description file based on the viewing position of the client device; means for selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and means for receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
  • the apparatus may further include means for implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
  • the apparatus may further include means for requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
  • the apparatus may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
  • the apparatus may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
  • the apparatus may further include wherein a viewing orientation of a sub viewing volume is signaled.
  • the apparatus may further include means for requesting the media presentation description file for a next sub viewing volume in an exit direction.
  • An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: segment a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generate viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generate one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and provide one or more of the sub viewing volumes based on a client selection and request.
  • the apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: define data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: divide the media presentation description files into smaller media presentation description files; provide signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receive a request for a media presentation description file based on a viewing position of a client; select a partial viewing volume containing the requested media presentation description file; and provide the selected partial viewing volume to the client.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volume of the media presentation description files.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: dynamically create the media presentation description files based on a client device viewing position.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: hierarchically arrange viewing volumes based on signaling of parent sub viewing volumes.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal a viewing orientation of a sub viewing volume.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: link sub viewing volumes by exit direction.
  • An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: monitor a viewing position of a client device within a viewing volume level of one or more media presentation description files; request new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; select one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and render a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • the apparatus may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
  • the apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the apparatus may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: download a media presentation description file from a server, and select a desired sub viewing volume based on information about the centroids in the media presentation description file.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file based on the viewing position of the client device; select a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receive the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request a segmented static sub viewing volume based on an associated uniform resource identifier.
  • the apparatus may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
  • the apparatus may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
  • the apparatus may further include wherein a viewing orientation of a sub viewing volume is signaled.
  • the apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file for a next sub viewing volume in an exit direction.
  • An example method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
  • the method may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the method may further include defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the method may further include signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
  • the method may further include dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
  • the method may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
  • the method may further include implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
  • the method may further include dynamically creating the media presentation description files based on a client device viewing position.
  • the method may further include hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
  • the method may further include signaling a viewing orientation of a sub viewing volume.
  • the method may further include linking sub viewing volumes by exit direction.
  • An example method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • the method may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
  • the method may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the method may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the method may further include downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
  • the method may further include requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
  • the method may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
  • the method may further include requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
  • the method may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
  • the method may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
  • the method may further include wherein a viewing orientation of a sub viewing volume is signaled.
  • the method may further include requesting the media presentation description file for a next sub viewing volume in an exit direction.
  • An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations may be provided, the operations comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
  • the non-transitory program storage device may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the operations of the non-transitory program storage device may further include defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the operations of the non-transitory program storage device may further include signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
  • the operations of the non-transitory program storage device may further include dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
  • the operations of the non-transitory program storage device may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
  • the operations of the non-transitory program storage device may further include implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
  • the operations of the non-transitory program storage device may further include dynamically creating the media presentation description files based on a client device viewing position.
  • the operations of the non-transitory program storage device may further include hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
  • the operations of the non-transitory program storage device may further include signaling a viewing orientation of a sub viewing volume.
  • the operations of the non-transitory program storage device may further include linking sub viewing volumes by exit direction.
  • An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations may be provided, the operations comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
  • the non-transitory program storage device may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
  • the non-transitory program storage device may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
  • the non-transitory program storage device may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
  • the operations of the non-transitory program storage device may further include downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
  • the operations of the non-transitory program storage device may further include requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
  • the operations of the non-transitory program storage device may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
  • the operations of the non-transitory program storage device may further include requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
  • the non-transitory program storage device may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
  • the non-transitory program storage device may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
  • the non-transitory program storage device may further include wherein a viewing orientation of a sub viewing volume is signaled.
  • the operations of the non-transitory program storage device may further include requesting the media presentation description file for a next sub viewing volume in an exit direction.
  • An example apparatus includes one or more circuitries configured to execute any of the functions of a server (e.g. cloud) as described herein.
  • An example apparatus includes one or more circuitries configured to execute any of the functions of a client (e.g. renderer) as described herein.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

Apparatuses, methods, and computer programs are disclosed for six degrees of freedom spatial layout signaling. An example apparatus includes means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.

Description

Six Degrees of Freedom Spatial Layout Signaling
TECHNICAL FIELD
[0001] The examples and non-limiting embodiments relate generally to multimedia and software, and more particularly, to six degrees of freedom spatial layout signaling.
BACKGROUND
[0002] It is known to perform video coding and decoding.
SUMMARY
[0003] The following summary is merely intended to be an example. The summary is not intended to limit the scope of the claims.
[0004] In accordance with an aspect, an apparatus includes means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.
[0005] In accordance with an aspect, an apparatus includes means for monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; means for requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; means for selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and means for rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[0006] In accordance with an aspect, an apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: segment a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generate viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generate one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and provide one or more of the sub viewing volumes based on a client selection and request.
[0007] In accordance with an aspect, an apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: monitor a viewing position of a client device within a viewing volume level of one or more media presentation description files; request new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; select one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and render a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[0008] In accordance with an aspect, a method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
[0009] In accordance with an aspect, a method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[0010] In accordance with an aspect, a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations is provided, the operations comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
[0011] In accordance with an aspect, a non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations is provided, the operations comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
BRIEF DESCRIPTION OF THE DRAWINGS
[0012] The foregoing aspects and other features are explained in the following description, taken in connection with the accompanying drawings, wherein:
[0013] FIG. 1 is a block diagram depicting compression and consumption of a volumetric video scene.
[0014] FIG. 2 is an example block diagram illustrating the semantic mapping between MPD file and volumetric video.
[0015] FIG. 3 is an example code excerpt that shows signaling of sub viewing volume to an adaptation set.
[0016] FIG. 4 is an example code excerpt that shows signaling of the centroid of the sub viewing volume.
[0017] FIG. 5 is a block diagram illustrating the communication between client and server architecture.
[0018] FIG. 6 is an example code excerpt that shows signaling of the sub viewing volume information on the MPD level.
[0019] FIG. 7 is an example code excerpt that shows signaling of the sub viewing volume information within Periods.
[0020] FIG. 8 is a block diagram illustrating signaling between the client and the server.
[0021] FIG. 9 is a diagram that depicts communication of the user viewing position by the client and a database on the server that is used to store information about each sub viewing volume and associated content storage url.
[0022] FIG. 10 is an example apparatus configured to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
[0023] FIG. 11 shows an example method to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
[0024] FIG. 12 shows another example method to implement six degrees of freedom spatial layout signaling, based on the examples described herein.
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0025] The following acronyms and abbreviations that may be found in the specification and/or the drawing figures are defined as follows:
2D or 2d        two-dimensional
2k              a horizontal resolution of approximately 2,000 pixels on a display device or content, or 2048x1080
3D or 3d        three-dimensional
6DOF or 6DoF    six degrees of freedom
720p            1280x720 px
1080p           1920x1080 px
codec           coder-decoder
DASH            Dynamic Adaptive Streaming over HTTP
DB              database
GPU             graphics processing unit
HTTP            HyperText Transfer Protocol
id or ID        identification
IV              immersive video
JSON            JavaScript Object Notation
max             maximum
mimeType        Multipurpose Internet Mail Extensions
min             minimum
MIV             metadata for immersive video
MPD             Media Presentation Description
.mpd            file extension for Media Presentation Description files
MPEG            Moving Picture Experts Group
MPEG-I          MPEG immersive
MPEG-2          second of several standards developed by the Moving Picture Experts Group
mp2t            MPEG-2 TS Byte Stream Format
Neg             negative
N#              International Organization for Standardization document number
PCC             point cloud coding/compression
Pos             positive
px              pixel
.sidx           file extension for segment index files
ts              timescale
URI             uniform resource identifier
url or URL      uniform resource locator
XML             extensible markup language
xs              XML schema
[0026] The examples described herein relate to volumetric video coding, where dynamic 3D objects or scenes are coded into video streams for delivery and playback. The MPEG standards PCC (Point Cloud Compression) and MIV (Metadata for Immersive Video) are two examples of such volumetric video compression.
[0027] For limited viewing volume immersive video (MPEG-I part 12), work is progressing on generating partial views and patches that segment the scene and contain regions occluded from a single point of view. Storing such information usually amounts to more data than traditional 360 degree videos. It has been discovered that even with a significantly limited viewing volume, the compressed content may require several 2k atlases worth of geometry and texture data. Removing or easing limitations on the viewing volume to allow a viewer to freely explore the scene explodes the amount of data required, making local storage of such experiences unviable. Partial streaming of pre-rendered content could be considered as a solution to this problem, which today remains mostly unexplored.
[0028] Entire scenes of pre-rendered volumetric data cannot be stored locally, and while high-end GPUs capable of raytracing are now being introduced, real-time raytracing with full global illumination will not be realistic in mainstream computing devices for a long time; another solution for experiencing immersive video is therefore needed. The especially challenging case of real world captured content requires novel approaches to be developed. Techniques for partial streaming of limited sub viewing volumes do not exist yet.
[0029] In traditional 360 video, viewport based adaptation has been used to improve quality. In this case, signaling between the server and the client application has been defined to enable such experiences. Similar concepts and methods may be applied to streaming of limited viewing volumes, with some adjustments.
[0030] MPEG-I part 12 defines how volumetric video for limited viewing volumes may be encoded, but does not define how larger viewing volumes may be achieved.
[0031] MPEG DASH defines how video may be adaptively streamed from a server to a client device, but streaming tools for volumetric content do not exist yet.
[0032] In general, the examples described herein relate to creating signaling between a server and client to enable selective streaming of sub-volumes of volumetric content for large scenes with arbitrary viewing volumes. Architecturally the examples described herein cover techniques related to the server side content segmentation as well as client side content consumption and the related signaling between the two.
[0033] On the server, volumetric content is pre-processed and segmented into streamable subdivisions fit for consumption. The process involves analysis of the scene, based on which the scene is divided into subdivisions that are volumetrically encoded, for example using MPEG-I part 12 like technologies. On the server side the examples described herein cover the following topics: i) subdivision into possibly overlapping sub viewing volumes, where centroids of the sub viewing volumes may form a grid or tetrahedral mesh which may be used to select preferred sub viewing volumes for any viewing point within the scene viewing volume (a sketch of such a grid subdivision follows this paragraph); ii) segmentation into short sequences that enable fast switching between viewing volumes (DASH + SegmentTemplates) while keeping MPD files relatively short; iii) adding viewing volume and view point indications in DASH MPD files; iv) generation of sub volume scheme URIs for client side instructions to fetch content; v) methods for location based MPD responses (the client requests an MPD file with a particular viewing position, and the server responds by sending the best fitting MPD file to reduce the amount of information that is streamed to the client in the form of MPD files; this requires overlapping viewing volumes for MPD files); and vi) methods for dynamic MPD file generation based on client viewing positions.
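For illustration only, the following Python sketch shows one way a server-side pre-processing step might subdivide an axis-aligned scene viewing volume into overlapping cubic sub viewing volumes whose centroids form a regular grid. The class and parameter names (SubVolume, overlap) are illustrative assumptions, not part of any signaling defined herein.

from dataclasses import dataclass
from itertools import product

@dataclass
class SubVolume:
    centroid: tuple       # (x, y, z) centre of the sub viewing volume
    half_extent: float    # half the cell edge length, enlarged for overlap

def subdivide(scene_min, scene_max, cells_per_axis=4, overlap=0.25):
    """Split an axis-aligned scene viewing volume into overlapping cubic
    sub viewing volumes whose centroids form a regular grid."""
    size = [(hi - lo) / cells_per_axis for lo, hi in zip(scene_min, scene_max)]
    half = max(size) / 2 * (1 + overlap)  # enlarge cells so neighbours overlap
    volumes = []
    for idx in product(range(cells_per_axis), repeat=3):
        centroid = tuple(lo + (i + 0.5) * s
                         for lo, s, i in zip(scene_min, size, idx))
        volumes.append(SubVolume(centroid, half))
    return volumes

grid = subdivide((0, 0, 0), (8, 8, 8))
print(len(grid), grid[0].centroid)  # 64 (1.0, 1.0, 1.0)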
[0034] On the client, the position of the viewer in the scene is tracked and used to request sub volumes of volumetric video from the server. The client continuously monitors its position in relation to the viewing volumes in the MPD file and requests only the best fitting sub volume atlas streams, which are used to render views. Accordingly, the examples described herein further cover the following topics: i) monitoring the viewing position within the MPD-level viewing volume to request new viewing volume data when approaching edges of the MPD-level viewing volume; ii) choosing correct sub volume atlas streams from the MPD file based on the viewing position; and iii) rendering novel views from one or more volumetric video streams. A sketch of this client-side logic follows.
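For illustration only, the following Python sketch outlines the client-side logic described above: picking the nearest sub viewing volume by centroid distance, and detecting when the viewer approaches an edge of the MPD-level viewing volume. The helper names and the edge threshold are illustrative assumptions.

import math

def nearest_sub_volume(viewing_pos, centroids):
    """Index of the sub viewing volume whose centroid is closest to the viewer."""
    return min(range(len(centroids)),
               key=lambda i: math.dist(viewing_pos, centroids[i]))

def near_edge(viewing_pos, volume_min, volume_max, threshold=0.5):
    """True when the viewer is within 'threshold' of any face of the MPD-level
    viewing volume, i.e. new viewing volume data should be requested."""
    return any(p - lo < threshold or hi - p < threshold
               for p, lo, hi in zip(viewing_pos, volume_min, volume_max))

centroids = [(1.0, 1.0, 1.0), (3.0, 1.0, 1.0)]
print(nearest_sub_volume((2.8, 1.1, 0.9), centroids))      # 1
print(near_edge((7.8, 4.0, 4.0), (0, 0, 0), (8, 8, 8)))    # True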
[0035] First the overall volumetric video compression pipeline is described. Then the essential idea of enabling selective streaming of sub viewing volumes is described.
[0036] FIG. 1 is a block diagram 100 depicting compression and consumption of a volumetric video scene, in accordance with the examples described herein. Fundamentally, each frame of the input 3D scene 102 is processed separately. The resulting per-frame atlas and metadata are then stored into video and metadata streams, respectively.
[0037] The first part converts the input 3D scene 102 into a canonical representation for processing. Depending on the source, the input scene 102 may consist of, for example, a procedurally animated 3D scene, animated 3D meshes, or registered camera views with depth maps. The input is sampled at an internal processing frame rate and converted into a collection of 3D samples of the scene geometry at a specified internal processing resolution. Depending on the input, this may involve e.g. voxelizing a mesh model, or down-sampling a high resolution point cloud with very fine details into the processing resolution. The internal representation is finally a collection of scene data points registered in a common 3D scene coordinate system, representing all aspects of the 3D input scene 102. Example aspects may include but are not limited to the color, geometry, and surface normals of the 3D scene 102.
[0038] The View Optimizer 104 takes the internal representation and creates a segmentation of the scene optimized for a specified viewing constraint (the viewing volume). This involves creating view-tiles that have sufficient coverage and resolution for representing the original input scene at minimal quality degradation within the given viewing constraints. The View Optimizer 104 makes use of at least the 3D position of the data points in the scene, but additional attributes such as surface normals, colors, material attributes, and viewing and capture directions may also be considered. Furthermore the View Optimizer 104 may divide the view-tiles into several sub viewing volumes if larger movement around the scene is intended and/or the scene is very complex. The sub viewing volumes may be pre-defined by the content author, or automatically generated based on the input scene 102. The size or bitrate of a single sub viewing volume may be constrained, and all view-tiles in a single sub viewing volume are spatially local to each other to enable efficient streaming.
[0039] The resulting view-tiles for one or more sub viewing volumes may then be pre-rendered in the Cloud Rendering 106 stage. This may involve resampling an input point cloud into 2D tile projections, or calling an external renderer, e.g. a path tracing renderer, to render views of a 3D input scene. For natural camera inputs, virtual views may be synthesized by view interpolation between the original input views, or parts of the input views may be used directly to produce the rendered tiles.
[0040] For each sub viewing volume the rendered tiles are then input into a Packer 108, which produces optimal 2D layout metadata for the rendered view-tiles, which may be used to pack pre-rendered tiles into video frames. The additional metadata that is required to unpack and re-render the packed tiles is also generated by the packer 108. These packed atlases, along with the associated metadata, are then piped to Video Codecs 110 and IV Encoder 112 for generating compressed representations.
[0041] After generating the compressed representation for each sub viewing volume the bitstreams may be further segmented for DASH compatible playback. This may include creation of different adaptations of the same viewing volume by resolution, bitrate, codec or other variable. Information about different viewing volumes and encoded representations is utilized by the Server Logic 118 or may be stored inside static MPD files.
[0042] For viewing the volumetric video, the information about different sub viewing volumes and representations needs to be made available to client devices, such as client 120, which perform view synthesis of novel views based on the user viewport. This information may be signaled using DASH technology. The IV streams may be provided from the IV media database 114 of the cloud 116 to the stream loader 124 of the client 120. Each IV stream contains the necessary information for the view synthesizer to employ rendering methods using the client renderer 122, such as point cloud rendering, mesh rendering, or ray-casting, to reconstruct a view of the scene from any given 3D viewpoint within the IV stream viewing volume (corresponding to a single sub viewing volume). Only relevant IV stream(s) are retrieved from the content server (e.g., cloud 116) to minimize the content streamed to the client device 120, thus enabling larger viewing volumes. These individual IV streams may be combined in the client renderer 122 to achieve a larger local viewing volume which is the union of the currently streamed sub viewing volumes. This way, real-time 3D viewing of the volumetric video for arbitrarily large viewing volumes may be achieved.
[0043] In order to achieve selective streaming of volumetric sub volumes, additional metadata is needed in the DASH manifest, i.e. the MPD file. Depending on the way client devices consume content, the metadata may be signaled in several different ways. On the highest level, a DASH MPD file for a volumetric video may contain one or more Periods (e.g., Periods 204 and 206), which contain several Adaptation Sets (e.g., Adaptation Sets 208 and 210) for different sub viewing volumes (e.g., sub viewing volumes 216 and 218). Each viewing volume may have different resolution and bitrate alternatives (e.g., Bitstream alternatives 220 and 222), which may be represented as different Representations (e.g., 212, 214) in an Adaptation Set (e.g., 208 or 210).
[0044] FIG. 2 is an example block diagram 200 illustrating the semantic mapping between an MPD file 202 and volumetric video. The structural mapping of sub viewing volumes (e.g., sub viewing volumes 216 and 218) into MPD file structures (such as MPD 202) is an important part of the examples described herein. As further shown in FIG. 2, the scene sequences (224, 226) are comprised of the sub viewing volumes (216, 218).
[0045] It is recommended to utilize the SegmentTemplate functionality in DASH to define short segments without bloating the overall DASH file with individual Segment lists. Below is a short example of how SegmentTemplate may be used to reconstruct small segments.
<Period duration="PT10S">
<BaseURL>content/</BaseURL>
<!-- Everything in one Adaptation Set -->
<AdaptationSet mimeType="video/mp2t">
<!-- 720p Representation at 3.0 Mbps -->
<Representation id="720p" bandwidth="3000000" width="1280" height="720">
<BaseURL>720/</BaseURL>
<!-- Example of SegmentTemplate -->
<SegmentTemplate media="segment-$Number$.ts" timescale="1">
<RepresentationIndex sourceURL="representation-index.sidx"/>
<!-- SegmentTimeline for convenience for the client -->
<SegmentTimeline>
<!-- Starting from time 0, there are 10 segments with a duration of (1 / @timescale) seconds-->
<S t="0" r="10" d="1"/>
</SegmentTimeline>
</SegmentTemplate>
</Representation>
<!-- 1080p Representation at 5.4 Mbps -->
<Representation id="1080p" bandwidth="5400000" width="1920" height="1080">
<BaseURL>l080/</BaseURL>
<!-- Example of SegmentTemplate -->
<SegmentTemplate media="segment-$Number$.ts" timescale="1">
<RepresentationIndex sourceURL="representation-index.sidx"/>
<!-- SegmentTimeline for convenience for the client -->
<SegmentTimeline>
<!-- Starting from time 0, there are 10 segments with a duration of (1 / @timescale) seconds-->
<S t="0" r="10" d="1"/>
</SegmentTimeline>
</SegmentTemplate>
</Representation>
</AdaptationSet>
</Period>

[0046] New data structures for DASH manifest. There are some common elements that are used later by other embodiments of the examples described herein. Firstly, there needs to be a definition for a 3d point in space, which is required by most primitives used to define viewpoints and viewing volumes. This PointType may contain three floating point values representing a 3d point in space as x, y and z components.
<xs:complexType name="PointType">
<xs:attribute name="x" type="xs:float" required="true"/>
<xs:attribute name="y" type="xs:float" required="true"/>
<xs:attribute name="z" type="xs:float" required="true"/>
</xs:complexType>
[0047] Secondly to define viewing points a set of shapes needs to be defined that may be used and combined to generate arbitrary viewing volumes. For more details on the viewing volume definition, please refer to PCT/FI2019/050585. For illustration we define here a few examples. Each shape may need to contain an id so that the shapes may be combined. ShapeType shall be defined as an abstract root class which is used by the rest of the refined shapes. ShapeType may optionally contain an order id, which is useful when combining different shapes with arithmetic operations.
<xs:complexType name="ShapeType" abstract="true">
<xs:attribute name="id" type="xs:unsignedInt" required="true"/>
<xs:attribute name="order" type="xs:unsignedlnt" required="false"/>
</xs:complexType>
[0048] Cuboid shape may contain two points, minimum and maximum values, with which a 3d cuboid may be represented.
<xs:complexType name="CuboidType">
<xs:complexContent>
<xs:extension base="ShapeType">
<xs:element name="min" type="PointType" required="true"/> <xs:element name="max" type="PointType" required="true"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
[0049] The sphere may contain a single point representing the origin of the sphere and may have radius information which may indicate the size of the sphere around the origin.
<xs:complexType name="SphereType">
<xs:complexContent>
<xs:extension base="ShapeType">
<xs:element name="origin" type="PointType" required="true"/>
<xs:attribute name="r" type="xs:float" required="true"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
[0050] In addition to the different shapes, there needs to be a method to combine shapes into constructs. As with the rest of the shapes, the Group inherits its id from ShapeType, so that different groups may later be combined. In addition, it may contain an operator attribute, which explains how the different primitives are combined. Valid values for the operator may be defined, such as + for adding primitives or - for subtracting primitives. The order of the primitives (defined in ShapeType) describes the order in which the operations may be applied to the primitives in the group. The operations may be performed first on the two lowest order primitives and then on the following primitives. Subtraction may take the lowest order primitive and subtract the second lowest order primitive from it, then proceed until all primitives are processed. A sketch of evaluating such a group follows the schema below.
<xs:simpleType name="OperatorType">
<xs:restriction base="xs:string ">
<xs:enumeration value="+"/>
<xs:enumeration value="-"/> </xs:restriction>
</xs:simpleType>
<xs:complexType name="GroupType">
<xs:complexContent>
<xs:extension base="ShapeType">
<xs :sequence>
<xs:element name="primitive" type="ShapeType" minOccurs="2 " maxOccurs= "unbounded"/>
</xs :sequence>
<xs :attribute name="operator " type="OperatorType"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
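For illustration only, the following Python sketch shows how a client might evaluate such a group for a point-in-viewing-volume test, applying the operator across primitives in order as described above. The shape classes are illustrative stand-ins for the XML types, not a normative implementation.

import math

class Sphere:
    def __init__(self, origin, r, order=0):
        self.origin, self.r, self.order = origin, r, order
    def contains(self, p):
        return math.dist(p, self.origin) <= self.r

class Cuboid:
    def __init__(self, mn, mx, order=0):
        self.mn, self.mx, self.order = mn, mx, order
    def contains(self, p):
        return all(lo <= v <= hi for v, lo, hi in zip(p, self.mn, self.mx))

def group_contains(primitives, operator, p):
    """Combine primitives by 'order': '+' unions them, '-' subtracts each
    later primitive from the lowest order one."""
    prims = sorted(primitives, key=lambda s: s.order)
    inside = prims[0].contains(p)
    for prim in prims[1:]:
        if operator == "+":
            inside = inside or prim.contains(p)
        else:  # "-"
            inside = inside and not prim.contains(p)
    return inside

# A cuboid viewing volume with a spherical hole in the middle:
print(group_contains(
    [Cuboid((0, 0, 0), (4, 4, 4), order=0), Sphere((2, 2, 2), 1.0, order=1)],
    "-", (2.0, 2.0, 3.8)))  # True: inside the cuboid, outside the sphere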
[0051] Static MPD file with client-side logic. In the first embodiment an MPD file may contain a list of all available sub viewing volumes, in which case the client device chooses the best fitting Adaptation Sets from the MPD file by itself. This is a simple solution that may work well for fairly small scenes which do not consist of hundreds or thousands of sub viewing volumes. Because of its design simplicity, this may likely be the most important embodiment. From a signaling perspective, MPEG DASH MPD Adaptation Sets would need information about the sub viewing volume centroids for the client to select the optimal sub viewing volumes.
[0052] FIG. 3 is an example code excerpt 300 that shows signaling of a sub viewing volume centroid to an adaptation set. Reference number 302 of FIG. 3 (code also provided below) highlights changes that may be required to the MPEG DASH specification.
<xs:complexType name="AdaptationSetType">
<xs:complexContent>
<xs:extension base="RepresentationBaseType">
<xs:sequence>
<.../>
<xs:element name="ViewingVolumeCentroid" type="PointType" minOccurs="0" maxOccurs="1 "/>
</xs:sequence> <.../>
</xs:extension>
</xs:complexContent>
</xs:complexType>
[0053] MPEG DASH supports a Viewpoint attribute, which could be used to signal the centroid of the sub viewing volume. However, the syntax of the Viewpoint attribute is not defined and each Adaptation Set may only contain zero or one ViewingVolumeCentroid. Alternatively, the ViewingVolumeCentroid may be defined as three explicit attributes as shown in FIG. 4.
[0054] FIG. 4 is an example code excerpt 400 that shows signaling of the centroid of the sub viewing volume. Reference number 402 of FIG. 4 (code also provided below) highlights the signaling.
<xs:complexType name="AdaptationSetType">
<xs:complexContent>
<xs:extension base= "RepresentationBaseType">
<.../>
<xs :attribute name="ViewingVolumeCentroidX" type="xs:float" required="false"/>
<xs :attribute name="ViewingVolumeCentroidY" type="xs:float" required="false"/>
<xs :attribute name="ViewingVolumeCentroid type="xs:float" required="false"/>
</xs:extension>
</xs:complexContent>
</xs:complexType>
[0055] FIG. 5 is a block diagram 500 illustrating the communication between client 502 and server 504 architecture. The client 502 simply downloads an MPD file from the server 504. With the information about the ViewingVolumeCentroids in the MPD file, the client 502 is able to select desired sub viewing volumes to be downloaded. The selection of the correct streams is done by the client application logic (e.g., client logic 126 as shown in FIG. 1) based on the viewing point of the user. In particular, FIG. 5 shows content storage 508 providing volumetric video bitstreams to client 502, and receiving segmented volumetric video bitstreams from cloud 504.
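For illustration only, the following Python sketch shows how a client might extract the proposed ViewingVolumeCentroid elements from a downloaded MPD. The embedded MPD fragment is a simplified, namespace-free stand-in for a real manifest.

import xml.etree.ElementTree as ET

MPD = """<MPD><Period>
  <AdaptationSet id="0"><ViewingVolumeCentroid x="1.0" y="1.0" z="1.0"/></AdaptationSet>
  <AdaptationSet id="1"><ViewingVolumeCentroid x="3.0" y="1.0" z="1.0"/></AdaptationSet>
</Period></MPD>"""

root = ET.fromstring(MPD)
centroids = {}
for aset in root.iter("AdaptationSet"):
    c = aset.find("ViewingVolumeCentroid")
    if c is not None:
        centroids[aset.get("id")] = tuple(float(c.get(k)) for k in ("x", "y", "z"))
print(centroids)  # {'0': (1.0, 1.0, 1.0), '1': (3.0, 1.0, 1.0)}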
[0056] Partial MPD streaming. In another embodiment MPD files may be further divided into smaller MPD files, in which case the collection of MPD files defines the entire viewing volume of the scene. This may require additional signaling on the MPD level so that the server and client are able to select the best fitting collection of sub viewing volumes without having to read viewing volume related information from individual volumetric video streams. The benefit of this embodiment over the previous one is that not all sub viewing volumes need to be signaled in a single MPD file, and that the server side selection process for choosing the best fitting MPD file for the client is very simple. This reduces the amount of data that needs to be signaled to the client inside MPD files, thus enabling streaming of large scenes which may contain thousands of sub viewing volumes.
[0057] FIG. 6 is an example code excerpt 600 that shows signaling of the sub viewing volume information on the MPD level. Reference number 602 of FIG. 6 (code also provided below) highlights the signaling.
<xs:complexType name="MPDtype">
<xs:sequence>
<.../>
<xs:element name="ViewingVolume" type="ShapeType" minOccurs="0" axOccurs="1 "/>
</xs:sequence>
<.../>
</xs:complexType>
[0058] Alternatively, the sub viewing volume information may be signaled within Periods (e.g., Period 204 and/or Period 206), if the sub viewing volume segmentation of the scene is expected to change during playback.
[0059] FIG. 7 is an example code excerpt 700 that shows signaling of the sub viewing volume information within Periods. Reference number 702 of FIG. 7 (code also provided below) highlights the signaling.
<xs:complexType name="PeriodType">
<xs:sequence>
<.../>
<xs:element name="ViewingVolume" type="ShapeType" minOccurs="0" maxOccurs="1 "/>
</xs:sequence>
<.../>
</xs:complexType>
[0060] FIG. 8 is a block diagram 800 illustrating signaling between the client 802 and the server 804. The client 802 requests an MPD file from the server 804 by providing its viewing position to the server 804. The server 804 picks the MPD file containing the best fitting partial viewing volume and sends it to the client 802. On the client side, the application selects the desired sub viewing volumes based on the user viewing point and downloads only the required streams of volumetric data. In particular, FIG. 8 shows content storage 808 providing volumetric video bitstreams to client 802, and receiving segmented volumetric video bitstreams from cloud 804.
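For illustration only, the following Python sketch shows how a client might build such a location-based MPD request. The query parameter names are illustrative assumptions; the examples herein do not fix a wire format for the viewing position.

from urllib.parse import urlencode

def mpd_request_url(base_url, viewing_pos):
    """Build a location-based MPD request carrying the client viewing position."""
    x, y, z = viewing_pos
    return base_url + "?" + urlencode({"x": x, "y": y, "z": z})

print(mpd_request_url("https://server.com/content/scene/desc.mpd", (1.5, 0.0, 2.0)))
# https://server.com/content/scene/desc.mpd?x=1.5&y=0.0&z=2.0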
[0061] Utilization of index files for MPD scene segmentation. In another embodiment an index file listing URLs for sub viewing volume MPD files may be used. The index file may be communicated outside of the DASH MPD, for example as a JSON object via an HTTP request. As an example, the scene description may be defined as the following kind of JSON file.
{
  "scene": {
    "id": 1,
    "viewing_volume": { "ShapeType" },
    "base_url": "https://server.com/content/scene/"
    /* other useful metadata */
  },
  "sub_viewing_volumes": [
    /* list of sub viewing volumes */
    {
      "viewing_volume": { "ShapeType" },
      "mpd_url": "volume_01.mpd"
    },
    {
      "viewing_volume": { "ShapeType" },
      "mpd_url": "volume_02.mpd"
    }
  ]
}
[0062] The viewing volume may be defined using a ShapeType construct similar to that described above for the DASH manifest data structures.
[0063] URI scheme based scene segmentation. In another embodiment the scene could be divided into static sub viewing volumes for which a URI scheme may be developed. As an example, the entire viewing volume may be divided into a 16x16x16 grid. Each grid item, i.e. sub viewing volume, may have a scheme ID which may describe its relative position in the entire viewing volume. When the client knows either the size of the entire viewing volume or size of the current sub viewing volume, it is able to request the correct next sub viewing volume from the server without having to communicate any additional information to the server. The size of the entire viewing volume may be communicated off-band or as an MPD level viewing volume element. The size of individual sub-viewing volumes may be communicated as Adaptation Set level viewing volume elements.
[0064] As an example, the URI scheme may work as follows: https://server.com/content/scene/:id/:xx/:yy/:zz. (:id) may be used to indicate the scene id. (:xx) indicates the sub division index on the x-axis, (:yy) indicates the sub division index on the y-axis and (:zz) indicates the sub division index on the z-axis. If the client knows that its current sub viewing volume URI is https://server.com/content/scene/1/08/10/00/desc.mpd and the user is moving forward on the y-axis, the neighboring sub viewing volume URI in the positive y-axis direction may have (:yy) = 11, which may be associated with the following url: https://server.com/content/scene/1/08/11/00/desc.mpd
[0065] Alternatively the client may know the size of the entire viewing volume and that it is divided into equal static sub viewing volumes, e.g. a 16x16x16 grid. The client is then able to request the specific sub viewing volume for any viewing point within the viewing volume by using the scheme described above.
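For illustration only, the following Python sketch derives the neighboring sub viewing volume URI from the current grid indices, reproducing the example above. The helper names are illustrative.

def sub_volume_uri(base, scene_id, xx, yy, zz):
    """Fill in the :id/:xx/:yy/:zz parts of the scheme described above."""
    return f"{base}/{scene_id}/{xx:02d}/{yy:02d}/{zz:02d}/desc.mpd"

def neighbour(indices, axis, direction, grid=16):
    """Step one cell along an axis (0=x, 1=y, 2=z), clamping at the grid edge."""
    idx = list(indices)
    idx[axis] = max(0, min(grid - 1, idx[axis] + direction))
    return tuple(idx)

current = (8, 10, 0)                            # current grid position
nxt = neighbour(current, axis=1, direction=+1)  # moving forward on the y-axis
print(sub_volume_uri("https://server.com/content/scene", 1, *nxt))
# https://server.com/content/scene/1/08/11/00/desc.mpd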
[0066] Communication of the URI scheme, viewing volume size and sub division information may be done off band. Below is an example of how the above scheme could be specified as a JSON object.
{
  "scene": {
    "id": 1,
    "viewing_volume": { "ShapeType" },
    "base_url": "https://server.com/content/scene/:id/:xx/:yy/:zz",
    "sub_divisions": {
      "type": "static uniform grid",
      "x": [16, ":xx"],
      "y": [16, ":yy"],
      "z": [16, ":zz"]
    }
    /* other useful metadata */
  }
}
[0067] Dynamic MPD file with server-side logic and client return channel. In another embodiment MPD files may be dynamically created based on the client device viewing position. FIG. 9 is a diagram 900 illustrating this embodiment, depicting communication of the user viewing position by the client 902, and a database 906 on the server 904 that is used to store information about each sub viewing volume and the associated content storage url. The server 904 constructs MPD files on the fly that contain only the relevant sub viewing volume information for each client. To enable flexible switching between sub viewing volumes, the MPD files may contain information about their valid sub viewing volume, so that the client 902 may easily tell when a new MPD may be requested. In particular, FIG. 9 shows content storage 908 providing volumetric video bitstreams to client 902, and receiving segmented volumetric video bitstreams from cloud 904.
[0068] This method requires by far the least amount of information submitted within an MPD file, but requires a client-server architecture, where the server side contains a database and processing capabilities (implemented by viewing volume DB 906) to reconstruct MPD files on the fly for several different clients.
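For illustration only, the following Python sketch stands in for the server-side lookup of FIG. 9, with a plain dictionary in place of the viewing volume database 906. A real server would wrap the selected urls in a generated MPD; all names here are illustrative.

import math

# A plain dict standing in for the viewing volume database:
# sub viewing volume centroid -> content storage url.
VOLUME_DB = {
    (1.0, 1.0, 1.0): "https://server.com/content/vol_000/",
    (3.0, 1.0, 1.0): "https://server.com/content/vol_001/",
}

def mpd_content_for(viewing_pos, k=1):
    """Return the content urls of the k sub viewing volumes nearest to the
    reported client position; a real server would wrap them in an MPD."""
    ranked = sorted(VOLUME_DB, key=lambda c: math.dist(c, viewing_pos))
    return [VOLUME_DB[c] for c in ranked[:k]]

print(mpd_content_for((1.2, 1.0, 0.9)))
# ['https://server.com/content/vol_000/']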
[0069] Hierarchical arrangement of viewing volumes. In another embodiment signaling of parent viewing volumes may be desired in order to hierarchically arrange viewing volumes. This may be implemented by adding (e.g., by appending) an optional parent attribute to ShapeType. This signaling may be particularly useful to cater for devices which may be able to process larger viewing volumes locally.
<xs:complexType name="ShapeType" abstract="true">
<xs:attribute name="id" type="xs:unsignedInt" required="true"/>
<xs:attribute name="order" type="xs:unsignedlnt" required="false"/>
<xs:attribute name="parent" type="xs:unsignedlnt" required="false"/>
<xs:element name="url" type="xs:string" required="false"/> <xs:sequence> <xs:element name="child" type="ShapeType" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:complexType>
[0070] The parent attribute of ShapeType indicates the id of the parent sub viewing volume shape. Using this information, the client is able to identify which parent viewing volume contains which sub viewing volume. The ShapeType may also contain a list of child nodes, each of which may contain a sub volume of the parent shape. With the signaling described above, the client is able to navigate deeper into the viewing volume by following the url linking of parent and child ShapeTypes. A sketch of such navigation follows.
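For illustration only, the following Python sketch mirrors the parent/child linking of the extended ShapeType and walks the hierarchy upward, e.g. to find a larger parent volume that a capable device could process locally. The Shape class and urls are illustrative.

class Shape:
    def __init__(self, id, url, parent=None):
        self.id, self.url, self.parent = id, url, parent

shapes = {s.id: s for s in (
    Shape(0, "https://server.com/scene/full.mpd"),
    Shape(1, "https://server.com/scene/room.mpd", parent=0),
    Shape(2, "https://server.com/scene/corner.mpd", parent=1),
)}

def ancestor_urls(shape_id):
    """Walk parent links upward from a sub viewing volume to the root."""
    urls = []
    node = shapes[shape_id]
    while node.parent is not None:
        node = shapes[node.parent]
        urls.append(node.url)
    return urls

print(ancestor_urls(2))
# ['https://server.com/scene/room.mpd', 'https://server.com/scene/full.mpd']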
[0071] Additional viewing orientation information. In another embodiment a viewing orientation may be signaled in addition to the viewing volume. The viewing direction may be signaled as a normalized direction vector with x, y and z components. In addition, vertical and horizontal field of view values may be signaled. This may enable a more fine grained viewing volume definition. A DirectionType may be added to ShapeType to add directionality information to the viewing volume (a sketch of a field-of-view test follows the schema below).
<xs:complexType name="DirectionType">
<xs:attribute name="x" type="xs:float" required="true"/>
<xs:attribute name="y" type="xs:float" required="true"/>
<xs:attribute name="z" type="xs:float" required="true"/>
<xs:attribute name="FOVX" type="xs:float" required="false"/> <xs:attribute name="FOVY" type="xs:float" required="false"/> <xs:attribute name="url" type="xs:string" required="true"/> </xs:complexType>
<xs:complexType name="ShapeType" abstract="true">
<xs:attribute name="id" type="xs:unsignedlnt" required="true"/>
<xs:attribute name="order" type="xs:unsignedlnt" required="false"/>
<xs:element name="direction" type="DirectionType" required="false"/>
</xs:complexType>
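For illustration only, the following Python sketch tests whether a view direction falls within a signaled viewing orientation, assuming unit direction vectors and a symmetric field of view in degrees. The function name and the half-angle test are illustrative assumptions.

import math

def within_fov(view_dir, signaled_dir, fov_deg):
    """True when the angle between two unit direction vectors is at most
    half of the signaled field of view."""
    dot = sum(a * b for a, b in zip(view_dir, signaled_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, dot))))
    return angle <= fov_deg / 2

print(within_fov((0, 0, 1), (0, 0, 1), fov_deg=90))  # True
print(within_fov((1, 0, 0), (0, 0, 1), fov_deg=90))  # False, 90 degrees apart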
[0072] Relative URIs between MPDs. In another embodiment viewing volumes may be linked together by exit directions to provide easily accessible MPD links when the client is exiting the local viewing volume in a specific direction. When exiting the sub viewing volume in a specific exit direction, the client may directly fetch the MPD for the next sub viewing volume in the exit direction. This signaling may be performed outside of MPD files to provide additional flexibility, or may be added inside MPD files if needed. With this information available, the client becomes less dependent on the server side component. Exit directions may be defined either as geographic compass points, 3d axes or arbitrary direction vectors.
<xs:complexType name="ExitDirectionType" abstract="true">
</xs:complexType>
<xs:complexContent name="CompassExitDirection">
<xs:extension base="ExitDirectionType">
<xs:element name="north" type="xs:string" required="false"/>
<xs:element name="east" type="xs:string" required="false"/>
<xs:element name="south" type="xs:string" required="false"/>
<xs:element name="west" type="xs:string" required="false"/>
<xs:element name="up" type="xs:string" required="false"/> <xs:element name="down" type="xs:string" required="false"/>
</xs:extension>
</xs:complexContent>
<xs:complexContent name="AxesExitDirection">
<xs:extension base="ExitDirectionType">
<xs:element name="PosX" type="xs:string" required="false"/>
<xs:element name="NegX" type="xs:string" required="false"/>
<xs:element name="PosY" type="xs:string" required="false"/>
<xs:element name="NegY" type="xs:string" required="false"/>
<xs:element name="PosZ" type="xs:string" required="false"/>
<xs:element name="NegZ" type="xs:string" required="false"/>
</xs:extension>
</xs:complexContent> <xs:complexType name="DirectionType">
<xs:attribute name="x" type="xs:float" required="true"/>
<xs:attribute name="y" type="xs:float" required="true"/>
<xs:attribute name="z" type="xs:float" required="true"/>
<xs:attribute name="url" type="xs:string" required="true"/> </xs:complexType>
<xs:complexContent name="AnyExitDirection">
<xs:extension base="ExitDirectionType">
<xs:sequence>
<xs:element name="direction" type="DirectionType" minOccurs="0" maxOccurs="unbounded"/>
</xs:sequence>
</xs:extension>
</xs:complexContent>
<xs:complexType name="ShapeType" abstract="true">
<xs:attribute name="id" type="xs:unsignedInt" required="true"/>
<xs:attribute name="order" type="xs:unsignedlnt" required="false"/>
<xs:attribute name="parent" type="xs:unsignedlnt" required="false"/>
<xs:element name="url" type="xs:string" required="false"/> <xs:element name="ExitDirection" type="ExitDirectionType required="false"/>
</xs:complexType>
[0073] The url in the ShapeType may refer to the MPD or content url for the given sub viewing volume, whereas the strings in the ExitDirectionTypes may give the associated MPD url for the surrounding sub viewing volumes. By following the exit direction links, the client is able to request neighboring sub viewing volumes simply by following the directionality. The client may from time to time end up in a situation where it needs to fetch two or more neighboring sub viewing volumes, in which case the client fetches the MPD files for all needed sub viewing volumes but only downloads the partial sub viewing volume video streams needed. A sketch of resolving an exit direction follows.
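For illustration only, the following Python sketch resolves a movement direction to the MPD url of the next sub viewing volume using the AxesExitDirection form. The link table contents are illustrative.

# Exit links as they might be parsed from an AxesExitDirection element.
EXIT_LINKS = {
    "PosY": "https://server.com/content/scene/1/08/11/00/desc.mpd",
    "NegY": "https://server.com/content/scene/1/08/09/00/desc.mpd",
}

def exit_mpd(velocity):
    """Pick the dominant axis of motion and follow the matching exit link."""
    axis = max(range(3), key=lambda i: abs(velocity[i]))
    sign = "Pos" if velocity[axis] >= 0 else "Neg"
    return EXIT_LINKS.get(sign + "XYZ"[axis])

print(exit_mpd((0.0, 1.0, 0.1)))
# https://server.com/content/scene/1/08/11/00/desc.mpd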
[0074] The main advantage of the examples described herein is enabling consumption of 6DoF content through selective streaming of volumetric sub volumes. Such mechanisms are required due to the size of volumetric scenes, which cannot be stored locally.
[0075] FIG. 10 is an example apparatus 1000, which may be implemented in hardware, configured to implement six degrees of freedom spatial layout signaling, based on the examples described herein. The apparatus 1000 comprises a processor 1002, at least one non-transitory memory 1004 including computer program code 1005, wherein the at least one memory 1004 and the computer program code 1005 are configured to, with the at least one processor 1002, cause the apparatus to implement circuitry, a process, component, module, or function (collectively 1006) to implement the signaling as described herein. The apparatus 1000 optionally includes a display and/or I/O interface 1008 that may be used to display aspects or a status of the methods described herein (e.g., as the methods are being performed or at a subsequent time). The display and/or I/O interface 1008 may also be configured to receive input such as user input. The apparatus 1000 also optionally includes one or more network (NW) interfaces (I/F(s)) 1010. The NW I/F(s) 1010 may be wired and/or wireless and communicate over the Internet/other network(s) via any communication technique. The NW I/F(s) 1010 may comprise one or more transmitters and one or more receivers. The apparatus 1000 may be configured as a server or client based on the signaling aspects described herein (for example, apparatus 1000 may be a remote, virtual or cloud apparatus).
[0076] References to a 'computer', 'processor', etc. should be understood to encompass not only computers having different architectures such as single/multi-processor architectures and sequential (Von Neumann)/parallel architectures but also specialized circuits such as field-programmable gate arrays (FPGA), application specific circuits (ASIC), signal processing devices and other processing circuitry. References to computer program, instructions, code etc. should be understood to encompass software for a programmable processor or firmware such as, for example, the programmable content of a hardware device whether instructions for a processor, or configuration settings for a fixed-function device, gate array or programmable logic device etc.
[0077] The memory 1004 may be implemented using any suitable data storage technology, such as semiconductor based memory devices, flash memory, magnetic memory devices and systems, optical memory devices and systems, fixed memory and removable memory. The memory 1004 may comprise a database for storing data.
[0078] As used in this application, the term 'circuitry' refers to all of the following: (a) hardware-only circuit implementations, such as implementations in only analog and/or digital circuitry, and (b) combinations of circuits and software (and/or firmware), such as (as applicable): (i) a combination of processor(s) or (ii) portions of processor(s)/software including digital signal processor(s), software, and memory(ies) that work together to cause an apparatus to perform various functions, and (c) circuits, such as a microprocessor(s) or a portion of a microprocessor(s), that require software or firmware for operation, even if the software or firmware is not physically present. This definition of 'circuitry' applies to all uses of this term in this application. As a further example, as used in this application, the term 'circuitry' would also cover an implementation of merely a processor (or multiple processors) or a portion of a processor and its (or their) accompanying software and/or firmware. The term 'circuitry' would also cover, for example and if applicable to the particular element, a baseband integrated circuit or applications processor integrated circuit for a mobile phone or a similar integrated circuit in a server, a cellular network device, or another network device.
[0079] FIG. 11 is an example method 1100 that implements six degrees of freedom spatial layout signaling based on the examples described herein. At 1102, the method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives. At 1104, the method includes generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files. At 1106, the method includes generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content. At 1108, the method includes providing one or more of the sub viewing volumes based on a client selection and request. The method 1100 may be performed by a server.
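A minimal, self-contained Python sketch of steps 1102 to 1108 follows; every name and url pattern in it (SubViewingVolume, write_mpd, the subvolume scheme) is an illustrative assumption rather than an interface defined by this description.

# Hypothetical outline of server-side method 1100.
from dataclasses import dataclass
from typing import List


@dataclass
class SubViewingVolume:
    name: str
    bitstream_alternatives: List[str]  # e.g. quality variants


def segment_scene(scene: str, count: int) -> List[SubViewingVolume]:
    # 1102: segment the scene into sub viewing volumes, each carrying
    # one or more bitstream alternatives (stand-in segmentation).
    return [SubViewingVolume(f"{scene}_vol{i}", ["hi", "lo"])
            for i in range(count)]


def write_mpd(v: SubViewingVolume) -> str:
    # 1104: emit viewing volume / view point indications in a DASH MPD;
    # here only the url of the generated file is modeled.
    return f"https://example.com/{v.name}.mpd"


def sub_volume_uri(v: SubViewingVolume) -> str:
    # 1106: a sub volume scheme URI used as a client-side retrieval
    # instruction.
    return f"subvolume://{v.name}"


def provide(volumes: List[SubViewingVolume],
            requested: List[str]) -> List[SubViewingVolume]:
    # 1108: serve only the sub viewing volumes the client selected.
    return [v for v in volumes if v.name in requested]


volumes = segment_scene("museum", 4)
print([write_mpd(v) for v in volumes])
print([sub_volume_uri(v) for v in volumes])
print(provide(volumes, ["museum_vol1"]))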
[0080] FIG. 12 is an example method 1200 that implements six degrees of freedom spatial layout signaling, based on the examples described herein. At 1202, the method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files. At 1204, the method includes requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files. At 1206, the method includes selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device. At 1208, the method includes rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene. The method 1200 may be performed by a client.
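A corresponding client-side sketch of steps 1202 to 1208 is given below; the edge margin, the nearest-centroid stream selection, and all names are likewise illustrative assumptions.

# Hypothetical outline of client-side method 1200.
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class AtlasStream:
    url: str
    center: Tuple[float, float, float]  # centroid of the sub viewing volume


def near_edge(pos, lo, hi, margin=0.5) -> bool:
    # 1204: new viewing volume data is requested once the viewing
    # position comes within `margin` of any face of the current level.
    return any(p - a < margin or b - p < margin
               for p, a, b in zip(pos, lo, hi))


def select_streams(pos, streams: List[AtlasStream], k=2) -> List[AtlasStream]:
    # 1206: choose the k sub volume atlas streams nearest the viewing
    # position (orientation-based culling omitted for brevity).
    return sorted(streams,
                  key=lambda s: sum((p - c) ** 2
                                    for p, c in zip(pos, s.center)))[:k]


def render_union(selected: List[AtlasStream]) -> str:
    # 1208: render the novel view from the union of the currently
    # streamed sub viewing volumes (modeled here as a string).
    return "union(" + ", ".join(s.url for s in selected) + ")"


streams = [AtlasStream("atlas_a.mp4", (0.0, 0.0, 0.0)),
           AtlasStream("atlas_b.mp4", (2.0, 0.0, 0.0))]
pos = (1.8, 0.0, 0.0)                                   # 1202: monitored position
if near_edge(pos, (0.0, -1.0, -1.0), (2.0, 1.0, 1.0)):  # 1204
    print("request new viewing volume data")
print(render_union(select_streams(pos, streams)))       # 1206 + 1208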
[0081] An example apparatus includes means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.
[0082] The apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
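As a non-normative illustration of such centroid-based selection, the sketch below reduces the choice of the preferred sub viewing volume to a nearest-centroid lookup; the centroid coordinates are assumed for the example, and a real client could exploit the grid or tetrahedral-mesh structure for a faster lookup than this linear scan.

# Illustrative nearest-centroid selection over assumed sub viewing
# volume centroids.
from math import dist  # Python 3.8+

centroids = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0),
             (0.0, 2.0, 0.0), (2.0, 2.0, 0.0)]  # assumed grid layout


def preferred_sub_volume(viewing_point):
    """Index of the sub viewing volume whose centroid is nearest to the
    given viewing point within the larger viewing volume."""
    return min(range(len(centroids)),
               key=lambda i: dist(viewing_point, centroids[i]))


print(preferred_sub_volume((1.4, 0.3, 0.0)))  # -> 1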
[0083] The apparatus may further include means for defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
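One non-normative Python rendering of these four data structures may look as follows; the field names and the operator values are assumptions for illustration, not the schema actually signaled in DASH.

# Illustrative counterparts of the four data structures named above.
from dataclasses import dataclass, field
from enum import Enum
from typing import List


@dataclass
class Point3D:
    """A three-dimensional point in space."""
    x: float
    y: float
    z: float


class Operator(Enum):
    """Operator primitive for combining shapes and groups of shapes."""
    ADD = "add"            # e.g. union with the volume built so far
    SUBTRACT = "subtract"  # e.g. carve a region out of the volume


@dataclass
class Shape:
    """One shape used to generate a sub viewing volume."""
    shape_id: int
    center: Point3D
    operator: Operator = Operator.ADD


@dataclass
class ShapeGroup:
    """A group of shapes combined in order."""
    group_id: int
    shapes: List[Shape] = field(default_factory=list)


volume = ShapeGroup(1, [Shape(1, Point3D(0.0, 0.0, 0.0)),
                        Shape(2, Point3D(1.0, 0.0, 0.0), Operator.SUBTRACT)])
print(len(volume.shapes))  # -> 2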
[0084] The apparatus may further include means for signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
[0085] The apparatus may further include means for dividing the media presentation description files into smaller media presentation description files; means for providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; means for receiving a request for a media presentation description file based on a viewing position of a client; means for selecting a partial viewing volume containing the requested media presentation description file; and means for providing the selected partial viewing volume to the client.
[0086] The apparatus may further include means for implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
[0087] The apparatus may further include means for implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
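A non-normative sketch of such a scheme follows; the 'subvolume://' scheme name and the mapping table are assumptions made for illustration only.

# Hypothetical resolution of a sub volume scheme URI to the URL of a
# segmented static sub viewing volume.
from urllib.parse import urlparse

URI_TO_URL = {
    "subvolume://scene1/vol0": "https://example.com/scene1/vol0.mpd",
    "subvolume://scene1/vol1": "https://example.com/scene1/vol1.mpd",
}


def resolve(sub_volume_uri: str) -> str:
    """Map a sub volume scheme URI to the URL it is associated with."""
    if urlparse(sub_volume_uri).scheme != "subvolume":
        raise ValueError("not a sub volume scheme URI")
    return URI_TO_URL[sub_volume_uri]


print(resolve("subvolume://scene1/vol0"))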
[0088] The apparatus may further include means for dynamically creating the media presentation description files based on a client device viewing position.
[0089] The apparatus may further include means for hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
[0090] The apparatus may further include means for signaling a viewing orientation of a sub viewing volume.
[0091] The apparatus may further include means for linking sub viewing volumes by exit direction.
[0092] An example apparatus includes means for monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; means for requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; means for selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and means for rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[0093] The apparatus may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
[0094] The apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[0095] The apparatus may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[0096] The apparatus may further include means for downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
[0097] The apparatus may further include means for requesting the media presentation description file based on the viewing position of the client device; means for selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and means for receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
[0098] The apparatus may further include means for implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
[0099] The apparatus may further include means for requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
[00100] The apparatus may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
[00101] The apparatus may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
[00102] The apparatus may further include wherein a viewing orientation of a sub viewing volume is signaled.
[00103] The apparatus may further include means for requesting the media presentation description file for a next sub viewing volume in an exit direction.
[00104] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: segment a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generate viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generate one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and provide one or more of the sub viewing volumes based on a client selection and request.
[00105] The apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00106] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: define data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00107] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
[00108] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: divide the media presentation description files into smaller media presentation description files; provide signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receive a request for a media presentation description file based on a viewing position of a client; select a partial viewing volume containing the requested media presentation description file; and provide the selected partial viewing volume to the client.
[00109] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
[00110] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
[00111] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: dynamically create the media presentation description files based on a client device viewing position.
[00112] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: hierarchically arrange viewing volumes based on signaling of parent sub viewing volumes.
[00113] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal a viewing orientation of a sub viewing volume.
[00114] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: link sub viewing volumes by exit direction.
[00115] An example apparatus includes at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: monitor a viewing position of a client device within a viewing volume level of one or more media presentation description files; request new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; select one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and render a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[00116] The apparatus may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
[00117] The apparatus may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00118] The apparatus may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00119] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: download a media presentation description file from a server, and select a desired sub viewing volume based on information about the centroids in the media presentation description file.
[00120] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file based on the viewing position of the client device; select a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receive the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
[00121] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
[00122] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request a segmented static sub viewing volume based on an associated uniform resource identifier.
[00123] The apparatus may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
[00124] The apparatus may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
[00125] The apparatus may further include wherein a viewing orientation of a sub viewing volume is signaled.
[00126] The apparatus may further include wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file for a next sub viewing volume in an exit direction.
[00127] An example method includes segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
[00128] The method may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00129] The method may further include defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00130] The method may further include signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
[00131] The method may further include dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
[00132] The method may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
[00133] The method may further include implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
[00134] The method may further include dynamically creating the media presentation description files based on a client device viewing position.
[00135] The method may further include hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
[00136] The method may further include signaling a viewing orientation of a sub viewing volume.
[00137] The method may further include linking sub viewing volumes by exit direction.
[00138] An example method includes monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[00139] The method may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
[00140] The method may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00141] The method may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00142] The method may further include downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
[00143] The method may further include requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
[00144] The method may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
[00145] The method may further include requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
[00146] The method may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
[00147] The method may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
[00148] The method may further include wherein a viewing orientation of a sub viewing volume is signaled.
[00149] The method may further include requesting the media presentation description file for a next sub viewing volume in an exit direction.
[00150] An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations may be provided, the operations comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
[00151] The non-transitory program storage device may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00152] The operations of the non-transitory program storage device may further include defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00153] The operations of the non-transitory program storage device may further include signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
[00154] The operations of the non-transitory program storage device may further include dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
[00155] The operations of the non-transitory program storage device may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
[00156] The operations of the non-transitory program storage device may further include implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
[00157] The operations of the non-transitory program storage device may further include dynamically creating the media presentation description files based on a client device viewing position.
[00158] The operations of the non-transitory program storage device may further include hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
[00159] The operations of the non-transitory program storage device may further include signaling a viewing orientation of a sub viewing volume.
[00160] The operations of the non-transitory program storage device may further include linking sub viewing volumes by exit direction.
[00161] An example non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations may be provided, the operations comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
[00162] The non-transitory program storage device may further include wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
[00163] The non-transitory program storage device may further include wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
[00164] The non-transitory program storage device may further include wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
[00165] The operations of the non-transitory program storage device may further include downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
[00166] The operations of the non-transitory program storage device may further include requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
[00167] The operations of the non-transitory program storage device may further include implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
[00168] The operations of the non-transitory program storage device may further include requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
[00169] The non-transitory program storage device may further include wherein the media presentation description files are dynamically created based on the viewing position of the client device.
[00170] The non-transitory program storage device may further include wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
[00171] The non-transitory program storage device may further include wherein a viewing orientation of a sub viewing volume is signaled.
[00172] The operations of the non-transitory program storage device may further include requesting the media presentation description file for a next sub viewing volume in an exit direction.
[00173] An example apparatus includes one or more circuitries configured to perform any of the functions of a server (e.g. cloud) as described herein.
[00174] An example apparatus includes one or more circuitries configured to perform any of the functions of a client (e.g. renderer) as described herein.
[00175] It should be understood that the foregoing description is only illustrative. Various alternatives and modifications may be devised by those skilled in the art. For example, features recited in the various dependent claims could be combined with each other in any suitable combination(s). In addition, features from different embodiments described above could be selectively combined into a new embodiment. Accordingly, the description is intended to embrace all such alternatives, modifications and variances which fall within the scope of the appended claims.

Claims

What is claimed is:
1. An apparatus comprising: means for segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; means for generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; means for generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and means for providing one or more of the sub viewing volumes based on a client selection and request.
2. The apparatus of claim 1, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
3. The apparatus of any of claims 1 to 2, further comprising: means for defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
4. The apparatus of any of claims 1 to 3, further comprising: means for signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
5. The apparatus of any of claims 1 to 4, further comprising: means for dividing the media presentation description files into smaller media presentation description files; means for providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; means for receiving a request for a media presentation description file based on a viewing position of a client; means for selecting a partial viewing volume containing the requested media presentation description file; and means for providing the selected partial viewing volume to the client.
6. The apparatus of any of claims 1 to 5, further comprising: means for implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
7. The apparatus of any of claims 1 to 6, further comprising: means for implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
8. The apparatus of any of claims 1 to 7, further comprising: means for dynamically creating the media presentation description files based on a client device viewing position.
9. The apparatus of any of claims 1 to 8, further comprising: means for hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
10. The apparatus of any of claims 1 to 9, further comprising: means for signaling a viewing orientation of a sub viewing volume.
11. The apparatus of any of claims 1 to 10, further comprising: means for linking sub viewing volumes by exit direction.
12. An apparatus comprising: means for monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; means for requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; means for selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and means for rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
13. The apparatus of claim 12, wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
14. The apparatus of any of claims 12 to 13, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
15. The apparatus of any of claims 12 to 14, wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
16. The apparatus of any of claims 12 to 15, further comprising: means for downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
17. The apparatus of any of claims 12 to 16, further comprising: means for requesting the media presentation description file based on the viewing position of the client device; means for selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and means for receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
18. The apparatus of any of claims 12 to 17, further comprising: means for implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
19. The apparatus of any of claims 12 to 18, further comprising: means for requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
20. The apparatus of any of claims 12 to 19, wherein the media presentation description files are dynamically created based on the viewing position of the client device.
21. The apparatus of any of claims 12 to 20, wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
22. The apparatus of any of claims 12 to 21, wherein a viewing orientation of a sub viewing volume is signaled.
23. The apparatus of any of claims 12 to 22, further comprising: means for requesting the media presentation description file for a next sub viewing volume in an exit direction.
24. An apparatus comprising: at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: segment a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generate viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generate one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and provide one or more of the sub viewing volumes based on a client selection and request.
25. The apparatus of claim 24, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
26. The apparatus of any of claims 24 to 25, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: define data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
27. The apparatus of any of claims 24 to 26, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
28. The apparatus of any of claims 24 to 27, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: divide the media presentation description files into smaller media presentation description files; provide signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receive a request for a media presentation description file based on a viewing position of a client; select a partial viewing volume containing the requested media presentation description file; and provide the selected partial viewing volume to the client.
29. The apparatus of any of claims 24 to 28, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
30. The apparatus of any of claims 24 to 29, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
31. The apparatus of any of claims 24 to 30, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: dynamically create the media presentation description files based on a client device viewing position.
32. The apparatus of any of claims 24 to 31, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: hierarchically arrange viewing volumes based on signaling of parent sub viewing volumes.
33. The apparatus of any of claims 24 to 32, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: signal a viewing orientation of a sub viewing volume.
34. The apparatus of any of claims 24 to 33, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: link sub viewing volumes by exit direction.
35. An apparatus comprising: at least one processor; and at least one non-transitory memory including computer program code; wherein the at least one memory and the computer program code are configured to, with the at least one processor, cause the apparatus at least to perform: monitor a viewing position of a client device within a viewing volume level of one or more media presentation description files; request new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; select one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and render a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
36. The apparatus of claim 35, wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
37. The apparatus of any of claims 35 to 36, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
38. The apparatus of any of claims 35 to 37, wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
39. The apparatus of any of claims 35 to 38, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: download a media presentation description file from a server, and select a desired sub viewing volume based on information about the centroids in the media presentation description file.
40. The apparatus of any of claims 35 to 39, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file based on the viewing position of the client device; select a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receive the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
41. The apparatus of any of claims 35 to 40, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: implement an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
42. The apparatus of any of claims 35 to 41, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request a segmented static sub viewing volume based on an associated uniform resource identifier.
43. The apparatus of any of claims 35 to 42, wherein the media presentation description files are dynamically created based on the viewing position of the client device.
44. The apparatus of any of claims 35 to 43, wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
45. The apparatus of any of claims 35 to 44, wherein a viewing orientation of a sub viewing volume is signaled.
46. The apparatus of any of claims 35 to 45, wherein the at least one memory and the computer program code are further configured to, with the at least one processor, cause the apparatus at least to perform: request the media presentation description file for a next sub viewing volume in an exit direction.
47. A method comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
48. The method of claim 47, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
49. The method of any of claims 47 to 48, further comprising: defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
50. The method of any of claims 47 to 49, further comprising: signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
51. The method of any of claims 47 to 50, further comprising: dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
52. The method of any of claims 47 to 51, further comprising: implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
53. The method of any of claims 47 to 52, further comprising: implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
54. The method of any of claims 47 to 53, further comprising: dynamically creating the media presentation description files based on a client device viewing position.
55. The method of any of claims 47 to 54, further comprising: hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
56. The method of any of claims 47 to 55, further comprising: signaling a viewing orientation of a sub viewing volume.
57. The method of any of claims 47 to 56, further comprising: linking sub viewing volumes by exit direction.
58. A method comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
59. The method of claim 58, wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
60. The method of any of claims 58 to 59, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
61. The method of any of claims 58 to 60, wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
62. The method of any of claims 58 to 61, further comprising: downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
63. The method of any of claims 58 to 62, further comprising: requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
64. The method of any of claims 58 to 63, further comprising: implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
65. The method of any of claims 58 to 64, further comprising: requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
66. The method of any of claims 58 to 65, wherein the media presentation description files are dynamically created based on the viewing position of the client device.
67. The method of any of claims 58 to 66, wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
68. The method of any of claims 58 to 67, wherein a viewing orientation of a sub viewing volume is signaled.
69. The method of any of claims 58 to 68, further comprising: requesting the media presentation description file for a next sub viewing volume in an exit direction.
70. A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: segmenting a scene of volumetric video into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives; generating viewing volume and view point indications within one or more dynamic adaptive streaming over hypertext transfer protocol media presentation description files; generating one or more sub volume scheme uniform resource identifiers for client side instructions to retrieve content; and providing one or more of the sub viewing volumes based on a client selection and request.
71. The non-transitory program storage device of claim 70, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
72. The non-transitory program storage device of any of claims 70 to 71, the operations further comprising: defining data structures for dynamic adaptive streaming over hypertext transfer protocol, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
73. The non-transitory program storage device of any of claims 70 to 72, the operations further comprising: signaling information about the centroids of the sub viewing volumes to the adaptation sets using one or more viewpoint attributes.
74. The non-transitory program storage device of any of claims 70 to 73, the operations further comprising: dividing the media presentation description files into smaller media presentation description files; providing signaling information about the sub viewing volumes, wherein the signaling information about the sub viewing volumes is provided using at least one of one or more levels of the media presentation description files, or one or more periods of the media presentation description files; receiving a request for a media presentation description file based on a viewing position of a client; selecting a partial viewing volume containing the requested media presentation description file; and providing the selected partial viewing volume to the client.
75. The non-transitory program storage device of any of claims 70 to 74, the operations further comprising: implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description files.
76. The non-transitory program storage device of any of claims 70 to 75, the operations further comprising: implementing a uniform resource identifier scheme to associate a segmented static sub viewing volume with a particular uniform resource locator.
77. The non-transitory program storage device of any of claims 70 to 76, the operations further comprising: dynamically creating the media presentation description files based on a client device viewing position.
78. The non-transitory program storage device of any of claims 70 to 77, the operations further comprising: hierarchically arranging viewing volumes based on signaling of parent sub viewing volumes.
79. The non-transitory program storage device of any of claims 70 to 78, the operations further comprising: signaling a viewing orientation of a sub viewing volume.
80. The non-transitory program storage device of any of claims 70 to 79, the operations further comprising: linking sub viewing volumes by exit direction.
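Claims 80, 69, and 92 link sub viewing volumes by exit direction so that a client knows which media presentation description to request next; a minimal lookup sketch, with direction labels that are illustrative only:

```python
# Hypothetical exit-direction links between neighbouring sub viewing
# volumes; direction labels are illustrative only.
EXITS = {
    "svv-0": {"+x": "svv-1", "+y": "svv-2"},
    "svv-1": {"-x": "svv-0"},
}

def next_sub_volume(current: str, exit_direction: str):
    """Return the neighbour reached by leaving `current` in the given
    direction, or None if no link is signaled."""
    return EXITS.get(current, {}).get(exit_direction)

print(next_sub_volume("svv-0", "+x"))  # -> svv-1
```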
81. A non-transitory program storage device readable by a machine, tangibly embodying a program of instructions executable by the machine for performing operations, the operations comprising: monitoring a viewing position of a client device within a viewing volume level of one or more media presentation description files; requesting new viewing volume data when approaching one or more edges of a level of at least one of the media presentation description files; selecting one or more sub volume atlas streams from a media presentation description file based on the viewing position and an orientation of the client device; and rendering a novel view from the selected one or more sub volume atlas streams, wherein the rendering comprises combining individual immersive video streams as a union of currently streamed sub viewing volumes of a scene.
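One iteration of the client loop of claim 81 (monitoring the viewing position, prefetching near a volume edge, selecting atlas streams, rendering a union) might look as follows; every helper, name, and threshold below is a hypothetical stand-in for player internals, and the stream selection is a stub.

```python
EDGE_MARGIN = 0.5  # hypothetical prefetch margin, in scene units

def distance_to_edge(position, bounds):
    """Distance from the viewer to the nearest face of an axis-aligned
    viewing-volume box given as ((min_x, min_y, min_z), (max_x, max_y, max_z))."""
    lo, hi = bounds
    return min(min(p - l, h - p) for p, l, h in zip(position, lo, hi))

def playback_step(position, orientation, current_bounds, mpd):
    """One iteration: prefetch when close to a volume edge, then pick the
    sub volume atlas streams to combine into the rendered union."""
    actions = []
    if distance_to_edge(position, current_bounds) < EDGE_MARGIN:
        actions.append("request-new-viewing-volume-data")
    # Stream-selection stub: a real player would rank the atlas streams
    # in `mpd` by the viewing position and orientation.
    streams = sorted(mpd["atlas_streams"])[:2]
    actions.append(("render-union", streams))
    return actions

print(playback_step(
    position=(0.4, 1.0, 1.0),
    orientation=(0.0, 0.0, 0.0, 1.0),  # quaternion; unused by this stub
    current_bounds=((0.0, 0.0, 0.0), (2.0, 2.0, 2.0)),
    mpd={"atlas_streams": ["svv-0/atlas", "svv-1/atlas", "svv-2/atlas"]},
))
```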
82. The non-transitory program storage device of claim 81, wherein the viewing volume data has been segmented into one or more sequences, each sequence comprising one or more sub viewing volumes having one or more bitstream alternatives.
83. The non-transitory program storage device of any of claims 81 to 82, wherein centroids of the sub viewing volumes form a grid or tetrahedral mesh for selecting preferred sub viewing volumes for any viewing point within a larger viewing volume.
84. The non-transitory program storage device of any of claims 81 to 83, wherein data structures for dynamic adaptive streaming over hypertext transfer protocol are defined, wherein the data structures comprise a three-dimensional point in space, a set of shapes used to generate the sub viewing volumes, a number of groups of shapes, and an operator primitive to combine the shapes and the groups of shapes.
85. The non-transitory program storage device of any of claims 81 to 84, the operations further comprising: downloading a media presentation description file from a server, and selecting a desired sub viewing volume based on information about the centroids in the media presentation description file.
86. The non-transitory program storage device of any of claims 81 to 85, the operations further comprising: requesting the media presentation description file based on the viewing position of the client device; selecting a partial viewing volume containing the requested media presentation description file based on a client viewing point; and receiving the selected partial viewing volume from a server; wherein the media presentation description files have been divided into smaller media presentation description files.
87. The non-transitory program storage device of any of claims 81 to 86, the operations further comprising: implementing an index file listing uniform resource locators for the sub viewing volumes of the media presentation description file.
88. The non-transitory program storage device of any of claims 81 to 87, the operations further comprising: requesting a segmented static sub viewing volume based on an associated uniform resource identifier.
89. The non-transitory program storage device of any of claims 81 to 88, wherein the media presentation description files are dynamically created based on the viewing position of the client device.
90. The non-transitory program storage device of any of claims 81 to 89, wherein viewing volumes are hierarchically arranged based on signaling of parent sub viewing volumes.
91. The non-transitory program storage device of any of claims 81 to 90, wherein a viewing orientation of a sub viewing volume is signaled.
92. The non-transitory program storage device of any of claims 81 to 91, the operations further comprising: requesting the media presentation description file for a next sub viewing volume in an exit direction.
93. An apparatus comprising: one or more circuitries configured to execute the method of any of claims 47 to 57.
94. An apparatus comprising: one or more circuitries configured to execute the method of any of claims 58 to 69.
PCT/FI2020/050590 2019-09-19 2020-09-16 Six degrees of freedom spatial layout signaling WO2021053269A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP20864782.6A EP4032313A4 (en) 2019-09-19 2020-09-16 Six degrees of freedom spatial layout signaling

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962902519P 2019-09-19 2019-09-19
US62/902,519 2019-09-19

Publications (1)

Publication Number Publication Date
WO2021053269A1 true WO2021053269A1 (en) 2021-03-25

Family

ID=74881409

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2020/050590 WO2021053269A1 (en) 2019-09-19 2020-09-16 Six degrees of freedom spatial layout signaling

Country Status (3)

Country Link
US (1) US11259050B2 (en)
EP (1) EP4032313A4 (en)
WO (1) WO2021053269A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11974026B2 (en) 2020-03-26 2024-04-30 Nokia Technologies Oy Apparatus, a method and a computer program for volumetric video

Families Citing this family (5)

Publication number Priority date Publication date Assignee Title
US20210105451A1 (en) * 2019-12-23 2021-04-08 Intel Corporation Scene construction using object-based immersive media
WO2021130355A1 (en) * 2019-12-24 2021-07-01 Koninklijke Kpn N.V. Video processing device and manifest file for video streaming
KR102447796B1 (en) * 2020-11-27 2022-09-27 한국전자기술연구원 Apparatus and method for fast refining of patch segment for v-pcc encoder
KR20220153381A (en) * 2021-05-11 2022-11-18 삼성전자주식회사 Method and apparatus for providing media service
US11570418B2 (en) * 2021-06-17 2023-01-31 Creal Sa Techniques for generating light field data by combining multiple synthesized viewpoints

Citations (3)

Publication number Priority date Publication date Assignee Title
EP2824885A1 (en) 2013-07-12 2015-01-14 Alcatel Lucent A manifest file format supporting panoramic video
US9942577B1 (en) 2016-02-23 2018-04-10 Amazon Technologies, Inc. Dynamic objects caching for media content playback
WO2019064853A1 (en) 2017-09-26 2019-04-04 キヤノン株式会社 Information processing device, information providing device, control method, and program

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US20200202608A1 (en) * 2018-12-21 2020-06-25 Point Cloud Compression, B.V. Method and apparatus for receiving a volumetric video stream

Patent Citations (4)

Publication number Priority date Publication date Assignee Title
EP2824885A1 (en) 2013-07-12 2015-01-14 Alcatel Lucent A manifest file format supporting panoramic video
US9942577B1 (en) 2016-02-23 2018-04-10 Amazon Technologies, Inc. Dynamic objects caching for media content playback
WO2019064853A1 (en) 2017-09-26 2019-04-04 キヤノン株式会社 Information processing device, information providing device, control method, and program
EP3691285A1 (en) 2017-09-26 2020-08-05 C/o Canon Kabushiki Kaisha Information processing device, information providing device, control method, and program

Non-Patent Citations (7)

Title
"Text of ISO/IEC FDIS 23009-1:2014 4th edition", 127. MPEG MEETING; 20190708 - 20190712; GOTHENBURG; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 15 August 2019 (2019-08-15), XP030206802 *
AHMED HAMZA ET AL.: "DASH Signalling of V-PCC Tile Groups", 4 July 2019
ERIC YIP, YOUNGKWON LIM: "6DoF Access Metadata for V-PCC", 125. MPEG MEETING; 20190114 - 20190118; MARRAKECH; (MOTION PICTURE EXPERT GROUP OR ISO/IEC JTC1/SC29/WG11), 9 January 2019 (2019-01-09), XP030214513 *
PARK, J. ET AL.: "Volumetric media streaming for augmented reality", IEEE GLOBAL COMMUNICATIONS CONFERENCE, 9 December 2018 (2018-12-09), Abu Dhabi, United Arab Emirates, XP033519700, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8647537> [retrieved on 20201208], DOI: 10.1109/GLOCOM.2018.8647537 *
See also references of EP4032313A4
XU YILING ET AL.: "Introduction to Point Cloud Compression", 24 August 2018
ZHAO, S. ET AL.: "SDN-assisted adaptive streaming framework for tile- based immersive content using MPEG-DASH", IEEE CONFERENCE ON NETWORK FUNCTION VIRTUALIZATION AND SOFTWARE DEFINED NETWORKS, 6 November 2017 (2017-11-06), Berlin, Germany, XP033264681, Retrieved from the Internet <URL:http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=8169831> [retrieved on 20201208], DOI: 10.1109/NFV-SDN.2017.8169831 *

Also Published As

Publication number Publication date
US11259050B2 (en) 2022-02-22
EP4032313A4 (en) 2023-10-04
EP4032313A1 (en) 2022-07-27
US20210092444A1 (en) 2021-03-25

Similar Documents

Publication Publication Date Title
US11259050B2 (en) Six degrees of freedom spatial layout signaling
EP3782368A1 (en) Processing video patches for three-dimensional content
US20230319328A1 (en) Reference of neural network model for adaptation of 2d video for streaming to heterogeneous client end-points
JP2023518676A (en) Placement of immersive media and delivery from immersive media to heterogeneous client endpoints
CN115136595A (en) Adaptation of 2D video for streaming to heterogeneous client endpoints
WO2023200535A9 (en) Smart client for streaming of scene-based immersive media
US12058193B2 (en) Bidirectional presentation datastream
US11943271B2 (en) Reference of neural network model by immersive media for adaptation of media for streaming to heterogenous client end-points
US12132954B2 (en) Smart client for streaming of scene-based immersive media
US11991424B2 (en) Immersive media data complexity analyzer for transformation of asset formats
US12132966B2 (en) Immersive media analyzer for reuse of scene assets
US11930059B2 (en) Immersive media streaming prioritized by frequency of asset reuse
US20240357000A1 (en) Bidirectional presentation data stream
US12081598B2 (en) Redundant cache for reusable immersive media assets
US20230146230A1 (en) Immersive media analyzer for reuse of scene assets
US20240104803A1 (en) Scene graph translation
US20240236443A1 (en) Independent mapping space for asset interchange using itmf
US20230370666A1 (en) Streaming scene prioritizer for immersive media

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20864782; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
ENP Entry into the national phase (Ref document number: 2020864782; Country of ref document: EP; Effective date: 20220419)