WO2023062271A1 - A method, an apparatus and a computer program product for video coding - Google Patents

A method, an apparatus and a computer program product for video coding

Info

Publication number
WO2023062271A1
Authority
WIPO (PCT)
Prior art keywords
session, rtp, bitstream, sub-bitstreams
Application number
PCT/FI2022/050659
Other languages
French (fr)
Inventor
Lukasz Kondrad
Lauri Aleksi ILOLA
Kashyap KAMMACHI SREEDHAR
Sujeet Shyamsundar Mate
Emre Baris Aksu
Original Assignee
Nokia Technologies Oy
Application filed by Nokia Technologies Oy
Priority to EP22880465.4A (published as EP4416929A1)
Publication of WO2023062271A1


Classifications

    • H04L 65/65 Network streaming protocols, e.g. real-time transport protocol [RTP] or real-time control protocol [RTCP]
    • H04L 65/1069 Session establishment or de-establishment
    • H04L 65/612 Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio, for unicast
    • H04L 65/75 Media network packet handling
    • H04N 19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N 21/6437 Real-time Transport Protocol [RTP]
    • H04N 21/84 Generation or processing of descriptive data, e.g. content descriptors
    • H04N 21/85406 Content authoring involving a specific file format, e.g. MP4 format

Definitions

  • the present solution generally relates to coding of volumetric video.
  • Volumetric video data represents a three-dimensional (3D) scene or object, and can be used as input for AR (Augmented Reality), VR (Virtual Reality), and MR (Mixed Reality) applications.
  • Such data describes geometry (shape, size, position in 3D space) and respective attributes (e.g., color, opacity, reflectance, ...), and any possible temporal transformations of the geometry and attributes at given time instances (like frames in 2D video).
  • Volumetric video can be generated from 3D models, also referred to as volumetric visual objects, i.e., CGI (Computer Generated Imagery), or captured from real-world scenes using a variety of capture solutions, e.g., multi-camera, laser scan, combination of video and dedicated depth sensors, and more.
  • volumetric data comprises triangle meshes, point clouds, or voxels.
  • Temporal information about the scene can be included in the form of individual capture instances, i.e., “frames” in 2D video, or other means, e.g., position of an object as a function of time.
  • an apparatus comprising means for receiving a bitstream representing coded volumetric video; means for demultiplexing the bitstream into a number of sub-bitstreams; means for encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and means for providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • a method comprising receiving a bitstream representing coded volumetric video; demultiplexing the bitstream into a number of sub-bitstreams; encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • an apparatus comprising at least one processor, and memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a bitstream representing coded volumetric video; demultiplex the bitstream into a number of sub-bitstreams; encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • according to a fourth aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a bitstream representing coded volumetric video; demultiplex the bitstream into a number of sub-bitstreams; encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • an RTP session is created for each sub-bitstream.
  • a session specific file format describing each RTP session is created, information allowing a client to identify each RTP session is provided, and an RTP session is mapped to an appropriate sub-bitstream.
  • a new session level V3C specific attribute is recorded in the session specific file format, wherein the attribute groups together different V3C specific media streams.
  • a new type is recorded in a session level group attribute of the session specific file format, wherein the attribute groups together different V3C specific media streams.
  • a V3C specific token indicating which RTP sessions originate from one bitstream is recorded.
  • a parameter containing byte-data for a V3C parameter set and/or multiple parameters for V3C parameter set specific fields is recorded in a session level attribute of the session specific file format.
  • in the session specific file format, a new media-level V3C specific attribute is recorded, containing a parameter with byte-data representing a V3C unit header and/or multiple parameters for V3C unit header specific fields.
  • byte-data representing a V3C parameter set is recorded as part of an RTP header extension, and the presence of the V3C parameter set in the RTP header extension is signaled in the session specific file format as a parameter in the rtpmap media-level attribute.
  • byte-data representing a V3C unit header is recorded as part of an RTP header extension, and the presence of the V3C unit header in the RTP header extension is recorded in the session specific file format as a parameter in the rtpmap media-level attribute.
  • a parameter is recorded in a V3C specific session-level attribute of the session specific file format, wherein the parameter indicates whether playout adjustment is required.
  • one RTP session that multiplexes all sub-bitstreams is created.
  • a session specific file format is created, wherein the session specific file format provides information allowing a client to interpret payload types and reconstruct the bitstream representing the coded volumetric video.
  • a session specific file format is created, wherein the session specific file format provides information allowing a client to interpret payload types and de-multiplex it into a number of sub-bitstreams or reconstruct a single bitstream representing the coded volumetric video.
  • the computer program product is embodied on a non-transitory computer readable medium.
  • Fig. 1 shows an example of a compression process of a volumetric video;
  • Fig. 2 shows an example of a de-compression process of a volumetric video;
  • Fig. 3 shows an example of a V3C bitstream originated from ISO/IEC 23090-5;
  • Fig. 4 shows an example of an extension header;
  • Fig. 5 shows an example architecture of a V3C bitstream delivery over multiple RTP sessions with a client reconstructing the V3C bitstream;
  • Fig. 6 shows an example architecture of a V3C bitstream delivery over a number of RTP sessions with a client sending a V3C sub-bitstream with an associated V3C unit header;
  • Fig. 7 shows an example architecture of a V3C bitstream delivery over one RTP session with a client reconstructing the V3C bitstream;
  • Fig. 8 shows an example architecture of a V3C bitstream delivery over one RTP session with a client sending a V3C sub-bitstream with an associated V3C unit header;
  • Fig. 9 is a flowchart illustrating a method according to an embodiment;
  • Fig. 10 is a flowchart illustrating a method according to another embodiment;
  • Fig. 11 shows an apparatus according to an embodiment.
  • the present embodiments are particularly targeted to a solution for mapping a V3C bitstream to RTP sessions in SDP.
  • Figure 1 illustrates an overview of an example of a compression process of a volumetric video. Such process may be applied for example in MPEG Point Cloud Coding (PCC).
  • the process starts with an input point cloud frame 101 that is provided for patch generation 102, geometry image generation 104 and texture image generation 105.
  • the patch generation 102 process aims at decomposing the point cloud into a minimum number of patches with smooth boundaries, while also minimizing the reconstruction error.
  • the normal at every point can be estimated.
  • An initial clustering of the point cloud can then be obtained by associating each point with one of the following six oriented planes, defined by their normals: (1.0, 0.0, 0.0), (0.0, 1.0, 0.0), (0.0, 0.0, 1.0), (-1.0, 0.0, 0.0), (0.0, -1.0, 0.0), and (0.0, 0.0, -1.0).
  • each point may be associated with the plane that has the closest normal (i.e., maximizes the dot product of the point normal and the plane normal).
  • the initial clustering may then be refined by iteratively updating the cluster index associated with each point based on its normal and the cluster indices of its nearest neighbors.
  • the final step may comprise extracting patches by applying a connected component extraction procedure. Patch info determined at patch generation 102 for the input point cloud frame 101 is delivered to packing process 103, to geometry image generation 104 and to texture image generation 105.
  • the packing process 103 aims at mapping the extracted patches onto a 2D plane, while trying to minimize the unused space, and guaranteeing that every TxT (e.g., 16x16) block of the grid is associated with a unique patch.
  • T may be a user-defined parameter. Parameter T may be encoded in the bitstream and sent to the decoder.
  • W and H may be user-defined parameters, which correspond to the resolution of the geometry/texture images that will be encoded.
  • the patch location is determined through an exhaustive search that is performed in raster scan order. The first location that can guarantee an overlapping-free insertion of the patch is selected and the grid cells covered by the patch are marked as used. If no empty space in the current resolution image can fit a patch, then the height H of the grid may be temporarily doubled, and search is applied again. At the end of the process, H is clipped so as to fit the used grid cells.
  • the geometry image generation 104 and the texture image generation 105 are configured to generate geometry images and texture images respectively.
  • the image generation process may exploit the 3D to 2D mapping computed during the packing process to store the geometry and texture of the point cloud as images.
  • each patch may be projected onto two images, referred to as layers.
  • Let H(u, v) be the set of points of the current patch that get projected to the same pixel (u, v).
  • the first layer, also called the near layer, stores the point of H(u, v) with the lowest depth D0.
  • the second layer, referred to as the far layer, captures the point of H(u, v) with the highest depth within the interval [D0, D0+Δ], where Δ is a user-defined parameter that describes the surface thickness.
  • the generated videos may have the following characteristics:
  • the geometry video is monochromatic.
  • the texture generation procedure exploits the reconstructed/smoothed geometry in order to compute the colors to be associated with the re-sampled points.
  • the geometry images and the texture images may be provided to image padding 107.
  • the image padding 107 may also receive as an input an occupancy map (OM) 106 to be used with the geometry images and texture images.
  • the occupancy map 106 may comprise a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud.
  • the occupancy map (OM) may be a binary image in which occupied and non-occupied pixels are distinguished.
  • the occupancy map may alternatively comprise a non-binary image allowing additional information to be stored in it.
  • the representative values of the occupancy map may comprise binary values or other values, for example integer values. It should be noticed that one cell of the 2D grid may produce a pixel during the image generation process. Such an occupancy map may be derived from the packing process 103.
  • the padding process 107 aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression.
  • each block of TxT (e.g., 16x16) pixels is compressed independently. If the block is empty (i.e., unoccupied, i.e., all its pixels belong to empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., occupied, i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels (i.e., an edge block), then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
  • the padded geometry images and padded texture images may be provided for video compression 108.
  • the generated images/layers may be stored as video frames and compressed using for example the HM16.16 video codec according to the HM configurations provided as parameters.
  • the video compression 108 also generates reconstructed geometry images to be provided for smoothing 109, wherein a smoothed geometry is determined based on the reconstructed geometry images and patch info from the patch generation 102.
  • the smoothed geometry may be provided to texture image generation 105 to adapt the texture images.
  • the patch may be associated with auxiliary information being encoded/decoded for each patch as metadata.
  • the auxiliary information may comprise the index of the projection plane, the 2D bounding box, and the 3D location of the patch.
  • Metadata may be encoded/decoded for every patch:
  • mapping information providing for each TxT block its associated patch index may be encoded as follows:
  • Let L be the ordered list of the indexes of the patches such that their 2D bounding box contains that block.
  • the order in the list is the same as the order used to encode the 2D bounding boxes.
  • L is called the list of candidate patches.
  • the empty space between patches is considered as a patch and is assigned the special index 0, which is added to the candidate patches list of all the blocks.
  • the occupancy map consists of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud.
  • One cell of the 2D grid produces a pixel during the image generation process.
  • the occupancy map compression 110 leverages the auxiliary information described in the previous section, in order to detect the empty TxT blocks (i.e., blocks with patch index 0).
  • the remaining blocks may be encoded as follows:
  • the occupancy map can be encoded with a precision of B0xB0 blocks.
  • the compression process may comprise one or more of the following example operations:
  • Binary values may be associated with B0xB0 sub-blocks belonging to the same TxT block.
  • a value 1 may be associated with a sub-block if it contains at least one non-padded pixel, and 0 otherwise. If a sub-block has a value of 1 it is said to be full, otherwise it is an empty sub-block.
  • a binary information may be encoded for each TxT block to indicate whether it is full or not.
  • an extra information indicating the location of the full/empty sub-blocks may be encoded as follows:
    o Different traversal orders may be defined for the sub-blocks, for example horizontally, vertically, or diagonally starting from the top right or top left corner.
    o The encoder chooses one of the traversal orders and may explicitly signal its index in the bitstream.
    o The binary values associated with the sub-blocks may be encoded by using a run-length encoding strategy.
  • FIG. 2 illustrates an overview of a de-compression process for MPEG Point Cloud Coding (PCC).
  • a de-multiplexer 201 receives a compressed bitstream, and after de-multiplexing, provides compressed texture video and compressed geometry video to video decompression 202.
  • the de-multiplexer 201 transmits the compressed occupancy map to occupancy map decompression 203. It may also transmit compressed auxiliary patch information to auxiliary patch-info decompression 204.
  • Decompressed geometry video from the video decompression 202 is delivered to geometry reconstruction 205, as are the decompressed occupancy map and decompressed auxiliary patch information.
  • the point cloud geometry reconstruction 205 process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels may be computed by leveraging the auxiliary patch information and the geometry images.
  • the reconstructed geometry image may be provided for smoothing 206, which aims at alleviating potential discontinuities that may arise at the patch boundaries due to compression artifacts.
  • the implemented approach moves boundary points to the centroid of their nearest neighbors.
  • the smoothed geometry may be transmitted to texture reconstruction 207, which also receives a decompressed texture video from video decompression 202.
  • the texture reconstruction 207 outputs a reconstructed point cloud.
  • the texture values for the texture reconstruction are directly read from the texture images.
  • the point cloud geometry reconstruction process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers.
  • the 3D positions of the points associated with those pixels are computed by leveraging the auxiliary patch information and the geometry images. More precisely, let P be the point associated with the pixel (u, v) and let (δ0, s0, r0) be the 3D location of the patch to which it belongs and (u0, v0, u1, v1) its 2D bounding box. P can be expressed in terms of depth δ(u, v), tangential shift s(u, v) and bi-tangential shift r(u, v) as follows:
    δ(u, v) = δ0 + g(u, v)
    s(u, v) = s0 - u0 + u
    r(u, v) = r0 - v0 + v
  • where g(u, v) is the luma component of the geometry image.
  • the texture values can be directly read from the texture images.
  • the result of the decoding process is a 3D point cloud reconstruction.
  • a volumetric frame can be represented as a point cloud.
  • a point cloud is a set of unstructured points in 3D space, where each point is characterized by its position in a 3D coordinate system (e.g., Euclidean), and some corresponding attributes (e.g., color information provided as RGBA value, or normal vectors).
  • a volumetric frame can be represented as images, with or without depth, captured from multiple viewpoints in 3D space.
  • the volumetric video can be represented by one or more view frames (where a view is a projection of a volumetric scene on to a plane (the camera plane) using a real or virtual camera with known/computed extrinsic and intrinsic parameters).
  • Each view may be represented by a number of components (e.g., geometry, color, transparency, and occupancy picture), which may be part of the geometry picture or represented separately.
  • a volumetric frame can be represented as a mesh.
  • A mesh is a collection of points, called vertices, and connectivity information between the vertices, called edges. Vertices along with edges form faces. The combination of vertices, edges, and faces can uniquely approximate the shapes of objects.
  • a volumetric frame can provide viewers the ability to navigate a scene with six degrees of freedom, i.e., both translational and rotational movement of their viewing pose (which includes yaw, pitch, and roll).
  • the data to be coded for a volumetric frame can also be significant, as a volumetric frame can contain a large number of objects, and the positioning and movement of these objects in the scene can result in many dis-occluded regions.
  • the interaction of the light and materials in objects and surfaces in a volumetric frame can generate complex light fields that can produce texture variations for even a slight change of pose.
  • a sequence of volumetric frames is a volumetric video. Due to the large amount of information, storage and transmission of a volumetric video requires compression.
  • a way to compress a volumetric frame can be to project the 3D geometry and related attributes into a collection of 2D images along with additional associated metadata.
  • the projected 2D images can then be coded using 2D video and image coding technologies, for example ISO/IEC 14496-10 (H.264/AVC) and ISO/IEC 23008-2 (H.265/HEVC).
  • the metadata can be coded with technologies specified in specifications such as ISO/IEC 23090-5.
  • the coded images and the associated metadata can be stored or transmitted to a client that can decode and render the 3D volumetric frame.
  • ISO/IEC 23090-5 specifies the syntax, semantics, and process for coding volumetric video.
  • the specified syntax is designed to be generic, so that it can be reused for a variety of applications.
  • Point clouds, immersive video with depth, and mesh representations can all use ISO/IEC 23090-5 standard with extensions that deal with the specific nature of the final representation.
  • the purpose of the specification is to define how to decode and interpret the associated data (for example atlas data in ISO/IEC 23090-5) which tells a renderer how to interpret 2D frames to reconstruct a volumetric frame.
  • V-PCC is specified in ISO/IEC 23090-5, and MIV in ISO/IEC 23090-12.
  • the syntax element pdu_projection_id specifies the index of the projection plane for the patch. There can be 6 or 18 projection planes in V-PCC, and they are implicit, i.e., pre-determined.
  • in MIV, pdu_projection_id corresponds to a view ID, i.e., identifies which view the patch originated from. View IDs and their related information are explicitly provided in the MIV view parameters list and may be tailored for each content.
  • MPEG 3DG (ISO SC29 WG7) is also working on extending V3C towards mesh compression. To distinguish between toolsets, V3C uses the ptl_profile_toolset_idc parameter.
  • A V3C bitstream is a sequence of bits that forms the representation of coded volumetric frames and the associated data, making one or more coded V3C sequences (CVS).
  • A CVS is a sequence of bits, identified and separated by appropriate delimiters, that is required to start with a V3C parameter set (VPS) included in a V3C unit, and contains one or more V3C units with an atlas sub-bitstream or video sub-bitstream. This is illustrated in Figure 3.
  • Video sub-bitstreams and atlas sub-bitstreams can be referred to as V3C sub-bitstreams.
  • a V3C unit header in conjunction with VPS information identifies which V3C sub-bitstream a V3C unit contains and how to interpret it.
  • A V3C bitstream can be stored according to Annex C of ISO/IEC 23090-5, which specifies the syntax and semantics of a sample stream format to be used by applications that deliver some or all of the V3C unit stream as an ordered stream of bytes or bits, within which the locations of V3C unit boundaries need to be identifiable from patterns in the data.
  • vuh_v3c_parameter_set_id specifies the value of vps_v3c_parameter_set_id for the active V3C VPS.
  • the VPS provides, among other things, information about the composition of the V3C bitstream, such as the number of atlases and the video components present for each atlas.
  • V3C includes signalling mechanisms, through the profile_tier_level syntax structure in the VPS, to support interoperability while restricting the capabilities of V3C profiles.
  • V3C also includes an initial set of tool constraint flags to indicate additional restrictions on a profile.
  • the sub-profile indicator syntax element is always present, but the value 0xFFFFFFFF indicates that no sub-profile is used, i.e., the full profile is supported.
  • The Real-time Transport Protocol (RTP) is intended for the end-to-end, real-time transfer of streaming media, and provides facilities for jitter compensation and detection of packet loss and out-of-order delivery.
  • RTP allows data transfer to multiple destinations through IP multicast or to a specific destination through IP unicast.
  • the majority of the RTP implementations are built on the User Datagram Protocol (UDP).
  • Other transport protocols may also be utilized.
  • RTP is used together with other protocols such as H.323 and the Real Time Streaming Protocol (RTSP).
  • the RTP specification describes two protocols: RTP and RTCP.
  • RTP is used for the transfer of multimedia data
  • RTCP is used to periodically send control information and QoS parameters.
  • RTP sessions may be initiated between client and server using a signalling protocol, such as H.323, the Session Initiation Protocol (SIP), or RTSP. These protocols may use the Session Description Protocol (RFC 8866) to specify the parameters for the sessions.
  • RTP is designed to carry a multitude of multimedia formats, which permits the development of new formats without revising the RTP standard. To this end, the information required by a specific application of the protocol is not included in the generic RTP header. For a class of applications (e.g., audio, video), an RTP profile may be defined. For a media format (e.g., a specific video coding format), an associated RTP payload format may be defined. Every instantiation of RTP in a particular application may require a profile and payload format specifications.
  • the profile defines the codecs used to encode the payload data and their mapping to payload format codes in the protocol field Payload Type (PT) of the RTP header.
  • the profile defines a set of static payload type assignments, and a dynamic mechanism for mapping between a payload format and a PT value using the Session Description Protocol (SDP).
  • the latter mechanism is used for newer video codecs, such as the RTP payload format for H.264 video defined in RFC 6184 or the RTP payload format for High Efficiency Video Coding (HEVC) defined in RFC 7798.
  • An RTP session is established for each multimedia stream. Audio and video streams may use separate RTP sessions, enabling a receiver to selectively receive components of a particular stream.
  • the RTP specification recommends an even port number for RTP, and the use of the next odd port number for the associated RTCP session. A single port can be used for RTP and RTCP in applications that multiplex the protocols.
  • RTP packets are created at the application layer and handed to the transport layer for delivery. Each unit of RTP media data created by an application begins with the RTP packet header.
  • the RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present. This is followed by the RTP payload, the format of which is determined by the particular class of application.
  • the fields in the header are as follows:
  • P (Padding) (1 bit): used to indicate if there are extra padding bytes at the end of the RTP packet.
  • Extension (X) (1 bit): indicates the presence of an extension header between the header and payload data.
  • the extension header is application or profile specific.
  • PT (Payload Type) (7 bits): indicates the format of the payload and thus determines its interpretation by the application.
  • Sequence number (16 bits) The sequence number is incremented for each RTP data packet sent and is to be used by the receiver to detect packet loss and to accommodate out-of-order delivery.
  • Timestamp (32 bits) Used by the receiver to play back the received samples at appropriate time and interval. When several media streams are present, the timestamps may be independent in each stream. The granularity of the timing is application specific. For example, video stream may use a 90 kHz clock. The clock granularity is one of the details that is specified in the RTP profile for an application.
  • Synchronization source identifier (SSRC) (32 bits): uniquely identifies the source of the stream. The synchronization sources within the same RTP session will be unique.
  • Header extension (optional, presence indicated by Extension field)
  • the first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension in 32-bit units, excluding the 32 bits of the extension header.
  • the extension header data is shown in Figure 4.
  • SDP is used as an example of a session specific file format.
  • SDP is a format for describing multimedia communication sessions for the purposes of announcement and invitation. Its predominant use is in support of conversational and streaming media applications. SDP does not deliver any media streams itself, but is used between endpoints for negotiation of network metrics, media types, and other associated properties. The set of properties and parameters is called a session profile. SDP is extensible for the support of new media types and formats.
  • the Session Description Protocol describes a session as a group of fields in a text-based format, one field per line.
  • the form of each field is as follows:
  • <character>=<value><CR><LF>
  • <character> is a single case-sensitive character
  • <value> is structured text in a format that depends on the character. Values may be UTF-8 encoded. Whitespace is not allowed immediately to either side of the equal sign.
  • Session descriptions consist of three sections: session, timing, and media descriptions. Each description may contain multiple timing and media descriptions. Names are only unique within the associated syntactic construct.
  • v (protocol version number, currently only 0)
  • o (originator and session identifier: username, id, version number, network address)
  • s (session name: mandatory with at least one UTF-8-encoded character)
  • i* (session title or short information)
  • u* (URI of description)
  • e* (zero or more email addresses with optional name of contacts)
  • p* (zero or more phone numbers with optional name of contacts)
  • c* (connection information; not required if included in all media)
  • b* (zero or more bandwidth information lines)
  • m (media name and transport address)
  • i* (media title or information field)
  • c* (connection information; optional if included at session level)
  • b* (zero or more bandwidth information lines)
  • k* (encryption key)
  • a* (zero or more media attribute lines; overriding the session attribute lines)
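  • the following session description, reproduced from the canonical example in RFC 4566/RFC 8866, illustrates these fields; the paragraphs below walk through it:
    v=0
    o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
    s=SDP Seminar
    i=A Seminar on the session description protocol
    u=http://www.example.com/seminars/sdp.pdf
    e=j.doe@example.com (Jane Doe)
    c=IN IP4 224.2.17.12/127
    t=2873397496 2873404696
    a=recvonly
    m=audio 49170 RTP/AVP 0
    m=video 51372 RTP/AVP 99
    a=rtpmap:99 h263-1998/90000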
  • This session is originated by the user “jdoe” at IPv4 address 10.47.16.5. Its name is “SDP Seminar” and extended session information (“A Seminar on the session description protocol”) is included along with a link for additional information and an email address to contact the responsible party, Jane Doe.
  • This session is specified to last two hours using NTP timestamps, with a connection address (which indicates the address clients must connect to or - when a multicast address is provided, as it is here - subscribe to) specified as IPv4 224.2.17.12 with a TTL of 127. Recipients of this session description are instructed to only receive media. Two media descriptions are provided, both using RTP Audio Video Profile.
  • the first is an audio stream on port 49170 using RTP/AVP payload type 0 (defined by RFC 3551 as PCMU), and the second is a video stream on port 51372 using RTP/AVP payload type 99 (defined as “dynamic”). Finally, an attribute is included which maps RTP/AVP payload type 99 to format h263-1998 with a 90 kHz clock rate.
  • RTCP ports for the audio and video streams of 49171 and 51373 respectively are implied.
  • in another example, rtpmap attributes may map dynamic payload types to the media types "audio/L8" and "audio/L16".
  • Codec-specific parameters may be added in other attributes, for example, "fmtp".
  • the "fmtp" attribute allows parameters that are specific to a particular format to be conveyed in a way that SDP does not have to understand them.
  • the format can be one of the formats specified for the media. Format-specific parameters, semicolon separated, may be any set of parameters required to be conveyed by SDP and given unchanged to the media tool that will use this format. At most one instance of this attribute is allowed for each format.
  • RFC 7798 defines the following parameters: sprop-vps, sprop-sps, sprop-pps, profile-space, profile-id, tier-flag, level-id, interop-constraints, profile-compatibility-indicator, sprop-sub-layer-id, recv-sub-layer-id, max-recv-level-id, tx-mode, max-lsr, max-lps, max-cpb, max-dpb, max-br, max-tr, max-tc, max-fps, sprop-max-don-diff, sprop-depack-buf-nalus, sprop-depack-buf-bytes, depack-buf-cap, sprop-segmentation-id, sprop-spatial-segmentation-idc, dec-parallel-cap, and include-dph.
  • the "group" and "mid" attributes defined in RFC 5888 allow grouping of "m" lines in SDP for different purposes.
  • An example can be for lip synchronization or for receiving a media flow consisting of several media streams on different transport addresses.
  • each "m" line is identified by a token, which is carried in a "mid" attribute below the "m" line.
  • the session description carries session-level "group" attributes that group different "m" lines (identified by their tokens) using different group semantics.
  • the semantics of a group describe the purpose for which the "m" lines are grouped.
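  • as an illustration, consider the following session description, adapted from the example in RFC 5888 (addresses and ports are illustrative):
    v=0
    o=Laura 289083124 289083124 IN IP4 two.example.com
    s=-
    c=IN IP4 192.0.2.1
    t=0 0
    a=group:LS 1 2
    m=audio 30000 RTP/AVP 0
    a=mid:1
    m=video 30002 RTP/AVP 31
    a=mid:2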
  • the "group" line indicates that the "m" lines identified by tokens 1 and 2 (the audio and the video "m" lines, respectively) are grouped for the purpose of lip synchronization (LS).
  • RFC 5888 defines two semantics for "group": Lip Synchronization (LS), as used in the example above, and Flow Identification (FID).
  • RFC 5583 defines another grouping type, Decoding Dependency (DDP).
  • RFC 8843 defines another grouping type, BUNDLE, which, among other things, is utilized when multiple types of media are sent in a single RTP session as described in RFC 8860.
  • Layered decoding dependency identifies the described media stream as one or more Media Partitions of a layered Media Bitstream.
  • lay: when "lay" is used, all media streams required for decoding the Operation Point MUST be identified by identification-tag and fmt-dependency following the "lay" string.
  • mdc: multi-descriptive decoding dependency signals that the described media stream is part of a set of an MDC Media Bitstream.
  • N-out-of-M media streams of the group need to be available to form an Operation Point.
  • the values of N and M depend on the properties of the Media Bitstream and are not signaled within this context.
  • when "mdc" is used, all required media streams for the Operation Point MUST be identified by identification-tag and fmt-dependency following the "mdc" string.
  • the example below shows a session description with three media descriptions, all of type video and with layered decoding dependency ("lay").
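  • a sketch of such a description, adapted from the example in RFC 5583 (payload types, ports, and bandwidth values are illustrative):
    v=0
    o=svcsrv 289083124 289083124 IN IP4 svc.example.com
    s=LAYERED VIDEO SIGNALING Seminar
    t=0 0
    c=IN IP4 192.0.2.1
    a=group:DDP L1 L2 L3
    m=video 20000 RTP/AVP 96
    b=AS:90
    a=rtpmap:96 H264/90000
    a=mid:L1
    m=video 20002 RTP/AVP 97
    b=AS:64
    a=rtpmap:97 H264-SVC/90000
    a=mid:L2
    a=depend:97 lay L1:96
    m=video 20004 RTP/AVP 98
    b=AS:128
    a=rtpmap:98 H264-SVC/90000
    a=mid:L3
    a=depend:98 lay L1:96 L2:97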
  • RTP was designed to support multimedia sessions, containing multiple types of media sent simultaneously, by using multiple transport-layer flows, i.e., RTP sessions. This approach, however, is not always beneficial: it can, for example, increase the number of transport-layer flows that need to be established and maintained, complicate NAT and firewall traversal, and add session setup delay.
  • By reducing the number of transport-layer flows, RTP-based applications can reduce the risk of communication failure and can achieve improved reliability and performance. It might therefore seem appropriate for RTP-based applications to send all their RTP streams bundled into one RTP session, running over a single transport-layer flow. However, this was initially prohibited by the RTP specifications RFC 3550 and RFC 3551, because the design of RTP makes certain assumptions that can be incompatible with sending multiple media types in a single RTP session.
  • RFC8860 updates RFC3550 and RFC3551 to allow sending an RTP session containing RTP streams with media from multiple media types such as audio, video, and text.
  • RTP session-level parameters - for example, the RTCP RR and RS bandwidth modifiers [RFC 3556], the RTP/AVPF trr-int parameter [RFC 4585], transport protocol, RTCP extensions in use, and any security parameters - are consistent across the session.
  • the BUNDLE extension RFC8843 is used to signal RTP sessions containing multiple media types.
  • SDP can be utilized in a scenario where two entities (i.e., a server and a client) negotiate a common understanding and setup of a multimedia session between them.
  • In this model, one entity (e.g., the server) makes an offer describing the session, and the other entity (e.g., the client) responds with an answer. This offer/answer model is described in RFC 3264 and can be used by other protocols, for example the Session Initiation Protocol (SIP).
  • A V3C bitstream is a composition of a number of video sub-bitstreams and atlas sub-bitstreams that can be identified by V3C unit headers.
  • A video sub-bitstream can be coded by well-known video coding standards such as AVC, HEVC, and VVC, which are NAL unit based and have well-defined RTP payload formats (RFC 6184, RFC 7798, and an internet draft, respectively).
  • As each of the sub-bitstreams of V3C has its own RTP payload format type and can be separately represented by the SDP, the problem arises of how to indicate the relation between separate RTP sessions and how to pass to a receiver the information that is provided by the V3C parameter set and V3C unit headers.
  • the present embodiments provide a method to map a V3C bitstream to a session specific protocol, such as SDP, that allows signalling to a receiver all information required to receive separate RTP sessions and reconstruct the V3C bitstream based on the provided information, or to pass single RTP session data to appropriate decoders and reconstruct a volumetric frame.
  • a V3C grouping parameter is introduced which groups media streams that deliver the constituent sub-bitstreams of atlas, geometry, occupancy, and attribute.
  • the constituent sub-bitstreams may be delivered as separate RTP streams or multiplexed in a single RTP stream.
  • a V3C mapping attribute is also introduced, which indicates the sub-bitstreams of atlas, geometry, occupancy, and attribute, and related parameters.
  • the V3C group parameter can also have a flag to indicate whether the playout time adjustment can be performed independently for the individual sub-bitstreams; by default, the playout time adjustment will be performed simultaneously to maintain inter-stream sample sync.
  • Figures 5 to 8 illustrate possible architectures for delivering a V3C bitstream over a network using RTP.
  • V3C bitstream is provided to a server.
  • the server is configured to demultiplex the V3C bitstream into a number of V3C sub-bitstreams.
  • each V3C sub-bitstream may be encapsulated to an appropriate RTP payload format and sent over a dedicated RTP session to a client.
  • the server creates a session specific file format, such as an SDP file, that describes each RTP session as well as provides the novel information that allows a client to identify each RTP session and map it to the appropriate V3C sub-bitstream.
  • based on this, a client is able to reconstruct the V3C bitstream and provide it to a V3C decoder/renderer.
  • SDP can be provided in a declarative manner, where a client does not have any decision capability. Alternatively, in case a server has the capability to re-encode V3C sub-bitstreams, or the V3C sub-bitstreams are provided in a number of alternatives, the SDP may be used in offer/answer mode to allow a client to choose the most appropriate codecs.
  • the example architecture presented in Figure 6 is similar to the architecture presented in Figure 5, but the client does not reconstruct the V3C bitstream; instead, it sends each separate V3C sub-bitstream to a V3C decoder/renderer and signals at initialization the V3C unit header associated with the given V3C sub-bitstream.
  • the client can as well pass to a V3C decoder a V3C parameter set syntax element that does not have to be encapsulated in a V3C unit.
  • examples of the architectures presented in Figures 7 and 8 are similar to the architectures presented in Figures 5 and 6, respectively, with the difference that the server creates only one RTP session that multiplexes all V3C sub-bitstreams, and the session specific file format, such as SDP, provides appropriate novel information that allows interpreting the payload types and reconstructing the V3C bitstream (Figure 7), or passing each V3C sub-bitstream together with its associated V3C unit header (Figure 8) to the V3C decoder/renderer.
  • the "v3cmtp" media attribute allows format-specific parameters to be conveyed about a given RTP payload type. If present, the <format> parameter MUST be one of the media formats (i.e., RTP payload types) specified for the media stream. The meaning of the <v3c specific media-level parameters> is defined in other embodiments.
  • the "v3cmtp" attribute may be signaled for each payload type separately in the media description, or it may be signaled once to indicate that all payload types share the same "v3cmtp" attribute.
  • v3c-unit-header provides the V3C unit header bytes defined in ISO/IEC 23090-5.
  • ⁇ value> contains base16 [RFC 4648] (hexadecimal) representation of the 4 bytes of V3C unit header.
  • alternatively, information from the V3C unit header may be provided as separate v3c specific parameters.
  • v3c-unit-type provides a value corresponding to vuh_unit_type defined in ISO/IEC 23090-5, i.e., defines the V3C sub-bitstream type. <value> contains the vuh_unit_type value.
  • v3c-vps-id provides a value corresponding to vuh_v3c_parameter_set_id defined in ISO/IEC 23090-5. <value> contains the vuh_v3c_parameter_set_id value.
  • v3c-atlas-id provides a value corresponding to vuh_atlas_id defined in ISO/IEC 23090-5. <value> contains the vuh_atlas_id value.
  • v3c-attr-idx provides a value corresponding to vuh_attribute_index defined in ISO/IEC 23090-5. <value> contains the vuh_attribute_index value.
  • v3c-attr-part-idx provides a value corresponding to vuh_attribute_partition_index defined in ISO/IEC 23090-5. <value> contains the vuh_attribute_partition_index value.
  • v3c-map-idx provides a value corresponding to vuh_map_index defined in ISO/IEC 23090-5. <value> contains the vuh_map_index value.
  • v3c-aux-video-flag provides a value corresponding to vuh_auxiliary_video_flag defined in ISO/IEC 23090-5. <value> contains the vuh_auxiliary_video_flag value.
  • bytes describing V3C unit header defined in ISO/IEC 23090-5 are carried as part of RTP header extension.
  • a new identifier is defined to indicate that an RTP stream contains header extension as well as to describe how the header extension should be parsed.
  • the new identifier is used with an extmap attribute in the media level of the SDP or other session specific file format:
    urn:ietf:params:rtp-hdrext:v3c:vuh
  • the 8-bit ID is the local identifier, and the length field is as defined in RFC 5285.
  • the 4 bytes of the RTP header extension contain the v3c_unit_header() structure as defined in ISO/IEC 23090-5.
  • bytes describing V3C parameter set defined in ISO/IEC 23090-5 are carried as part of RTP header extension.
  • a new identifier is defined to indicate that an RTP stream contains header extension as well as to describe how the header extension should be parsed.
  • the new identifier is then used with an extmap attribute in the media level of the SDP or other session specific file format:
    urn:ietf:params:rtp-hdrext:v3c:vps
  • the 8-bit ID is the local identifier, and the length field is as defined in RFC 5285.
  • the N bytes of the RTP header extension contain the v3c_parameter_set() structure as defined in ISO/IEC 23090-5.
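  • for illustration, such header extensions could be declared at the media level as follows (the extmap IDs 2 and 3 are arbitrary example values):
    a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
    a=extmap:3 urn:ietf:params:rtp-hdrext:v3c:vps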
  • a new token for the 'group' SDP attribute, equal to V3C, is defined. The 'group' attribute on the session level with the V3C token allows indicating which RTP session(s) originate from one V3C bitstream.
  • the tokens that follow are mapped to 'mid' values in the media levels. Additional v3c specific parameters, e.g., such as v3c-parameter-set, may exist.
  • alternatively, a new 'group-v3c' SDP attribute is defined.
  • the 'group-v3c' attribute on the session level allows indicating which RTP sessions originate from one V3C bitstream.
  • the tokens that follow can contain tokens that are mapped to 'mid' values in media levels, and can also contain v3c session-level specific parameters, e.g., such as v3c-parameter-set.
  • the V3C session level attribute implies that the constituent media streams are required to be processed together to successfully decode the V3C content. This implication is signaled by the sender using the V3C token and acted upon by the receiver.
  • v3c-parameter-set provides the V3C parameter set bytes as defined in ISO/IEC 23090-5.
  • ⁇ value> contains base16 [RFC 4648] (hexadecimal) representation of the V3C parameter set bytes.
  • profile, tier, and level information may be extracted from the V3C parameter set as separate v3c specific parameters.
  • v3c-ptl-level-idc provides a value corresponding to ptl_level_idc defined in ISO/IEC 23090-5. <value> contains the ptl_level_idc value.
  • v3c-ptl-tier-flag provides a value corresponding to ptl_tier_flag defined in ISO/IEC 23090-5. <value> contains the ptl_tier_flag value.
  • v3c-ptl-codec-idc provides a value corresponding to ptl_profile_codec_group_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_codec_group_idc value.
  • v3c-ptl-toolset-idc provides a value corresponding to ptl_profile_toolset_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_toolset_idc value.
  • v3c-ptl-rec-idc provides a value corresponding to ptl_profile_reconstruction_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_reconstruction_idc value.
  • in an embodiment, an additional parameter, "plad", is present in the 'group' SDP attribute with the V3C token, to control playout adjustment.
  • a value of 0, or the absence of the "plad" token, will result in the receiver synchronizing all the constituent sub-bitstreams without any inter-sub-bitstream adjustment. Playout adjustment for all the constituent sub-bitstreams corresponding to a single sample can be performed as deemed suitable by the receiver. If the value of "plad" is equal to 1, the reconstruction and rendering can proceed without waiting for the attribute (e.g., texture) sample data. In other words, at least the occupancy, depth, and atlas data need to be present to initiate rendering.
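  • as a hypothetical illustration (the exact placement of the token within the grouping line is an assumption), such a grouping line could look like:
    a=group:V3C 1 2 3 4 plad=1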
  • Example 1: An SDP with an RTP session per V3C component
  • the example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas.
  • the example shows the grouping of the media format descriptions of the different media descriptions, indicated by the "V3C" grouping and "mid" attributes. Additionally, the grouping attribute "V3C" provides a V3C VPS utilizing the v3c-parameter-set parameter of the grouping line.
  • the attribute can contain a v3c-unit-header parameter or use a number of explicit parameters that provide the explicit information, e.g., v3c-unit-type, v3c-vps-id, v3c-atlas-id, v3c-attr-idx, v3c-map-idx; see the sketch below.
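  • a minimal sketch of what such a session description could look like; the payload types, ports, and codec choices are illustrative assumptions, the vuh_unit_type values follow ISO/IEC 23090-5 (1 = atlas, 2 = occupancy, 3 = geometry, 4 = attribute), the "v3c-atlas" payload name is hypothetical, and the actual base16 VPS bytes are omitted:
    v=0
    o=- 3816955 3816955 IN IP4 192.0.2.1
    s=V3C point cloud
    c=IN IP4 192.0.2.1
    t=0 0
    a=group:V3C 1 2 3 4 v3c-parameter-set=<base16 VPS bytes>
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 H265/90000
    a=mid:1
    a=v3cmtp:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
    m=video 40002 RTP/AVP 97
    a=rtpmap:97 H265/90000
    a=mid:2
    a=v3cmtp:97 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0;v3c-map-idx=0
    m=video 40004 RTP/AVP 98
    a=rtpmap:98 H265/90000
    a=mid:3
    a=v3cmtp:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0;v3c-attr-idx=0
    m=application 40006 RTP/AVP 99
    a=rtpmap:99 v3c-atlas/90000
    a=mid:4
    a=v3cmtp:99 v3c-unit-type=1;v3c-vps-id=0;v3c-atlas-id=0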
  • Example 2: An SDP with an RTP stream per V3C component multiplexed in one RTP session
  • the example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas.
  • the example shows the grouping of the media format descriptions of the different media descriptions, indicated by the "V3C" grouping and "mid" attributes. Additionally, the grouping "V3C" provides V3C VPS information utilizing the v3c-parameter-set parameter of the grouping line.
  • the SDP also provides the "BUNDLE" grouping mechanism, required when multiple media are sent on the same transport flow.
  • the attribute can contain a v3c-unit-header parameter or a number of parameters that provide the explicit information, e.g., v3c-unit-type, v3c-vps-id, v3c-atlas-id, v3c-attr-idx, v3c-map-idx.
  • RTP and RTCP packets are demultiplexed into different RTP streams based on their SSRC.
  • the "extmap" attribute with the "urn:ietf:params:rtp-hdrext:sdes:mid" URI is used to map the RTP header extension to the correct media description.
  • a client also knows that each RTP stream containing a V3C component has a unique SSRC value; a sketch of such a session description is shown below.
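  • a corresponding hypothetical sketch; the shared transport (BUNDLE), payload types, and codec choices are illustrative assumptions, the "v3c-atlas" payload name is hypothetical, and the base16 VPS bytes are omitted:
    v=0
    o=- 3816956 3816956 IN IP4 192.0.2.1
    s=V3C multiplexed
    c=IN IP4 192.0.2.1
    t=0 0
    a=group:V3C 1 2 3 4 v3c-parameter-set=<base16 VPS bytes>
    a=group:BUNDLE 1 2 3 4
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 H265/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=mid:1
    a=v3cmtp:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
    m=video 40000 RTP/AVP 97
    a=rtpmap:97 H265/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=mid:2
    a=v3cmtp:97 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0;v3c-map-idx=0
    m=video 40000 RTP/AVP 98
    a=rtpmap:98 H265/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=mid:3
    a=v3cmtp:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0;v3c-attr-idx=0
    m=application 40000 RTP/AVP 99
    a=rtpmap:99 v3c-atlas/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=mid:4
    a=v3cmtp:99 v3c-unit-type=1;v3c-vps-id=0;v3c-atlas-id=0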
  • the offer example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas.
  • the video components of V3C are offered to a client in three different coding alternatives: H.264, H.265, and H.266.
  • the client provides an SDP answer, where it selects a different video codec for each V3C video component; a compressed sketch of the exchange is shown below.
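  • a compressed, hypothetical sketch for the geometry component only (payload types and the selected codec are illustrative assumptions; the other components follow the same pattern). The offer proposes three codecs in the media description:
    m=video 40002 RTP/AVP 97 98 99
    a=rtpmap:97 H264/90000
    a=rtpmap:98 H265/90000
    a=rtpmap:99 H266/90000
    a=mid:2
  • and the answer retains only the selected codec:
    m=video 40002 RTP/AVP 98
    a=rtpmap:98 H265/90000
    a=mid:2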
  • the example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas.
  • the SDP provides “BUNDLE” grouping mechanism required when multiple media are sent on the same transport flow [RFC 8843].
  • All V3C components have the same payload type.
  • RTP and RTCP packets are demultiplexed into different RTP streams based on their SSRC.
  • a “urn:ietf:params:rtp-hdrext:sdes:mid” URI is added in the “extmap” attribute.
  • each RTP stream has an RTP header extension that carries the V3C unit header, identified by the "urn:ietf:params:rtp-hdrext:v3c:vuh" URI.
  • the atlas RTP stream may additionally have an RTP header extension that carries the V3C VPS, identified by the "urn:ietf:params:rtp-hdrext:v3c:vps" URI; a sketch of such a session description is shown below.
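  • a hypothetical sketch of this setup; the shared payload type, the "v3c/90000" payload name, and the extmap IDs are illustrative assumptions, and the four RTP streams are distinguished at run time by their SSRCs and V3C unit header extensions:
    v=0
    o=- 3816958 3816958 IN IP4 192.0.2.1
    s=V3C single session
    c=IN IP4 192.0.2.1
    t=0 0
    a=group:V3C 1 2 3 4
    a=group:BUNDLE 1 2 3 4
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 v3c/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
    a=mid:1
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 v3c/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
    a=mid:2
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 v3c/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
    a=mid:3
    m=video 40000 RTP/AVP 96
    a=rtpmap:96 v3c/90000
    a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
    a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
    a=extmap:3 urn:ietf:params:rtp-hdrext:v3c:vps
    a=mid:4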
  • SDP has been used as an example of a session specific file format.
  • the method generally comprises receiving 905 a bitstream representing coded volumetric video; demultiplexing 910 the bitstream into a number of sub-bitstreams; encapsulating 915 the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; sending 920 the encapsulated sub-bitstreams over one or more RTP sessions to a client; and providing 925 to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • An apparatus comprises means for receiving a bitstream representing coded volumetric video; means for demultiplexing the bitstream into a number of sub-bitstreams; means for encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and means for providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 9 according to various embodiments.
  • the method generally comprises receiving 1005 RTP streams as one or more RTP sessions comprising one or more appropriate sub-bitstreams; decapsulating 1010 the RTP payload based on the media level description; associating 1015 constituent sub-bitstreams as belonging to a single V3C bitstream based on the session level description; and delivering 1020 the decapsulated payloads to the V3C decoder/renderer.
  • An apparatus comprises means for receiving RTP streams as one or more RTP sessions comprising one or more appropriate sub-bitstreams; means for decapsulating RTP payload based on the media level description; means for associating constituent sub-bitstreams as belonging to single V3C bitstream based on the session level description; and means for delivering decapsulated payloads to the V3C decoder/receiver.
  • the means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry.
  • the memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 10 according to various embodiments. An example of an apparatus is disclosed with reference to Figure 11.
  • Figure 11 shows a block diagram of a video coding system according to an example embodiment as a schematic block diagram of an electronic device 50, which may incorporate a codec.
  • the electronic device may comprise an encoder or a decoder.
  • the electronic device 50 may for example be a mobile terminal or a user equipment of a wireless communication system or a camera device.
  • the electronic device 50 may also be comprised in a local or a remote server or a graphics processing unit of a computer.
  • the device may also be comprised as part of a head-mounted display device.
  • the apparatus 50 may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any display technology suitable for displaying an image or video.
  • the apparatus 50 may further comprise a keypad 34.
  • any suitable data or user interface mechanism may be employed.
  • the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
  • the apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input.
  • the apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection.
  • the apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator).
  • the apparatus may further comprise a camera 42 capable of recording or capturing images and/or video.
  • the camera 42 may be a multi-lens camera system having at least two camera sensors.
  • the camera is capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing.
  • the apparatus may receive the video and/or image data for processing from another device prior to transmission and/or storage.
  • the apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50.
  • the apparatus or the controller 56 may comprise one or more processors or processor circuitry and be connected to memory 58 which may store data in the form of image, video and/or audio data, and/or may also store instructions for implementation on the controller 56 or to be executed by the processors or the processor circuitry.
  • the controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of image, video and/or audio data or assisting in coding and decoding carried out by the controller.
  • the apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC (Universal Integrated Circuit Card) and a UICC reader, for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • the apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system, or a wireless local area network.
  • the apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
  • the apparatus may comprise one or more wired interfaces configured to transmit and/or receive data over a wired connection, for example an electrical cable or an optical fiber connection.
  • a device may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
  • a network device like a server may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Computer Security & Cryptography (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The embodiments relate to a method and technical equipment for volumetric video coding. The method comprises receiving (905) a bitstream representing coded volumetric video; demultiplexing (910) the bitstream into a number of sub-bitstreams; encapsulating (915) the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; sending (920) the encapsulated sub-bitstreams over one or more RTP sessions to a client; and providing (925) to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.

Description

A METHOD, AN APPARATUS AND A COMPUTER PROGRAM PRODUCT FOR VIDEO CODING
Technical Field
The present solution generally relates to coding of volumetric video.
Background
Volumetric video data represents a three-dimensional (3D) scene or object, and can be used as input for AR (Augmented Reality), VR (Virtual Reality), and MR (Mixed Reality) applications. Such data describes geometry (shape, size, position in 3D space) and respective attributes (e.g., color, opacity, reflectance, ...), and any possible temporal transformations of the geometry and attributes at given time instances (like frames in 2D video). Volumetric video can be generated from 3D models, also referred to as volumetric visual objects, i.e., CGI (Computer Generated Imagery), or captured from real-world scenes using a variety of capture solutions, e.g., multi-camera, laser scan, combination of video and dedicated depth sensors, and more. Also, a combination of CGI and real-world data is possible. Examples of representation formats for volumetric data comprise triangle meshes, point clouds, or voxels. Temporal information about the scene can be included in the form of individual capture instances, i.e., “frames” in 2D video, or other means, e.g., position of an object as a function of time.
Summary
The scope of protection sought for various embodiments of the invention is set out by the independent claims. The embodiments and features, if any, described in this specification that do not fall under the scope of the independent claims are to be interpreted as examples useful for understanding various embodiments of the invention.
Various aspects include a method, an apparatus and a computer readable medium comprising a computer program stored therein, which are characterized by what is stated in the independent claims. Various embodiments are disclosed in the dependent claims.
According to a first aspect, there is provided an apparatus comprising means for receiving a bitstream representing coded volumetric video; means for demultiplexing the bitstream into a number of sub-bitstreams; means for encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and means for providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
According to a second aspect, there is provided a method, comprising receiving a bitstream representing coded volumetric video; demultiplexing the bitstream into a number of sub-bitstreams; encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
According to a third aspect, there is provided an apparatus comprising at least one processor and memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following: receive a bitstream representing coded volumetric video; demultiplex the bitstream into a number of sub-bitstreams; encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
According to a fourth aspect, there is provided a computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to receive a bitstream representing coded volumetric video; demultiplex the bitstream into a number of sub-bitstreams; encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format; send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
According to an embodiment, an RTP session is created for each sub-bitstream.
According to an embodiment, a session specific file format describing each RTP session is created, and information is provided allowing a client to identify each RTP session and map it to an appropriate sub-bitstream.
According to an embodiment, a new session level V3C specific attribute is recorded in the session specific file format, wherein the attribute groups together different V3C specific media streams.
According to an embodiment, a new type is recorded in a session level group attribute of the session specific file format, wherein the attribute groups together different V3C specific media streams.
According to an embodiment, a V3C specific token indicating which RTP session originates from one bitstream is recorded.
According to an embodiment, a parameter containing byte-data for a V3C parameter set and/or multiple parameters for V3C parameter set specific fields is recorded in a session level attribute of the session specific file format.
According to an embodiment, a new media level V3C specific attribute is recorded in the session specific file format, containing a parameter with byte-data representing a V3C unit header and/or multiple parameters for V3C unit header specific fields.
According to an embodiment, byte-data representing a V3C parameter set is recorded as part of an RTP header extension, and the presence of the V3C parameter set in the RTP header extension is signaled in the session specific file format as a parameter in the rtpmap media level attribute.
According to an embodiment, byte-data representing a V3C unit header is recorded as part of an RTP header extension, and the presence of the V3C unit header in the RTP header extension is recorded in the session specific file format as a parameter in the rtpmap media level attribute.
According to an embodiment, a parameter is recorded in a session specific file format V3C specific session level attribute, wherein the parameter indicates if playout adjustment is required.
According to an embodiment, one RTP session that multiplexes all sub-bitstreams is created.
According to an embodiment, a session specific file format is created, wherein the session specific file format provides information allowing a client to interpret payload types and reconstruct bitstream representing the coded volumetric video.
According to an embodiment, a session specific file format is created, wherein the session specific file format provides information allowing a client to interpret payload types and either de-multiplex the stream into a number of sub-bitstreams or reconstruct a single bitstream representing the coded volumetric video.
According to an embodiment, the computer program product is embodied on a non-transitory computer readable medium.
Description of the Drawings
In the following, various embodiments will be described in more detail with reference to the appended drawings, in which
Fig. 1 shows an example of a compression process of a volumetric video;
Fig. 2 shows an example of a de-compression process of a volumetric video;
Fig. 3 shows an example of a V3C bitstream originated from ISO/IEC 23090-5;
Fig. 4 shows an example of an extension header;
Fig. 5 shows an example architecture of a V3C bitstream delivery over multiple RTP sessions with a client reconstructing the V3C bitstream;
Fig. 6 shows an example architecture of a V3C bitstream delivery over a number of RTP sessions with a client sending a V3C sub-bitstream with an associated V3C unit header;
Fig. 7 shows an example architecture of a V3C bitstream delivery over one RTP session with a client reconstructing V3C bitstream;
Fig. 8 shows an example architecture of a V3C bitstream delivery over one RTP session with a client sending V3C sub-bitstream with associated V3C unit header;
Fig. 9 is a flowchart illustrating a method according to an embodiment;
Fig. 10 is a flowchart illustrating a method according to another embodiment; and
Fig. 11 shows an apparatus according to an embodiment.
Description of Example Embodiments
The following description and drawings are illustrative and are not to be construed as unnecessarily limiting. The specific details are provided for a thorough understanding of the disclosure. However, in certain instances, well-known or conventional details are not described in order to avoid obscuring the description. References to one or an embodiment in the present disclosure can be, but are not necessarily, references to the same embodiment, and such references mean at least one of the embodiments. Reference in this specification to “one embodiment” or “an embodiment” means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment of the disclosure.
The present embodiments are particularly targeted to a solution for mapping a V3C bitstream to RTP sessions in SDP.
Figure 1 illustrates an overview of an example of a compression process of a volumetric video. Such process may be applied for example in MPEG Point Cloud Coding (PCC). The process starts with an input point cloud frame 101 that is provided for patch generation 102, geometry image generation 104 and texture image generation 105.
The patch generation 102 process aims at decomposing the point cloud into a minimum number of patches with smooth boundaries, while also minimizing the reconstruction error. For patch generation, the normal at every point can be estimated. An initial clustering of the point cloud can then be obtained by associating each point with one of the following six oriented planes, defined by their normals:
- (1.0, 0.0, 0.0),
- (0.0, 1.0, 0.0),
- (0.0, 0.0, 1.0),
- (-1.0, 0.0, 0.0),
- (0.0, -1.0, 0.0), and
- (0.0, 0.0, -1.0)
More precisely, each point may be associated with the plane that has the closest normal (i.e., maximizes the dot product of the point normal and the plane normal).
The initial clustering may then be refined by iteratively updating the cluster index associated with each point based on its normal and the cluster indices of its nearest neighbors. The final step may comprise extracting patches by applying a connected component extraction procedure. Patch info determined at patch generation 102 for the input point cloud frame 101 is delivered to packing process 103, to geometry image generation 104 and to texture image generation 105. The packing process 103 aims at mapping the extracted patches onto a 2D plane, while trying to minimize the unused space, and guaranteeing that every TxT (e.g., 16x16) block of the grid is associated with a unique patch. It should be noticed that T may be a user-defined parameter. Parameter T may be encoded in the bitstream and sent to the decoder.
The simple packing strategy used here iteratively tries to insert patches into a WxH grid. W and H may be user-defined parameters, which correspond to the resolution of the geometry/texture images that will be encoded. The patch location is determined through an exhaustive search that is performed in raster scan order. The first location that can guarantee an overlapping-free insertion of the patch is selected, and the grid cells covered by the patch are marked as used. If no empty space in the current resolution image can fit a patch, then the height H of the grid may be temporarily doubled, and the search is applied again. At the end of the process, H is clipped so as to fit the used grid cells.
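For illustration, a minimal sketch of this packing strategy is given below (Python; the helper names and the block-granularity occupancy grid are conveniences of the sketch, not part of any specification):

def find_spot(used, bw, bh):
    # raster-scan search for the first free bw x bh region of TxT blocks
    rows, cols = len(used), len(used[0])
    for r in range(rows - bh + 1):
        for c in range(cols - bw + 1):
            if all(not used[r + dr][c + dc]
                   for dr in range(bh) for dc in range(bw)):
                return r, c
    return None

def pack_patches(patches, W, H, T=16):
    # patches: list of (width, height) in pixels; returns placements in
    # pixels and the final grid height clipped to the used cells
    cols, rows = W // T, H // T
    used = [[False] * cols for _ in range(rows)]
    placements = []
    for w, h in patches:
        bw, bh = -(-w // T), -(-h // T)          # footprint in TxT blocks
        spot = find_spot(used, bw, bh)
        while spot is None:                      # no room: double H, retry
            used += [[False] * cols for _ in range(rows)]
            rows *= 2
            spot = find_spot(used, bw, bh)
        r, c = spot
        for dr in range(bh):                     # mark covered cells as used
            for dc in range(bw):
                used[r + dr][c + dc] = True
        placements.append((c * T, r * T))
    last_row = max((r for r in range(rows) if any(used[r])), default=-1) + 1
    return placements, last_row * T              # clip H to used cells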
The geometry image generation 104 and the texture image generation 105 are configured to generate geometry images and texture images respectively. The image generation process may exploit the 3D to 2D mapping computed during the packing process to store the geometry and texture of the point cloud as images. In order to better handle the case of multiple points being projected to the same pixel, each patch may be projected onto two images, referred to as layers. For example, let H(u, v) be the set of points of the current patch that get projected to the same pixel (u, v). The first layer, also called the near layer, stores the point of H(u, v) with the lowest depth D0. The second layer, referred to as the far layer, captures the point of H(u, v) with the highest depth within the interval [D0, D0+Δ], where Δ is a user-defined parameter that describes the surface thickness. The generated videos may have the following characteristics:
• Geometry: WxH YUV420-8bit,
• Texture: WxH YUV420-8bit,
It is to be noticed that the geometry video is monochromatic. In addition, the texture generation procedure exploits the reconstructed/smoothed geometry in order to compute the colors to be associated with the re-sampled points. The geometry images and the texture images may be provided to image padding 107. The image padding 107 may also receive as an input an occupancy map (OM) 106 to be used with the geometry images and texture images. The occupancy map 106 may comprise a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. In other words, the occupancy map (OM) may be a binary image of binary values where the occupied pixels and non-occupied pixels are distinguished and depicted respectively. The occupancy map may alternatively comprise a non-binary image allowing additional information to be stored in it. Therefore, the representative values of the DOM (Deep Occupancy Map) may comprise binary values or other values, for example integer values. It should be noticed that one cell of the 2D grid may produce a pixel during the image generation process. Such an occupancy map may be derived from the packing process 103.
The padding process 107 aims at filling the empty space between patches in order to generate a piecewise smooth image suited for video compression. For example, in a simple padding strategy, each block of TxT (e.g., 16x16) pixels is compressed independently. If the block is empty (i.e., unoccupied, i.e., all its pixels belong to empty space), then the pixels of the block are filled by copying either the last row or column of the previous TxT block in raster order. If the block is full (i.e., occupied, i.e., no empty pixels), nothing is done. If the block has both empty and filled pixels (i.e., edge block), then the empty pixels are iteratively filled with the average value of their non-empty neighbors.
The padded geometry images and padded texture images may be provided for video compression 108. The generated images/layers may be stored as video frames and compressed using for example the HM16.16 video codec according to the HM configurations provided as parameters. The video compression 108 also generates reconstructed geometry images to be provided for smoothing 109, wherein a smoothed geometry is determined based on the reconstructed geometry images and patch info from the patch generation 102. The smoothed geometry may be provided to texture image generation 105 to adapt the texture images. The patch may be associated with auxiliary information being encoded/decoded for each patch as metadata. The auxiliary information may comprise the index of the projection plane, the 2D bounding box, and the 3D location of the patch.
For example, the following metadata may be encoded/decoded for every patch:
- index of the projection plane:
  o Index 0 for the planes (1.0, 0.0, 0.0) and (-1.0, 0.0, 0.0)
  o Index 1 for the planes (0.0, 1.0, 0.0) and (0.0, -1.0, 0.0)
  o Index 2 for the planes (0.0, 0.0, 1.0) and (0.0, 0.0, -1.0)
- 2D bounding box (u0, v0, u1, v1)
- 3D location (x0, y0, z0) of the patch represented in terms of depth δ0, tangential shift s0 and bitangential shift r0. According to the chosen projection planes, (δ0, s0, r0) may be calculated as follows:
  o Index 0: δ0 = x0, s0 = z0, r0 = y0
  o Index 1: δ0 = y0, s0 = z0, r0 = x0
  o Index 2: δ0 = z0, s0 = x0, r0 = y0
Also, mapping information providing for each TxT block its associated patch index may be encoded as follows:
- For each TxT block, let L be the ordered list of the indexes of the patches such that their 2D bounding box contains that block. The order in the list is the same as the order used to encode the 2D bounding boxes. L is called the list of candidate patches.
- The empty space between patches is considered as a patch and is assigned the special index 0, which is added to the candidate patches list of all the blocks.
- Let I be the index of the patch to which the current TxT block belongs, and let J be the position of I in L. Instead of explicitly coding the index I, its position J is arithmetically encoded, which leads to better compression efficiency.
The occupancy map consists of a binary map that indicates for each cell of the grid whether it belongs to the empty space or to the point cloud. One cell of the 2D grid produces a pixel during the image generation process. The occupancy map compression 110 leverages the auxiliary information described in the previous section in order to detect the empty TxT blocks (i.e., blocks with patch index 0). The remaining blocks may be encoded as follows: The occupancy map can be encoded with a precision of B0xB0 blocks, where B0 is a configurable parameter. In order to achieve lossless encoding, B0 may be set to 1. In practice B0=2 or B0=4 results in visually acceptable results, while significantly reducing the number of bits required to encode the occupancy map.
The compression process may comprise one or more of the following example operations:
  • Binary values may be associated with B0xB0 sub-blocks belonging to the same TxT block. A value of 1 is associated with a sub-block if it contains at least one non-padded pixel, and 0 otherwise. If a sub-block has a value of 1 it is said to be full, otherwise it is an empty sub-block.
  • If all the sub-blocks of a TxT block are full (i.e., have value 1), the block is said to be full. Otherwise, the block is said to be non-full.
  • Binary information may be encoded for each TxT block to indicate whether it is full or not.
  • If the block is non-full, extra information indicating the location of the full/empty sub-blocks may be encoded as follows (a minimal sketch of the run-length step follows this list):
    o Different traversal orders may be defined for the sub-blocks, for example horizontally, vertically, or diagonally starting from the top right or top left corner.
    o The encoder chooses one of the traversal orders and may explicitly signal its index in the bitstream.
    o The binary values associated with the sub-blocks may be encoded by using a run-length encoding strategy:
      ■ The binary value of the initial sub-block is encoded.
      ■ Continuous runs of 0s and 1s are detected, while following the traversal order selected by the encoder.
      ■ The number of detected runs is encoded.
      ■ The length of each run, except the last one, is also encoded.
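For illustration, a minimal sketch of the run-length step is given below (Python; traversal-order generation and the entropy coding of the produced values are abstracted away):

def run_length_code(subblock_values):
    # subblock_values: 0/1 values of a non-full TxT block, already ordered
    # according to the traversal order chosen by the encoder
    runs = []
    current, length = subblock_values[0], 0
    for v in subblock_values:
        if v == current:
            length += 1
        else:
            runs.append(length)
            current, length = v, 1
    runs.append(length)
    # values to be encoded: the initial sub-block value, the number of
    # runs, and the length of each run except the last one
    return subblock_values[0], len(runs), runs[:-1]

print(run_length_code([1, 1, 1, 0, 0, 1]))   # -> (1, 3, [3, 2])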
Figure 2 illustrates an overview of a de-compression process for MPEG Point Cloud Coding (PCC). A de-multiplexer 201 receives a compressed bitstream, and after de-multiplexing, provides compressed texture video and compressed geometry video to video decompression 202. In addition, the de-multiplexer 201 transmits compressed occupancy map to occupancy map decompression 203. It may also transmit a compressed auxiliary patch information to auxiliary patch-info compression 204. Decompressed geometry video from the video decompression 202 is delivered to geometry reconstruction 205, as are the decompressed occupancy map and decompressed auxiliary patch information. The point cloud geometry reconstruction 205 process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels may be computed by leveraging the auxiliary patch information and the geometry images.
The reconstructed geometry image may be provided for smoothing 206, which aims at alleviating potential discontinuities that may arise at the patch boundaries due to compression artifacts. The implemented approach moves boundary points to the centroid of their nearest neighbors. The smoothed geometry may be transmitted to texture reconstruction 207, which also receives a decompressed texture video from video decompression 202. The texture reconstruction 207 outputs a reconstructed point cloud. The texture values for the texture reconstruction are directly read from the texture images.
The point cloud geometry reconstruction process exploits the occupancy map information in order to detect the non-empty pixels in the geometry/texture images/layers. The 3D positions of the points associated with those pixels are computed by leveraging the auxiliary patch information and the geometry images. More precisely, let P be the point associated with the pixel (u, v) and let (δ0, s0, r0) be the 3D location of the patch to which it belongs and (u0, v0, u1, v1) its 2D bounding box. P can be expressed in terms of depth δ(u, v), tangential shift s(u, v) and bi-tangential shift r(u, v) as follows:
δ(u, v) = δ0 + g(u, v)
s(u, v) = s0 - u0 + u
r(u, v) = r0 - v0 + v

where g(u, v) is the luma component of the geometry image. For the texture reconstruction, the texture values can be directly read from the texture images. The result of the decoding process is a 3D point cloud reconstruction.
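For illustration, this per-pixel computation can be sketched as follows (Python; the patch field names, e.g., d0 standing for δ0, are conveniences of the sketch holding the decoded auxiliary information described above):

def reconstruct_point(u, v, geometry_luma, patch):
    # depth, tangential shift and bi-tangential shift of the point
    d = patch["d0"] + geometry_luma[v][u]
    s = patch["s0"] - patch["u0"] + u
    r = patch["r0"] - patch["v0"] + v
    # map back to (x, y, z) according to the patch projection plane index,
    # inverting the per-index assignments listed earlier
    if patch["plane_index"] == 0:    # δ0 = x0, s0 = z0, r0 = y0
        return d, r, s
    if patch["plane_index"] == 1:    # δ0 = y0, s0 = z0, r0 = x0
        return r, d, s
    return s, r, d                   # δ0 = z0, s0 = x0, r0 = y0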
There are alternatives to capture and represent a volumetric frame. The format used to capture and represent the volumetric frame depends on the processing to be performed on it, and the target application using the volumetric frame. As a first example, a volumetric frame can be represented as a point cloud. A point cloud is a set of unstructured points in 3D space, where each point is characterized by its position in a 3D coordinate system (e.g., Euclidean) and some corresponding attributes (e.g., color information provided as an RGBA value, or normal vectors). As a second example, a volumetric frame can be represented as images, with or without depth, captured from multiple viewpoints in 3D space. In other words, the volumetric video can be represented by one or more view frames, where a view is a projection of a volumetric scene onto a plane (the camera plane) using a real or virtual camera with known/computed extrinsics and intrinsics. Each view may be represented by a number of components (e.g., geometry, color, transparency, and occupancy picture), which may be part of the geometry picture or represented separately. As a third example, a volumetric frame can be represented as a mesh. A mesh is a collection of points, called vertices, and connectivity information between vertices, called edges. Vertices along with edges form faces. The combination of vertices, edges and faces can uniquely approximate shapes of objects.
Depending on the capture, a volumetric frame can provide viewers the ability to navigate a scene with six degrees of freedom, i.e., both translational and rotational movement of their viewing pose (which includes yaw, pitch, and roll). The data to be coded for a volumetric frame can also be significant, as a volumetric frame can contain a large number of objects, and the positioning and movement of these objects in the scene can result in many dis-occluded regions. Furthermore, the interaction of the light and materials in objects and surfaces in a volumetric frame can generate complex light fields that can produce texture variations for even a slight change of pose.
A sequence of volumetric frames is a volumetric video. Due to the large amount of information, storage and transmission of a volumetric video requires compression. A way to compress a volumetric frame is to project the 3D geometry and related attributes into a collection of 2D images along with additional associated metadata. The projected 2D images can then be coded using 2D video and image coding technologies, for example ISO/IEC 14496-10 (H.264/AVC) and ISO/IEC 23008-2 (H.265/HEVC). The metadata can be coded with technologies specified in specifications such as ISO/IEC 23090-5. The coded images and the associated metadata can be stored or transmitted to a client that can decode and render the 3D volumetric frame.
In the following, a short reference of ISO/IEC 23090-5 Visual Volumetric Video-based Coding (V3C) and Video-based Point Cloud Compression (V-PCC) 2nd Edition is given. ISO/IEC 23090-5 specifies the syntax, semantics, and process for coding volumetric video. The specified syntax is designed to be generic, so that it can be reused for a variety of applications. Point clouds, immersive video with depth, and mesh representations can all use the ISO/IEC 23090-5 standard with extensions that deal with the specific nature of the final representation. The purpose of the specification is to define how to decode and interpret the associated data (for example atlas data in ISO/IEC 23090-5), which tells a renderer how to interpret 2D frames to reconstruct a volumetric frame.
Two applications of V3C (ISO/IEC 23090-5) have been defined: V-PCC (ISO/IEC 23090-5) and MIV (ISO/IEC 23090-12). MIV and V-PCC use a number of V3C syntax elements with slightly modified semantics. An example of how a generic syntax element can be differently interpreted by the application is pdu_projection_id.
In the case of V-PCC, the syntax element pdu_projection_id specifies the index of the projection plane for the patch. There can be 6 or 18 projection planes in V-PCC, and they are implicit, i.e., pre-determined. In the case of MIV, pdu_projection_id corresponds to a view ID, i.e., identifies which view the patch originated from. View IDs and their related information are explicitly provided in the MIV view parameters list and may be tailored for each content.
The MPEG 3DG (ISO SC29 WG7) group has started work on a third application of V3C - mesh compression. It is envisaged that mesh coding will re-use V3C syntax as much as possible and can also slightly modify the semantics.
To differentiate between applications of a V3C bitstream and to allow a client to properly interpret the decoded data, V3C uses the ptl_profile_toolset_idc parameter.
A V3C bitstream is a sequence of bits that forms the representation of coded volumetric frames and the associated data, making up one or more coded V3C sequences (CVSs). A CVS is a sequence of bits identified and separated by appropriate delimiters; it is required to start with a VPS, include a V3C unit, and contain one or more V3C units with an atlas sub-bitstream or a video sub-bitstream. This is illustrated in Figure 3. Video sub-bitstreams and atlas sub-bitstreams can be referred to as V3C sub-bitstreams. A V3C unit header, in conjunction with VPS information, identifies which V3C sub-bitstream a V3C unit contains and how to interpret it. An example of this is shown herein below:
[v3c_unit_header() syntax structure of ISO/IEC 23090-5; table not reproduced]
A V3C bitstream can be stored according to Annex C of ISO/IEC 23090-5, which specifies the syntax and semantics of a sample stream format to be used by applications that deliver some or all of the V3C unit stream as an ordered stream of bytes or bits, within which the locations of V3C unit boundaries need to be identifiable from patterns in the data.
A CVS starts with a VPS (V3C Parameter Set), which allows each V3C unit to be interpreted; vuh_v3c_parameter_set_id specifies the value of vps_v3c_parameter_set_id for the active V3C VPS. The VPS provides, among others, the following information about the V3C bitstream:
• Profile, tier, and level to which the bitstream is conformant
• Number of atlases that constitute to the V3C bitstream
  • Number of occupancy, geometry, and attribute video sub-bitstreams
  • Number of maps for each geometry and attribute video component
  • Mapping information from attribute index to attribute type
[v3c_parameter_set() syntax structure of ISO/IEC 23090-5; tables not reproduced]
V3C includes signalling mechanisms, through the profile_tier_level syntax structure in the VPS, to support interoperability while restricting the capabilities of V3C profiles. V3C also includes an initial set of tool constraint flags to indicate additional restrictions on a profile. Currently the sub-profile indicator syntax element is always present, but the value 0xFFFFFFFF indicates that no sub-profile is used, i.e., the full profile is supported.
[profile_tier_level() syntax structure and related tables of ISO/IEC 23090-5; not reproduced]
The Real-time Transport Protocol (RTP) is intended for end-to-end, real-time transfer of streaming media and provides facilities for jitter compensation and detection of packet loss and out-of-order delivery. RTP allows data transfer to multiple destinations through IP multicast or to a specific destination through IP unicast. The majority of RTP implementations are built on the User Datagram Protocol (UDP). Other transport protocols may also be utilized. RTP is used together with other protocols such as H.323 and the Real Time Streaming Protocol (RTSP).
The RTP specification describes two protocols: RTP and RTCP. RTP is used for the transfer of multimedia data, and RTCP is used to periodically send control information and QoS parameters.
RTP sessions may be initiated between client and server using a signalling protocol, such as H.323, the Session Initiation Protocol (SIP), or RTSP. These protocols may use the Session Description Protocol (RFC 8866) to specify the parameters for the sessions.
RTP is designed to carry a multitude of multimedia formats, which permits the development of new formats without revising the RTP standard. To this end, the information required by a specific application of the protocol is not included in the generic RTP header. For a class of applications (e.g., audio, video), an RTP profile may be defined. For a media format (e.g., a specific video coding format), an associated RTP payload format may be defined. Every instantiation of RTP in a particular application may require a profile and a payload format specification.
The profile defines the codecs used to encode the payload data and their mapping to payload format codes in the protocol field Payload Type (PT) of the RTP header. For example, the RTP profile for audio and video conferences with minimal control is defined in RFC 3551. The profile defines a set of static payload type assignments and a dynamic mechanism for mapping between a payload format and a PT value using the Session Description Protocol (SDP). The latter mechanism is used for newer video codecs, such as the RTP payload format for H.264 video defined in RFC 6184 or the RTP payload format for High Efficiency Video Coding (HEVC) defined in RFC 7798.
An RTP session is established for each multimedia stream. Audio and video streams may use separate RTP sessions, enabling a receiver to selectively receive components of a particular stream. The RTP specification recommends an even port number for RTP and the next odd port number for the associated RTCP session. A single port can be used for RTP and RTCP in applications that multiplex the protocols.
RTP packets are created at the application layer and handed to the transport layer for delivery. Each unit of RTP media data created by an application begins with the RTP packet header.
The RTP header has a minimum size of 12 bytes. After the header, optional header extensions may be present. This is followed by the RTP payload, the format of which is determined by the particular class of application. The fields in the header are as follows:
• Version: (2 bits) Indicates the version of the protocol.
• P (Padding): (1 bit) Used to indicate if there are extra padding bytes at the end of the RTP packet.
• X (Extension): (1 bit) Indicates the presence of an extension header between the header and payload data. The extension header is application or profile specific.
  • CC (CSRC count): (4 bits) Contains the number of CSRC identifiers that follow the SSRC.
  • M (Marker): (1 bit) Signalling used at the application level in a profile-specific manner. If it is set, it means that the current data has some special relevance for the application.
  • PT (Payload type): (7 bits) Indicates the format of the payload and thus determines its interpretation by the application.
  • Sequence number: (16 bits) The sequence number is incremented for each RTP data packet sent and is to be used by the receiver to detect packet loss and to accommodate out-of-order delivery.
• Timestamp: (32 bits) Used by the receiver to play back the received samples at appropriate time and interval. When several media streams are present, the timestamps may be independent in each stream. The granularity of the timing is application specific. For example, video stream may use a 90 kHz clock. The clock granularity is one of the details that is specified in the RTP profile for an application.
• SSRC: (32 bits) Synchronization source identifier uniquely identifies the source of the stream. The synchronization sources within the same RTP session will be unique.
• CSRC: (32 bits each) Contributing source IDs enumerate contributing sources to a stream which has been generated from multiple sources.
  • Header extension: (optional, presence indicated by the Extension field) The first 32-bit word contains a profile-specific identifier (16 bits) and a length specifier (16 bits) that indicates the length of the extension in 32-bit units, excluding the 32 bits of the extension header. The extension header data is shown in Figure 4. A minimal sketch of parsing the header fields follows this list.
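For illustration, the fixed header and the offsets implied by the fields above can be parsed as sketched below (Python; network byte order is assumed and no validation is done beyond the version check):

import struct

def parse_rtp_header(packet):
    # fixed 12-byte header: V/P/X/CC, M/PT, sequence, timestamp, SSRC
    b0, b1, seq, ts, ssrc = struct.unpack_from("!BBHII", packet, 0)
    header = {
        "version": b0 >> 6,
        "padding": (b0 >> 5) & 1,
        "extension": (b0 >> 4) & 1,
        "csrc_count": b0 & 0x0F,
        "marker": b1 >> 7,
        "payload_type": b1 & 0x7F,
        "sequence_number": seq,
        "timestamp": ts,
        "ssrc": ssrc,
    }
    if header["version"] != 2:
        raise ValueError("not an RTP version 2 packet")
    offset = 12 + 4 * header["csrc_count"]            # skip the CSRC list
    if header["extension"]:
        _profile, ext_words = struct.unpack_from("!HH", packet, offset)
        offset += 4 + 4 * ext_words                   # skip header extension
    return header, offset                             # payload starts here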
In this disclosure, the Session Description Protocol (SDP) is used as an example of a session specific file format. SDP is a format for describing multimedia communication sessions for the purposes of announcement and invitation. Its predominant use is in support of conversational and streaming media applications. SDP does not deliver any media streams itself, but is used between endpoints for negotiation of network metrics, media types, and other associated properties. The set of properties and parameters is called a session profile. SDP is extensible for the support of new media types and formats.
The Session Description Protocol describes a session as a group of fields in a text-based format, one field per line. The form of each field is as follows:
<character>=<value><CR><LF>

where <character> is a single case-sensitive character and <value> is structured text in a format that depends on the character. Values may be UTF-8 encoded. Whitespace is not allowed immediately to either side of the equal sign.
Session descriptions consist of three sections: session, timing, and media descriptions. Each description may contain multiple timing and media descriptions. Names are only unique within the associated syntactic construct.
Fields appear in the order shown below; optional fields are marked with an asterisk:

v=  (protocol version number, currently only 0)
o=  (originator and session identifier: username, id, version number, network address)
s=  (session name: mandatory with at least one UTF-8-encoded character)
i=* (session title or short information)
u=* (URI of description)
e=* (zero or more email addresses with optional name of contacts)
p=* (zero or more phone numbers with optional name of contacts)
c=* (connection information - not required if included in all media)
b=* (zero or more bandwidth information lines)
One or more time descriptions ("t=" and "r=" lines; see below)

z=* (time zone adjustments)
k=* (encryption key)
a=* (zero or more session attribute lines)
Zero or more media descriptions (each one starting with an "m=" line; see below)
Time description (mandatory):

t=  (time the session is active)
r=* (zero or more repeat times)
Media description (optional):

m=  (media name and transport address)
i=* (media title or information field)
c=* (connection information - optional if included at session level)
b=* (zero or more bandwidth information lines)
k=* (encryption key)
a=* (zero or more media attribute lines - overriding the session attribute lines)
Below is a sample session description from RFC 4566. This session is originated by the user “jdoe” at IPv4 address 10.47.16.5. Its name is “SDP Seminar” and extended session information (“A Seminar on the session description protocol”) is included along with a link for additional information and an email address to contact the responsible party, Jane Doe. This session is specified to last two hours using NTP timestamps, with a connection address (which indicates the address clients must connect to or - when a multicast address is provided, as it is here - subscribe to) specified as IPv4 224.2.17.12 with a TTL of 127. Recipients of this session description are instructed to only receive media. Two media descriptions are provided, both using RTP Audio Video Profile. The first is an audio stream on port 49170 using RTP/AVP payload type 0 (defined by RFC 3551 as PCMU), and the second is a video stream on port 51372 using RTP/AVP payload type 99 (defined as “dynamic”). Finally, an attribute is included which maps RTP/AVP payload type 99 to format h263-1998 with a 90 kHz clock rate. RTCP ports for the audio and video streams of 49171 and 51373 respectively are implied.

v=0
o=jdoe 2890844526 2890842807 IN IP4 10.47.16.5
s=SDP Seminar
i=A Seminar on the session description protocol
u=http://www.example.com/seminars/sdp.pdf
e=j.doe@example.com (Jane Doe)
c=IN IP4 224.2.17.12/127
t=2873397496 2873404696
a=recvonly
m=audio 49170 RTP/AVP 0
m=video 51372 RTP/AVP 99
a=rtpmap:99 h263-1998/90000

SDP uses attributes to extend the core protocol. Attributes can appear within the Session or Media sections and are scoped accordingly as session-level or media-level. New attributes can be added to the standard through registration with IANA. A media description may contain any number of “a=” lines (attribute fields) that are media description specific. Session-level attributes convey additional information that applies to the session as a whole rather than to individual media descriptions.
Attributes are either properties or values:

a=<attribute-name>
a=<attribute-name>:<attribute-value>
Examples of attributes defined in RFC 8866 are “rtpmap” and “fmtp”.
The “rtpmap” attribute maps from an RTP payload type number (as used in an "m=" line) to an encoding name denoting the payload format to be used. It also provides information on the clock rate and encoding parameters. Up to one "a=rtpmap:" attribute can be defined for each media format specified. An example is the following:

m=audio 49230 RTP/AVP 96 97 98
a=rtpmap:96 L8/8000
a=rtpmap:97 L16/8000
a=rtpmap:98 L16/11025/2
In the example above, the media types are “audio/L8” and “audio/L16”.
Parameters added to an "a=rtpmap:" attribute may only be those required for a session directory to make the choice of appropriate media to participate in a session. Codec-specific parameters may be added in other attributes, for example, "fmtp".
"fmtp" attribute allows parameters that are specific to a particular format to be conveyed in a way that SDP does not have to understand them. The format can be one of the formats specified for the media. Format-specific parameters, semicolon separated, may be any set of parameters required to be conveyed by SDP and given unchanged to the media tool that will use this format. At most one instance of this attribute is allowed for each format. An example is: a=fmtp : 96 prof ile- level- id=42e016 ; max-mbps=l 08000 ; max-f s=3600
For example, RFC 7798 defines the following parameters: sprop-vps, sprop-sps, sprop-pps, profile-space, profile-id, tier-flag, level-id, interop-constraints, profile-compatibility-indicator, sprop-sub-layer-id, recv-sub-layer-id, max-recv-level-id, tx-mode, max-lsr, max-lps, max-cpb, max-dpb, max-br, max-tr, max-tc, max-fps, sprop-max-don-diff, sprop-depack-buf-nalus, sprop-depack-buf-bytes, depack-buf-cap, sprop-segmentation-id, sprop-spatial-segmentation-idc, dec-parallel-cap, and include-dph.
The “group” and “mid” attributes defined in RFC 5888 allow grouping of "m=" lines in SDP for different purposes. Examples are lip synchronization, or receiving a media flow consisting of several media streams on different transport addresses.
In a given session description, each "m=" line is identified by a token, which is carried in a "mid" attribute below the "m=" line. The session description carries session-level "group" attributes that group different "m=" lines (identified by their tokens) using different group semantics. The semantics of a group describe the purpose for which the "m=" lines are grouped. In the example below, the "group" line indicates that the "m=" lines identified by tokens 1 and 2 (the audio and the video "m=" lines, respectively) are grouped for the purpose of lip synchronization (LS).

v=0
o=Laura 289083124 289083124 IN IP4 one.example.com
c=IN IP4 192.0.2.1
t=0 0
a=group:LS 1 2
m=audio 30000 RTP/AVP 0
a=mid:1
m=video 30002 RTP/AVP 31
a=mid:2

RFC 5888 defines two semantics for group: Lip Synchronization (LS), as used in the example above, and Flow Identification (FID). RFC 5583 defines another grouping type, Decoding Dependency (DDP). RFC 8843 defines another grouping type, BUNDLE, which among others is utilized when multiple types of media are sent in a single RTP session as described in RFC 8860.
"depend" attribute defined in RFC5583 allows to signal two types of decoding dependencies: layered and multi-description.
The following dependency-type values are defined in RFC 5583:
  • lay: Layered decoding dependency identifies the described media stream as one or more Media Partitions of a layered Media Bitstream. When "lay" is used, all media streams required for decoding the Operation Point MUST be identified by identification-tag and fmt-dependency following the "lay" string.
  • mdc: Multi-descriptive decoding dependency signals that the described media stream is part of a set of an MDC Media Bitstream. By definition, at least N-out-of-M media streams of the group need to be available to form an Operation Point. The values of N and M depend on the properties of the Media Bitstream and are not signaled within this context. When "mdc" is used, all required media streams for the Operation Point MUST be identified by identification-tag and fmt-dependency following the "mdc" string.
The example below shows a session description with three media descriptions, all of type video and with layered decoding dependency ("lay"). Each of the media descriptions includes two possible media format descriptions with different encoding parameters, e.g., "packetization-mode" (not shown in the example), for the media subtypes "H264" and "H264-SVC" given by the "a=rtpmap:" line.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=LAYERED VIDEO SIGNALING Seminar
t=0 0
c=IN IP4 192.0.2.1/127
a=group:DDP L1 L2 L3
m=video 40000 RTP/AVP 96 97
b=AS:90
a=framerate:15
a=rtpmap:96 H264/90000
a=rtpmap:97 H264/90000
a=mid:L1
m=video 40002 RTP/AVP 98 99
b=AS:64
a=framerate:15
a=rtpmap:98 H264-SVC/90000
a=rtpmap:99 H264-SVC/90000
a=mid:L2
a=depend:98 lay L1:96,97; 99 lay L1:97
m=video 40004 RTP/AVP 100 101
b=AS:128
a=framerate:30
a=rtpmap:100 H264-SVC/90000
a=rtpmap:101 H264-SVC/90000
a=mid:L3
a=depend:100 lay L1:96,97; 101 lay L1:97 L2:99
As defined in RFC 3550 and RFC 3551, RTP was designed to support multimedia sessions, containing multiple types of media sent simultaneously, by using multiple transport-layer flows, i.e., RTP sessions. This approach, however, is not always beneficial and can:
• increase delay to establish a complete session
• increase state and resource consumption in the middleboxes
• increase risk that a subset of the transport-layer flows will fail to be established
Therefore, in some cases using fewer RTP sessions can reduce the risk of communication failure and can lead to improved reliability and performance. It might seem appropriate for RTP-based applications to send all their RTP streams bundled into one RTP session, running over a single transport-layer flow. However, this was initially prohibited by the RTP specifications RFC 3550 and RFC 3551, because the design of RTP makes certain assumptions that can be incompatible with sending multiple media types in a single RTP session.
RFC 8860 updates RFC 3550 and RFC 3551 to allow sending an RTP session containing RTP streams with media from multiple media types such as audio, video, and text.
From a signalling perspective, it shall be
• ensured that any participant in the RTP session is aware that this is an RTP session with multiple media types;
• ensured that the payload types in use in the RTP session are using unique values, with no overlap between the media types;
• ensured that RTP session-level parameters - for example, the RTCP RR and RS bandwidth modifiers [RFC3556], the RTP/AVPF trr-int parameter [RFC4585], transport protocol, RTCP extensions in use, and any security parameters - are consistent across the session; and
• ensured that RTP and RTCP functions that can be bound to a particular media type are reused where possible, rather than configuring multiple code points for the same thing.
When using SDP signalling, the BUNDLE extension RFC8843 is used to signal RTP sessions containing multiple media types.
The RTP and RTCP packets are then demultiplexed into the different RTP streams based on their SSRC, while the RTP payload type is used to select the correct media-decoding pathway for each RTP stream. In case not enough payload type values are available, the urn:ietf:params:rtp-hdrext:sdes:mid RTP header extension from RFC 7941 can be used to associate RTP streams multiplexed on the same transport flow with their respective SDP media description, by providing a media description identifier that matches the value of the SDP a=mid attribute defined in RFC 5888.
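For illustration, this demultiplexing step can be sketched as follows (Python; it assumes the header extension elements have already been parsed into (id, data) pairs and that the extension id negotiated for the sdes:mid URI is known from the SDP):

def route_packet(header, extension_elements, streams, mid_ext_id, media_by_mid):
    # SSRC selects the RTP stream; the sdes:mid header extension, when
    # present, binds the stream to its SDP media description
    stream = streams.setdefault(header["ssrc"], {"media": None, "packets": []})
    for ext_id, data in extension_elements:
        if ext_id == mid_ext_id:
            stream["media"] = media_by_mid[data.decode("ascii")]
    stream["packets"].append(header)
    return stream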
SDP can be utilized in a scenario where two entities (i.e., server and client) negotiate a common understanding and setup of a multimedia session between them. In such a scenario one entity (e.g., the server) offers the other a description of the desired session (or possible options of a session) from its perspective, and the other participant answers with the desired session from its perspective. This offer/answer model is described in RFC 3264 and can be used by other protocols, for example the Session Initiation Protocol (SIP, RFC 3261).
A V3C bitstream is a composition of a number of video sub-bitstreams and atlas sub-bitstreams that can be identified by V3C unit headers. Video sub-bitstreams can be coded by well-known video coding standards such as AVC, HEVC, and VVC, which are NAL unit based and have well-defined RTP payload formats (RFC 6184, RFC 7798, and an internet draft, respectively).
As each V3C sub-bitstream has its own RTP payload format and can be separately represented in the SDP, the problem arises of how to indicate the relation between separate RTP sessions and how to pass to a receiver the information that is provided by the V3C parameter set and V3C unit headers.
The present embodiments provide a method to map a V3C bitstream to a session specific protocol, such as SDP, that allows signalling to a receiver all information required to receive separate RTP sessions and reconstruct the V3C bitstream based on the provided information, or to pass single RTP session data to appropriate decoders and reconstruct a volumetric frame.
In an embodiment of the solution, a V3C grouping parameter is introduced which comprises media streams to deliver the constituent sub-bitstreams of atlas, geometry, occupancy, and attribute. In some embodiments, the constituent sub-bitstreams may be delivered as separate RTP streams or multiplexed in a single RTP stream.
In an embodiment of the solution, a V3C mapping attribute is introduced which indicates the sub-bitstreams of atlas, geometry, occupancy and attribute, and related parameters.
In another embodiment of the solution, the V3C group parameter can also have a flag to indicate whether the playout time adjustment can be performed atomically for the individual sub-bitstreams; by default, the playout time adjustment will be performed simultaneously to maintain inter-stream sample synchronization.
Architecture
Figures 5 to 8 illustrate possible architectures on how a V3C bitstream can be delivered over a network using RTP protocols.
In an example shown in Figure 5, a V3C bitstream is provided to a server. The server is configured to demultiplex the V3C bitstream into a number of V3C sub-bitstreams. Each V3C sub-bitstream may be encapsulated to an appropriate RTP payload format and sent over a dedicated RTP session to a client. Alongside the RTP sessions, the server creates a session specific file format, such as an SDP file, that describes each RTP session as well as provides the novel information that allows a client to identify each RTP session and map it to the appropriate V3C sub-bitstream. Using the information provided by the SDP and the RTP sessions, a client is able to reconstruct the V3C bitstream and provide it to a V3C decoder/renderer.
SDP can be provided in a declarative manner where a client does not have any decision capability. Alternatively, in case a server has the capability to re-encode V3C sub-bitstreams, or the V3C sub-bitstreams are provided in a number of alternatives, the SDP may be used in offer/answer mode to allow a client to choose the most appropriate codecs.
The example of the architecture presented in Figure 6 is similar to the architecture presented in Figure 5, but the client does not reconstruct the V3C bitstream; instead, it sends each separate V3C sub-bitstream to a V3C decoder/renderer and signals at initialization the V3C unit header associated with the given V3C sub-bitstream. A client can also pass to a V3C decoder a V3C parameter set syntax element that does not have to be encapsulated in a V3C unit.
The examples of architectures presented in Figures 7 and 8 are similar to the architectures presented in Figures 5 and 6, respectively, with the difference that the server creates only one RTP session that multiplexes all V3C sub-bitstreams, and the session specific file format, such as SDP, provides the appropriate novel information that allows the client to interpret the payload types and reconstruct the V3C bitstream (Figure 7), or to pass each V3C sub-bitstream together with its associated V3C unit header (Figure 8) to the V3C decoder/renderer.
MEDIA LEVEL
According to an embodiment, a "v3cmtp" media attribute is defined: a=v3cmtp : <f ormat> <v3c specific media-level parameters>
The " v3cmtp " media attribute allows format-specific parameters to be conveyed about a given RTP Payload Type. If present, the <format> parameter MUST be one of the media formats (i.e., RTP payload types) specified for the media stream. The meaning of the <v3c specific media-level parameters> is defined in other embodiments.
“v3cmtp” attribute may be signaled for each payload type separately in the media description or it may be signaled once, to indicate that all payload types share the same “v3cmtp” attribute.
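For illustration, a hypothetical media description carrying the attribute could look as follows (the payload type, codec and parameter values are invented for this example, and the semicolon-separated parameter syntax mirrors that of "fmtp"):

m=video 40000 RTP/AVP 96
a=rtpmap:96 H265/90000
a=v3cmtp:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0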
According to an embodiment, a “v3c-unit-header” v3c specific parameter is defined:

v3c-unit-header=<value>
“v3c-unit-header” provides the V3C unit header bytes defined in ISO/IEC 23090-5. <value> contains the base16 [RFC 4648] (hexadecimal) representation of the 4 bytes of the V3C unit header.
Alternative encoding schemes may be provided for the <value>, such as ASCII, decimal, or base64 encoded strings.
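As a hedged illustration (the function is ours; field widths follow the v3c_unit_header() syntax of ISO/IEC 23090-5, and only the leading fields common to most unit types are decoded), a receiver could interpret the base16 <value> as follows:

# Sketch: decode the leading fields of a 4-byte V3C unit header carried
# as a base16 string, e.g. a=v3cmap:96 v3c-unit-header=10000000.
def parse_v3c_unit_header(hex_value: str):
    raw = int(hex_value, 16)                       # 4 bytes -> 32-bit integer
    vuh_unit_type = (raw >> 27) & 0x1F             # u(5)
    vuh_v3c_parameter_set_id = (raw >> 23) & 0x0F  # u(4), present for most types
    vuh_atlas_id = (raw >> 17) & 0x3F              # u(6), present for AD/OVD/GVD/AVD
    return vuh_unit_type, vuh_v3c_parameter_set_id, vuh_atlas_id

print(parse_v3c_unit_header("10000000"))  # -> (2, 0, 0): occupancy video data
print(parse_v3c_unit_header("18000000"))  # -> (3, 0, 0): geometry video data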
According to an embodiment, the information from the V3C unit header is provided as separate v3c specific parameters.
A “v3c-unit-type” v3c specific parameter is defined:

v3c-unit-type=<value>

“v3c-unit-type” provides the V3C unit type value corresponding to vuh_unit_type defined in ISO/IEC 23090-5, i.e., it defines the V3C sub-bitstream type. <value> contains the vuh_unit_type value.

A “v3c-vps-id” v3c specific parameter is defined:

v3c-vps-id=<value>

“v3c-vps-id” provides the value corresponding to vuh_v3c_parameter_set_id defined in ISO/IEC 23090-5. <value> contains the vuh_v3c_parameter_set_id value.

A “v3c-atlas-id” v3c specific parameter is defined:

v3c-atlas-id=<value>

“v3c-atlas-id” provides the value corresponding to vuh_atlas_id defined in ISO/IEC 23090-5. <value> contains the vuh_atlas_id value.

A “v3c-attr-idx” v3c specific parameter is defined:

v3c-attr-idx=<value>

“v3c-attr-idx” provides the value corresponding to vuh_attribute_index defined in ISO/IEC 23090-5. <value> contains the vuh_attribute_index value.

A “v3c-attr-part-idx” v3c specific parameter is defined:

v3c-attr-part-idx=<value>

“v3c-attr-part-idx” provides the value corresponding to vuh_attribute_partition_index defined in ISO/IEC 23090-5. <value> contains the vuh_attribute_partition_index value.

A “v3c-map-idx” v3c specific parameter is defined:

v3c-map-idx=<value>

“v3c-map-idx” provides the value corresponding to vuh_map_index defined in ISO/IEC 23090-5. <value> contains the vuh_map_index value.

A “v3c-aux-video-flag” v3c specific parameter is defined:

v3c-aux-video-flag=<value>

“v3c-aux-video-flag” provides the value corresponding to vuh_auxiliary_video_flag defined in ISO/IEC 23090-5. <value> contains the vuh_auxiliary_video_flag value.
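A short sketch (illustrative; the payload type 98 and the field values are the ones used for the attribute texture component in Example 1 below) of how these explicit parameters combine into one media-level attribute line:

# Sketch: compose the explicit v3c specific parameters for an attribute
# video sub-bitstream into one "a=v3cmap" line.
fields = {
    "v3c-unit-type": 4,   # V3C_AVD, attribute video data
    "v3c-vps-id": 0,
    "v3c-atlas-id": 0,
    "v3c-attr-idx": 0,
    "v3c-map-idx": 0,
}
line = "a=v3cmap:98 " + ";".join(f"{name}={value}" for name, value in fields.items())
print(line)
# a=v3cmap:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0;v3c-attr-idx=0;v3c-map-idx=0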
According to an embodiment, the bytes describing the V3C unit header defined in ISO/IEC 23090-5 are carried as part of an RTP header extension. A new identifier is defined to indicate that an RTP stream contains the header extension, as well as to describe how the header extension should be parsed. The new identifier is used with an extmap attribute in the media level of the SDP or other session specific file format:

urn:ietf:params:rtp-hdrext:v3c:vuh
The 8-bit ID is the local identifier, and the length field is as defined in RFC 5285. The 4 bytes of the RTP header extension contain the v3c_unit_header() structure as defined in ISO/IEC 23090-5.
[Figure: layout of the RTP header extension element carrying the v3c_unit_header() bytes]
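A minimal sketch of forming such an extension element, assuming the two-byte-header form of RFC 5285 implied by the 8-bit ID mentioned above (the helper name is ours; assembling the complete RTP extension block around the element is not shown):

# Sketch: a two-byte-header extension element (RFC 5285) carrying the
# 4-byte v3c_unit_header(): one byte of ID, one byte of length, then data.
def vuh_extension_element(ext_id: int, vuh: bytes) -> bytes:
    assert len(vuh) == 4, "v3c_unit_header() is 4 bytes"
    return bytes([ext_id, len(vuh)]) + vuh

# ext_id 2 as negotiated by "a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh"
element = vuh_extension_element(2, bytes.fromhex("10000000"))
print(element.hex())  # 020410000000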
According to an embodiment, the bytes describing the V3C parameter set defined in ISO/IEC 23090-5 are carried as part of an RTP header extension. A new identifier is defined to indicate that an RTP stream contains the header extension, as well as to describe how the header extension should be parsed. The new identifier is then used with an extmap attribute in the media level of the SDP or other session specific file format:

urn:ietf:params:rtp-hdrext:v3c:vps

The 8-bit ID is the local identifier, and the length field is as defined in RFC 5285. The N bytes of the RTP header extension contain the v3c_parameter_set() structure as defined in ISO/IEC 23090-5.
[Figure: layout of the RTP header extension element carrying the v3c_parameter_set() bytes]
SESSION LEVEL
According to an embodiment, a new token for the 'group' SDP attribute, equal to V3C, is defined. The 'group' attribute on the session level with the V3C token makes it possible to indicate which RTP session(s) originate from one V3C bitstream. When the 'group' SDP attribute indicates V3C, the tokens that follow are mapped to ‘mid’ values in the media levels. Additional v3c specific parameters, e.g., such as v3c-parameter-set, may exist:

a=group:V3C <tokens> <v3c specific session-level parameters>
According to an embodiment, a new 'group-v3c' SDP attribute is defined. The 'group-v3c' attribute on the session level makes it possible to indicate which RTP sessions originate from one V3C bitstream. The tokens that follow can contain tokens that are mapped to ‘mid’ values in the media levels, and can also contain v3c session-level specific parameters, e.g., such as v3c-parameter-set:

a=group-v3c <tokens> <v3c specific session-level parameters>

According to an embodiment, one either uses the new session level attribute a=group-v3c to indicate the relationship between media sources that belong to the same V3C content, or uses a new type (V3C) for the session level a=group:type attribute to indicate the same. The V3C session level attribute implies that the constituent media streams are required to be processed together to successfully decode the V3C content. This implication is signaled by the sender using V3C and acted upon by the receiver.
According to an embodiment, a “v3c-parameter-set” v3c specific parameter is defined:

v3c-parameter-set=<value>
“v3c-parameter-set” provides the V3C parameter set bytes as defined in ISO/IEC 23090-5. <value> contains the base16 [RFC 4648] (hexadecimal) representation of the V3C parameter set bytes.
Alternative encoding schemes may be provided for the <value>, such as ASCII, decimal, or base64 encoded strings.
According to an embodiment, profile, tier, and level information may be extracted from the V3C parameter set as separate v3c specific parameters.
A “v3c-ptl-level-idc” v3c specific parameter is defined:

v3c-ptl-level-idc=<value>

“v3c-ptl-level-idc” provides the value corresponding to ptl_level_idc defined in ISO/IEC 23090-5. <value> contains the ptl_level_idc value.

A “v3c-ptl-tier-flag” v3c specific parameter is defined:

v3c-ptl-tier-flag=<value>

“v3c-ptl-tier-flag” provides the value corresponding to ptl_tier_flag defined in ISO/IEC 23090-5. <value> contains the ptl_tier_flag value.

A “v3c-ptl-codec-idc” v3c specific parameter is defined:

v3c-ptl-codec-idc=<value>

“v3c-ptl-codec-idc” provides the value corresponding to ptl_profile_codec_group_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_codec_group_idc value.

A “v3c-ptl-toolset-idc” v3c specific parameter is defined:

v3c-ptl-toolset-idc=<value>

“v3c-ptl-toolset-idc” provides the value corresponding to ptl_profile_toolset_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_toolset_idc value.

A “v3c-ptl-rec-idc” v3c specific parameter is defined:

v3c-ptl-rec-idc=<value>

“v3c-ptl-rec-idc” provides the value corresponding to ptl_profile_reconstruction_idc defined in ISO/IEC 23090-5. <value> contains the ptl_profile_reconstruction_idc value.
According to an embodiment, an additional attribute is present in the ‘group’ SDP attribute with the V3C token. The attribute, referred to as “plad”, can have an integer value:

plad=<value>
A value of 0, or the absence of the “plad” token, results in the receiver synchronizing all the constituent sub-bitstreams without any inter-sub-bitstream adjustment. Playout adjustment for all the constituent sub-bitstreams corresponding to a single sample can be performed as deemed suitable by the receiver. If the value of “plad” is equal to 1, the reconstruction and rendering can proceed without waiting for the attribute (e.g., texture) sample data. In other words, at least the occupancy, depth, and atlas data need to be present to initiate rendering.
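The receiver-side rule can be summarized by the following sketch (illustrative only; the component names are ours, and "depth" stands for the geometry component):

# Sketch: decide when a V3C sample can be handed to the renderer,
# based on the "plad" value signaled in the group attribute.
def can_render(components: set, plad: int = 0) -> bool:
    required = {"occupancy", "depth", "atlas"}
    if plad != 1:                    # plad absent or 0: keep full inter-stream sync
        required = required | {"attribute"}
    return required.issubset(components)

print(can_render({"occupancy", "depth", "atlas"}, plad=1))  # True
print(can_render({"occupancy", "depth", "atlas"}, plad=0))  # False: wait for attribute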
Example 1 - An SDP with an RTP session per V3C component
The example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas. The example shows the grouping of the media descriptions, indicated by the "V3C" grouping and the "mid" attributes. Additionally, the grouping attribute "V3C" provides a V3C VPS utilizing the v3c-parameter-set parameter of the grouping line.
Each media description that is indicated by the “V3C” grouping should contain at least one "a=v3cmap" attribute carrying V3C related information. The attribute can contain a v3c-unit-header parameter or a number of explicit parameters that provide the same information, e.g., v3c-unit-type, v3c-vps-id, v3c-atlas-id, v3c-attr-idx, v3c-map-idx.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.1/127
a=group:V3C 1 2 3 4 v3c-parameter-set=AF6F00939921878 // dummy value for VPS
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=v3cmap:96 v3c-unit-header=10000000 // vuh_unit_type 2 (occupancy)
a=mid:1
m=video 40002 RTP/AVP 97
a=rtpmap:97 H264/90000
a=v3cmap:97 v3c-unit-header=18000000 // vuh_unit_type 3 (geometry)
a=mid:2
m=video 40004 RTP/AVP 98
a=rtpmap:98 H264/90000
a=v3cmap:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0;v3c-attr-idx=0;v3c-map-idx=0 // attribute texture
a=mid:3
m=video 40008 RTP/AVP 100
a=rtpmap:100 ATLAS/90000 // ATLAS identifier is not defined; hypothetical scenario
a=v3cmap:100 v3c-unit-type=1;v3c-vps-id=0
a=mid:4
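To illustrate how a client might consume the grouping line of this example, a small parsing sketch (the logic is ours; the VPS value is the dummy value from the SDP above):

# Sketch: split the "a=group:V3C" line into mid tokens and v3c specific
# session-level parameters.
line = "a=group:V3C 1 2 3 4 v3c-parameter-set=AF6F00939921878"
tokens = line.split(":", 1)[1].split()[1:]          # drop the "V3C" token
mids = [t for t in tokens if "=" not in t]
params = dict(t.split("=", 1) for t in tokens if "=" in t)
print(mids)                                         # ['1', '2', '3', '4']
print(params["v3c-parameter-set"])                  # dummy VPS bytes in base16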
Example 2 - An SDP with an RTP stream per V3C component multiplexed in one RTP session
The example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas. The example shows the grouping of the media descriptions, indicated by the "V3C" grouping and the "mid" attributes. Additionally, the "V3C" grouping provides the V3C VPS information utilizing the v3c-parameter-set parameter of the grouping line. The SDP also provides the “BUNDLE” grouping mechanism, which is required when multiple media are sent on the same transport flow.
Each media description that is indicated by the “V3C” grouping should contain at least one "a=v3cmap" attribute carrying V3C related information. The attribute can contain a v3c-unit-header parameter or a number of parameters that provide the explicit information, e.g., v3c-unit-type, v3c-vps-id, v3c-atlas-id, v3c-attr-idx, v3c-map-idx.
Additionally, the media descriptions for the V3C video components have the same payload type. In this situation, RTP and RTCP packets are demultiplexed into different RTP streams based on their SSRC. In order to select the correct media-decoding pathway for each RTP stream, the “extmap” attribute with the “urn:ietf:params:rtp-hdrext:sdes:mid” URI is used to map the RTP header extension to the correct media description. A client also knows that each RTP stream containing a V3C component has a unique SSRC value.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.1/127
a=group:BUNDLE 1 2 3 4
a=group:V3C 1 2 3 4 v3c-ptl-level-idc=10;v3c-parameter-set=AF6F00939921878
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=v3cmap:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
a=mid:1
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=v3cmap:96 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0
a=mid:2
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=v3cmap:96 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0
a=mid:3
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96
a=rtpmap:96 ATLAS/90000
a=v3cmap:96 v3c-unit-type=1;v3c-vps-id=0;v3c-atlas-id=0
a=mid:4
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
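Since all four media descriptions share payload type 96, a client separates the bundled RTP streams by SSRC and learns each SSRC's mid from the sdes:mid header extension; a minimal sketch of that bookkeeping (the SSRC values are invented for illustration):

# Sketch: bind each newly observed SSRC to the mid carried in its
# sdes:mid RTP header extension, so packets reach the right decoder path.
ssrc_to_mid = {}

def classify(ssrc: int, mid_extension: str) -> str:
    ssrc_to_mid.setdefault(ssrc, mid_extension)  # learned on first packet
    return ssrc_to_mid[ssrc]

print(classify(0x11111111, "1"))  # occupancy media description
print(classify(0x44444444, "4"))  # atlas media description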
Example 3 - An Offer/Answer

Offer
The offer example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas. The video components of V3C are offered to a client in three different coding alternatives: H.264, H.265, and H.266.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.1/127
a=group:BUNDLE 1 2 3 4
a=group:V3C 4 3 2 1 v3c-ptl-level-idc=10;v3c-parameter-set=AF6F00939921878
m=video 40000 RTP/AVP 96 97 98
a=rtpmap:96 H264/90000
a=rtpmap:97 H265/90000
a=rtpmap:98 H266/90000
a=v3cmap:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:97 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:98 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
a=mid:1
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96 97 98
a=rtpmap:96 H264/90000
a=rtpmap:97 H265/90000
a=rtpmap:98 H266/90000
a=v3cmap:96 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:97 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:98 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0
a=mid:2
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96 97 98
a=rtpmap:96 H264/90000
a=rtpmap:97 H265/90000
a=rtpmap:98 H266/90000
a=v3cmap:96 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:97 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0
a=v3cmap:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0
a=mid:3
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 40000 RTP/AVP 96
a=rtpmap:96 ATLAS/90000
a=v3cmap:96 v3c-unit-type=1;v3c-vps-id=0;v3c-atlas-id=0
a=mid:4
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
Answer
The client provides an SDP answer, where it selects a different video codec for each V3C video component.

v=0
o=svcclt 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.2/127
a=group:BUNDLE 1 2 3 4
a=group:V3C 4 3 2 1 v3c-ptl-level-idc=10;v3c-parameter-set=AF6F00939921878
m=video 0 RTP/AVP 96
a=rtpmap:96 H264/90000
a=v3cmap:96 v3c-unit-type=2;v3c-vps-id=0;v3c-atlas-id=0
a=mid:1
a=bundle-only
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
m=video 0 RTP/AVP 97
a=rtpmap:97 H265/90000
a=v3cmap:97 v3c-unit-type=3;v3c-vps-id=0;v3c-atlas-id=0
a=mid:2
a=bundle-only
m=video 0 RTP/AVP 98
a=rtpmap:98 H266/90000
a=v3cmap:98 v3c-unit-type=4;v3c-vps-id=0;v3c-atlas-id=0
a=mid:3
a=bundle-only
m=video 40000 RTP/AVP 96
a=rtpmap:96 ATLAS/90000
a=v3cmap:96 v3c-unit-type=1;v3c-vps-id=0;v3c-atlas-id=0
a=mid:4
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
Example 4 - An example of V3C RTP header extension usage
The example below shows a session description with one V3C object that is composed of four media descriptions that correspond to four V3C components: occupancy, geometry, attribute, and atlas. The SDP provides the “BUNDLE” grouping mechanism, which is required when multiple media are sent on the same transport flow [RFC 8843].
All V3C components have the same payload type. In this case, RTP and RTCP packets are demultiplexed into different RTP streams based on their SSRC. In order to select the correct media-decoding pathway for each RTP stream, a “urn:ietf:params:rtp-hdrext:sdes:mid” URI is added in the “extmap” attribute. Each RTP stream has an RTP header extension that carries the V3C unit header, identified by the “urn:ietf:params:rtp-hdrext:v3c:vuh” URI. The atlas RTP stream may additionally have an RTP header extension that carries the V3C VPS, identified by urn:ietf:params:rtp-hdrext:v3c:vps. The RTP header extensions are signaled using the “extmap” mapping attribute.

v=0
o=svcsrv 289083124 289083124 IN IP4 host.example.com
s=V3C SIGNALING
t=0 0
c=IN IP4 192.0.2.1/127
a=group:BUNDLE 1 2 3 4
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
a=mid:1
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
a=mid:2
m=video 40000 RTP/AVP 96
a=rtpmap:96 H264/90000
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
a=mid:3
m=video 40000 RTP/AVP 96
a=rtpmap:96 ATLAS/90000
a=extmap:1 urn:ietf:params:rtp-hdrext:sdes:mid
a=extmap:2 urn:ietf:params:rtp-hdrext:v3c:vuh
a=extmap:3 urn:ietf:params:rtp-hdrext:v3c:vps
a=mid:4
In the previous examples, SDP has been used as an example of a session specific file format.
The method according to an embodiment is shown in Figure 9. The method generally comprises receiving 905 a bitstream representing coded volumetric video; demultiplexing 910 the bitstream into a number of sub-bitstreams; encapsulating 915 the sub-bitstreams into a Real-time Transport Protocol (RTP) payload format; sending 920 the encapsulated sub-bitstreams over one or more RTP sessions to a client; and providing 925 to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams. Each of the steps can be implemented by a respective module of a computer system.
An apparatus according to an embodiment comprises means for receiving a bitstream representing coded volumetric video; means for demultiplexing the bitstream into a number of sub-bitstreams; means for encapsulating the sub-bitstreams into a Real-time Transport Protocol (RTP) payload format; means for sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and means for providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 9 according to various embodiments.
The method according to another embodiment is shown in Figure 10. The method generally comprises receiving 1005 RTP streams as one or more RTP sessions comprising one or more appropriate sub-bitstreams; decapsulating 1010 the RTP payload based on the media level description; associating 1015 constituent sub-bitstreams as belonging to a single V3C bitstream based on the session level description; and delivering 1020 the decapsulated payloads to the V3C decoder/renderer.
An apparatus according to an embodiment comprises means for receiving RTP streams as one or more RTP sessions comprising one or more appropriate sub-bitstreams; means for decapsulating the RTP payload based on the media level description; means for associating constituent sub-bitstreams as belonging to a single V3C bitstream based on the session level description; and means for delivering the decapsulated payloads to the V3C decoder/renderer. The means comprises at least one processor, and a memory including a computer program code, wherein the processor may further comprise processor circuitry. The memory and the computer program code are configured to, with the at least one processor, cause the apparatus to perform the method of Figure 10 according to various embodiments.

An example of an apparatus is disclosed with reference to Figure 11. Figure 11 shows a block diagram of a video coding system according to an example embodiment, as a schematic block diagram of an electronic device 50, which may incorporate a codec. In some embodiments the electronic device may comprise an encoder or a decoder. The electronic device 50 may for example be a mobile terminal or a user equipment of a wireless communication system or a camera device. The electronic device 50 may also be comprised in a local or a remote server or a graphics processing unit of a computer. The device may also be comprised as part of a head-mounted display device.

The apparatus 50 may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any display technology suitable for displaying an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, a speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as a solar cell, fuel cell or clockwork generator). The apparatus may further comprise a camera 42 capable of recording or capturing images and/or video. The camera 42 may be a multi-lens camera system having at least two camera sensors. The camera is capable of recording or detecting individual frames which are then passed to the codec 54 or the controller for processing. The apparatus may receive the video and/or image data for processing from another device prior to transmission and/or storage.
The apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50. The apparatus or the controller 56 may comprise one or more processors or processor circuitry and be connected to memory 58 which may store data in the form of image, video and/or audio data, and/or may also store instructions for implementation on the controller 56 or to be executed by the processors or the processor circuitry. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of image, video and/or audio data or assisting in coding and decoding carried out by the controller.
The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a UICC (Universal Integrated Circuit Card) and a UICC reader, for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network. The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals, for example for communication with a cellular communications network, a wireless communications system, or a wireless local area network. The apparatus 50 may further comprise an antenna 44 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es). The apparatus may comprise one or more wired interfaces configured to transmit and/or receive data over a wired connection, for example an electrical cable or an optical fiber connection.
The various embodiments can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the method. For example, a device may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving, and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of various embodiments.
If desired, the different functions discussed herein may be performed in a different order and/or concurrently with each other. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
It is also noted herein that while the above describes example embodiments, these descriptions should not be viewed in a limiting sense. Rather, there are several variations and modifications which may be made without departing from the scope of the present disclosure as defined in the appended claims.

Claims:
1. An apparatus comprising:
- means for receiving a bitstream representing coded volumetric video;
- means for demultiplexing the bitstream into a number of sub-bitstreams;
- means for encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format;
- means for sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and
- means for providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
2. The apparatus according to claim 1, further comprising means for creating an RTP session for each sub-bitstream.
3. The apparatus according to claim 2, further comprising means for creating a session specific file format describing each RTP session and providing information allowing a client to identify each RTP session and to map an RTP session to an appropriate sub-bitstream.
4. The apparatus according to claim 3, wherein the bitstream is a V3C bitstream, wherein the apparatus further comprises means for recording in the session specific file format, a new session level V3C specific attribute which groups together a number of V3C specific media sub-bitstreams.
5. The apparatus according to claim 3, wherein the bitstream is a V3C specific media stream, wherein the apparatus further comprises means for recording in a session level group attribute of the session specific file format, a new type, which groups together a number of V3C specific media sub-bitstreams.
6. The apparatus according to claim 4 or 5, further comprising means for recording a V3C specific token indicating which RTP session originates from the V3C specific bitstream.
7. The apparatus according to claim 4 or 5, further comprising means for recording in a session level attribute of the session specific file format, a parameter containing byte-data for a V3C parameter set and/or multiple parameters for V3C parameter set specific fields.
8. The apparatus according to claim 4 or 5, further comprising means for recording in the session specific file format, a new media level V3C specific attribute containing a parameter with byte-data representing V3C unit header and/or multiple parameters for V3C unit header specific fields.
9. The apparatus according to claim 4 or 5, further comprising
- means for recording byte-data representing a V3C parameter set as part of an RTP header extension,
- means for signaling in the session specific file format the presence of the V3C parameter set in the RTP header extension as a parameter in an rtpmap media level attribute.
10. The apparatus according to claim 4 or 5, further comprising
- means for recording byte-data representing a V3C unit header as part of an RTP header extension,
- means for recording in the session specific file format the presence of the V3C unit header in the RTP header extension as a parameter in an rtpmap media level attribute.
11. The apparatus according to claim 4 or 5, further comprising means for recording in a V3C specific session level attribute a parameter indicating if playout adjustment is required.
12. The apparatus according to any of the claims 1 to 11, further comprising means for creating one RTP session that multiplexes all sub-bitstreams.
13. The apparatus according to claim 12, further comprising means for creating a session specific file format providing information allowing a client to interpret payload types and reconstruct a bitstream representing the coded volumetric video.
14. The apparatus according to claim 12 or 13, further comprising means for creating a session specific file format providing information allowing a client to interpret payload types and de-multiplex them into a number of sub-bitstreams or reconstruct a single bitstream representing coded volumetric video.
15. A method, comprising:
- receiving a bitstream representing coded volumetric video;
- demultiplexing the bitstream into a number of sub-bitstreams;
- encapsulating the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format;
- sending the encapsulated sub-bitstreams over one or more RTP sessions to a client; and
- providing to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
16. An apparatus comprising at least one processor, memory including computer program code, the memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform at least the following:
- receive a bitstream representing coded volumetric video;
- demultiplex the bitstream into a number of sub-bitstreams;
- encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format;
- send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and
- provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.
17. The apparatus according to claim 16, further comprising computer program code to cause the apparatus to create an RTP session for each sub-bitstream.
18. The apparatus according to claim 17, further comprising computer program code to cause the apparatus to create a session specific file format describing each RTP session and providing information allowing a client to identify each RTP session and to map an RTP session to an appropriate sub-bitstream.
19. The apparatus according to claim 18, wherein the bitstream is a V3C bitstream, wherein the apparatus further comprises computer program code to cause the apparatus to record in the session specific file format, a new session level V3C specific attribute which groups together a number of V3C specific media sub-bitstreams.
20. The apparatus according to claim 18, wherein the bitstream is a V3C bitstream, wherein the apparatus further comprises computer program code to cause the apparatus to record in a session level group attribute of the session specific file format, a new type, which groups together a number of V3C specific media sub-bitstreams.
21. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to record a V3C specific token indicating which RTP session originates from the V3C specific bitstream.
22. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to record in a session level attribute of the session specific file format, a parameter containing byte-data for a V3C parameter set and/or multiple parameters for V3C parameter set specific fields.
23. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to record in the session specific file format, a new media level V3C specific attribute containing a parameter with byte-data representing V3C unit header and/or multiple parameters for V3C unit header specific fields.
24. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to
- record byte-data representing a V3C parameter set as part of an RTP header extension,
- signal in the session specific file format the presence of the V3C parameter set in the RTP header extension as a parameter in an rtpmap media level attribute.
25. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to
- record byte-data representing a V3C unit header as part of an RTP header extension,
- record in the session specific file format the presence of the V3C unit header in the RTP header extension as a parameter in an rtpmap media level attribute.
26. The apparatus according to claim 19 or 20, further comprising computer program code to cause the apparatus to record in a V3C specific session level attribute a parameter indicating if playout adjustment is required.
27. The apparatus according to any of the claims 16 to 26, further comprising computer program code to cause the apparatus to create one RTP session that multiplexes all sub-bitstreams.
28. The apparatus according to claim 27, further comprising computer program code to cause the apparatus to create a session specific file format providing information allowing a client to interpret payload types and reconstruct a bitstream representing the coded volumetric video.
29. The apparatus according to claim 27 or 28, further comprising computer program code to cause the apparatus to create a session specific file format providing information allowing a client to interpret payload types and demultiplex them into a number of sub-bitstreams or reconstruct a single bitstream representing coded volumetric video.
30. A computer program product comprising computer program code configured to, when executed on at least one processor, cause an apparatus or a system to
- receive a bitstream representing coded volumetric video;
- demultiplex the bitstream into a number of sub-bitstreams;
- encapsulate the sub-bitstreams to a Real-time Transport Protocol (RTP) payload format;
- send the encapsulated sub-bitstreams over one or more RTP sessions to a client; and
- provide to the client information allowing the client to map an RTP session to one or more appropriate sub-bitstreams.