WO2020201632A1 - An apparatus, a method and a computer program for omnidirectional video - Google Patents


Info

Publication number
WO2020201632A1
Authority
WO
WIPO (PCT)
Prior art keywords
track
quality
representation
recommended viewport
video
Application number
PCT/FI2020/050213
Other languages
French (fr)
Inventor
Sujeet Shyamsundar Mate
Igor Curcio
Miska Hannuksela
Emre Aksu
Kashyap KAMMACHI SREEDHAR
Original Assignee
Nokia Technologies Oy
Application filed by Nokia Technologies Oy
Publication of WO2020201632A1


Classifications

    • H04N 19/46: Embedding additional information in the video signal during the compression process
    • H04N 13/178: Metadata, e.g. disparity information
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 19/154: Measured or subjectively estimated visual quality after decoding, e.g. measurement of distortion
    • H04N 19/17: Adaptive coding characterised by the coding unit, the unit being an image region, e.g. an object
    • H04N 19/33: Hierarchical techniques, e.g. scalability, in the spatial domain
    • H04N 19/36: Scalability techniques involving formatting the layers as a function of picture distortion after decoding, e.g. signal-to-noise [SNR] scalability
    • H04N 19/597: Predictive coding specially adapted for multi-view video sequence encoding
    • H04N 21/21805: Source of audio or video content enabling multiple viewpoints, e.g. using a plurality of cameras
    • H04N 21/234345: Reformatting of video signals performed only on part of the stream, e.g. a region of the image or a time segment
    • H04N 21/23439: Reformatting of video signals for generating different versions
    • H04N 21/235: Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • H04N 21/4728: End-user interface for selecting a Region Of Interest [ROI], e.g. for requesting a higher resolution version of a selected region
    • H04N 21/6587: Control parameters, e.g. trick play commands, viewpoint selection
    • H04N 21/816: Monomedia components involving special video data, e.g. 3D video
    • H04N 21/8456: Structuring of content by decomposing the content in the time domain, e.g. in time segments

Definitions

  • The present invention relates to an apparatus, a method and a computer program for omnidirectional video/image coding, decoding, file writing, file reading, and delivery.
  • a video coding system may comprise an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
  • the encoder may discard some information in the original video sequence in order to represent the video in a more compact form, for example, to enable the storage/transmission of the video information at a lower bitrate than otherwise might be needed.
  • Video compression systems such as Advanced Video Coding standard (H.264/AVC), the Multiview Video Coding (MVC) extension of H.264/AVC or scalable extensions of HEVC (High Efficiency Video Coding) can be used.
  • Some embodiments provide a method for encoding and decoding video information.
  • a method, apparatus and computer program product for video coding as well as decoding are provided.
  • an additional DASH signaling mechanism is provided by some embodiments, whereby the recommended viewport content signaling is enhanced to make it better suited for consumption over conventional displays and to enable selection of high-quality media representations for the associated recommended viewport timed metadata track.
  • a method comprises obtaining a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas, and including, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
  • An apparatus comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
  • a computer readable storage medium comprises code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
  • a first circuitry configured to obtain a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas
  • a second circuitry configured to include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
  • a method according to a fifth aspect comprises receiving, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas, and selecting, based on the metadata, to process the track or the representation.
  • An apparatus comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
  • a computer readable storage medium comprises code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
  • An apparatus comprises:
  • a first circuitry configured to receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas;
  • a second circuitry configured to select, based on the metadata, to process the track or the representation.
  • Further aspects include at least apparatuses and computer program products/code stored on a non-transitory memory medium arranged to carry out the above methods.
  • Fig. 1a shows an example of a multi-camera system as a simplified block diagram, in accordance with an embodiment;
  • Fig. 1b shows a perspective view of a multi-camera system, in accordance with an embodiment;
  • Fig. 2a illustrates image stitching, projection, and mapping processes, in accordance with an embodiment;
  • Fig. 2b illustrates a process of forming a monoscopic equirectangular panorama picture, in accordance with an embodiment;
  • Fig. 3 shows an example of mapping a higher resolution sampled front face of a cube map on the same packed virtual reality frame as the other cube faces, in accordance with an embodiment;
  • Fig. 4a shows an example of image stitching, projection and region-wise packing;
  • Fig. 4b shows an example of a process of forming a monoscopic equirectangular panorama picture;
  • Fig. 5 shows an example of how extractor tracks can be used for tile-based omnidirectional video streaming, in accordance with an embodiment;
  • Fig. 6a illustrates an example of an omnidirectional video/image from an event, in accordance with an embodiment;
  • Fig. 6b illustrates a user's viewport of Fig. 6a, in accordance with an embodiment;
  • Fig. 6c illustrates an example of an omnidirectional video/image represented by an equirectangular projection, in accordance with an embodiment;
  • Fig. 7a shows an example of a hierarchical data model used in Dynamic Adaptive Streaming over HTTP (DASH);
  • Fig. 7b shows an example of an omnidirectional streaming system;
  • Fig. 7c shows an example of content flow in a DASH delivery function of the MPEG omnidirectional media format;
  • Fig. 8a shows a schematic diagram of an encoder suitable for implementing embodiments of the invention;
  • Fig. 8b shows a schematic diagram of a decoder suitable for implementing embodiments of the invention;
  • Fig. 9a shows some elements of a video encoding section, in accordance with an embodiment;
  • Fig. 9b shows a video decoding section, in accordance with an embodiment;
  • Fig. 10a shows a flow chart of an encoding method, in accordance with an embodiment;
  • Fig. 10b shows a flow chart of a decoding method, in accordance with an embodiment;
  • Fig. 11 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented;
  • Fig. 12 shows schematically an electronic device employing embodiments of the invention;
  • Fig. 13 shows schematically a user equipment suitable for employing embodiments of the invention;
  • Fig. 14 further shows schematically electronic devices employing embodiments of the invention connected using wireless and wired network connections.
  • the invention may be applicable to video coding systems like streaming systems, DVD players, digital television receivers, personal video recorders, systems and computer programs on personal computers, handheld computers and communication devices, as well as network elements such as transcoders and cloud computing arrangements where video data is handled.
  • the Advanced Video Coding standard (which may be abbreviated AVC or H.264/AVC) was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organisation for Standardization (ISO) / International Electrotechnical Commission (IEC).
  • the H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC).
  • High Efficiency Video Coding standard (which may be abbreviated HEVC or H.265/HEVC) was developed by the Joint Collaborative Team - Video Coding (JCT-VC) of VCEG and MPEG.
  • the standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC).
  • Extensions to H.265/HEVC include scalable, multiview, three-dimensional, and fidelity range extensions, which may be referred to as SHVC, MV-HEVC, 3D-HEVC, and REXT, respectively.
  • Some key definitions, bitstream and coding structures, and concepts of H.264/AVC and HEVC and some of their extensions are described in this section as an example of a video encoder, decoder, encoding method, decoding method, and a bitstream structure, wherein the embodiments may be implemented.
  • Some of the key definitions, bitstream and coding structures, and concepts of H.264/AVC are the same as in the HEVC standard - hence, they are described below jointly.
  • the aspects of the invention are not limited to H.264/AVC or HEVC or their extensions, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • a syntax element may be defined as an element of data represented in the bitstream.
  • a syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
  • bitstream syntax and semantics as well as the decoding process for error-free bitstreams are specified in H.264/AVC and HEVC.
  • the encoding process is not specified, but encoders must generate conforming bitstreams.
  • Bitstream and decoder conformance can be verified with the Hypothetical Reference Decoder (HRD).
  • the standards contain coding tools that help in coping with transmission errors and losses, but the use of the tools in encoding is optional and no decoding process has been specified for erroneous bitstreams.
  • the elementary unit for the input to an H.264/AVC or HEVC encoder and the output of an H.264/AVC or HEVC decoder, respectively, is a picture.
  • a picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture.
  • the source and decoded pictures may each be comprised of one or more sample arrays, such as one of the following sets of sample arrays:
  • Luma (Y) only (monochrome).
  • Luma and two chroma (YCbCr or YCgCo).
  • Green, Blue and Red (GBR, also known as RGB).
  • Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).
  • these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use.
  • the actual color representation method in use may be indicated in a coded bitstream, e.g. using the Video Usability Information (VUI) syntax of H.264/AVC and/or HEVC.
  • a component may be defined as an array or a single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.
  • a picture may either be a frame or a field.
  • a frame comprises a matrix of luma samples and possibly the corresponding chroma samples.
  • a field is a set of alternate sample rows of a frame. Fields may be used as encoder input for example when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or may be subsampled when compared to luma sample arrays.
  • In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array. In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
  • In 4:4:4 sampling, each of the two chroma arrays has the same height and width as the luma array.
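  • As an informal illustration (not part of the original description; the function name and structure are assumptions), the following Python sketch computes the chroma array dimensions implied by the sampling formats described above:

    def chroma_dimensions(luma_width, luma_height, chroma_format):
        """Return (width, height) of each chroma array for the given sampling format."""
        if chroma_format == "4:2:0":   # half width, half height
            return luma_width // 2, luma_height // 2
        if chroma_format == "4:2:2":   # half width, same height
            return luma_width // 2, luma_height
        if chroma_format == "4:4:4":   # same width, same height
            return luma_width, luma_height
        raise ValueError("monochrome pictures have no chroma arrays")

    # Example: a 1920x1080 picture in 4:2:0 sampling has two 960x540 chroma arrays.
    assert chroma_dimensions(1920, 1080, "4:2:0") == (960, 540)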
  • the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as a pre-processing step or as part of encoding).
  • the chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of VUI of H.264/AVC or HEVC.
  • the source video sequence(s) provided as input for encoding may either represent interlaced source content or progressive source content. Fields of opposite parity have been captured at different times for interlaced source content. Progressive source content contains captured frames.
  • An encoder may encode fields of interlaced source content in two ways: a pair of interlaced fields may be coded into a coded frame or a field may be coded as a coded field.
  • an encoder may encode frames of progressive source content in two ways: a frame of progressive source content may be coded into a coded frame or a pair of coded fields.
  • a field pair or a complementary field pair may be defined as two fields next to each other in decoding and/or output order, having opposite parity (i.e. one being a top field and the other being a bottom field).
  • Some video coding standards or schemes allow mixing of coded frames and coded fields in the same coded video sequence.
  • predicting a coded field from a field in a coded frame and/or predicting a coded frame for a complementary field pair may be enabled in encoding and/or decoding.
  • a partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets.
  • a picture partitioning may be defined as a division of a picture into smaller non-overlapping units.
  • a block partitioning may be defined as a division of a block into smaller non-overlapping units, such as sub-blocks.
  • the term block partitioning may be considered to cover multiple levels of partitioning, for example partitioning of a picture into slices, and partitioning of each slice into smaller units, such as macroblocks of H.264/AVC. It is noted that the same unit, such as a picture, may have more than one partitioning. For example, a coding unit of HEVC may be partitioned into prediction units and separately by another quadtree into transform units.
  • a coded picture is a coded representation of a picture.
  • Video coding standards and specifications may allow encoders to divide a coded picture into coded slices or alike. In H.264/AVC and HEVC, in-picture prediction may be disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture into independently decodable pieces, and slices are therefore often regarded as elementary units for transmission.
  • encoders may indicate in the bitstream which types of in-picture prediction are turned off across slice boundaries, and the decoder operation takes this information into account for example when concluding which prediction sources are available. For example, samples from a neighbouring macroblock or CU may be regarded as unavailable for intra prediction, if the neighbouring macroblock or CU resides in a different slice.
  • a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component.
  • a picture is partitioned to one or more slice groups, and a slice group contains one or more slices.
  • a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.
  • a coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning.
  • a coding tree block may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning.
  • a coding tree unit may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • a coding unit may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
  • video pictures are divided into coding units (CU) covering the area of the picture.
  • a CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU.
  • a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes.
  • a CU with the maximum allowed size may be named as LCU (largest coding unit) or coding tree unit (CTU) and the video picture is divided into non-overlapping LCUs.
  • An LCU can be further split into a combination of smaller CUs, e.g. by recursively splitting the LCU and resultant CUs.
  • Each resulting CU typically has at least one PU and at least one TU associated with it.
  • Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively.
  • Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g. motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs).
  • Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It is typically signaled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU.
  • the division of the image into CUs, and division of CUs into PUs and TUs is typically signaled in the bitstream allowing the decoder to reproduce the intended structure of these units.
  • a picture can be partitioned into tiles, which are rectangular and contain an integer number of CTUs.
  • the partitioning to tiles forms a grid that may be characterized by a list of tile column widths (in CTUs) and a list of tile row heights (in CTUs).
  • Tiles are ordered in the bitstream consecutively in the raster scan order of the tile grid.
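  • As an informal illustration (not part of the original description; names are assumptions), the tile grid characterization above can be turned into a lookup from a CTU position to the tile containing it, as sketched in Python below:

    import bisect

    def tile_index(ctu_x, ctu_y, tile_col_widths_in_ctus, tile_row_heights_in_ctus):
        """Return the raster-scan index of the tile containing CTU (ctu_x, ctu_y)."""
        col_bounds, acc = [], 0
        for w in tile_col_widths_in_ctus:   # cumulative right edges of tile columns
            acc += w
            col_bounds.append(acc)
        row_bounds, acc = [], 0
        for h in tile_row_heights_in_ctus:  # cumulative bottom edges of tile rows
            acc += h
            row_bounds.append(acc)
        tile_col = bisect.bisect_right(col_bounds, ctu_x)
        tile_row = bisect.bisect_right(row_bounds, ctu_y)
        # Tiles are ordered consecutively in the raster scan order of the tile grid.
        return tile_row * len(tile_col_widths_in_ctus) + tile_col

    # Example: a 2x2 tile grid with 5-CTU-wide columns and 4-CTU-high rows.
    assert tile_index(6, 1, [5, 5], [4, 4]) == 1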
  • a tile may contain an integer number of slices.
  • a slice consists of an integer number of CTUs.
  • the CTUs are scanned in the raster scan order of CTUs within tiles or within a picture, if tiles are not in use.
  • a slice may contain an integer number of tiles or a slice can be contained in a tile.
  • the CUs have a specific scan order.
  • a slice may be defined as an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit.
  • An independent slice segment may be defined as a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment.
  • a dependent slice segment may be defined as a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order. In other words, only the independent slice segment may have a "full" slice header.
  • An independent slice segment may be conveyed in one NAL unit (without other slice segments in the same NAL unit) and likewise a dependent slice segment may be conveyed in one NAL unit (without other slice segments in the same NAL unit).
  • a coded slice segment may be considered to comprise a slice segment header and slice segment data.
  • a slice segment header may be defined as part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment.
  • a slice header may be defined as the slice segment header of the independent slice segment that is a current slice segment or the most recent independent slice segment that precedes a current dependent slice segment in decoding order.
  • Slice segment data may comprise an integer number of coding tree unit syntax structures.
  • A Network Abstraction Layer (NAL) unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP, interspersed as necessary with emulation prevention bytes.
  • a raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit.
  • An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.
  • NAL units consist of a header and payload.
  • the NAL unit header indicates the type of the NAL unit and whether a coded slice contained in the NAL unit is a part of a reference picture or a non-reference picture.
  • H.264/AVC includes a 2-bit nal_ref_idc syntax element, which when equal to 0 indicates that a coded slice contained in the NAL unit is a part of a non-reference picture and when greater than 0 indicates that a coded slice contained in the NAL unit is a part of a reference picture.
  • the NAL unit header for SVC and MVC NAL units may additionally contain various indications related to the scalability and multiview hierarchy.
  • In HEVC, a two-byte NAL unit header is used for all specified NAL unit types.
  • the NAL unit header contains one reserved bit, a six-bit NAL unit type indication (called nal_unit_type), a six-bit reserved field (called nuh_layer_id) and a three-bit temporal_id_plus1 indication for temporal level.
  • The temporal_id_plus1 syntax element may be regarded as a temporal identifier for the NAL unit, and a zero-based TemporalId variable may be derived as TemporalId = temporal_id_plus1 - 1.
  • TemporalId equal to 0 corresponds to the lowest temporal level.
  • the value of temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes.
  • the bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming. Consequently, a picture having TemporalId equal to TID does not use any picture having a TemporalId greater than TID as an inter prediction reference.
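  • A minimal Python sketch (an illustration, not part of the original description; the NAL unit tuples are an assumed data shape) of parsing the two-byte HEVC NAL unit header fields listed above and of the temporal sub-layer pruning just described:

    def parse_hevc_nal_header(byte0, byte1):
        """Return (nal_unit_type, nuh_layer_id, TemporalId) from the two header bytes."""
        nal_unit_type = (byte0 >> 1) & 0x3F                   # six bits after the reserved bit
        nuh_layer_id = ((byte0 & 0x01) << 5) | (byte1 >> 3)   # six bits
        temporal_id_plus1 = byte1 & 0x07                      # three bits, required to be non-zero
        return nal_unit_type, nuh_layer_id, temporal_id_plus1 - 1

    def prune_temporal_sublayers(nal_units, highest_tid_to_keep):
        """Drop VCL NAL units whose TemporalId exceeds the highest sub-layer to retain."""
        kept = []
        for header, payload, is_vcl in nal_units:             # assumed pre-parsed NAL unit tuples
            _, _, tid = parse_hevc_nal_header(header[0], header[1])
            if is_vcl and tid > highest_tid_to_keep:
                continue                                      # excluded; the remaining bitstream stays conforming
            kept.append((header, payload, is_vcl))
        return kept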
  • a sub-layer or a temporal sub-layer may be defined to be a temporal scalable layer of a temporal scalable bitstream, consisting of VCL NAL units with a particular value of the TemporalId variable and the associated non-VCL NAL units.
  • the terms layer identifier, LayerId, nuh_layer_id and layer_id are used interchangeably unless otherwise indicated.
  • nuh_layer_id and/or similar syntax elements in the NAL unit header carry scalability layer information.
  • the LayerId value, nuh_layer_id and/or similar syntax elements may be mapped to values of variables or syntax elements describing different scalability dimensions.
  • NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units.
  • VCL NAL units are typically coded slice NAL units.
  • coded slice NAL units contain syntax elements representing one or more coded macroblocks, each of which corresponds to a block of samples in the uncompressed picture.
  • coded slice NAL units contain syntax elements representing one or more CU.
  • a non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit.
  • Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.
  • Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set. Examples of parameters that are required to be unchanged within a coded video sequence in many coding systems and hence included in a sequence parameter set are the width and height of the pictures included in the coded video sequence.
  • the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation.
  • a sequence parameter set RBSP includes parameters that can be referred to by one or more picture parameter set RBSPs or one or more SEI NAL units containing a buffering period SEI message.
  • a picture parameter set contains such parameters that are likely to be unchanged in several coded pictures.
  • a picture parameter set RBSP may include parameters that can be referred to by the coded slice NAL units of one or more coded pictures.
  • a video parameter set may be defined as a syntax structure containing syntax elements that apply to zero or more entire coded video sequences as determined by the content of a syntax element found in the SPS referred to by a syntax element found in the PPS referred to by a syntax element found in each slice segment header.
  • a video parameter set RBSP may include parameters that can be referred to by one or more sequence parameter set RBSPs.
  • VPS resides one level above SPS in the parameter set hierarchy and in the context of scalability and/or 3D video.
  • VPS may include parameters that are common for all slices across all (scalability or view) layers in the entire coded video sequence.
  • SPS includes the parameters that are common for all slices in a particular (scalability or view) layer in the entire coded video sequence, and may be shared by multiple (scalability or view) layers.
  • PPS includes the parameters that are common for all slices in a particular layer representation (the representation of one scalability or view layer in one access unit) and are likely to be shared by all slices in multiple layer representations.
  • VPS may provide information about the dependency relationships of the layers in a bitstream, as well as many other information that are applicable to all slices across all (scalability or view) layers in the entire coded video sequence.
  • VPS may be considered to comprise two parts, the base VPS and a VPS extension, where the VPS extension may be optionally present.
  • a SEI NAL unit may contain one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, rendering, error detection, error concealment, and resource reservation.
  • SEI messages are specified in H.264/AVC and HEVC, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use.
  • H.264/AVC and HEVC contain the syntax and semantics for the specified SEI messages but no process for handling the messages in the recipient is defined.
  • encoders are required to follow the H.264/AVC standard or the HEVC standard when they create SEI messages, and decoders conforming to the H.264/AVC standard or the HEVC standard, respectively, are not required to process SEI messages for output order conformance.
  • One of the reasons to include the syntax and semantics of SEI messages in H.264/AVC and HEVC is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
  • There are two types of SEI NAL units, namely the suffix SEI NAL unit and the prefix SEI NAL unit, having a different nal_unit_type value from each other.
  • the SEI message(s) contained in a suffix SEI NAL unit are associated with the VCL NAL unit preceding, in decoding order, the suffix SEI NAL unit.
  • the SEI message(s) contained in a prefix SEI NAL unit are associated with the VCL NAL unit following, in decoding order, the prefix SEI NAL unit.
  • a coded picture may be defined as a coded representation of a picture containing all coding tree units of the picture.
  • an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id.
  • an access unit may also contain non-VCL NAL units.
  • a coded picture with nuh_layer_id equal to nuhLayerIdA may be required to precede, in decoding order, all coded pictures with nuh_layer_id greater than nuhLayerIdA in the same access unit.
  • An AU typically contains all the coded pictures that represent the same output time and/or capturing time.
  • a bitstream may be defined as a sequence of bits, in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences.
  • a first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol.
  • An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams.
  • the end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream.
  • a byte stream format has been specified in H.264/AVC and HEVC for transmission or storage environments that do not provide framing structures.
  • the byte stream format separates NAL units from each other by attaching a start code in front of each NAL unit.
  • encoders run a byte-oriented start code emulation prevention algorithm, which adds an emulation prevention byte to the NAL unit payload if a start code would have occurred otherwise.
  • start code emulation prevention may always be performed regardless of whether the byte stream format is in use or not.
  • the bit order for the byte stream format may be specified to start with the most significant bit (MSB) of the first byte, proceed to the least significant bit (LSB) of the first byte, followed by the MSB of the second byte, etc.
  • the byte stream format may be considered to consist of a sequence of byte stream NAL unit syntax structures. Each byte stream NAL unit syntax structure may be considered to comprise one start code prefix followed by one NAL unit syntax structure, as well as trailing and/or heading padding bits and/or bytes.
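  • As a hedged illustration (not from the original description) of the byte-oriented start code emulation prevention mentioned above, the following Python sketch inserts an emulation prevention byte (0x03) whenever two zero bytes would otherwise be followed by a byte of value 0x00-0x03:

    def add_emulation_prevention(rbsp: bytes) -> bytes:
        out = bytearray()
        zero_run = 0
        for b in rbsp:
            if zero_run >= 2 and b <= 0x03:
                out.append(0x03)          # emulation prevention byte
                zero_run = 0
            out.append(b)
            zero_run = zero_run + 1 if b == 0x00 else 0
        return bytes(out)

    # Example: 00 00 01 would otherwise look like a start code prefix, so 0x03 is inserted.
    assert add_emulation_prevention(b"\x00\x00\x01") == b"\x00\x00\x03\x01"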
  • a motion-constrained tile set is such a set of one or more tiles that the inter prediction process is constrained in encoding such that no sample value outside the motion-constrained tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the motion-constrained tile set, is used for inter prediction of any sample within the motion-constrained tile set.
  • An MCTS may be required to be rectangular. Additionally, the encoding of an MCTS is constrained in a manner that motion vector candidates are not derived from blocks outside the MCTS.
  • sample locations used in inter prediction may be saturated so that a location that would be outside the picture otherwise is saturated to point to the corresponding boundary sample of the picture.
  • Therefore, if a tile boundary is also a picture boundary, motion vectors may effectively cross that boundary or a motion vector may effectively cause fractional sample interpolation that would refer to a location outside that boundary, since the sample locations are saturated onto the boundary.
  • encoders generating MCTSs may apply motion constraints to all tile boundaries of the MCTS, including picture boundaries.
  • the temporal motion-constrained tile sets SEI message of HEVC can be used to indicate the presence of motion-constrained tile sets in the bitstream.
  • a motion-constrained picture is such that the inter prediction process is constrained in encoding such that no sample value outside the picture, and no sample value at a fractional sample position that is derived using one or more sample values outside the picture, would be used for inter prediction of any sample within the picture and/or sample locations used for prediction need not be saturated to be within picture boundaries.
  • a view may be defined as a sequence of pictures representing one camera or viewpoint.
  • the pictures representing a view may also be called view components.
  • a view component may be defined as a coded representation of a view in a single access unit.
  • In multiview video coding, more than one view is coded in a bitstream. Since views are typically intended to be displayed on a stereoscopic or multiview autostereoscopic display or to be used for other 3D arrangements, they typically represent the same scene and are content-wise partly overlapping although representing different viewpoints to the content. Hence, inter-view prediction may be utilized in multiview video coding to take advantage of inter-view correlation and improve compression efficiency.
  • One way to realize inter-view prediction is to include one or more decoded pictures of one or more other views in the reference picture list(s) of a picture being coded or decoded residing within a first view.
  • View scalability may refer to such multiview video coding or multiview video bitstreams, which enable removal or omission of one or more coded views, while the resulting bitstream remains conforming and represents video with a smaller number of views than originally.
  • Frame packing may be defined to comprise arranging more than one input picture, which may be referred to as (input) constituent frames, into an output picture.
  • frame packing is not limited to any particular type of constituent frames, and the constituent frames need not have a particular relation with each other.
  • frame packing is used for arranging constituent frames of a stereoscopic video clip into a single picture sequence, as explained in more detail in the next paragraph.
  • the arranging may include placing the input pictures in spatially non-overlapping areas within the output picture. For example, in a side-by-side arrangement, two input pictures are placed within an output picture horizontally adjacently to each other.
  • the arranging may also include partitioning of one or more input pictures into two or more constituent frame partitions and placing the constituent frame partitions in spatially non-overlapping areas within the output picture.
  • the output picture or a sequence of frame-packed output pictures may be encoded into a bitstream e.g. by a video encoder.
  • the bitstream may be decoded e.g. by a video decoder.
  • the decoder or a post-processing operation after decoding may extract the decoded constituent frames from the decoded picture(s) e.g. for displaying.
  • a spatial packing of a stereo pair into a single frame is performed at the encoder side as a pre-processing step for encoding and then the frame-packed frames are encoded with a conventional 2D video coding scheme.
  • the output frames produced by the decoder contain constituent frames of a stereo pair.
  • Typically, the original frames of each view and the packaged single frame have the same spatial resolution.
  • In this case, the encoder downsamples the two views of the stereoscopic video before the packing operation.
  • the spatial packing may use for example a side-by-side or top-bottom format, and the downsampling should be performed accordingly.
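  • As a hedged illustration (not part of the original description), side-by-side packing with horizontal downsampling can be sketched as follows; the function and the simple column-dropping decimation are assumptions, since a real encoder would typically filter before downsampling:

    import numpy as np

    def pack_side_by_side(left_view: np.ndarray, right_view: np.ndarray) -> np.ndarray:
        """Pack two (height, width) views into one frame of the same width."""
        left_half = left_view[:, ::2]     # keep every second column of the left view
        right_half = right_view[:, ::2]   # keep every second column of the right view
        # Constituent frames are placed in spatially non-overlapping areas.
        return np.concatenate([left_half, right_half], axis=1)

    # Example: two 1080x1920 views become a single 1080x1920 frame-packed picture.
    assert pack_side_by_side(np.zeros((1080, 1920)), np.ones((1080, 1920))).shape == (1080, 1920)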
  • a uniform resource identifier (URI) may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols.
  • a URI is defined through a scheme specifying a concrete syntax and associated protocol for the URI.
  • the uniform resource locator (URL) and the uniform resource name (URN) are forms of URI.
  • a URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location.
  • a URN may be defined as a URI that identifies a resource by name in a particular namespace.
  • a URN may be used for identifying a resource without implying its location or how to access it.
  • the term requesting locator may be defined as an identifier that can be used to request a resource, such as a file or a segment.
  • a requesting locator may, for example, be a URL or specifically an HTTP URL.
  • a client may use a requesting locator with a communication protocol, such as HTTP, to request a resource from a server or a sender.
  • Available media file format standards include the ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF), the MPEG-4 file format (ISO/IEC 14496-14, also known as the MP4 format), the file format for NAL unit structured video (ISO/IEC 14496-15), and the 3GPP file format (3GPP TS 26.244, also known as the 3GP format).
  • ISOBMFF is the base for derivation of all the above mentioned file formats (excluding the ISOBMFF itself).
  • Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which some embodiments may be implemented.
  • the aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • a basic building block in the ISO base media file format is called a box.
  • Each box has a header and a payload.
  • the box header indicates the type of the box and the size of the box in terms of bytes.
  • a box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file.
  • the ISO base media file format may be considered to specify a hierarchical structure of boxes.
  • a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
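  • As a hedged illustration (not from the original description), the box header structure just described can be walked with a few lines of Python; the handling of a 64-bit largesize (size field equal to 1) and of a size field equal to 0 (box extends to the end of the enclosing container) follows ISOBMFF:

    import struct

    def iter_boxes(data: bytes, offset: int = 0, end: int = None):
        """Yield (four_character_code, payload) for each box in data[offset:end]."""
        end = len(data) if end is None else end
        while offset + 8 <= end:
            size, box_type = struct.unpack_from(">I4s", data, offset)
            header_len = 8
            if size == 1:                 # 64-bit largesize follows the box type
                size = struct.unpack_from(">Q", data, offset + 8)[0]
                header_len = 16
            elif size == 0:               # box extends to the end of the container
                size = end - offset
            yield box_type.decode("ascii", errors="replace"), data[offset + header_len:offset + size]
            offset += size

    # Example usage (hypothetical file name):
    # with open("example.mp4", "rb") as f:
    #     print([box_type for box_type, _ in iter_boxes(f.read())])   # e.g. ['ftyp', 'moov', 'mdat']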
  • the media data may be provided in a media data 'mdat' box and the movie 'moov' box may be used to enclose the metadata.
  • the movie 'moov' box may include one or more tracks, and each track may reside in one corresponding track 'trak' box.
  • a track may be one of the many types, including a media track that refers to samples formatted according to a media compression format (and its encapsulation to the ISO base media file format).
  • a track may be regarded as a logical channel.
  • Movie fragments may be used e.g. when recording content to ISO files e.g. in order to avoid losing data if a recording application crashes, runs out of memory space, or some other incident occurs. Without movie fragments, data loss may occur because the file format may require that all metadata, e.g., the movie box, be written in one contiguous area of the file. Furthermore, when recording a file, there may not be sufficient amount of memory space (e.g., random access memory RAM) to buffer a movie box for the size of the storage available, and re-computing the contents of a movie box when the movie is closed may be too slow. Moreover, movie fragments may enable simultaneous recording and playback of a file using a regular ISO file parser. Furthermore, a smaller duration of initial buffering may be required for progressive downloading, e.g., simultaneous reception and playback of a file when movie fragments are used and the initial movie box is smaller compared to a file with the same media content but structured without movie fragments.
  • the movie fragment feature may enable splitting the metadata that otherwise might reside in the movie box into multiple pieces. Each piece may correspond to a certain period of time of a track. In other words, the movie fragment feature may enable interleaving file metadata and media data. Consequently, the size of the movie box may be limited and the use cases mentioned above be realized.
  • the media samples for the movie fragments may reside in an mdat box.
  • a moof box may be provided.
  • the moof box may include the information for a certain duration of playback time that would previously have been in the moov box.
  • the moov box may still represent a valid movie on its own, but in addition, it may include an mvex box indicating that movie fragments will follow in the same file.
  • the movie fragments may extend the presentation that is associated to the moov box in time.
  • Within the movie fragment there may be a set of track fragments, including anywhere from zero to a plurality per track.
  • the track fragments may in turn include anywhere from zero to a plurality of track runs, each of which documents a contiguous run of samples for that track (and hence they are similar to chunks).
  • many fields are optional and can be defaulted.
  • the metadata that may be included in the moof box may be limited to a subset of the metadata that may be included in a moov box and may be coded differently in some cases. Details regarding the boxes that can be included in a moof box may be found from the ISOBMFF specification.
  • a self-contained movie fragment may be defined to consist of a moof box and an mdat box that are consecutive in the file order and where the mdat box contains the samples of the movie fragment (for which the moof box provides the metadata) and does not contain samples of any other movie fragment (i.e. any other moof box).
  • the TrackBox (‘trak’ box) includes in its hierarchy of boxes the SampleDescriptionBox, which gives detailed information about the coding type used, and any initialization information needed for that coding.
  • the SampleDescriptionBox contains an entry-count and as many sample entries as the entry-count indicates.
  • the format of sample entries is track-type specific but derived from generic classes (e.g. VisualSampleEntry, AudioSampleEntry). Which type of sample entry form is used for derivation of the track-type specific sample entry format is determined by the media handler of the track.
  • the track reference mechanism can be used to associate tracks with each other.
  • the TrackReferenceBox includes box(es), each of which provides a reference from the containing track to a set of other tracks. These references are labeled through the box type (i.e. the four-character code of the box) of the contained box(es).
  • the ISO Base Media File Format contains three mechanisms for timed metadata that can be associated with particular samples: sample groups, timed metadata tracks, and sample auxiliary information. Derived specifications may provide similar functionality with one or more of these three mechanisms.
  • a sample grouping in the ISO base media file format and its derivatives may be defined as an assignment of each sample in a track to be a member of one sample group, based on a grouping criterion.
  • a sample group in a sample grouping is not limited to being contiguous samples and may contain non-adjacent samples. As there may be more than one sample grouping for the samples in a track, each sample grouping may have a type field to indicate the type of grouping.
• Sample groupings may be represented by two linked data structures: (1) a SampleToGroupBox (sbgp box) represents the assignment of samples to sample groups; and (2) a SampleGroupDescriptionBox (sgpd box) contains a sample group entry for each sample group describing the properties of the group. There may be multiple instances of the SampleToGroupBox and SampleGroupDescriptionBox based on different grouping criteria. These may be distinguished by a type field used to indicate the type of grouping. SampleToGroupBox may comprise a grouping_type_parameter field that can be used e.g. to indicate a sub-type of the grouping.
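• As an illustration of how the two linked structures may be resolved, the following Python sketch maps a zero-based sample index to its sample group description using run-length entries of a SampleToGroupBox; the inputs are hypothetical, pre-parsed simplifications rather than a parser for the actual box syntax.

    # Minimal sketch: resolving a sample's group description from
    # SampleToGroupBox ('sbgp') run-length entries and the
    # SampleGroupDescriptionBox ('sgpd') entry list.
    def group_description_for_sample(sample_index, sbgp_entries, sgpd_entries):
        """sbgp_entries: list of (sample_count, group_description_index),
        where index 0 means 'no group of this type'; sgpd_entries: list of
        group description entries, 1-based indexing as in ISOBMFF."""
        first = 0
        for sample_count, group_description_index in sbgp_entries:
            if first <= sample_index < first + sample_count:
                if group_description_index == 0:
                    return None  # sample not mapped to any group of this type
                return sgpd_entries[group_description_index - 1]
            first += sample_count
        return None  # beyond the described samples

    # Example: samples 0..9 mapped to group entry 1, samples 10..14 ungrouped.
    sbgp = [(10, 1), (5, 0)]
    sgpd = [{"sap_type": 1}]
    print(group_description_for_sample(3, sbgp, sgpd))   # {'sap_type': 1}
    print(group_description_for_sample(12, sbgp, sgpd))  # None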
• a track group enables grouping of tracks based on certain characteristics, or indicates that the tracks within a group have a particular relationship. Track grouping, however, does not allow any image items in the group.
• the syntax of the TrackGroupBox in ISOBMFF is: class TrackGroupBox extends Box('trgr') { }, and the box contains zero or more TrackGroupTypeBox instances carrying a track_group_id.
• track_group_type indicates the grouping type and may be set to a value specified in ISOBMFF, a registered value, or a value from a derived specification or registration.
• an example value is 'msrc', which indicates that this track belongs to a multi-source presentation.
• the tracks that have the same value of track_group_id within a TrackGroupTypeBox of track_group_type 'msrc' are mapped as originating from the same source. For example, a recording of a video telephony call may have both audio and video for both participants, and the value of track_group_id associated with the audio track and the video track of one participant differs from the value of track_group_id associated with the tracks of the other participant.
• the pair of track_group_id and track_group_type identifies a track group within the file.
• the tracks that contain a particular TrackGroupTypeBox having the same value of track_group_id and track_group_type belong to the same track group.
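• For illustration, the following sketch groups tracks by the pair of track_group_type and track_group_id; the track records are hypothetical, pre-parsed inputs rather than parsed TrackGroupTypeBox instances.

    from collections import defaultdict

    # Minimal sketch: tracks carrying a TrackGroupTypeBox with equal
    # track_group_type and track_group_id belong to the same track group.
    def build_track_groups(tracks):
        """tracks: list of dicts like
        {"track_id": 1, "track_groups": [("msrc", 100)]} (hypothetical)."""
        groups = defaultdict(list)
        for track in tracks:
            for group_type, group_id in track.get("track_groups", []):
                groups[(group_type, group_id)].append(track["track_id"])
        return dict(groups)

    # Example: audio and video of participant A share id 100, participant B id 200.
    tracks = [
        {"track_id": 1, "track_groups": [("msrc", 100)]},  # A video
        {"track_id": 2, "track_groups": [("msrc", 100)]},  # A audio
        {"track_id": 3, "track_groups": [("msrc", 200)]},  # B video
        {"track_id": 4, "track_groups": [("msrc", 200)]},  # B audio
    ]
    print(build_track_groups(tracks))
    # {('msrc', 100): [1, 2], ('msrc', 200): [3, 4]}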
  • Entity grouping is similar to track grouping but enables grouping of both tracks and image items in the same group.
• the EntityToGroupBox in ISOBMFF comprises the following fields:
• group_id is a non-negative integer assigned to the particular grouping that may not be equal to any group_id value of any other EntityToGroupBox, any item_ID value of the hierarchy level (file, movie or track) that contains the GroupsListBox, or any track_ID value (when the GroupsListBox is contained in the file level).
• num_entities_in_group specifies the number of entity_id values mapped to this entity group. An entity_id is resolved to an item, when an item with item_ID equal to entity_id is present in the hierarchy level (file, movie or track) that contains the GroupsListBox, or to a track, when a track with track_ID equal to entity_id is present and the GroupsListBox is contained in the file level.
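• The resolution rule for entity_id values may be sketched as follows; the item and track identifier sets are hypothetical inputs standing in for what a parser would collect from the hierarchy level containing the GroupsListBox.

    # Minimal sketch: resolving entity_id values of an EntityToGroupBox
    # either to items or to tracks, following the rule described above.
    def resolve_entity(entity_id, item_ids, track_ids, groupslist_at_file_level):
        """item_ids: item_ID values present at the hierarchy level containing
        the GroupsListBox; track_ids: track_ID values of the file."""
        if entity_id in item_ids:
            return ("item", entity_id)
        if groupslist_at_file_level and entity_id in track_ids:
            return ("track", entity_id)
        return ("unresolved", entity_id)

    # Example: an entity group mixing one image item and one track.
    print(resolve_entity(10, item_ids={10, 11}, track_ids={1, 2},
                         groupslist_at_file_level=True))   # ('item', 10)
    print(resolve_entity(2, item_ids={10, 11}, track_ids={1, 2},
                         groupslist_at_file_level=True))   # ('track', 2)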
• Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a meta box (four-character code: 'meta'). While the name of the meta box refers to metadata, items can generally contain metadata or media data.
• the meta box may reside at the top level of the file, within a movie box (four-character code: 'moov'), and within a track box (four-character code: 'trak'), but at most one meta box may occur at each of the file level, movie level, or track level.
• the meta box may be required to contain a HandlerBox ('hdlr') box indicating the structure or format of the 'meta' box contents.
• the meta box may list and characterize any number of items that can be referred to, and each of them can be associated with a file name and is uniquely identified within the file by an item identifier (item_id), which is an integer value.
• the metadata items may be for example stored in the ItemDataBox ('idat') box of the meta box or in an 'mdat' box, or reside in a separate file. If the metadata is located external to the file, then its location may be declared by the DataInformationBox (four-character code: 'dinf').
• XML Extensible Markup Language
• in the MetaBox, the metadata may be encapsulated into either the XMLBox (four-character code: 'xml') or the BinaryXMLBox (four-character code: 'bxml').
  • An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g. to enable interleaving.
  • An extent is a contiguous subset of the bytes of the resource. The resource can be formed by concatenating the extents.
  • ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties may be regarded as small data records.
  • the ItemPropertiesBox consists of two parts:
  • ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties.
  • the restricted video ('resv') sample entry and mechanism has been specified for the ISOBMFF in order to handle situations where the file author requires certain actions on the player or renderer after decoding of a visual track. Players not recognizing or not capable of processing the required actions are stopped from decoding or rendering the restricted video tracks.
  • the 'resv' sample entry mechanism applies to any type of video codec.
• a RestrictedSchemeInfoBox is present in the sample entry of 'resv' tracks and comprises an OriginalFormatBox, a SchemeTypeBox, and a SchemeInformationBox. The original sample entry type that would have been used, had the 'resv' sample entry type not been used, is contained in the OriginalFormatBox.
• the SchemeTypeBox provides an indication of which type of processing is required in the player to process the video.
• the SchemeInformationBox comprises further information about the required processing.
• the scheme type may impose requirements on the contents of the SchemeInformationBox.
• the stereo video scheme indicated in the SchemeTypeBox indicates that decoded frames either contain a representation of two spatially packed constituent frames that form a stereo pair (frame packing) or only one view of a stereo pair (left and right views in different tracks).
• a StereoVideoBox may be contained in the SchemeInformationBox to provide further information, e.g. on which type of frame packing arrangement has been used (e.g. side-by-side or top-bottom).
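• Player-side handling of the 'resv' mechanism may be sketched as below; the sample entry is represented by a hypothetical dictionary, and the set of supported scheme types and codecs is an assumption of this sketch.

    # Minimal sketch: a player decides whether it can process a restricted
    # video track based on RestrictedSchemeInfoBox content.
    SUPPORTED_SCHEMES = {"stvi"}   # e.g. a stereo video scheme (assumption)

    def can_play_restricted_track(sample_entry):
        """sample_entry: hypothetical dict, e.g.
        {"type": "resv", "original_format": "hvc1",
         "scheme_type": "stvi", "scheme_info": {"frame_packing": "side-by-side"}}"""
        if sample_entry["type"] != "resv":
            return True  # not restricted, normal processing
        if sample_entry["scheme_type"] not in SUPPORTED_SCHEMES:
            return False  # required post-processing not supported: do not decode
        # The original codec type is recovered from the OriginalFormatBox.
        return sample_entry["original_format"] in {"avc1", "hvc1", "hev1"}

    entry = {"type": "resv", "original_format": "hvc1",
             "scheme_type": "stvi", "scheme_info": {"frame_packing": "top-bottom"}}
    print(can_play_restricted_track(entry))  # True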
  • the Matroska file format is capable of (but not limited to) storing any of video, audio, picture, or subtitle tracks in one file.
  • Matroska may be used as a basis format for derived file formats, such as WebM.
  • Matroska uses Extensible Binary Meta Language (EBML) as basis.
  • EBML specifies a binary and octet (byte) aligned format inspired by the principle of XML.
  • EBML itself is a generalized description of the technique of binary markup.
  • a Matroska file consists of Elements that make up an EBML "document.” Elements incorporate an Element ID, a descriptor for the size of the element, and the binary data itself. Elements can be nested.
  • a Segment Element of Matroska is a container for other top-level (level 1) elements.
• a Matroska file may be composed of (but is not limited to) one Segment.
  • Multimedia data in Matroska files is organized in Clusters (or Cluster Elements), each containing typically a few seconds of multimedia data.
  • a Cluster comprises BlockGroup elements, which in turn comprise Block Elements.
  • a Cues Element comprises metadata which may assist in random access or seeking and may include file pointers or respective timestamps for seek points.
• Adaptive HTTP streaming was first standardized in Release 9 of the 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service (3GPP TS 26.234 Release 9: "Transparent end-to-end packet-switched streaming service (PSS); protocols and codecs").
  • 3GPP 3rd Generation Partnership Project
• PSS packet-switched streaming
• MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats," International Standard, 2nd Edition, 2014).
  • MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH.
  • Some concepts, formats, and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented.
  • the aspects of the invention are not limited to DASH, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
  • the multimedia content may be stored on an HTTP server and may be delivered using HTTP.
  • the content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single or multiple files.
  • MPD Media Presentation Description
• the MPD provides the necessary information for clients to establish a dynamic adaptive streaming over HTTP.
• the MPD contains information describing the media presentation, such as an HTTP uniform resource locator (URL) of each Segment for making a GET Segment request.
• URL uniform resource locator
  • the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods.
  • the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
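• As an illustration of the adaptation behaviour described above, the following Python sketch picks an encoded alternative based on a measured bandwidth; the Representation records and the throughput figures are hypothetical stand-ins for MPD parsing and network measurement, not part of the DASH specification.

    # Minimal sketch: a DASH-style rate adaptation decision. Representations
    # are hypothetical dicts with @bandwidth-like values in bits per second.
    def select_representation(representations, measured_bandwidth, safety=0.8):
        """Pick the highest-bitrate alternative that fits within a safety
        margin of the measured bandwidth; fall back to the lowest one."""
        feasible = [r for r in representations
                    if r["bandwidth"] <= measured_bandwidth * safety]
        if feasible:
            return max(feasible, key=lambda r: r["bandwidth"])
        return min(representations, key=lambda r: r["bandwidth"])

    representations = [{"id": "low", "bandwidth": 1_000_000},
                       {"id": "mid", "bandwidth": 3_000_000},
                       {"id": "high", "bandwidth": 8_000_000}]

    for measured in (10_000_000, 2_500_000, 800_000):  # simulated throughput
        chosen = select_representation(representations, measured)
        print(measured, "->", chosen["id"])
    # 10000000 -> high, 2500000 -> low, 800000 -> low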
  • DRM digital rights management
• a media presentation consists of a sequence of one or more Periods; each Period contains one or more Groups; each Group contains one or more Adaptation Sets; each Adaptation Set contains one or more Representations; each Representation consists of one or more Segments.
  • a Representation is one of the alternative choices of the media content or a subset thereof typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc.
• the Segment contains a certain duration of media data, and metadata to decode and present the included media content.
• a Segment is identified by a URI and can typically be requested by an HTTP GET request.
• a Segment may be defined as a unit of data associated with an HTTP URL and optionally a byte range that are specified by an MPD.
  • the DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML.
• the MPD may be specified using the following conventions: Elements in an XML document may be identified by an upper-case first letter and may appear in bold face as Element. To express that an element Element1 is contained in another element Element2, one may write Element2.Element1. If an element's name consists of two or more combined words, camel-casing may be used, e.g. ImportantElement. Elements may be present either exactly once, or the minimum and maximum occurrence may be defined by <minOccurs> and <maxOccurs>.
  • Attributes in an XML document may be identified by a lower-case first letter as well as they may be preceded by a‘@’-sign, e.g. @attribute.
• Attributes may be assigned a status in the XML as mandatory (M), optional (O), optional with default value (OD) and conditionally mandatory (CM).
  • M mandatory
  • O optional
  • OD optional with default value
  • CM conditionally mandatory
  • an independent representation may be defined as a representation that can be processed independently of any other representations.
  • An independent representation may be understood to comprise an independent bitstream or an independent layer of a bitstream.
  • a dependent representation may be defined as a representation for which Segments from its complementary representations are necessary for presentation and/or decoding of the contained media content components.
  • a dependent representation may be understood to comprise e.g. a predicted layer of a scalable bitstream.
  • a complementary representation may be defined as a representation which complements at least one dependent representation.
  • a complementary representation may be an independent representation or a dependent representation.
• Dependent Representations may be described by a Representation element that contains a @dependencyId attribute. Dependent Representations can be regarded as regular Representations except that they depend on a set of complementary Representations for decoding and/or presentation.
• the @dependencyId contains the values of the @id attribute of all the complementary Representations, i.e. Representations that are necessary to present and/or decode the media content components contained in this dependent Representation.
  • a media content component or a media component may be defined as one continuous component of the media content with an assigned media component type that can be encoded individually into a media stream.
  • Media content may be defined as one media content period or a contiguous sequence of media content periods.
  • Media content component type may be defined as a single type of media content such as audio, video, or text.
  • a media stream may be defined as an encoded version of a media content component.
  • An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments.
• an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
• a Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration.
• the content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests.
• a Segment can be requested by a DASH client only when the whole duration of the Media Segment is available as well as encoded and encapsulated into a Segment.
  • different strategies of selecting Segment duration may be used.
  • DASH supports rate adaptation by dynamically requesting Media Segments from different Representations within an Adaptation Set to match varying network bandwidth.
• coding dependencies within a Representation have to be taken into account.
  • a Representation switch may only happen at a random access point (RAP), which is typically used in video coding techniques such as H.264/AVC.
  • RAP random access point
  • SAP Stream Access Point
  • a SAP is specified as a position in a Representation that enables playback of a media stream to be started using only the information contained in Representation data starting from that position onwards (preceded by initialising data in the Initialisation Segment, if any). Hence, Representation switching can be performed in SAP.
  • SAP Type 1 corresponds to what is known in some coding schemes as a“Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps) and in addition the first picture in decoding order is also the first picture in presentation order.
  • SAP Type 2 corresponds to what is known in some coding schemes as a“Closed GOP random access point” (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps), for which the first picture in decoding order may not be the first picture in presentation order.
• SAP Type 3 corresponds to what is known in some coding schemes as an "Open GOP random access point", in which there may be some pictures in decoding order that cannot be correctly decoded and have presentation times earlier than the intra-coded picture associated with the SAP.
  • a stream access point (SAP) sample group as specified in ISOBMFF identifies samples as being of the indicated SAP type.
• the grouping_type_parameter for the SAP sample group comprises the fields target_layers and layer_id_method_idc.
• target_layers specifies the target layers for the indicated SAPs.
• the semantics of target_layers may depend on the value of layer_id_method_idc, which specifies the semantics of target_layers.
• layer_id_method_idc equal to 0 specifies that the target layers consist of all the layers represented by the track.
• the sample group description entry for the SAP sample group comprises the fields dependent_flag and sap_type.
• dependent_flag may be required to be 0 for non-layered media. dependent_flag equal to 1 specifies that the reference layers, if any, for predicting the target layers may have to be decoded for accessing a sample of this sample group. dependent_flag equal to 0 specifies that the reference layers, if any, for predicting the target layers need not be decoded for accessing any SAP of this sample group. sap_type values in the range of 1 to 6, inclusive, specify the SAP type of the associated samples.
  • a sync sample may be defined as a sample in a track that is of a SAP of type 1 or 2.
• Sync samples may be indicated with the SyncSampleBox or by sample_is_non_sync_sample equal to 0 in the signaling for track fragments.
  • a Segment may further be partitioned into Subsegments e.g. to enable downloading segments in multiple parts. Subsegments may be required to contain complete access units.
  • Subsegments may be indexed by Segment Index box, which contains information to map presentation time range and byte range for each Subsegment.
  • the Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets.
• a DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation.
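• For illustration, the following sketch maps a target presentation time to a Subsegment byte range that a client could request with an HTTP Range header; the per-Subsegment durations and sizes are assumed to have been parsed already from the Segment Index box(es), and first_byte is an assumed offset of the first indexed Subsegment within the Segment.

    # Minimal sketch: locate the Subsegment containing a target time and
    # derive the byte range a client could request for it.
    # subsegments: list of (duration_seconds, size_bytes), assumed pre-parsed
    # from Segment Index information (hypothetical representation).
    def subsegment_byte_range(subsegments, first_byte, target_time):
        time, offset = 0.0, first_byte
        for duration, size in subsegments:
            if time <= target_time < time + duration:
                return {"Range": "bytes=%d-%d" % (offset, offset + size - 1)}
            time += duration
            offset += size
        return None  # target time beyond this Segment

    subs = [(2.0, 150_000), (2.0, 140_000), (2.0, 160_000)]
    print(subsegment_byte_range(subs, first_byte=1024, target_time=3.1))
    # {'Range': 'bytes=151024-291023'}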
  • the indexing information of a segment may be put in the single box at the beginning of that segment, or spread among many indexing boxes in the segment. Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
• for a dependent Representation X and any of its complementary Representations Y, the m-th Subsegment of X and the n-th Subsegment of Y shall be non-overlapping whenever m is not equal to n. It may be required that for dependent Representations the concatenation of the Initialization Segment with the sequence of Subsegments of the dependent Representations, each being preceded by the corresponding Subsegment of each of the complementary Representations in order as provided in the @dependencyId attribute, shall represent a conforming Subsegment sequence conforming to the media format as specified in the @mimeType attribute for this dependent Representation.
• Track references of ISOBMFF can be reflected in the list of four-character codes in the @associationType attribute of the DASH MPD, which is mapped to the list of Representation@id values given in @associationId in a one-to-one manner. These attributes may be used for linking media Representations with metadata Representations.
  • MPEG-DASH defines segment-container formats for both ISOBMFF and MPEG-2 Transport Streams.
  • Other specifications may specify segment formats based on other container formats.
  • a segment format based on Matroska container file format has been proposed and may be summarized as follows.
• when Matroska files are carried as DASH segments or alike, the association of DASH units and Matroska units may be specified as follows.
  • a subsegment (of DASH) may be defined as one or more consecutive Clusters of Matroska-encapsulated content.
• an Initialization Segment of DASH may be required to comprise the EBML header, Segment header (of Matroska), Segment Information (of Matroska) and Tracks, and may optionally comprise other level 1 elements and padding.
  • a Segment Index of DASH may comprise a Cues Element of Matroska.
  • MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard.
  • OMAF defines a media format (comprising both file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport).
  • OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position.
  • 3DoF three degrees of freedom
  • the viewport-dependent streaming scenarios described further below have also been designed for 3DoF although could potentially be adapted to a different number of degrees of freedom.
  • Standardization of OMAF version 2 is ongoing.
• OMAF v2 is planned to include features like support for multiple viewpoints, overlays, sub-picture compositions, and six degrees of freedom with a viewing space limited roughly to upper-body movements.
  • a real-world audio-visual scene (A) may be captured 220 by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors. The acquisition results in a set of digital image/video (Bi) and audio (Ba) signals.
  • the cameras/lenses may cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
  • Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics).
• the channel-based signals may conform to one of the loudspeaker layouts defined in CICP (Coding-Independent Code-Points).
• CICP Coding-Independent Code-Points
• the loudspeaker layout signals of the rendered immersive audio program may be binauralized for presentation via headphones.
  • the input images of one time instance may be stitched to generate a projected picture representing one view.
  • An example of image stitching, projection, and region-wise packing process for monoscopic content is illustrated with Fig. 2b.
  • Input images (Bi) are stitched and projected 202 onto a three-dimensional projection structure that may for example be a unit sphere.
  • the projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof.
• a projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed.
  • the image data on the projection structure is further arranged onto a two-dimensional projected picture (C) 203.
  • projection may be defined as a process by which a set of input images are projected onto a projected picture.
• the projected picture may use representation formats including, for example, an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
  • a region-wise packing 204 is then applied to map the projected picture 203 (C) onto a packed picture 205 (D). If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding 206. Otherwise, regions of the projected picture (C) are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding.
  • region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture.
• the term packed picture may be defined as a picture that results from region-wise packing of a projected picture.
• in the case of stereoscopic content, the input images of one time instance are stitched to generate a projected picture representing two views (CL, CR), one for each eye.
  • Both views (CL, CR) can be mapped onto the same packed picture (D), and encoded by a traditional 2D video encoder.
• each view of the projected picture can be mapped to its own packed picture, in which case the image stitching, projection, and region-wise packing is performed as illustrated in Fig. 2a.
  • a sequence of packed pictures of either the left view or the right view can be independently coded or, when using a multiview video encoder, predicted from the other view.
• Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye.
  • the image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere.
  • Frame packing is applied to pack the left view picture and right view picture onto the same projected picture.
• region-wise packing is then applied to pack the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
  • the image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure.
• the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
• 360-degree panoramic content covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device.
  • the vertical field-of-view may vary and can be e.g. 180 degrees.
• a panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP).
  • ERP equirectangular projection
  • the horizontal coordinate may be considered equivalent to a longitude
  • the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied.
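• The longitude/latitude interpretation above may be illustrated with a sketch that maps a spherical direction to a pixel position in an equirectangular projected picture; the sign and sample-position conventions here are illustrative simplifications rather than the exact OMAF conventions.

    # Minimal sketch: map (azimuth, elevation) in degrees to a pixel position
    # in an equirectangular projected picture of size width x height.
    def erp_pixel(azimuth_deg, elevation_deg, width, height):
        u = (-azimuth_deg + 180.0) / 360.0     # 0..1 across the picture width
        v = (90.0 - elevation_deg) / 180.0     # 0..1 from top to bottom
        x = min(int(u * width), width - 1)
        y = min(int(v * height), height - 1)
        return x, y

    print(erp_pixel(0.0, 0.0, 3840, 1920))     # centre of the picture
    print(erp_pixel(90.0, 45.0, 3840, 1920))   # off-centre direction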
  • the process of forming a monoscopic equirectangular panorama picture is illustrated in Fig. 4b.
  • a set of input images such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image.
  • the spherical image is further projected onto a cylinder (without the top and bottom faces).
  • the cylinder is unfolded to form a two-dimensional projected picture.
  • one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere.
  • the projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
• in general, 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc., and then unwrapped to a two-dimensional image plane.
• panoramic content with a 360-degree horizontal field-of-view but with less than a 180-degree vertical field-of-view may be considered a special case of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane.
• a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
  • Region-wise packing information may be encoded as metadata in or along the bitstream.
• the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
  • Rectangular region-wise packing metadata may be described as follows:
• the metadata defines a rectangle in a projected picture, the respective rectangle in the packed picture, and an optional transformation of rotation by 90, 180, or 270 degrees and/or horizontal and/or vertical mirroring. Rectangles may, for example, be indicated by the locations of their top-left corner and bottom-right corner.
  • the mapping may comprise resampling. As the sizes of the respective rectangles can differ in the projected and packed pictures, the mechanism infers region-wise resampling.
  • region-wise packing provides signalling for the following usage scenarios:
  • regions of ERP or faces of CMP can have different sampling densities and the underlying projection structure can have different orientations.
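• The rectangular region-wise packing metadata described above may be illustrated by the following sketch, which maps a sample position in a packed region back to the corresponding position in the projected picture, including the resampling implied by differing rectangle sizes; transform types other than identity are omitted for brevity.

    # Minimal sketch of rectangular region-wise packing: map a luma sample
    # position in a packed region back to the projected picture. Rectangles
    # are (left, top, width, height); only the identity transform is handled.
    def packed_to_projected(x, y, packed_rect, projected_rect):
        pl, pt, pw, ph = packed_rect
        ql, qt, qw, qh = projected_rect
        # Relative position inside the packed region, then rescaled to the
        # projected region (region-wise resampling implied by differing sizes).
        rel_x = (x - pl) / pw
        rel_y = (y - pt) / ph
        return ql + rel_x * qw, qt + rel_y * qh

    # Example: a 960x480 packed region carrying a 1920x960 projected area
    # at half resolution in both dimensions.
    print(packed_to_projected(100, 50, (0, 0, 960, 480), (0, 0, 1920, 960)))
    # (200.0, 100.0)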
  • a guard band may be defined as an area in a packed picture that is not rendered but may be used to improve the rendered part of the packed picture to avoid or mitigate visual artifacts such as seams.
• OMAF allows the omission of image stitching, projection, and region-wise packing, and encoding of the image/video data in its captured format.
  • images (D) are considered the same as images (Bi) and a limited number of fisheye images per time instance are encoded.
  • the stitching process is not needed, since the captured signals are inherently immersive and omnidirectional.
  • the stitched images (D) are encoded 206 as coded images (Ei) or a coded video bitstream (Ev).
  • the captured audio (Ba) is encoded 222 as an audio bitstream (Ea).
  • the coded images, video, and/or audio are then composed 224 into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format.
  • the media container file format is the ISO base media file format.
• the file encapsulator 224 also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
  • the metadata in the file may include:
• Region-wise packing information may be encoded as metadata in or along the bitstream, for example as region-wise packing SEI message(s) and/or as region-wise packing boxes in a file containing the bitstream.
• the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
  • the region-wise mapping information may for example comprise for each mapped region a source rectangle (a.k.a. projected region) in the projected picture and a destination rectangle (a.k.a.
• the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, and one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring.
• the semantics of the packing information may be specified in a manner that indicates, for each sample location within packed regions of a decoded picture, the respective spherical coordinate location.
  • the segments (Fs) may be delivered 225 using a delivery mechanism to a player.
  • the file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F').
  • a file decapsulator 226 processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata.
  • the audio, video, and/or images are then decoded 228 into decoded signals (B'a for audio, and D' for images/video).
• decoded packed pictures (D') are projected 229 onto the screen of a head-mounted display or any other display device 230 based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file.
  • decoded audio (B'a) is rendered 229, e.g. through headphones 231, according to the current viewing orientation.
  • the current viewing orientation is determined by the head tracking and possibly also eye tracking functionality 227.
• the current viewing orientation may also be used by the video and audio decoders 228 for decoding optimization.
• an application rendering video on an HMD or on another display device renders a portion of the 360-degree video.
  • This portion may be defined as a viewport.
  • a viewport may be understood as a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display.
  • a viewport may be defined as a part of the spherical video that is currently displayed.
  • a viewport may be characterized by horizontal and vertical field of views (FOV or FoV).
  • a viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint.
  • a viewing position may be defined as the position within a viewing space from which the user views the scene.
  • a viewing space may be defined as a 3D space of viewing positions within which rendering of image and video is enabled and VR experience is valid.
  • An omnidirectional image may be divided into several regions called tiles.
  • the tiles may have been encoded as motion constrained tiles with different quality/resolution.
  • a client apparatus may request the regions/tiles corresponding to a current viewport of the user with high resolution/quality.
  • the term omnidirectional may refer to media content that has greater spatial extent than a field-of-view of a device rendering the content.
• Omnidirectional content may for example cover substantially 360 degrees in the horizontal dimension and substantially 180 degrees in the vertical dimension, but omnidirectional may also refer to content covering less than a 360-degree view in the horizontal direction and/or less than a 180-degree view in the vertical direction.
• the client, e.g. the player, may request the whole 360-degree video/image either with uniform quality, which means a viewport-independent delivery, or such that the quality of the video/image in a viewport of the user is higher than the quality of the video/image in the non-viewport part of the scene, which means a viewport-dependent delivery.
  • the (requested) 360-degree video may be encoded at different bitrates.
  • Each encoded bitstream may be stored with, for example, ISOBMFF and then segmented based on MPEG-DASH.
  • the whole 360-degree video may be delivered to the client/player uniformly at the same quality.
  • the (requested) 360-degree video may be divided into several regions/tiles and encoded as, for example, motion constrained tiles.
  • Each encoded tiled bitstream may be stored with, for example, ISOBMFF and then segmented based on MPEG-DASH.
  • the regions/tiles corresponding to the user's viewport may be delivered at high quality/resolution, whereas other parts of 360-degree video which are not within the user's viewport may be delivered at a lower quality/resolution.
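• The viewport-dependent selection of tiles may be sketched as follows; the tile grid is assumed to partition an equirectangular picture uniformly and the viewport is approximated by an azimuth/elevation rectangle, which is a simplification of the spherical geometry (e.g. wrap-around at ±180 degrees is not handled).

    # Minimal sketch: pick which ERP tiles overlap a viewport so that they
    # can be requested at high quality; the rest are low quality.
    def select_tiles(grid_cols, grid_rows, viewport):
        """viewport: (az_min, az_max, el_min, el_max) in degrees,
        with azimuth in [-180, 180] and elevation in [-90, 90]."""
        az_min, az_max, el_min, el_max = viewport
        high, low = [], []
        for r in range(grid_rows):
            el_hi = 90 - r * (180 / grid_rows)
            el_lo = el_hi - 180 / grid_rows
            for c in range(grid_cols):
                t_az_min = -180 + c * (360 / grid_cols)
                t_az_max = t_az_min + 360 / grid_cols
                overlaps = (t_az_min < az_max and t_az_max > az_min and
                            el_lo < el_max and el_hi > el_min)
                (high if overlaps else low).append((r, c))
        return high, low

    high, low = select_tiles(8, 4, viewport=(-45, 45, -30, 30))
    print(len(high), "high-quality tiles,", len(low), "low-quality tiles")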
  • Fig. 6c illustrates an example of an omnidirectional video/image from an event, for example a football game.
  • the video/image frame is represented by the equirectangular projection, although problems/solutions described herein apply more generally to all the projection formats used for representing omnidirectional videos/images.
  • Tiles which include information of the viewport (illustrated with hashed or cross-hatched blocks in Fig. 6c) are encoded with higher resolution and/or quality and the other tiles are encoded with lower resolution and/or quality (illustrated with solid white blocks in Fig. 6c).
• the rectangle drawn with solid, thick lines illustrates the current viewport.
  • Fig. 7b shows an example of an omnidirectional streaming system 600.
  • Raw video signal may be input 601 and motion constrained encoding 602 may be applied to the raw video data.
  • the motion constrained encoding 602 may form a first motion constrained bitstream 610 with a first quality and/or a first resolution and a second motion constrained bitstream 611 with a second quality and/or a second resolution.
• the encoded image information may be encapsulated into one or more files and stored 603.
  • the encapsulation stage 603 may take into consideration a user’s viewport information so that those parts of the video/image which are within the user’s viewport may be encapsulated from that motion constrained bitstream which has higher quality and/or resolution (e.g. the first motion constrained bitstream 610) and the other parts may be encapsulated from the other motion constrained bitstream (e.g. the second motion constrained bitstream 611).
  • the file(s) may be segmented 604 e.g. to comply with a segment format of MPEG DASH, and Media Presentation Description may be formed 605.
  • the content may be delivered 606 via a communication network to a player e.g. as a response to a request from the player.
  • Fig. 7c shows an example of content flow in the DASH delivery function of MPEG omnidirectional media format (OMAF).
  • OMAF MPEG omnidirectional media format
  • Fs/F's are initialization and media segments.
• G illustrates the DASH Media Presentation Description (MPD), which may include omnidirectional media-specific metadata, such as information on projection and region-wise packing.
  • An MPD (G) may be generated based on the segments (Fs) and other media files representing the same content.
  • the DASH MPD generator includes omnidirectional media-specific descriptors.
  • the descriptors may include projection type, region-wise packing type, content coverage, spherical region-wise quality ranking, 2D region-wise quality ranking, and fisheye omnidirectional video information. This information may be generated on the basis of the equivalent information in the segments.
• the player may be informed of the orientation of the user's gaze e.g. on the basis of information provided by a head mounted display 614 the user is wearing to watch the content.
  • the parser and file decapsulator 607 may use that information to select and request coding units so that the quality/resolution of different areas of the viewport correspond with desired or recommended quality/resolution.
• a tile track may be defined as a track that contains sequences of one or more motion-constrained tile sets of a coded bitstream. Decoding of a tile track without the other tile tracks of the bitstream may require a specialized decoder, which may be e.g. required to skip absent tiles in the decoding process.
• An HEVC tile track specified in ISO/IEC 14496-15 enables storage of one or more temporal motion-constrained tile sets as a track. When a tile track contains tiles of an HEVC base layer, the sample entry type 'hvt1' is used. When a tile track contains tiles of a non-base layer, the sample entry type 'lht1' is used.
  • a sample of a tile track consists of one or more complete tiles in one or more complete slice segments.
  • a tile track is independent from any other tile track that includes VCL NAL units of the same layer as this tile track.
  • a tile track has a 'tbas' track reference to a tile base track.
  • the tile base track does not include VCL NAL units.
  • a tile base track indicates the tile ordering using a 'sabt' track reference to the tile tracks.
  • An HEVC coded picture corresponding to a sample in the tile base track can be reconstructed by collecting the coded data from the tile-aligned samples of the tracks indicated by the 'sabt' track reference in the order of the track references.
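• The reconstruction rule described above may be sketched as follows; sample data of the tile base track and of the referenced tile tracks are represented by hypothetical byte strings rather than parsed ISOBMFF structures.

    # Minimal sketch: reconstruct the coded picture for one sample of a tile
    # base track by concatenating the time-aligned samples of the tile tracks
    # in the order given by the 'sabt' track reference.
    def reconstruct_picture(base_track_sample, sabt_order, tile_track_samples):
        """base_track_sample: non-VCL data of the tile base track (may be empty);
        sabt_order: track_IDs in 'sabt' reference order;
        tile_track_samples: dict track_ID -> coded bytes of the aligned sample."""
        data = bytearray(base_track_sample)
        for track_id in sabt_order:
            data += tile_track_samples[track_id]
        return bytes(data)

    picture = reconstruct_picture(
        base_track_sample=b"",                       # tile base track: no VCL data
        sabt_order=[2, 3, 4],
        tile_track_samples={2: b"tile-A", 3: b"tile-B", 4: b"tile-C"})
    print(picture)  # b'tile-Atile-Btile-C'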
• a constructed tile set track is a tile set track, e.g. a track according to ISOBMFF, containing constructors that, when executed, result in a tile set bitstream.
• a constructor is a set of instructions that, when executed, results in a valid piece of sample data according to the underlying sample format.
  • An extractor is a constructor that, when executed, copies the sample data of an indicated byte range of an indicated sample of an indicated track. Inclusion by reference may be defined as an extractor or alike that, when executed, copies the sample data of an indicated byte range of an indicated sample of an indicated track.
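• Execution of an extractor as characterized above may be sketched with the following function; real extractors carry further fields (e.g. sample offsets relative to the current sample and length recovery rules) that are omitted here, and the field names used are hypothetical.

    # Minimal sketch: an extractor copies an indicated byte range of an
    # indicated sample of an indicated (referenced) track into the
    # reconstructed sample data.
    def execute_extractor(extractor, referenced_samples):
        """extractor: {"track_ref": id, "sample_index": i, "offset": o, "length": n}
        (hypothetical fields); referenced_samples: track_id -> list of samples."""
        sample = referenced_samples[extractor["track_ref"]][extractor["sample_index"]]
        start = extractor["offset"]
        return sample[start:start + extractor["length"]]

    samples = {1: [b"header+slice-data-of-full-picture"]}
    ext = {"track_ref": 1, "sample_index": 0, "offset": 7, "length": 10}
    print(execute_extractor(ext, samples))  # b'slice-data'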
• a full-picture-compliant tile set {track|bitstream} is a tile set {track|bitstream} that conforms to the full-picture {track|bitstream} format. Here, the notation {optionA|optionB} illustrates alternatives, i.e. either optionA or optionB, which is selected consistently in all selections.
  • a full-picture-compliant tile set track can be played as with any full-picture track using the parsing and decoding process of full-picture tracks.
  • a full-picture-compliant bitstream can be decoded as with any full-picture bitstream using the decoding process of full-picture bitstreams.
  • a full-picture track is a track representing an original bitstream (including all its tiles).
  • a tile set bitstream is a bitstream that contains a tile set of an original bitstream but not representing the entire original bitstream.
  • a tile set track is a track representing a tile set of an original bitstream but not representing the entire original bitstream.
  • a full-picture-compliant tile set track may comprise extractors as defined for HEVC.
  • An extractor may, for example, be an in-line constructor including a slice segment header and a sample constructor extracting coded video data for a tile set from a referenced full-picture track.
  • An in-line constructor is a constructor that, when executed, returns the sample data that it contains.
  • an in-line constructor may comprise a set of instructions for rewriting a new slice header.
  • the phrase in-line may be used to indicate coded data that is included in the sample of a track.
  • a NAL-unit-like structure refers to a structure with the properties of a NAL unit except that start code emulation prevention is not performed.
  • a pre-constructed tile set track is a tile set track containing the sample data in-line.
• a video codec may comprise an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form.
  • a video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec.
• the encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at a lower bitrate).
  • a video encoder may be used to encode an image sequence, as defined subsequently, and a video decoder may be used to decode a coded image sequence.
• a video encoder or an intra coding part of a video encoder or an image encoder may be used to encode an image, and a video decoder or an intra decoding part of a video decoder or an image decoder may be used to decode a coded image.
• Some hybrid video encoders encode the video information in two phases. Firstly, pixel values in a certain picture area (or "block") are predicted, for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. the Discrete Cosine Transform (DCT)), quantizing the coefficients, and entropy coding the quantized coefficients.
  • DCT Discrete Cosine Transform
• In inter prediction, a.k.a. temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (a.k.a. intra-block-copy prediction), prediction is applied similarly to temporal prediction but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
  • Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
• There may be different types of intra prediction modes available in a coding scheme, out of which an encoder can select and indicate the used one, e.g. on a block or coding unit basis.
  • a decoder may decode the indicated intra prediction mode and reconstruct the prediction block accordingly.
  • several angular intra prediction modes each for different angular directions, may be available.
  • Angular intra prediction may be considered to extrapolate the border samples of adjacent blocks along a linear prediction direction.
  • a planar prediction mode may be available.
  • Planar prediction may be considered to essentially form a prediction block, in which each sample of a prediction block may be specified to be an average of the vertically aligned sample in the adjacent sample column on the left of the current block and the horizontally aligned sample in the adjacent sample line above the current block. Additionally or alternatively, a DC prediction mode may be available, in which the prediction block is essentially an average sample value of a neighboring block or blocks.
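• As an illustration of the DC prediction mode mentioned above, the following sketch fills a prediction block with the average of the reconstructed neighbouring samples; actual codecs add rounding rules and boundary handling that are not shown here.

    # Minimal sketch of DC intra prediction: the prediction block is filled
    # with the average of the reconstructed samples above and to the left.
    def dc_intra_prediction(top_row, left_column, block_size):
        neighbours = list(top_row) + list(left_column)
        dc = round(sum(neighbours) / len(neighbours)) if neighbours else 128
        return [[dc] * block_size for _ in range(block_size)]

    top = [100, 102, 104, 106]      # reconstructed row above the block
    left = [98, 99, 101, 103]       # reconstructed column left of the block
    for row in dc_intra_prediction(top, left, 4):
        print(row)                  # each row filled with the DC value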
  • One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighbouring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
  • FIG. 8a shows a block diagram of a video encoder suitable for employing embodiments of the invention.
  • Fig. 8a presents an encoder for two layers, but it would be appreciated that presented encoder could be similarly simplified to encode only one layer or extended to encode more than two layers.
  • Fig. 8a illustrates an embodiment of a video encoder comprising a first encoder section 500 for a base layer and a second encoder section 502 for an enhancement layer.
  • Each of the first encoder section 500 and the second encoder section 502 may comprise similar elements for encoding incoming pictures.
  • the encoder sections 500, 502 may comprise a pixel predictor 302, 402, prediction error encoder 303, 403 and prediction error decoder 304, 404.
• Fig. 8a also shows an embodiment of the pixel predictor 302, 402 as comprising an inter-predictor 306, 406, an intra-predictor 308, 408, a mode selector 310, 410, a filter 316, 416, and a reference frame memory 318, 418.
  • the pixel predictor 302 of the first encoder section 500 receives 300 base layer images of a video stream to be encoded at both the inter-predictor 306 (which determines the difference between the image and a motion compensated reference frame 318) and the intra-predictor 308 (which determines a prediction for an image block based only on the already processed parts of current frame or picture).
  • the output of both the inter predictor and the intra-predictor are passed to the mode selector 310.
  • the intra-predictor 308 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 310.
  • the mode selector 310 also receives a copy of the base layer picture 300.
  • the pixel predictor 402 of the second encoder section 502 receives 400 enhancement layer images of a video stream to be encoded at both the inter-predictor 406 (which determines the difference between the image and a motion compensated reference frame 418) and the intra-predictor 408 (which determines a prediction for an image block based only on the already processed parts of current frame or picture).
  • the output of both the inter-predictor and the intra-predictor are passed to the mode selector 410.
  • the intra-predictor 408 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 410.
• the mode selector 410 also receives a copy of the enhancement layer picture 400.
  • the output of the inter-predictor 306, 406 or the output of one of the optional intra-predictor modes or the output of a surface encoder within the mode selector is passed to the output of the mode selector 310, 410.
  • the output of the mode selector is passed to a first summing device 321, 421.
  • the first summing device may subtract the output of the pixel predictor 302, 402 from the base layer picture 300/enhancement layer picture 400 to produce a first prediction error signal 320, 420 which is input to the prediction error encoder 303, 403.
  • the pixel predictor 302, 402 further receives from a preliminary reconstructor 339, 439 the combination of the prediction representation of the image block 312, 412 and the output 338, 438 of the prediction error decoder 304, 404.
  • the preliminary reconstructed image 314, 414 may be passed to the intra-predictor 308, 408 and to a filter 316, 416.
  • the filter 316, 416 receiving the preliminary representation may filter the preliminary representation and output a final reconstructed image 340, 440 which may be saved in a reference frame memory 318, 418.
  • the reference frame memory 318 may be connected to the inter-predictor 306 to be used as the reference image against which a future base layer picture 300 is compared in inter-prediction operations.
  • the reference frame memory 318 may also be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer pictures 400 is compared in inter-prediction operations. Moreover, the reference frame memory 418 may be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer picture 400 is compared in inter prediction operations.
  • Filtering parameters from the filter 316 of the first encoder section 500 may be provided to the second encoder section 502 subject to the base layer being selected and indicated to be source for predicting the filtering parameters of the enhancement layer according to some embodiments.
  • the prediction error encoder 303, 403 comprises a transform unit 342, 442 and a quantizer 344, 444.
  • the transform unit 342, 442 transforms the first prediction error signal 320, 420 to a transform domain.
  • the transform is, for example, the DCT transform.
  • the quantizer 344, 444 quantizes the transform domain signal, e.g. the DCT coefficients, to form quantized coefficients.
  • the prediction error decoder 304, 404 receives the output from the prediction error encoder 303, 403 and performs the opposite processes of the prediction error encoder 303, 403 to produce a decoded prediction error signal 338, 438 which, when combined with the prediction representation of the image block 312, 412 at the second summing device 339, 439, produces the preliminary reconstructed image 314, 414.
• the prediction error decoder may be considered to comprise a dequantizer 361, 461, which dequantizes the quantized coefficient values, e.g. the DCT coefficients, to reconstruct the transform signal, and an inverse transformation unit, which performs the inverse transformation to the reconstructed transform signal to produce reconstructed block(s).
  • the prediction error decoder may also comprise a block filter which may filter the reconstructed block(s) according to further decoded information and filter parameters.
  • the entropy encoder 330, 430 receives the output of the prediction error encoder 303, 403 and may perform a suitable entropy encoding/variable length encoding on the signal to provide error detection and correction capability.
  • the outputs of the entropy encoders 330, 430 may be inserted into a bitstream e.g. by a multiplexer 508.
  • Fig. 8b shows a block diagram of a video decoder suitable for employing embodiments of the invention.
  • Fig. 8b depicts a structure of a two-layer decoder, but it would be appreciated that the decoding operations may similarly be employed in a single-layer decoder.
  • the video decoder 550 comprises a first decoder section 552 for base layer pictures and a second decoder section 554 for enhancement layer pictures.
  • Block 556 illustrates a demultiplexer for delivering information regarding base layer pictures to the first decoder section 552 and for delivering information regarding enhancement layer pictures to the second decoder section 554.
  • Reference P’n stands for a predicted representation of an image block.
  • Reference D’n stands for a reconstructed prediction error signal.
  • Blocks 704, 804 illustrate preliminary reconstructed images (I’n).
  • Reference R’n stands for a final reconstructed image.
• Blocks 703, 803 illustrate inverse transform (T-1).
• Blocks 702, 802 illustrate inverse quantization (Q-1).
• Blocks 700, 800 illustrate entropy decoding (E-1).
  • Blocks 706, 806 illustrate a reference frame memory (RFM).
  • Blocks 707, 807 illustrate prediction (P) (either inter prediction or intra prediction).
  • Blocks 708, 808 illustrate filtering (F).
  • Blocks 709, 809 may be used to combine decoded prediction error information with predicted base or enhancement layer pictures to obtain the preliminary reconstructed images (I’n).
  • Preliminary reconstructed and filtered base layer pictures may be output 710 from the first decoder section 552 and preliminary reconstructed and filtered enhancement layer pictures may be output 810 from the second decoder section 554.
  • the decoder could be interpreted to cover any operational unit capable to carry out the decoding operations, such as a player, a receiver, a gateway, a demultiplexer and/or a decoder.
• the decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (the inverse operation of the prediction error coding, recovering the quantized prediction error signal in the spatial pixel domain). After applying prediction and prediction error decoding means, the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame.
  • the decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
  • the motion information is indicated with motion vectors associated with each motion compensated image block, such as a prediction unit.
• Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) relative to the prediction source block in one of the previously coded or decoded pictures.
• Motion vectors are typically coded differentially with respect to block-specific predicted motion vectors.
  • the predicted motion vectors are created in a predefined way, for example calculating the median of the encoded or decoded motion vectors of the adjacent blocks.
  • Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor.
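• As an illustration of the median-based prediction described above, the following sketch computes a component-wise median of the motion vectors of neighbouring blocks and the resulting motion vector difference to be coded (Python; the function names and the choice of three neighbours are illustrative assumptions, not taken from this specification):

    from statistics import median

    def predict_motion_vector(neighbor_mvs):
        # Component-wise median of the motion vectors of adjacent,
        # already (de)coded blocks, e.g. left, above and above-right.
        pred_x = median(mv_x for mv_x, _ in neighbor_mvs)
        pred_y = median(mv_y for _, mv_y in neighbor_mvs)
        return pred_x, pred_y

    def motion_vector_difference(actual_mv, neighbor_mvs):
        # Only the difference to the predicted motion vector is coded.
        pred_x, pred_y = predict_motion_vector(neighbor_mvs)
        return actual_mv[0] - pred_x, actual_mv[1] - pred_y

    # Example: neighbours (4, -2), (6, 0), (5, 3) give predictor (5, 0),
    # so the vector (7, 1) is coded as the difference (2, 1).
    print(motion_vector_difference((7, 1), [(4, -2), (6, 0), (5, 3)]))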
  • this prediction information may be represented for example by a reference index of previously coded/decoded picture.
  • the reference index is typically predicted from adjacent blocks and/or co-located blocks in temporal reference picture.
  • typical high efficiency video codecs employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes motion vector and corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction.
  • predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures and the used motion field information is signaled among a list of motion field candidate list filled with motion field information of available adjacent/co-located blocks.
  • Typical video codecs enable the use of uni-prediction, where a single prediction block is used for a block being (de)coded, and bi-prediction, where two prediction blocks are combined to form the prediction for a block being (de)coded.
  • Some video codecs enable weighted prediction, where the sample values of the prediction blocks are weighted prior to adding residual information.
• For example, a multiplicative weighting factor and an additive offset can be applied.
  • a weighting factor and offset may be coded for example in the slice header for each allowable reference picture index.
  • the weighting factors and/or offsets are not coded but are derived e.g. based on the relative picture order count (POC) distances of the reference pictures.
• Typical video encoders utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired macroblock mode and associated motion vectors.
• This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
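• The cost function referred to above is conventionally of the form C = D + λR, where C is the Lagrangian cost to be minimized, D is the (exact or estimated) image distortion (e.g. mean squared error) obtained with the considered mode and motion vectors, R is the (exact or estimated) number of bits needed to represent the data required to reconstruct the image block in the decoder, and λ is the weighting factor mentioned above.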
  • H.264/AVC and HEVC include a concept of picture order count (POC).
  • a value of POC is derived for each picture and is non-decreasing with increasing picture position in output order. POC therefore indicates the output order of pictures.
  • POC may be used in the decoding process, for example, for implicit scaling of motion vectors in the temporal direct mode of bi-predictive slices, for implicitly derived weights in weighted prediction, and for reference picture list initialization.
  • POC may be used in the verification of output order conformance.
  • Video encoders and/or decoders may be able to store multiple reference pictures in a decoded picture buffer (DPB) and use them adaptively for inter prediction.
  • the reference picture management may be defined as a process to determine which reference pictures are maintained in the DPB. Examples of reference picture management are described in the following.
• a reference picture set valid or active for a picture includes all the reference pictures used as reference for the picture and all the reference pictures that are kept marked as "used for reference" for any subsequent pictures in decoding order.
• the reference picture set may be considered to comprise six subsets: RefPicSetStCurr0, RefPicSetStCurr1, RefPicSetStFoll0, RefPicSetStFoll1, RefPicSetLtCurr, and RefPicSetLtFoll.
• RefPicSetStFoll0 and RefPicSetStFoll1 may also be considered to form jointly one subset RefPicSetStFoll.
• The notation of the six subsets is as follows. "Curr" refers to reference pictures that are included in the reference picture lists of the current picture and hence may be used as inter prediction reference for the current picture. "Foll" refers to reference pictures that are not included in the reference picture lists of the current picture but may be used in subsequent pictures in decoding order as reference pictures. "St" refers to short-term reference pictures, which may generally be identified through a certain number of least significant bits of their POC value. "Lt" refers to long-term reference pictures, which are specifically identified and generally have a greater difference of POC values relative to the current picture than what can be represented by the mentioned certain number of least significant bits. "0" refers to those reference pictures that have a smaller POC value than that of the current picture. "1" refers to those reference pictures that have a greater POC value than that of the current picture.
• RefPicSetStCurr0, RefPicSetStCurr1, RefPicSetStFoll0, and RefPicSetStFoll1 are collectively referred to as the short-term subset of the reference picture set.
  • RefPicSetLtCurr and RefPicSetLtFoll are collectively referred to as the long-term subset of the reference picture set.
  • a reference picture set may be specified in a sequence parameter set and taken into use in the slice header through an index to the reference picture set.
  • a reference picture set may also be specified in a slice header.
  • a reference picture set may be coded independently or may be predicted from another reference picture set (known as inter-RPS prediction).
• In inter-RPS prediction, a flag (used_by_curr_pic_X_flag) is additionally sent for each reference picture, indicating whether the reference picture is used for reference by the current picture (included in a *Curr list) or not (included in a *Foll list).
• For an IDR picture, all of these subsets, including RefPicSetLtCurr and RefPicSetLtFoll, are set to empty.
  • the reference picture for inter prediction is indicated with an index to a reference picture list.
  • the index may be coded with variable length coding, which usually causes a smaller index to have a shorter value for the corresponding syntax element.
  • two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice.
  • a reference picture list such as reference picture list 0 and reference picture list 1, is typically constructed in two steps: First, an initial reference picture list is generated. The initial reference picture list may be generated for example on the basis of POC, or information on the prediction hierarchy, or any combination thereof. Second, the initial reference picture list may be reordered by reference picture list reordering (RPLR) commands, also known as reference picture list modification syntax structure, which may be contained in slice headers. If reference picture sets are used, the reference picture list 0 may be initialized to contain RefPicSetStCurrO first, followed by RefPicSetStCurrl, followed by RefPicSetLtCurr.
  • Reference picture list 1 may be initialized to contain RefPicSetStCurrl first, followed by RefPicSetStCurrO.
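• A minimal sketch of this initialization order (Python; the subsets are represented simply as lists of pictures, and the placement of the long-term pictures at the end of both lists is an assumption rather than a statement from the preceding bullets):

    def init_reference_picture_lists(st_curr0, st_curr1, lt_curr):
        # Initial lists before any reference picture list modification
        # (RPLR) commands are applied.
        ref_pic_list0 = st_curr0 + st_curr1 + lt_curr
        ref_pic_list1 = st_curr1 + st_curr0 + lt_curr  # lt_curr position assumed
        return ref_pic_list0, ref_pic_list1

    # Example with POC values standing in for reference pictures:
    print(init_reference_picture_lists([8, 6], [12], [0]))
    # -> ([8, 6, 12, 0], [12, 8, 6, 0])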
  • the initial reference picture lists may be modified through the reference picture list modification syntax structure, where pictures in the initial reference picture lists may be identified through an entry index to the list.
  • reference picture list modification is encoded into a syntax structure comprising a loop over each entry in the final reference picture list, where each loop entry is a fixed-length coded index to the initial reference picture list and indicates the picture in ascending position order in the final reference picture list.
  • a reference picture index may be coded by an encoder into the bitstream in some inter coding modes or it may be derived (by an encoder and a decoder) for example using neighboring blocks in some other inter coding modes.
• Figs. 1a and 1b illustrate an example of a camera having multiple lenses and imaging sensors, but also other types of cameras may be used to capture wide view images and/or wide view video.
  • wide view image and wide view video mean an image and a video, respectively, which comprise visual information having a relatively large viewing angle, larger than 100 degrees.
• a so-called 360 panorama image/video as well as images/videos captured by using a fisheye lens may also be called a wide view image/video in this specification.
  • the wide view image/video may mean an image/video in which some kind of projection distortion may occur when a direction of view changes between successive images or frames of the video so that a transform may be needed to find out co-located pixels from a reference image or a reference frame. This will be described in more detail later in this specification.
• the camera 100 of Fig. 1a comprises two or more camera units 102 and is capable of capturing wide view images and/or wide view video.
  • the number of camera units 102 is eight, but may also be less than eight or more than eight.
  • Each camera unit 102 is located at a different location in the multi-camera system and may have a different orientation with respect to other camera units 102.
  • the camera units 102 may have an omnidirectional constellation so that it has a 360-degree viewing angle in a 3D-space. In other words, such camera 100 may be able to see each direction of a scene so that each spot of the scene around the camera 100 can be viewed by at least one camera unit 102.
• the camera 100 of Fig. 1a may also comprise a processor 104 for controlling the operations of the camera 100.
  • a memory 106 for storing data and computer code to be executed by the processor 104, and a transceiver 108 for communicating with, for example, a communication network and/or other devices in a wireless and/or wired manner.
  • the camera 100 may further comprise a user interface (UI) 110 for displaying information to the user, for generating audible signals and/or for receiving user input.
  • the camera 100 need not comprise each feature mentioned above, or may comprise other features as well.
• Fig. 1a also illustrates some operational elements which may be implemented, for example, as computer code in the software of the processor, in hardware, or both.
  • a focus control element 114 may perform operations related to adjustment of the optical system of camera unit or units to obtain focus meeting target specifications or some other predetermined criteria.
• An optics adjustment element 116 may perform movements of the optical system or one or more parts of it according to instructions provided by the focus control element 114. It should be noted here that the actual adjustment of the optical system need not be performed by the apparatus but may be performed manually, wherein the focus control element 114 may provide information for the user interface 110 to indicate to a user of the device how to adjust the optical system.
• Fig. 1b shows a perspective view of the camera 100 of Fig. 1a. In Fig. 1b seven camera units 102a-102g can be seen, but the camera 100 may comprise even more camera units which are not visible from this perspective. Fig. 1b also shows two microphones 112a, 112b, but the apparatus may also comprise one or more than two microphones. It should be noted here that embodiments disclosed in this specification may also be implemented with apparatuses having only one camera unit 102 or less or more than eight camera units 102a-102g.
  • the camera 100 may be controlled by another device (not shown), wherein the camera 100 and the other device may communicate with each other and a user may use a user interface of the other device for entering commands, parameters, etc. and the user may be provided information from the camera 100 via the user interface of the other device.
  • Terms 360-degree video, omnidirectional video, immersive video or virtual reality (VR) video may be used interchangeably. They may generally refer to video content that provides such a large field of view that only a part of the video is displayed at a single point of time in typical displaying arrangements.
  • a virtual reality video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view (FOV).
  • the spatial subset of the virtual reality video content to be displayed may be selected based on the orientation of the head-mounted display.
  • a flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed.
• the MPEG omnidirectional media format (OMAF) may be described with reference to Figs. 2a and 2b.
  • 360-degree image or video content may be acquired and prepared for example as follows.
  • Images or video can be captured by a set of cameras or a camera device with multiple lenses and imaging sensors.
  • the acquisition results in a set of digital image/video signals.
  • the cameras/lenses may cover all directions around the center point of the camera set or camera device.
  • the images of the same time instance are stitched, projected, and mapped onto a packed virtual reality frame, which may alternatively be referred to as a packed picture.
• the mapping may alternatively be referred to as region-wise mapping or region-wise packing.
• the breakdown of the image stitching, projection, and mapping processes is illustrated in Fig. 2a and described as follows.
  • Input images 201 are stitched and projected 202 onto a three-dimensional projection structure, such as a sphere or a cube.
  • the projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof.
  • a projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured virtual reality image/video content may be projected, and from which a respective projected frame can be formed.
  • the image data on the projection structure is further arranged onto a two-dimensional projected frame 203.
  • projection may be defined as a process by which a set of input images are projected onto a projected frame or a projected picture.
  • There may be a pre-defined set of representation formats of the projected frame including for example an equirectangular panorama and a cube map representation format.
  • Region-wise mapping 204 may be applied to map projected frames 203 onto one or more packed virtual reality frames 205.
  • the region-wise mapping may be understood to be equivalent to extracting two or more regions from the projected frame, optionally applying a geometric transformation (such as rotating, mirroring, and/or resampling) to the regions, and placing the transformed regions in spatially non-overlapping areas, a.k.a. constituent frame partitions, within the packed virtual reality frame.
  • the packed virtual reality frame 205 may be identical to the projected frame 203. Otherwise, regions of the projected frame are mapped onto a packed virtual reality frame by indicating the location, shape, and size of each region in the packed virtual reality frame.
  • mapping may be defined as a process by which a projected frame is mapped to a packed virtual reality frame.
  • packed virtual reality frame may be defined as a frame that results from a mapping of a projected frame.
  • the input images 201 may be converted to packed virtual reality frames 205 in one process without intermediate steps.
  • Packing information may be encoded as metadata in or along the bitstream.
  • the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed frame format, e.g. from a projected frame to a packed VR frame, as described earlier.
  • the region-wise mapping information may for example comprise for each mapped region a source rectangle in the projected frame and a destination rectangle in the packed VR frame, where samples within the source rectangle are mapped to the destination rectangle and rectangles may for example be indicated by the locations of the top-left comer and the bottom-right comer.
  • the mapping may comprise resampling.
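• As an illustration of the region-wise mapping described above, the sketch below copies one region from a projected frame to a packed frame, with nearest-neighbour resampling when the source and destination rectangles differ in size (Python; the region representation and the resampling method are illustrative assumptions):

    def pack_region(projected, packed, region):
        # projected/packed: 2D lists of samples (rows of pixels).
        # region: {"src": (top, left, bottom, right) in the projected frame,
        #          "dst": (top, left, bottom, right) in the packed frame}.
        st, sl, sb, sr = region["src"]
        dt, dl, db, dr = region["dst"]
        src_h, src_w = sb - st, sr - sl
        dst_h, dst_w = db - dt, dr - dl
        for y in range(dst_h):
            for x in range(dst_w):
                sy = st + y * src_h // dst_h   # nearest-neighbour resampling
                sx = sl + x * src_w // dst_w
                packed[dt + y][dl + x] = projected[sy][sx]

    # Example: map a 4x4 source rectangle onto a 2x2 destination rectangle.
    proj = [[10 * r + c for c in range(8)] for r in range(4)]
    pack = [[0] * 4 for _ in range(4)]
    pack_region(proj, pack, {"src": (0, 0, 4, 4), "dst": (0, 0, 2, 2)})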
• the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which omnidirectional projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, and one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring.
• the semantics of the packing information may be specified in a manner that they indicate, for each sample location within the packed regions of a decoded picture, the respective spherical coordinate location.
• a coordinate system may be defined through orthogonal coordinate axes, such as X (lateral), Y (vertical, pointing upwards), and Z (back-to-front axis, pointing outwards). Rotations around the axes may be defined and may be referred to as yaw, pitch, and roll. Yaw may be defined to rotate around the Y axis, pitch around the X axis, and roll around the Z axis. Rotations may be defined to be extrinsic, i.e., around the X, Y, and Z fixed reference axes. The angles may be defined to increase clockwise when looking from the origin towards the positive end of an axis.
• the coordinate system specified can be used for defining the sphere coordinates, which may be referred to as azimuth (φ) and elevation (θ).
  • Global coordinate axes may be defined as coordinate axes, e.g. according to the coordinate system as discussed above, that are associated with audio, video, and images representing the same acquisition position and intended to be rendered together.
  • the origin of the global coordinate axes is usually the same as the center point of a device or rig used for omnidirectional audio/video acquisition as well as the position of the observer's head in the three-dimensional space in which the audio and video tracks are located.
  • the playback may be recommended to be started using the orientation (0, 0) in (azimuth, elevation) relative to the global coordinate axes.
  • the projection structure may be rotated relative to the global coordinate axes.
  • the rotation may be performed for example to achieve better compression performance based on the spatial and temporal activity of the content at certain spherical parts.
  • the rotation may be performed to adjust the rendering orientation for already encoded content. For example, if the horizon of the encoded content is not horizontal, it may be adjusted afterwards by indicating that the projection structure is rotated relative to the global coordinate axes.
  • the projection orientation may be indicated as yaw, pitch, and roll angles that define the orientation of the projection structure relative to the global coordinate axes.
  • the projection orientation may be included e.g. in a box in a sample entry of an ISOBMFF track for omnidirectional video.
• 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device.
  • the vertical field-of- view may vary and can be e.g. 180 degrees.
• A panoramic image covering a 360-degree field-of-view horizontally and a 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP).
  • the horizontal coordinate may be considered equivalent to a longitude
  • the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied.
  • panoramic content with 360-degree horizontal field-of-view but with less than 180-degree vertical field-of-view may be considered special cases of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane.
  • panoramic content may have less than 360-degree horizontal field-of-view and up to 180-degree vertical field-of- view, while otherwise have the characteristics of equirectangular projection format.
• In the cube map projection format, spherical video is projected onto the six faces (a.k.a. sides) of a cube.
  • the cube map may be generated e.g. by first rendering the spherical scene six times from a viewpoint, with the views defined by a 90 degree view frustum representing each cube face.
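• To illustrate the geometry, the sketch below maps a viewing direction to the cube face it falls on and to (u, v) coordinates within that face, corresponding to the 90-degree view frustum per face mentioned above (Python; the face naming and orientation conventions are assumptions chosen for illustration):

    def direction_to_cube_face(x, y, z):
        # The face is the one whose axis has the largest absolute component;
        # u and v are returned in the range [0, 1].
        ax, ay, az = abs(x), abs(y), abs(z)
        if ax >= ay and ax >= az:
            face, sc, tc, ma = ("+X" if x > 0 else "-X"), (-z if x > 0 else z), -y, ax
        elif ay >= ax and ay >= az:
            face, sc, tc, ma = ("+Y" if y > 0 else "-Y"), x, (z if y > 0 else -z), ay
        else:
            face, sc, tc, ma = ("+Z" if z > 0 else "-Z"), (x if z > 0 else -x), -y, az
        return face, 0.5 * (sc / ma + 1.0), 0.5 * (tc / ma + 1.0)

    # Example: looking straight along +Z hits the centre of the +Z face.
    print(direction_to_cube_face(0.0, 0.0, 1.0))  # ('+Z', 0.5, 0.5)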
  • the cube sides may be frame-packed into the same frame or each cube side may be treated individually (e.g. in encoding). There are many possible orders of locating cube sides onto a frame and/or cube sides may be rotated or mirrored.
  • the frame width and height for frame-packing may be selected to fit the cube sides "tightly" e.g. at 3x2 cube side grid, or may include unused constituent frames e.g. at 4x3 cube side grid.
  • a cube map can be stereoscopic.
• a stereoscopic cube map can e.g. be reached by re-projecting each view of a stereoscopic panorama to the cube map format.
  • the process of forming a monoscopic equirectangular panorama picture is illustrated in Fig. 2b, in accordance with an embodiment.
  • a set of input images 211 such as fisheye images of a camera array or a camera device 100 with multiple lenses and sensors 102, is stitched 212 onto a spherical image 213.
  • the spherical image 213 is further projected 214 onto a cylinder 215 (without the top and bottom faces).
  • the cylinder 215 is unfolded 216 to form a two-dimensional projected frame 217.
• the input images 211 may be directly converted to the two-dimensional projected frame 217 without an intermediate projection onto the sphere 213 and/or the cylinder 215.
  • the projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface.
  • the equirectangular projection may be defined as a process that converts any sample location within the projected picture (of the equirectangular projection format) to sphere coordinates of a coordinate system.
  • the sample location within the projected picture may be defined relative to pictureWidth and pictureHeight, which are the width and height, respectively, of the equirectangular panorama picture in samples.
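• A minimal sketch of such a conversion (Python; this follows the common equirectangular convention in which azimuth spans ±180 degrees and elevation ±90 degrees, so the exact formula is an assumption rather than a quotation from this specification):

    def erp_sample_to_sphere(i, j, pictureWidth, pictureHeight):
        # Convert sample location (column i, row j), taken at the sample
        # centre (i + 0.5, j + 0.5), to (azimuth, elevation) in degrees.
        azimuth = (0.5 - (i + 0.5) / pictureWidth) * 360.0
        elevation = (0.5 - (j + 0.5) / pictureHeight) * 180.0
        return azimuth, elevation

    # Example: the centre of a 4096x2048 picture maps to approximately (0, 0),
    # i.e. the default (azimuth, elevation) orientation mentioned earlier.
    print(erp_sample_to_sphere(2047, 1023, 4096, 2048))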
• 360-degree content can be mapped onto different types of solid geometrical structures, such as polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), cylinder (directly without projecting onto a sphere first), cone, etc. and then unwrapped to a two-dimensional image plane.
  • the two-dimensional image plane can also be regarded as a geometrical structure.
  • an omnidirectional projection format may be defined as a format to represent (up to) 360-degree content on a two-dimensional image plane.
  • Examples of omnidirectional projection formats include the equirectangular projection format and the cubemap projection format.
• a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
• Human eyes are not capable of viewing the whole 360-degree space, but are limited to maximum horizontal and vertical fields-of-view (HHFoV, HVFoV).
• an HMD device has technical limitations that allow viewing only a subset of the whole 360-degree space in horizontal and vertical directions (DHFoV, DVFoV).
  • Typical flat-panel viewing environments display up to 40-degree field-of-view.
  • a viewport may be defined as a region of omnidirectional image or video suitable for display and viewing by the user.
  • a current viewport (which may be sometimes referred simply as a viewport) may be defined as the part of the spherical video that is currently displayed and hence is viewable by the user(s). In many viewing modes, the user consuming the video/image may choose the current viewport freely. For example, when viewing happens with a head-mounted display, the orientation of the head determines the viewing orientation and hence the viewport.
  • a video rendered by an application on a HMD renders a portion of the 360-degrees video, which is referred to as a viewport.
  • the spatial part that is currently displayed is a viewport.
  • a viewport is a window on the 360-degrees world represented in the omnidirectional video displayed via a rendering display.
• a viewport may be characterized by a horizontal field-of-view (VHFoV) and a vertical field-of-view (VVFoV).
  • omnidirectional video or image content may refer to content that has greater spatial extent than the field-of-view of the device rendering the content.
  • Omnidirectional content may cover substantially 360 degrees in horizontal dimension and substantially 180 degrees in vertical dimension, but“omnidirectional” may also refer to content covering less than the entire 360 degree view in horizontal direction and/or the 180 degree view in vertical direction.
• a sphere region may be defined as a region on a sphere that may be specified by four great circles or by two azimuth circles and two elevation circles and additionally by a tilt angle indicating rotation along the axis originating from the sphere origin passing through the centre point of the sphere region.
  • a great circle may be defined as an intersection of the sphere and a plane that passes through the centre point of the sphere.
  • OMAF specifies a generic timed metadata syntax for sphere regions.
  • a purpose for the timed metadata track is indicated by the track sample entry type.
• the sample format of all metadata tracks for sphere regions specified in OMAF starts with a common part and may be followed by an extension part that is specific to the sample entry of the metadata track. Each sample specifies a sphere region.
• A recommended viewport timed metadata track indicates the viewport that should be displayed when the user does not have control of the viewing orientation or has released control of the viewing orientation. This provides a method for users to consume omnidirectional content without head rotation while wearing a head-mounted display (HMD).
  • the recommended viewport timed metadata track may be used for indicating a recommended viewport based on a director's cut or based on measurements of viewing statistics.
  • the recommended viewport timed metadata track may also facilitate omnidirectional content consumption over limited field of view (FOV) displays or conventional 2D displays without the need for proactive viewport changes with gestures or interactions.
  • a textual description of the recommended viewport may be provided in the sample entry.
  • the type of the recommended viewport may be indicated in the sample entry and may be among the following:
  • a recommended viewport per the director's cut i.e., a viewport suggested according to the creative intent of the content author or content provider.
  • a recommended viewport selected based on measurements of viewing statistics.
• class RcvpInfoBox extends FullBox('rvif', 0, 0) {
• viewport_type specifies the type of the recommended viewport as listed in the table below:
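• In OMAF, the value 0 commonly indicates a recommended viewport per the director's cut and the value 1 a recommended viewport selected based on measurements of viewing statistics, with the remaining values reserved or unspecified; this assignment follows the order of the two recommended viewport types listed above.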
  • the current OMAF v2 specification specifies a recommended viewport but does not take into account the need for consistent quality. Furthermore, there is no signalling in MPEG DASH to enable selection of media representations which have recommended viewport regions in high quality.
  • the currently defined MPEG DASH signalling for recommended viewport is agnostic to quality of the media representations. This can lead to suboptimal experience if the player depends on the currently provided signaling.
  • a content author may want to encode and/or make available one or more specific versions of the content that are tailor-made for covering a recommended viewport at a high quality while the remaining areas may have a lower quality.
• Such specific versions of the content are suitable for viewing the content on a 2D display when the user is expected to control the viewport manually only occasionally (i.e., when the player typically displays the recommended viewport, but lets the user have manual control of the viewport too).
• Embodiments enable content authors to indicate such specific versions of the content and players to conclude which tracks or representations are such specific versions of the content.
  • a method comprises, with reference to the flow diagram of Fig. 10a:
  • a method comprises, with reference to the flow diagram of Fig. 10b:
  • the metadata may comprise but is not limited to one or more of the following:
• an element SupplementalProperty is defined with an attribute called @schemeIdUri, so that the value of the @schemeIdUri attribute indicates an 'rcqr' descriptor.
• The 'rcqr' descriptor can be referred to as a RcvpQualityRanking descriptor.
• The 'rcqr' descriptor indicates the identifier of the representation (representation id) which presents a recommended viewport at maximum quality ranking (QR). This scenario may use the descriptor to enable selection of a track which covers the recommended viewport at maximum quality ranking for maximum temporal duration.
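• As a sketch of how a player could use such a descriptor (Python; the keys rcqr_descriptors, representation_id and quality_ranking are hypothetical stand-ins for the MPD syntax, and a lower quality ranking value is assumed to mean higher quality, as with region-wise quality ranking):

    def select_recommended_viewport_representation(adaptation_set):
        # Pick the Representation advertised by an 'rcqr'-style descriptor;
        # fall back to the best (lowest) quality ranking otherwise.
        reps = adaptation_set["representations"]
        for desc in adaptation_set.get("rcqr_descriptors", []):
            rep_id = desc.get("representation_id")
            if rep_id in reps:
                return rep_id
        return min(reps, key=lambda r: reps[r]["quality_ranking"])

    adaptation_set = {
        "representations": {
            "rep_a": {"quality_ranking": 2},
            "rep_b": {"quality_ranking": 1},
        },
        "rcqr_descriptors": [{"representation_id": "rep_b"}],
    }
    print(select_recommended_viewport_representation(adaptation_set))  # rep_b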
• an attribute called quality representation sorting is added for the 'rcqr' descriptor of the SupplementalProperty element.
  • the associated media representation based on the coverage quality in the recommended viewport can either be in descending or ascending order.
  • the above described attribute quality representation sorting for the SupplementalProperty is defined to list the associated media representation based on the coverage quality in the recommended viewport, in descending or ascending order.
  • the sorting order may be defined, for example, by a standard.
• an EssentialProperty or SupplementalProperty element may be defined which comprises one or more quality information elements, which can be named rcqrQualityInfo.
  • Each element corresponds to the representation of the media track associated with the recommended viewport track.
• the element rcqrQualityInfo contains information such as the maximum quality ranking covered or the minimum quality ranking.
• When the quality information element rcqrQualityInfo is utilized, the rcqrQualityInfo element has an attribute called quality ranking which mandates the referred representation to have uniform quality for the recommended viewport sphere regions when consumed on a 2D display.
  • the enhanced recommended viewport media track selection information can be carried in a media presentation description (MPD) of DASH with a SupplementalProperty element and/or EssentialProperty element comprising a descriptor which has an association or relationship with the recommended viewport track adaptation set.
  • the recommended viewport quality ranking descriptor indicates the high quality Representation Sets covering the recommended viewport track.
• An EssentialProperty element or a SupplementalProperty element may be used with the @schemeIdUri attribute comprising the RcvpQualityRanking descriptor equal to "urn:mpeg:mpegI:omaf:2018:rcqr".
  • the @value attribute of the RcvpQualityRanking descriptor is not present.
  • the RcvpQualityRanking descriptor may include elements and attributes as specified in the table below.
  • the RcvpQualityRanking descriptor can have the
  • a property is used which lists all representation ids which cover the associated recommended viewport track at high quality uniformly.
  • RcvpQualityRanking.rcqrQualityInfo@consistent_quality_ranking to indicate such a consistent quality. This may be advantageous for consuming immersive media over conventional displays.
  • Such a descriptor can be of use while creating new content and to ensure that it is optimal for recommended viewport track based viewing for 2D display. Presence of the descriptor with the recommended viewport descriptor indicates that the media representation is at consistently high quality for recommended viewport sphere regions.
  • the above described embodiments may enable selection of high quality representation from multiple representations that show the recommended viewport track and may provide easy to use indications for the client. Having high quality consistent experience over a conventional display may improve the immersion of the user when watching the content.
  • One RcvpConsistentQuality descriptor may be present for every adaptation set corresponding to the recommended viewport timed metadata track in the MPD.
  • the recommended viewport timed metadata track can be enhanced to also contain the per fragment coverage values. This will assist in requesting the appropriate video tracks which best match the player preferences.
  • the recommended viewport coverage struct per fragment can be included in the fragment header. The structure is presented below:
• the RcvpExtentCoverageInformationBox is signaled per movie fragment in order to enable the player to request the best matching visual tracks for rendering per fragment.
  • the metadata expressing the association of the track or representation with a particular recommended viewport timed metadata track or representation may comprise but is not limited to one or more of the following: A track reference of a particular type (e.g. 'rvpv' - recommended viewport version) from the recommended viewport timed metadata track to the video track(s) (wherein the recommended viewport has higher quality than remaining areas);
  • a track reference of a particular type e.g. 'rvpv' - recommended viewport version
  • a track reference of a particular type (e.g. 'rvpv') from the video track (wherein the recommended viewport has higher quality than remaining areas) to the recommended viewport timed metadata track;
  • @associationId from a Representation containing the recommended viewport timed metadata track to the Representation(s) containing video track(s) (wherein the recommended viewport has higher quality than remaining areas), and @associationType of a particular type (e.g. 'rvpv');
  • Fig. 6a illustrates an example of an omnidirectional video/image from an event, for example a dance party.
  • the video/image frame 61 is represented by the equirectangular projection, although problems/solutions described herein apply more generally to all the projection formats used for representing omnidirectional videos/images.
  • the omnidirectional video/image may have a region called a director’s viewport also known as a recommended viewport 63, which represents a spatial area in the video/image frame which can, for example, represent one of the following.
  • the director’s viewport may be prescribed by the content provider/author. It may represent the region which was viewed by the user's friend or the region which was selected based on measurements of viewing statistics by a crowd.
  • the director’s viewport is not limited to these examples but may represent some other visual information.
  • the term current viewport refers to the region that is currently being displayed to the user.
  • An example of this kind of region is illustrated in Fig. 6a as the dotted area 62.
  • This dotted area represents the current viewport 65 shown in Fig. 6b.
  • the user consuming the video/image may choose the current viewport freely.
  • the orientation of the head determines the viewing orientation and hence the viewport.
  • the user views a spatial region/area, which may be the same as or may differ from the director's viewport.
  • Fig. 9a shows some elements of a video encoding section 510, in accordance with an embodiment.
  • the video encoding section 510 may be a part of the omnidirectional streaming system 600 or separate from it.
  • a signaling constructor 512 may comprise an input to obtain omnidirectional video/image 511, and a second input to obtain quality rank definitions 513.
  • the signaling constructor 512 forms different kinds of signals and provides them to an encoding element 513.
  • the encoding element 513 may encode the signals as well as the omnidirectional video/image and the signaling information for storing and/or transmission. However, there may be separate encoding elements for signal encoding and visual information encoding.
  • Encoding may refer to compression of video or image data, but it may also comprise generating, encapsulating, or packetizing signalling information associated with the compressed video or image data, for example in a manifest and/or a container file.
  • Fig. 9b shows a video decoding section 520, in accordance with an embodiment.
  • the video encoding section 510 may be a part of the omnidirectional streaming system 600 or separate from it.
  • the video decoding section 520 may obtain signaling data via a first input 521 and encoded visual information (omnidirectional video/image) via a second input 522.
  • the signaling data and the encoded visual information may be decoded by a decoding element 523.
• Decoded signaling data may be used by a rendering element 524 to control image reconstruction from decoded visual information.
• the rendering element 524 may also receive viewport data 525 to determine the location of a current viewport within the image area of the omnidirectional video/image.
  • bitstream may, for example, be a video or image bitstream (such as an HEVC bitstream), wherein the indicating may utilize, for example, supplemental enhancement information (SEI) messages.
• the container file may, for example, comply with the ISO base media file format, the Matroska file format, or the Material Exchange Format (MXF).
  • the manifest may, for example, conform to the Media Presentation Description (MPD) of MPEG-DASH (ISO/IEC 23009-1), the M3U format, or the Composition Playlist (CPL) of the Interoperable Master Format (IMF).
  • Embodiments may be similarly realized with any other similar container or media description formats, such as the Session Description Protocol (SDP).
• Embodiments may be realized with a suite of bitstream format(s), container file format(s) and manifest format(s), in which the indications may be provided.
  • MPEG OMAF is an example of such a suite of formats.
  • the metadata may reside e.g. in one or more of the following container structures or mechanisms:
  • Track header such as a box contained directly or indirectly within TrackHeaderBox
  • Sample entry such as a particular box within the sample entry
  • TrackGroupBox may be extended to carry the metadata
  • the EntityToGroupBox may be extended to carry the metadata
• While embodiments are described with reference to singular forms of nouns (e.g. an encoded bitstream, a viewport, a spatial region, and so on), the embodiments generally apply to plural forms of the nouns.
• the above described embodiments may help in enhancing the viewing experience of the user. Furthermore, they may help the content author in guiding the viewer to the author's intended viewing conditions in the omnidirectional video/image.
  • indications, conditions, and/or parameters described in different embodiments may be represented with syntax elements in syntax structure(s), such as SEI messages, in a video bitstream, and/or in static or dynamic syntax structures in a container file, and/or in a manifest.
  • An example of a static syntax structure in ISOBMFF is a box in a sample entry of a track.
  • Another example of a static syntax structure in ISOBMFF is an item property for an image item. Examples of dynamic syntax structures in ISOBMFF were described earlier with reference to timed metadata.
  • a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment.
  • a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
  • the first approach is viewport-specific encoding and streaming, a.k.a. viewport-dependent encoding and streaming, a.k.a. asymmetric projection.
  • 360-degree image content is packed into the same frame with an emphasis (e.g. greater spatial area) on the primary viewport.
  • the packed VR frames are encoded into a single bitstream.
  • the front face of a cube map may be sampled with a higher resolution compared to other cube faces and the cube faces may be mapped to the same packed VR frame, where the front cube face is sampled with twice the resolution compared to the other cube faces.
  • the second approach is tile-based encoding and streaming.
  • 360-degree content is encoded and made available in a manner that enables selective streaming of viewports from different encodings.
• tile-based encoding and streaming may be used with any video codec, even if tiles similar to those in HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder.
  • the source content may be split into tile rectangle sequences (a.k.a. sub-picture sequences) before encoding.
  • Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format.
  • Each tile rectangle sequence may then be encoded independently from each other as a single-layer bitstream, such as HEVC Main profile bitstream.
  • Several bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates.
  • Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming.
  • the tracks to be streamed may be selected based on the viewing orientation.
• the client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports.
  • each track may be decoded with a separate decoder instance.
• each cube face may be separately encoded and encapsulated in its own track (and Representation). More than one encoded bitstream for each cube face may be provided, e.g. each with different spatial resolution. Players can choose tracks (or Representations) to be decoded and played based on the current viewing orientation. High-resolution tracks (or Representations) may be selected for the cube faces used for rendering for the present viewing orientation, while the remaining cube faces may be obtained from their low-resolution tracks (or Representations).
  • encoding is performed in a manner that the resulting bitstream comprises motion-constrained tile sets. Several bitstreams of the same source content are encoded using motion-constrained tile sets.
  • one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile set track (e.g. an HEVC tile track or a full-picture-compliant tile set track) or a sub-picture track in a file.
• Additionally, a tile base track (e.g. an HEVC tile base track or a full-picture track comprising extractors to extract data from the tile set tracks) may be generated and stored in a file.
  • the tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile set tracks or by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the tile set tracks.
  • Tile set tracks and the tile base track of each bitstream may be encapsulated in an own file, and the same track identifiers may be used in all files.
  • the tile set tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive tile set tracks covering the entire omnidirectional content. Better quality or higher resolution tile set tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports.
• equirectangular panorama content is encoded using motion-constrained tile sets. More than one encoded bitstream may be provided, e.g. with different spatial resolution and/or picture quality. Each motion-constrained tile set is made available in its own track (and Representation).
  • Players can choose tracks (or Representations) to be decoded and played based on the current viewing orientation.
  • High-resolution or high-quality tracks (or Representations) may be selected for tile sets covering the present primary viewport, while the remaining area of the 360-degree content may be obtained from low-resolution or low-quality tracks (or Representations).
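• A minimal sketch of such a viewport-based selection on an equirectangular tile grid (Python; the single-row tile grid and the viewport-to-tile intersection test are simplified assumptions for illustration):

    import math

    def tiles_for_viewport(yaw_deg, hfov_deg, tile_columns):
        # Column indices of the tiles intersecting the current viewport.
        # The 360-degree azimuth range is split into equally wide columns;
        # elevation is ignored for brevity. yaw_deg is the viewport centre.
        tile_width = 360.0 / tile_columns
        first = math.floor((yaw_deg - hfov_deg / 2.0) / tile_width)
        last = math.floor((yaw_deg + hfov_deg / 2.0) / tile_width)
        return sorted({i % tile_columns for i in range(first, last + 1)})

    # Example: a 100-degree viewport centred at yaw 0 on an 8-column grid picks
    # the columns around the front; the remaining columns would be fetched from
    # the low-resolution or low-quality tracks.
    print(tiles_for_viewport(0.0, 100.0, 8))  # [0, 1, 6, 7]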
  • each received tile set track is decoded with a separate decoder or decoder instance.
  • a tile base track is utilized in decoding as follows. If all the received tile tracks originate from bitstreams of the same resolution (or more generally if the tile base tracks of the bitstreams are identical or equivalent, or if the initialization segments or other initialization data, such as parameter sets, of all the bitstreams is the same), a tile base track may be received and used to construct a bitstream. The constructed bitstream may be decoded with a single decoder.
  • a first set of tile rectangle tracks and/or tile set tracks may be merged into a first full-picture-compliant bitstream, and a second set of tile rectangle tracks and/or tile set tracks may be merged into a second full-picture-compliant bitstream.
• the first full-picture-compliant bitstream may be decoded with a first decoder or decoder instance
• the second full-picture-compliant bitstream may be decoded with a second decoder or decoder instance.
• this approach is not limited to two sets of tile rectangle tracks and/or tile set tracks, two full-picture-compliant bitstreams, or two decoders or decoder instances, but applies to any number of them.
  • the client can control the number of parallel decoders or decoder instances.
• clients that are not capable of decoding tile tracks (e.g. HEVC tile tracks) but are capable of decoding full-picture-compliant bitstreams can perform the merging in a manner that full-picture-compliant bitstreams are obtained.
  • the merging may be solely performed in the client or full-picture-compliant tile set tracks may be generated to assist in the merging performed by the client.
  • a motion-constrained coded sub-picture sequence may be defined as a collective term of such a coded sub-picture sequence in which the coded pictures are motion-constrained pictures, as defined earlier, and an MCTS sequence.
  • motion- constrained coded sub-picture sequence it may be interpreted to mean either one or both of a coded sub-picture sequence in which the coded pictures are motion-constrained pictures, as defined earlier, and/or an MCTS sequence.
  • a collector track may be defined as a track that extracts implicitly or explicitly MCTSs or sub-pictures from other tracks.
  • a collector track may be a full-picture-compliant track.
• a collector track may for example extract MCTSs or sub-pictures to form a coded picture sequence where MCTSs or sub-pictures are arranged to a grid. For example, when a collector track extracts two MCTSs or sub-pictures, they may be arranged into a 2x1 grid of MCTSs or sub-pictures.
  • a tile base track may be regarded as a collector track, and an extractor track that extracts MCTSs or sub-pictures from other tracks may be regarded as a collector track.
  • a collector track may also be referred to as a collection track.
  • a track that is a source for extracting to a collector track may be referred to as a collection item track.
• a creation of a collector track may be regarded as tile merging in the coded domain that is performed by the file creator.
  • Resolving a collector track into a full-picture-compliant bitstream may be regarded as tile merging, which is assisted by the collector track.
  • tile-based encoding and streaming may be realized by splitting a source picture in sub-picture sequences that are partly overlapping.
  • bitstreams with motion-constrained tile sets may be generated from the same source content with different tile grids or tile set grids.
• the 360-degree space may be divided into a discrete set of viewports, each separated by a given distance (e.g., expressed in degrees), so that the omnidirectional space can be imagined as a map of overlapping viewports, and the primary viewport is switched discretely as the user changes his/her orientation while watching content with a head-mounted display.
  • the viewports could be imagined as adjacent non-overlapping tiles within the 360 degrees space.
• the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution.
• When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display, another version of the content needs to be streamed, matching the new viewing orientation.
• the new version can be requested starting from a stream access point (SAP); SAPs are typically aligned with (sub)segments.
  • An extractor is a NAL-unit-like structure.
  • a NAL-unit-like structure may be specified to comprise a NAL unit header and NAL unit payload like any NAL units, but start code emulation prevention (that is required for a NAL unit) might not be followed in a NAL-unit-like structure.
  • an extractor contains one or more constructors.
  • a sample constructor extracts, by reference, NAL unit data from a sample of another track.
  • An in-line constructor includes NAL unit data.
  • Nested extraction may be disallowed, e.g. the bytes referred to by a sample constructor shall not contain extractors; an extractor shall not reference, directly or indirectly, another extractor.
• An extractor may contain one or more constructors for extracting data from the current track or from another track that is linked to the track in which the extractor resides by means of a track reference of type 'scal'.
  • the bytes of a resolved extractor may represent one or more entire NAL units. A resolved extractor starts with a valid length field and a NAL unit header.
• the bytes of a sample constructor are copied only from the single identified sample in the track referenced through the indicated 'scal' track reference.
  • the alignment is on decoding time, i.e. using the time-to-sample table only, followed by a counted offset in sample number.
  • Extractors are a media-level concept and hence apply to the destination track before any edit list is considered. (However, one would normally expect that the edit lists in the two tracks would be identical).
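• A simplified sketch of resolving the constructors of an extractor (Python; the constructor representation is hypothetical, and length fields, sample offsets and edit lists are ignored for brevity):

    def resolve_extractor(constructors, referenced_sample):
        # constructors: list of dicts, each either
        #   {"type": "inline", "data": bytes}                 (in-line constructor)
        #   {"type": "sample", "offset": int, "length": int}  (sample constructor)
        # referenced_sample: bytes of the time-aligned sample in the track
        #   referenced through the 'scal' track reference.
        out = bytearray()
        for c in constructors:
            if c["type"] == "inline":
                out += c["data"]
            else:
                out += referenced_sample[c["offset"]:c["offset"] + c["length"]]
        return bytes(out)

    # Example: an in-line (rewritten) header followed by payload bytes copied
    # by reference from the referenced sub-picture or tile track sample.
    sample = bytes(range(32))
    print(resolve_extractor(
        [{"type": "inline", "data": b"\x00\x01"},
         {"type": "sample", "offset": 4, "length": 8}],
        sample))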
  • viewport-dependent streaming which may be also referred to as viewport-adaptive streaming (VAS) or viewport-specific streaming
• In viewport-dependent streaming, a subset of the 360-degree video content covering the viewport (i.e., the current view orientation) is transmitted at the best quality/resolution, while the rest is transmitted at a lower quality/resolution.
• Several versions of the content are encoded at different bitrates or qualities using the same MCTS partitioning. Each MCTS sequence is made available for streaming as a DASH Representation or alike. The player selects, on an MCTS basis, which bitrate or quality is received.
  • H.264/AVC does not include the concept of tiles, but the operation like MCTSs can be achieved by arranging regions vertically as slices and restricting the encoding similarly to encoding of MCTSs.
• the terms tile and MCTS are used in this document but should be understood to apply to H.264/AVC too in a limited manner. In general, the terms tile and MCTS should be understood to apply to similar concepts in any coding format or specification.
• Examples of tile-based viewport-dependent streaming schemes are the following:
• Region-wise mixed quality (RWMQ):
  • One or more bitrate and/or resolution versions of a complete low- resolution/low-quality omnidirectional video are encoded and made available for streaming.
  • MCTS-based encoding is performed and MCTS sequences are made available for streaming.
  • Players receive a complete low-resolution/low-quality omnidirectional video and select and receive the high-resolution MCTSs covering the viewport.
  • MCTSs are encoded at multiple
• tile-based viewport-dependent streaming methods may be subdivided into other categories than the ones described above.
• the above-described subdivision may not be exhaustive, i.e. there may be tile-based viewport-dependent streaming methods that do not belong to any of the described categories.
• All above-described viewport-dependent streaming approaches may be realized with client-driven bitstream rewriting (a.k.a. late binding) or with author-driven MCTS merging (a.k.a. early binding).
• In late binding, a player selects MCTS sequences to be received, selectively rewrites portions of the received video data as necessary (e.g. parameter sets and slice segment headers may need to be rewritten) for combining the received MCTSs into a single bitstream, and decodes the single bitstream.
  • Early binding refers to the use of author-driven information for rewriting portions of the received video data as necessary, for merging of MCTSs into a single bitstream to be decoded, and in some cases for selection of MCTS sequences to be received.
  • Early binding approaches include an extractor-driven approach and tile track approach, which are described subsequently.
  • one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile track (e.g. an HEVC tile track) in a file.
  • a tile base track (e.g. an HEVC tile base track) may be generated and stored in a file.
  • the tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile tracks.
  • the tile tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive tile tracks covering the entire omnidirectional content. Better quality or higher resolution tile tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video.
  • a tile base track may include track references to the tile tracks, and/or tile tracks may include track references to the tile base track.
• the 'sabt' track reference is used to refer to tile tracks from a tile base track, and the tile ordering is indicated by the order of the tile tracks contained by a 'sabt' track reference.
  • a tile track has a 'tbas' track reference to the tile base track.
  • one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is modified to become a compliant bitstream of its own (e.g. HEVC bitstream) and stored as a sub-picture track (e.g. with untransformed sample entry type 'hvc1' for HEVC) in a file.
  • a compliant bitstream of its own e.g. HEVC bitstream
  • a sub-picture track, e.g. with untransformed sample entry type 'hvc1' for HEVC
  • One or more extractor tracks, e.g. HEVC extractor tracks
  • the extractor track represents the bitstream by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the sub-picture tracks.
  • the sub-picture tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive sub-picture tracks covering the entire omnidirectional content. Better quality or higher resolution sub-picture tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video.
  • While the tile track approach and the extractor-driven approach are described in detail, specifically in the context of HEVC, they apply to other codecs and to concepts similar to tile tracks or extractors.
  • a combination or a mixture of tile track and extractor-driven approach is possible.
  • such a mixture could be based on the tile track approach, but where a tile base track could contain guidance for rewriting operations for the client, e.g. the tile base track could include rewritten slice or tile group headers.
  • content authoring for tile-based viewport-dependent streaming may be realized with sub-picture-based content authoring, described as follows.
  • the pre-processing (prior to encoding) comprises partitioning uncompressed pictures to sub pictures.
  • Several sub-picture bitstreams of the same uncompressed sub-picture sequence are encoded, e.g. at the same resolution but different qualities and bitrates.
  • the encoding may be constrained in a manner that merging of coded sub-picture bitstreams into a compliant bitstream representing omnidirectional video is enabled.
  • each sub-picture bitstream may be encapsulated as a sub-picture track, and one or more extractor tracks merging the sub-picture tracks of different sub-picture locations may be additionally formed. If a tile track based approach is targeted, each sub-picture bitstream is modified to become an MCTS sequence and stored as a tile track in a file, and one or more tile base tracks are created for the tile tracks.
  • Tile-based viewport-dependent streaming approaches may be realized by executing a single decoder instance or one decoder instance per MCTS sequence (or in some cases, something in between, e.g. one decoder instance per MCTSs of the same resolution), e.g. depending on the capability of the device and operating system where the player runs.
  • the use of single decoder instance may be enabled by late binding or early binding.
  • the extractor-driven approach may use sub-picture tracks that are compliant with the coding format or standard without modifications.
  • Other approaches may need either to rewrite image segment headers, parameter sets, and/or alike information in the client side to construct a conforming bitstream or to have a decoder implementation capable of decoding an MCTS sequence without the presence of other coded video data.
  • tile group identifiers from a tile base track or an extractor track, wherein the tile group identified by a tile group identifier contains the collocated tile tracks or the sub-picture tracks that are alternatives for extraction.
  • one extractor track per each picture size and each tile grid is sufficient.
  • one extractor track may be needed for each distinct viewing orientation.
  • Tile rectangle based encoding and streaming: An approach similar to the above-described tile-based viewport-dependent streaming approaches, which may be referred to as tile rectangle based encoding and streaming, is described next. This approach may be used with any video codec, even if tiles similar to those of HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder.
  • the source content is split into tile rectangle sequences before encoding.
  • Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format.
  • Each tile rectangle sequence is then encoded independently from each other as a single-layer bitstream.
  • bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates.
  • Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming.
  • the tracks to be streamed may be selected based on the viewing orientation.
  • the client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports.
  • each track may be decoded with a separate decoder instance.
  • the primary viewport, i.e., the current viewing orientation
  • the remainder of the 360-degree video is transmitted at a lower quality/resolution.
  • the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display
  • another version of the content needs to be streamed, matching the new viewing orientation.
  • the new version can be requested starting from a stream access point (SAP); SAPs are typically aligned with (Sub)segments.
  • SAPs correspond to random-access pictures, are intra-coded, and are hence costly in terms of rate-distortion performance.
  • the delay (here referred to as the viewport quality update delay) in upgrading the quality after a viewing orientation change (e.g. a head turn) is conventionally in the order of seconds and is therefore clearly noticeable and annoying.
  • viewport switching in viewport-dependent streaming, which may be compliant with MPEG OMAF, is enabled at stream access points, which involve intra coding and hence a greater bitrate compared to respective inter-coded pictures at the same quality.
  • a compromise between the stream access point interval and the rate-distortion performance is hence chosen in an encoding configuration.
  • HEVC bitstreams of the same omnidirectional source content may be encoded at the same resolution but different qualities and bitrates using motion- constrained tile sets.
  • the MCTS grid in all bitstreams is identical.
  • each bitstream is encapsulated in its own file, and the same track identifier is used for each tile track of the same tile grid position in all these files.
  • HEVC tile tracks are formed from each motion-constrained tile set sequence, and a tile base track is additionally formed.
  • the client may parse the tile base track to implicitly reconstruct a bitstream from the tile tracks.
  • the reconstructed bitstream can be decoded with a conforming HEVC decoder.
  • Clients can choose which version of each MCTS is received.
  • the same tile base track suffices for combining MCTSs from different bitstreams, since the same track identifiers are used in the respective tile tracks.
  • Fig. 5 presents an example of how extractor tracks can be used for tile-based omnidirectional video streaming.
  • a 4x2 tile grid has been used in forming of the motion-constrained tile sets 81a, 81b. In many viewing orientations 2x2 tiles out of the 4x2 tile grid are needed to cover a typical field of view of a head-mounted display.
  • the presented extractor track for high-resolution motion-constrained tile sets 1, 2, 5 and 6 covers certain viewing orientations, while the extractor track for low-resolution motion-constrained tile sets 3, 4, 7, and 8 includes a region assumed to be non- visible for these viewing orientations.
  • Two HEVC decoders are used in this example, one for the high- resolution extractor track and another for the low-resolution extractor track.
  • FIG. 11 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented.
  • a data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats.
  • An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal.
  • the encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software.
  • the encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal.
  • the encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa.
  • the coded media bitstream may be transferred to a storage 1530.
  • the storage 1530 may comprise any type of mass memory to store the coded media bitstream.
  • the format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments. If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file.
  • the encoder 1520 or the storage 1530 may comprise the file generator, or the file generator is operationally attached to either the encoder 1520 or the storage 1530.
  • Some systems operate "live", i.e. omit storage and transfer coded media bitstream from the encoder 1520 directly to the sender 1540.
  • the coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis.
  • the format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file.
  • the encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices.
  • the encoder 1520 and server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate.
  • the server 1540 sends the coded media bitstream using a communication protocol stack.
  • the stack may include but is not limited to one or more of Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP).
  • HTTP Hypertext Transfer Protocol
  • TCP Transmission Control Protocol
  • IP Internet Protocol
  • the server 1540 encapsulates the coded media bitstream into packets. It should be again noted that a system may contain more than one server 1540, but for the sake of simplicity, the following description only considers one server 1540.
  • the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure).
  • a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol.
  • the sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads.
  • the multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of at least one of the contained media bitstreams on the communication protocol.
  • the server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks.
  • the gateway may also or alternatively be referred to as a middle-box.
  • the gateway may be an edge server (of a CDN) or a web proxy. It is noted that the system may generally comprise any number of gateways or alike, but for the sake of simplicity, the following description only considers one gateway 1550.
  • the gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data stream according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions.
  • the system includes one or more receivers 1560, typically capable of receiving, de-modulating, and de-capsulating the transmitted signal into a coded media bitstream.
  • the coded media bitstream may be transferred to a recording storage 1570.
  • the recording storage 1570 may comprise any type of mass memory to store the coded media bitstream.
  • the recording storage 1570 may alternatively or additively comprise computation memory, such as random access memory.
  • the format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file.
  • a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams.
  • Some systems operate "live," i.e. omit the recording storage 1570 and transfer coded media bitstream from the receiver 1560 directly to the decoder 1580.
  • the most recent part of the recorded stream, e.g. the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570.
  • the coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file.
  • the recording storage 1570 or a decoder 1580 may comprise the file parser, or the file parser is attached to either recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality.
  • the coded media bitstream may be processed further by a decoder 1580, whose output is one or more uncompressed media streams.
  • a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example.
  • the receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices.
  • a sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations e.g. for view switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or prevailing conditions, such as throughput, of the network over which the bitstream is conveyed.
  • a request from the receiver can be, e.g., a request for a Segment or a Subsegment.
  • a request for a Segment may be an HTTP GET request.
  • a request for a Subsegment may be an HTTP GET request with a byte range.
  • bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions.
  • Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders.
  • a decoder 1580 may be configured to perform switching between different representations e.g. for view switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the scalable video bitstream.
  • faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than conventional real-time playback rate.
  • the speed of decoder operation may be changed during the decoding or playback, for example as a response to changing from normal playback rate to fast-forward play or vice versa, and consequently multiple layer up-switching and layer down-switching operations may take place in various orders.
  • embodiments similarly apply to equirectangular pictures where the vertical coverage is less than 180 degrees.
  • the covered elevation range may be from -75° to 75°, or from -60° to 90° (i.e., covering one pole but not both poles).
  • embodiments similarly cover horizontally segmented equirectangular projection format, where a horizontal segment covers an azimuth range of 360 degrees and may have a resolution potentially differing from the resolution of other horizontal segments.
  • embodiments similarly apply to omnidirectional picture formats, where a first sphere region of the content is represented by the equirectangular projection of limited elevation range and a second sphere region of the content is represented by another projection, such as cube map projection.
  • the elevation range -45° to 45° may be represented by a "middle" region of equirectangular projection and the other sphere regions may be represented by a rectilinear projection, similar to cube faces of a cube map but where the corners overlapping with the middle region on the spherical domain are cut out.
  • embodiments can be applied to the middle region represented by the equirectangular projection.
  • the phrase along the bitstream may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the bitstream.
  • the phrase decoding along the bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream.
  • the phrase along the track may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the track.
  • the phrase "a description along the track” may be understood to mean that the description is not stored in the file or segments that carry the track, but within another resource, such as a media presentation description.
  • the description of the motion-constrained coded sub-picture sequence may be included in a media presentation description that includes information of a Representation conveying the track.
  • decoding along the track or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the track.
  • embodiments have been described in relation to DASH or MPEG- DASH. It needs to be understood that embodiments could be similarly realized with any other similar streaming system, and/or any similar protocols as those used in DASH, and/or any similar segment and/or manifest formats as those used in DASH, and/or any similar client operation as that of a DASH client. For example, some embodiments could be realized with the M3U manifest format.
  • indications or metadata may additionally or alternatively be encoded or included along the bitstream and/or decoded along the bitstream.
  • indications or metadata may be included in or decoded from a container file that encapsulates the bitstream.
  • indications or metadata may additionally or alternatively be encoded or included in the video bitstream, for example as SEI message(s) or VUI, and/or decoded in the video bitstream, for example from SEI message(s) or VUI.
  • Fig. 12 shows a schematic block diagram of an exemplary apparatus or electronic device 50 depicted in Fig. 13, which may incorporate a transmitter according to an embodiment of the invention.
  • the electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require transmission of radio frequency signals.
  • the apparatus 50 may comprise a housing 30 for incorporating and protecting the device.
  • the apparatus 50 further may comprise a display 32 in the form of a liquid crystal display.
  • the display may be any suitable display technology suitable to display an image or video.
  • the apparatus 50 may further comprise a keypad 34.
  • any suitable data or user interface mechanism may be employed.
  • the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display.
  • the apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input.
  • the apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection.
  • the apparatus 50 may also comprise a battery 40 (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator).
  • the term battery discussed in connection with the embodiments may also be one of these mobile energy devices.
  • the apparatus 50 may comprise a combination of different kinds of energy devices, for example a rechargeable battery and a solar cell.
  • the apparatus may further comprise an infrared port 41 for short range line of sight communication to other devices.
  • the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/FireWire wired connection.
  • the apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50.
  • the controller 56 may be connected to memory 58 which in embodiments of the invention may store both data and/or may also store instructions for implementation on the controller 56.
  • the controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller 56.
  • the apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • a card reader 48 and a smart card 46 for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • a smart card 46 for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
  • UICC universal integrated circuit card
  • the apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network.
  • the apparatus 50 may further comprise an antenna 60 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
  • the apparatus 50 comprises a camera 42 capable of recording or detecting imaging.
  • the system 10 comprises multiple communication devices which can communicate through one or more networks.
  • the system 10 may comprise any combination of wired and/or wireless networks including, but not limited to a wireless cellular telephone network (such as a global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), long term evolution (LTE) based network, code division multiple access (CDMA) network etc.), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.
  • GSM global systems for mobile communications
  • UMTS universal mobile telecommunications system
  • LTE long term evolution
  • CDMA code division multiple access
  • the system shown in Fig. 14 comprises a mobile telephone network 11 and a representation of the internet 28.
  • Connectivity to the internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.
  • the example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22, a tablet computer.
  • PDA personal digital assistant
  • IMD integrated messaging device
  • the apparatus 50 may be stationary or mobile when carried by an individual who is moving.
  • the apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.
  • Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24.
  • the base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28.
  • the system may include additional communication devices and communication devices of various types.
  • the communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time divisional multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol- internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology.
  • CDMA code division multiple access
  • GSM global systems for mobile communications
  • UMTS universal mobile telecommunications system
  • TDMA time divisional multiple access
  • FDMA frequency division multiple access
  • TCP-IP transmission control protocol- internet protocol
  • SMS short messaging service
  • MMS multimedia messaging service
  • email instant messaging service
  • IMS instant messaging service
  • Bluetooth IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology.
  • LTE Long Term Evolution wireless communication technique
  • communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.
  • embodiments of the invention operating within a wireless communication device
  • the invention as described above may be implemented as a part of any apparatus comprising a circuitry in which radio frequency signals are transmitted and received.
  • embodiments of the invention may be implemented in a mobile phone, in a base station, in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. wireless local area network, cellular radio, etc.).
  • radio frequency communication means e.g. wireless local area network, cellular radio, etc.
  • Embodiments of the inventions may be practiced in various components such as integrated circuit modules.
  • the design of integrated circuits is by and large a highly automated process.
  • Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
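As an illustrative aid to the tile-based viewport-dependent streaming concepts listed above, the following minimal sketch shows how a player might pick a high-quality Representation for the MCTS/tile positions that overlap the current viewport and a low-quality Representation elsewhere. The sketch is a simplification under assumed parameters: the 4x2 tile grid, the azimuth ranges, the 90-degree viewport width, and the "high"/"low" labels are hypothetical and do not reproduce any signaling defined in this document.

    # Hypothetical sketch: per-tile quality selection for viewport-dependent streaming.
    # Assumes a 4x2 tile grid over an equirectangular picture (360 x 180 degrees).

    def tile_azimuth_range(col, num_cols=4):
        # Each tile column covers an equal slice of the 360-degree azimuth range.
        width = 360.0 / num_cols
        start = -180.0 + col * width
        return start, start + width

    def overlaps(range_a, range_b):
        # True if two azimuth intervals overlap (wrap-around ignored for brevity).
        return range_a[0] < range_b[1] and range_b[0] < range_a[1]

    def select_representations(viewport_center_azimuth, viewport_width=90.0,
                               num_cols=4, num_rows=2):
        # Return a per-tile choice of "high" or "low" quality Representation.
        half = viewport_width / 2.0
        viewport = (viewport_center_azimuth - half, viewport_center_azimuth + half)
        choices = {}
        for row in range(num_rows):
            for col in range(num_cols):
                quality = "high" if overlaps(tile_azimuth_range(col, num_cols), viewport) else "low"
                choices[(row, col)] = quality
        return choices

    if __name__ == "__main__":
        # Viewport centered at azimuth 0 degrees: the two middle tile columns get high quality.
        print(select_representations(0.0))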

Abstract

There are disclosed various methods, apparatuses and computer program products for video encoding and decoding. In some embodiments the method for video encoding comprises obtaining a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas; and including, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas. In some embodiments the method for video decoding comprises receiving, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas; and selecting, based on the metadata, to process the track or the representation.

Description

AN APPARATUS, A METHOD AND A COMPUTER PROGRAM FOR
OMNIDIRECTIONAL VIDEO
TECHNICAL FIELD
[0001] The present invention relates to an apparatus, a method and a computer program for omnidirectional video/image coding, decoding, file writing, file reading, and delivery of an omnidirectional scene.
BACKGROUND
[0002] This section is intended to provide a background or context to the invention that is recited in the claims. The description herein may include concepts that could be pursued, but are not necessarily ones that have been previously conceived or pursued. Therefore, unless otherwise indicated herein, what is described in this section is not prior art to the description and claims in this application and is not admitted to be prior art by inclusion in this section.
[0003] A video coding system may comprise an encoder that transforms an input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. The encoder may discard some information in the original video sequence in order to represent the video in a more compact form, for example, to enable the storage/transmission of the video information at a lower bitrate than otherwise might be needed.
[0004] Various technologies for providing three-dimensional (3D) video content are currently investigated and developed. Especially, intense studies have been focused on various multiview applications wherein a viewer is able to see only one pair of stereo video from a specific viewpoint and another pair of stereo video from a different viewpoint. One of the most feasible approaches for such multiview applications has turned out to be such wherein only a limited number of input views, e.g. a mono or a stereo video plus some supplementary data, is provided to a decoder side and all required views are then rendered (i.e. synthesized) locally by the decoder to be displayed on a display.
[0005] In the encoding of 3D video content, video compression systems, such as Advanced Video Coding standard (H.264/AVC), the Multiview Video Coding (MVC) extension of H.264/AVC or scalable extensions of HEVC (High Efficiency Video Coding) can be used.
SUMMARY
[0006] Some embodiments provide a method for encoding and decoding video information. In some embodiments of the present invention there is provided a method, apparatus and computer program product for video coding as well as decoding. [0007] In order to facilitate selection of the correct media representation associated with the recommended viewport, an additional DASH signaling mechanism is provided by some embodiments. Therefore, the recommended viewport content signaling is enhanced to make it better suited for consumption over conventional displays and to enable selection of high-quality media representations for the associated recommended viewport timed metadata track.
[0008] Various aspects of examples of the invention are provided in the detailed description.
[0009] According to a first aspect, there is provided a method comprising:
obtaining a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas;
including, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
[0010] An apparatus according to a second aspect comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
obtain a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas;
include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
[0011] A computer readable storage medium according to a third aspect comprises code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
obtain a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas;
include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
[0012] An apparatus according to a fourth aspect comprises:
a first circuitry configured to obtain a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas;
a second circuitry configured to include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
[0013] A method according to a fifth aspect comprises:
receiving, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas; selecting, based on the metadata, to process the track or the representation.
[0014] An apparatus according to a sixth aspect comprises at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas;
select, based on the metadata, to process the track or the representation.
[0015] A computer readable storage medium according to a seventh aspect comprises code for use by an apparatus, which when executed by a processor, causes the apparatus to perform:
receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas;
select, based on the metadata, to process the track or the representation.
[0016] An apparatus according to an eighth aspect comprises:
a first circuitry configured to receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas;
a second circuitry configured to select, based on the metadata, to process the track or the representation.
[0017] Further aspects include at least apparatuses and computer program products/code stored on a non-transitory memory medium arranged to carry out the above methods.
BRIEF DESCRIPTION OF THE DRAWINGS
[0018] For a more complete understanding of example embodiments of the present invention, reference is now made to the following descriptions taken in connection with the accompanying drawings in which:
[0019] Fig. 1a shows an example of a multi-camera system as a simplified block diagram, in accordance with an embodiment;
[0020] Fig. 1b shows a perspective view of a multi-camera system, in accordance with an embodiment;
[0021] Fig. 2a illustrates image stitching, projection, and mapping processes, in accordance with an embodiment;
[0022] Fig. 2b illustrates a process of forming a monoscopic equirectangular panorama picture, in accordance with an embodiment;
[0023] Fig. 3 shows an example of mapping a higher resolution sampled front face of a cube map on the same packed virtual reality frame as other cube faces, in accordance with an embodiment; [0024] Fig. 4a shows an example of image stitching, projection and region-wise packing;
[0025] Fig. 4b shows an example of a process of forming a monoscopic equirectangular panorama picture;
[0026] Fig. 5 shows an example of how extractor tracks can be used for tile-based omnidirectional video streaming, in accordance with an embodiment;
[0027] Fig. 6a illustrates an example of an omnidirectional video/image from an event, in accordance with an embodiment;
[0028] Fig. 6b illustrates a user’s viewport of Fig. 6a, in accordance with an embodiment;
[0029] Fig. 6c illustrates an example of an omnidirectional video/image represented by an equirectangular projection, in accordance with an embodiment;
[0030] Fig. 7a shows an example of a hierarchical data model used in Dynamic adaptive streaming over HTTP (DASH);
[0031] Fig. 7b shows an example of an omnidirectional streaming system;
[0032] Fig. 7c shows an example of content flow in a DASH delivery function of MPEG omnidirectional media format;
[0033] Fig. 8a shows a schematic diagram of an encoder suitable for implementing embodiments of the invention;
[0034] Fig. 8b shows a schematic diagram of a decoder suitable for implementing embodiments of the invention;
[0035] Fig. 9a shows some elements of a video encoding section, in accordance with an embodiment;
[0036] Fig. 9b shows a video decoding section, in accordance with an embodiment;
[0037] Fig. 10a shows a flow chart of an encoding method, in accordance with an embodiment;
[0038] Fig. 10b shows a flow chart of a decoding method, in accordance with an embodiment;
[0039] Fig. 11 shows a schematic diagram of an example multimedia communication system within which various embodiments may be implemented;
[0040] Fig. 12 shows schematically an electronic device employing embodiments of the invention;
[0041] Fig. 13 shows schematically a user equipment suitable for employing embodiments of the invention;
[0042] Fig. 14 further shows schematically electronic devices employing embodiments of the invention connected using wireless and wired network connections.
DETAILED DESCRIPTION OF SOME EXAMPLE EMBODIMENTS
[0043] In the following, several embodiments of the invention will be described in the context of one video coding arrangement. It is to be noted, however, that the invention is not limited to this particular arrangement. In fact, the different embodiments have applications widely in any
environment where improvement of coding when switching between coded fields and frames is desired. For example, the invention may be applicable to video coding systems like streaming systems, DVD players, digital television receivers, personal video recorders, systems and computer programs on personal computers, handheld computers and communication devices, as well as network elements such as transcoders and cloud computing arrangements where video data is handled.
[0044] In the following, several embodiments are described using the convention of referring to (de)coding, which indicates that the embodiments may apply to decoding and/or encoding.
[0045] The Advanced Video Coding standard (which may be abbreviated AVC or H.264/AVC) was developed by the Joint Video Team (JVT) of the Video Coding Experts Group (VCEG) of the Telecommunications Standardization Sector of International Telecommunication Union (ITU-T) and the Moving Picture Experts Group (MPEG) of International Organisation for Standardization (ISO) / International Electrotechnical Commission (IEC). The H.264/AVC standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.264 and ISO/IEC International Standard 14496-10, also known as MPEG-4 Part 10 Advanced Video Coding (AVC). There have been multiple versions of the H.264/AVC standard, each integrating new extensions or features to the specification. These extensions include Scalable Video Coding (SVC) and Multiview Video Coding (MVC).
[0046] The High Efficiency Video Coding standard (which may be abbreviated HEVC or H.265/HEVC) was developed by the Joint Collaborative Team - Video Coding (JCT-VC) of VCEG and MPEG. The standard is published by both parent standardization organizations, and it is referred to as ITU-T Recommendation H.265 and ISO/IEC International Standard 23008-2, also known as MPEG-H Part 2 High Efficiency Video Coding (HEVC). Extensions to H.265/HEVC include scalable, multiview, three-dimensional, and fidelity range extensions, which may be referred to as SHVC, MV- HEVC, 3D-HEVC, and REXT, respectively. The references in this description to H.265/HEVC, SHVC, MV-HEVC, 3D-HEVC and REXT that have been made for the purpose of understanding definitions, structures or concepts of these standard specifications are to be understood to be references to the latest versions of these standards that were available before the date of this application, unless otherwise indicated.
[0047] Some key definitions, bitstream and coding structures, and concepts of H.264/AVC and HEVC and some of their extensions are described in this section as an example of a video encoder, decoder, encoding method, decoding method, and a bitstream structure, wherein the embodiments may be implemented. Some of the key definitions, bitstream and coding structures, and concepts of H.264/AVC are the same as in HEVC standard - hence, they are described below jointly. The aspects of the invention are not limited to H.264/AVC or HEVC or their extensions, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0048] In the description of existing standards as well as in the description of example
embodiments, a syntax element may be defined as an element of data represented in the bitstream. A syntax structure may be defined as zero or more syntax elements present together in the bitstream in a specified order.
[0049] Similarly to many earlier video coding standards, the bitstream syntax and semantics as well as the decoding process for error-free bitstreams are specified in H.264/AVC and HEVC. The encoding process is not specified, but encoders must generate conforming bitstreams. Bitstream and decoder conformance can be verified with the Hypothetical Reference Decoder (HRD). The standards contain coding tools that help in coping with transmission errors and losses, but the use of the tools in encoding is optional and no decoding process has been specified for erroneous bitstreams.
[0050] The elementary unit for the input to an H.264/AVC or HEVC encoder and the output of an H.264/AVC or HEVC decoder, respectively, is a picture. A picture given as an input to an encoder may also be referred to as a source picture, and a picture decoded by a decoder may be referred to as a decoded picture.
[0051] The source and decoded pictures may each be comprised of one or more sample arrays, such as one of the following sets of sample arrays:
Luma (Y) only (monochrome).
Luma and two chroma (YCbCr or YCgCo).
Green, Blue and Red (GBR, also known as RGB).
Arrays representing other unspecified monochrome or tri-stimulus color samplings (for example, YZX, also known as XYZ).
[0052] In the following, these arrays may be referred to as luma (or L or Y) and chroma, where the two chroma arrays may be referred to as Cb and Cr, regardless of the actual color representation method in use. The actual color representation method in use may be indicated e.g. in a coded bitstream e.g. using the Video Usability Information (VUI) syntax of H.264/AVC and/or HEVC. A component may be defined as an array or a single sample from one of the three sample arrays (luma and two chroma) or the array or a single sample of the array that compose a picture in monochrome format.
[0053] In H.264/AVC and HEVC, a picture may either be a frame or a field. A frame comprises a matrix of luma samples and possibly the corresponding chroma samples. A field is a set of alternate sample rows of a frame. Fields may be used as encoder input for example when the source signal is interlaced. Chroma sample arrays may be absent (and hence monochrome sampling may be in use) or may be subsampled when compared to luma sample arrays. Some chroma formats may be summarized as follows:
In monochrome sampling there is only one sample array, which may be nominally considered the luma array.
In 4:2:0 sampling, each of the two chroma arrays has half the height and half the width of the luma array. In 4:2:2 sampling, each of the two chroma arrays has the same height and half the width of the luma array.
In 4:4:4 sampling when no separate color planes are in use, each of the two chroma arrays has the same height and width as the luma array.
[0054] In H.264/AVC and HEVC, it is possible to code sample arrays as separate color planes into the bitstream and respectively decode separately coded color planes from the bitstream. When separate color planes are in use, each one of them is separately processed (by the encoder and/or the decoder) as a picture with monochrome sampling.
[0055] When chroma subsampling is in use (e.g. 4:2:0 or 4:2:2 chroma sampling), the location of chroma samples with respect to luma samples may be determined in the encoder side (e.g. as a pre-processing step or as part of encoding). The chroma sample positions with respect to luma sample positions may be pre-defined for example in a coding standard, such as H.264/AVC or HEVC, or may be indicated in the bitstream for example as part of VUI of H.264/AVC or HEVC.
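As a minimal illustration of the chroma formats summarized above, the following sketch computes the chroma sample array dimensions for a given luma array size. The function name and format labels are illustrative only and not taken from any specification.

    # Illustrative sketch: chroma array dimensions for common chroma formats.

    def chroma_dimensions(luma_width, luma_height, chroma_format):
        if chroma_format == "4:2:0":
            # Half width and half height relative to the luma array.
            return luma_width // 2, luma_height // 2
        if chroma_format == "4:2:2":
            # Half width, same height.
            return luma_width // 2, luma_height
        if chroma_format == "4:4:4":
            # Same width and height as the luma array.
            return luma_width, luma_height
        if chroma_format == "monochrome":
            # No chroma arrays at all.
            return 0, 0
        raise ValueError("unknown chroma format")

    # Example: a 1920x1080 luma array with 4:2:0 sampling has 960x540 chroma arrays.
    print(chroma_dimensions(1920, 1080, "4:2:0"))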
[0056] Generally, the source video sequence(s) provided as input for encoding may either represent interlaced source content or progressive source content. Fields of opposite parity have been captured at different times for interlaced source content. Progressive source content contains captured frames. An encoder may encode fields of interlaced source content in two ways: a pair of interlaced fields may be coded into a coded frame or a field may be coded as a coded field. Likewise, an encoder may encode frames of progressive source content in two ways: a frame of progressive source content may be coded into a coded frame or a pair of coded fields. A field pair or a complementary field pair may be defined as two fields next to each other in decoding and/or output order, having opposite parity (i.e. one being a top field and another being a bottom field) and neither belonging to any other complementary field pair. Some video coding standards or schemes allow mixing of coded frames and coded fields in the same coded video sequence. Moreover, predicting a coded field from a field in a coded frame and/or predicting a coded frame for a complementary field pair (coded as fields) may be enabled in encoding and/or decoding.
[0057] A partitioning may be defined as a division of a set into subsets such that each element of the set is in exactly one of the subsets. A picture partitioning may be defined as a division of a picture into smaller non-overlapping units. A block partitioning may be defined as a division of a block into smaller non-overlapping units, such as sub-blocks. In some cases the term block partitioning may be considered to cover multiple levels of partitioning, for example partitioning of a picture into slices, and partitioning of each slice into smaller units, such as macroblocks of H.264/AVC. It is noted that the same unit, such as a picture, may have more than one partitioning. For example, a coding unit of HEVC may be partitioned into prediction units and separately by another quadtree into transform units.
[0058] A coded picture is a coded representation of a picture. [0059] Video coding standards and specifications may allow encoders to divide a coded picture to coded slices or alike. In-picture prediction is typically disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture to independently decodable pieces. In H.264/AVC and HEVC, in-picture prediction may be disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture into independently decodable pieces, and slices are therefore often regarded as elementary units for transmission. In many cases, encoders may indicate in the bitstream which types of in-picture prediction are turned off across slice boundaries, and the decoder operation takes this information into account for example when concluding which prediction sources are available. For example, samples from a neighbouring macroblock or CU may be regarded as unavailable for intra prediction, if the neighbouring macroblock or CU resides in a different slice.
[0060] In H.264/AVC, a macroblock is a 16x16 block of luma samples and the corresponding blocks of chroma samples. For example, in the 4:2:0 sampling pattern, a macroblock contains one 8x8 block of chroma samples per each chroma component. In H.264/AVC, a picture is partitioned to one or more slice groups, and a slice group contains one or more slices. In H.264/AVC, a slice consists of an integer number of macroblocks ordered consecutively in the raster scan within a particular slice group.
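To make the macroblock partitioning above concrete, the sketch below counts the 16x16 luma macroblocks needed to cover a picture, rounding each dimension up. The helper name is illustrative; the arithmetic simply follows the 16x16 macroblock size stated above.

    # Illustrative sketch: number of 16x16 macroblocks covering a picture (H.264/AVC).

    def macroblock_grid(width, height, mb_size=16):
        # Round up so that pictures whose dimensions are not multiples of 16 are still covered.
        cols = (width + mb_size - 1) // mb_size
        rows = (height + mb_size - 1) // mb_size
        return cols, rows, cols * rows

    # Example: a 1920x1080 picture needs a 120x68 grid, i.e. 8160 macroblocks
    # (the last macroblock row extends below the 1080th luma sample row).
    print(macroblock_grid(1920, 1080))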
[0061] When describing the operation of HEVC, the following terms may be used. A coding block may be defined as an NxN block of samples for some value of N such that the division of a coding tree block into coding blocks is a partitioning. A coding tree block (CTB) may be defined as an NxN block of samples for some value of N such that the division of a component into coding tree blocks is a partitioning. A coding tree unit (CTU) may be defined as a coding tree block of luma samples, two corresponding coding tree blocks of chroma samples of a picture that has three sample arrays, or a coding tree block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples. A coding unit (CU) may be defined as a coding block of luma samples, two corresponding coding blocks of chroma samples of a picture that has three sample arrays, or a coding block of samples of a monochrome picture or a picture that is coded using three separate color planes and syntax structures used to code the samples.
[00621 In some video codecs, such as High Efficiency Video Coding (HEVC) codec, video pictures are divided into coding units (CU) covering the area of the picture. A CU consists of one or more prediction units (PU) defining the prediction process for the samples within the CU and one or more transform units (TU) defining the prediction error coding process for the samples in the said CU. Typically, a CU consists of a square block of samples with a size selectable from a predefined set of possible CU sizes. A CU with the maximum allowed size may be named as LCU (largest coding unit) or coding tree unit (CTU) and the video picture is divided into non-overlapping LCUs. An LCU can be further split into a combination of smaller CUs, e.g. by recursively splitting the LCU and resultant CUs. Each resulting CU typically has at least one PU and at least one TU associated with it. Each PU and TU can be further split into smaller PUs and TUs in order to increase granularity of the prediction and prediction error coding processes, respectively. Each PU has prediction information associated with it defining what kind of a prediction is to be applied for the pixels within that PU (e.g. motion vector information for inter predicted PUs and intra prediction directionality information for intra predicted PUs).
[0063] Each TU can be associated with information describing the prediction error decoding process for the samples within the said TU (including e.g. DCT coefficient information). It is typically signaled at CU level whether prediction error coding is applied or not for each CU. In the case there is no prediction error residual associated with the CU, it can be considered there are no TUs for the said CU. The division of the image into CUs, and division of CUs into PUs and TUs is typically signaled in the bitstream allowing the decoder to reproduce the intended structure of these units.
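The recursive CU splitting described above can be pictured with the small sketch below, which splits a largest coding unit into a quadtree of smaller square CUs down to a chosen minimum size. The split decision used here is a stand-in supplied by the caller; a real encoder would decide per block, for example based on rate-distortion cost, and the function and variable names are illustrative.

    # Illustrative sketch: recursive quadtree splitting of an LCU/CTU into CUs.

    def split_cu(x, y, size, min_size, should_split):
        # Returns a list of (x, y, size) leaf coding units.
        if size <= min_size or not should_split(x, y, size):
            return [(x, y, size)]
        half = size // 2
        leaves = []
        for dy in (0, half):
            for dx in (0, half):
                leaves += split_cu(x + dx, y + dy, half, min_size, should_split)
        return leaves

    # Example: keep splitting the block at the top-left corner of each level,
    # down to the 8x8 minimum CU size.
    decide = lambda x, y, size: size == 64 or (x == 0 and y == 0)
    print(split_cu(0, 0, 64, 8, decide))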
[0064] In the HEVC standard, a picture can be partitioned in tiles, which are rectangular and contain an integer number of CTUs. In the HEVC standard, the partitioning to tiles forms a grid that may be characterized by a list of tile column widths (in CTUs) and a list of tile row heights (in CTUs). Tiles are ordered in the bitstream consecutively in the raster scan order of the tile grid. A tile may contain an integer number of slices.
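Following the description of the HEVC tile grid above, the sketch below derives each tile's position and size, expressed in CTUs, from the lists of tile column widths and tile row heights, visiting tiles in raster scan order of the grid. The variable names and the example grid are illustrative assumptions.

    # Illustrative sketch: tile rectangles (in CTU units) from an HEVC-style tile grid.

    def tile_rectangles(column_widths_in_ctus, row_heights_in_ctus):
        # Tiles are produced in raster scan order of the tile grid.
        tiles = []
        y = 0
        for height in row_heights_in_ctus:
            x = 0
            for width in column_widths_in_ctus:
                tiles.append({"x": x, "y": y, "width": width, "height": height})
                x += width
            y += height
        return tiles

    # Example: a 4x2 tile grid with equal 5-CTU-wide columns and 4-CTU-tall rows.
    for tile in tile_rectangles([5, 5, 5, 5], [4, 4]):
        print(tile)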
[0065] In HEVC, a slice consists of an integer number of CTUs. The CTUs are scanned in the raster scan order of CTUs within tiles, or within a picture if tiles are not in use. A slice may contain an integer number of tiles, or a slice can be contained in a tile. Within a CTU, the CUs have a specific scan order.
[0066] In HEVC, a slice may be defined as an integer number of coding tree units contained in one independent slice segment and all subsequent dependent slice segments (if any) that precede the next independent slice segment (if any) within the same access unit. An independent slice segment may be defined as a slice segment for which the values of the syntax elements of the slice segment header are not inferred from the values for a preceding slice segment. A dependent slice segment may be defined as a slice segment for which the values of some syntax elements of the slice segment header are inferred from the values for the preceding independent slice segment in decoding order. In other words, only the independent slice segment may have a "full" slice header. An independent slice segment may be conveyed in one NAL unit (without other slice segments in the same NAL unit) and likewise a dependent slice segment may be conveyed in one NAL unit (without other slice segments in the same NAL unit).
[0067] In HEVC, a coded slice segment may be considered to comprise a slice segment header and slice segment data. A slice segment header may be defined as part of a coded slice segment containing the data elements pertaining to the first or all coding tree units represented in the slice segment. A slice header may be defined as the slice segment header of the independent slice segment that is a current slice segment or the most recent independent slice segment that precedes a current dependent slice segment in decoding order. Slice segment data may comprise an integer number of coding tree unit syntax structures. [0068] In H.264/AVC and HEVC, in-picture prediction may be disabled across slice boundaries. Thus, slices can be regarded as a way to split a coded picture into independently decodable pieces, and slices are therefore often regarded as elementary units for transmission. In many cases, encoders may indicate in the bitstream which types of in-picture prediction are turned off across slice boundaries, and the decoder operation takes this information into account for example when concluding which prediction sources are available. For example, samples from a neighboring macroblock or CU may be regarded as unavailable for intra prediction, if the neighboring macroblock or CU resides in a different slice.
[0069] The elementary unit for the output of an H.264/AVC or HEVC encoder and the input of an H.264/AVC or HEVC decoder, respectively, is a Network Abstraction Layer (NAL) unit. For transport over packet-oriented networks or storage into structured files, NAL units may be encapsulated into packets or similar structures.
[0070] A NAL unit may be defined as a syntax structure containing an indication of the type of data to follow and bytes containing that data in the form of an RBSP interspersed as necessary with emulation prevention bytes. A raw byte sequence payload (RBSP) may be defined as a syntax structure containing an integer number of bytes that is encapsulated in a NAL unit. An RBSP is either empty or has the form of a string of data bits containing syntax elements followed by an RBSP stop bit and followed by zero or more subsequent bits equal to 0.
[0071] NAL units consist of a header and payload. In H.264/AVC, the NAL unit header indicates the type of the NAL unit and whether a coded slice contained in the NAL unit is a part of a reference picture or a non-reference picture. H.264/AVC includes a 2-bit nal_ref_idc syntax element, which when equal to 0 indicates that a coded slice contained in the NAL unit is a part of a non-reference picture and when greater than 0 indicates that a coded slice contained in the NAL unit is a part of a reference picture. The NAL unit header for SVC and MVC NAL units may additionally contain various indications related to the scalability and multiview hierarchy.
[0072] In HEVC, a two-byte NAL unit header is used for all specified NAL unit types. The NAL unit header contains one reserved bit, a six-bit NAL unit type indication (called nal_unit_type), a six-bit reserved field (called nuh_layer_id) and a three-bit temporal_id_plus1 indication for temporal level. The temporal_id_plus1 syntax element may be regarded as a temporal identifier for the NAL unit, and a zero-based TemporalId variable may be derived as follows: TemporalId = temporal_id_plus1 - 1. TemporalId equal to 0 corresponds to the lowest temporal level. The value of temporal_id_plus1 is required to be non-zero in order to avoid start code emulation involving the two NAL unit header bytes. The bitstream created by excluding all VCL NAL units having a TemporalId greater than or equal to a selected value and including all other VCL NAL units remains conforming. Consequently, a picture having TemporalId equal to TID does not use any picture having a TemporalId greater than TID as inter prediction reference. A sub-layer or a temporal sub-layer may be defined to be a temporal scalable layer of a temporal scalable bitstream, consisting of VCL NAL units with a particular value of the TemporalId variable and the associated non-VCL NAL units. Without loss of generality, in some example embodiments a variable LayerId is derived from the value of nuh_layer_id for example as follows: LayerId = nuh_layer_id. In the following, layer identifier, LayerId, nuh_layer_id and layer_id are used interchangeably unless otherwise indicated.
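By way of illustration only, the following Python sketch (function name hypothetical, not part of any standard) mirrors the bit layout of the two-byte HEVC NAL unit header described above and derives TemporalId and LayerId accordingly.

def parse_hevc_nal_unit_header(byte0, byte1):
    # Layout: forbidden_zero_bit (1 bit), nal_unit_type (6 bits),
    # nuh_layer_id (6 bits), nuh_temporal_id_plus1 (3 bits).
    nal_unit_type = (byte0 >> 1) & 0x3F
    nuh_layer_id = ((byte0 & 0x01) << 5) | ((byte1 >> 3) & 0x1F)
    temporal_id_plus1 = byte1 & 0x07           # required to be non-zero
    temporal_id = temporal_id_plus1 - 1        # TemporalId = temporal_id_plus1 - 1
    layer_id = nuh_layer_id                    # LayerId = nuh_layer_id
    return nal_unit_type, layer_id, temporal_id

# Example: header bytes 0x40 0x01 correspond to NAL unit type 32, layer 0, TemporalId 0.
print(parse_hevc_nal_unit_header(0x40, 0x01))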
[0073] In HEVC extensions, nuh_layer_id and/or similar syntax elements in the NAL unit header carry scalability layer information. For example, the LayerId value, nuh_layer_id and/or similar syntax elements may be mapped to values of variables or syntax elements describing different scalability dimensions.
[0074] NAL units can be categorized into Video Coding Layer (VCL) NAL units and non-VCL NAL units. VCL NAL units are typically coded slice NAL units. In H.264/AVC, coded slice NAL units contain syntax elements representing one or more coded macroblocks, each of which corresponds to a block of samples in the uncompressed picture. In HEVC, coded slice NAL units contain syntax elements representing one or more CU.
[0075] A non-VCL NAL unit may be for example one of the following types: a sequence parameter set, a picture parameter set, a supplemental enhancement information (SEI) NAL unit, an access unit delimiter, an end of sequence NAL unit, an end of bitstream NAL unit, or a filler data NAL unit. Parameter sets may be needed for the reconstruction of decoded pictures, whereas many of the other non-VCL NAL units are not necessary for the reconstruction of decoded sample values.
[0076] Parameters that remain unchanged through a coded video sequence may be included in a sequence parameter set. Examples of parameters that are required to be unchanged within a coded video sequence in many coding systems and hence included in a sequence parameter set are the width and height of the pictures included in the coded video sequence. In addition to the parameters that may be needed by the decoding process, the sequence parameter set may optionally contain video usability information (VUI), which includes parameters that may be important for buffering, picture output timing, rendering, and resource reservation. In HEVC a sequence parameter set RBSP includes parameters that can be referred to by one or more picture parameter set RBSPs or one or more SEI NAL units containing a buffering period SEI message. A picture parameter set contains such parameters that are likely to be unchanged in several coded pictures. A picture parameter set RBSP may include parameters that can be referred to by the coded slice NAL units of one or more coded pictures.
[0077] In HEVC, a video parameter set (VPS) may be defined as a syntax structure containing syntax elements that apply to zero or more entire coded video sequences as determined by the content of a syntax element found in the SPS referred to by a syntax element found in the PPS referred to by a syntax element found in each slice segment header. A video parameter set RBSP may include parameters that can be referred to by one or more sequence parameter set RBSPs.
[0078] The relationship and hierarchy between video parameter set (VPS), sequence parameter set (SPS), and picture parameter set (PPS) may be described as follows. VPS resides one level above SPS in the parameter set hierarchy and in the context of scalability and/or 3D video. VPS may include parameters that are common for all slices across all (scalability or view) layers in the entire coded video sequence. SPS includes the parameters that are common for all slices in a particular (scalability or view) layer in the entire coded video sequence, and may be shared by multiple (scalability or view) layers. PPS includes the parameters that are common for all slices in a particular layer representation (the representation of one scalability or view layer in one access unit) and are likely to be shared by all slices in multiple layer representations.
[0079] VPS may provide information about the dependency relationships of the layers in a bitstream, as well as much other information that is applicable to all slices across all (scalability or view) layers in the entire coded video sequence. VPS may be considered to comprise two parts, the base VPS and a VPS extension, where the VPS extension may be optionally present.
[0080] A SEI NAL unit may contain one or more SEI messages, which are not required for the decoding of output pictures but may assist in related processes, such as picture output timing, rendering, error detection, error concealment, and resource reservation. Several SEI messages are specified in H.264/AVC and HEVC, and the user data SEI messages enable organizations and companies to specify SEI messages for their own use. H.264/AVC and HEVC contain the syntax and semantics for the specified SEI messages but no process for handling the messages in the recipient is defined. Consequently, encoders are required to follow the H.264/AVC standard or the HEVC standard when they create SEI messages, and decoders conforming to the H.264/AVC standard or the HEVC standard, respectively, are not required to process SEI messages for output order conformance. One of the reasons to include the syntax and semantics of SEI messages in H.264/AVC and HEVC is to allow different system specifications to interpret the supplemental information identically and hence interoperate. It is intended that system specifications can require the use of particular SEI messages both in the encoding end and in the decoding end, and additionally the process for handling particular SEI messages in the recipient can be specified.
[0081] In HEVC, there are two types of SEI NAL units, namely the suffix SEI NAL unit and the prefix SEI NAL unit, having a different nal_unit_type value from each other. The SEI message(s) contained in a suffix SEI NAL unit are associated with the VCL NAL unit preceding, in decoding order, the suffix SEI NAL unit. The SEI message(s) contained in a prefix SEI NAL unit are associated with the VCL NAL unit following, in decoding order, the prefix SEI NAL unit.
[0082] In HEVC, a coded picture may be defined as a coded representation of a picture containing all coding tree units of the picture. In HEVC, an access unit (AU) may be defined as a set of NAL units that are associated with each other according to a specified classification rule, are consecutive in decoding order, and contain at most one picture with any specific value of nuh_layer_id. In addition to containing the VCL NAL units of the coded picture, an access unit may also contain non-VCL NAL units. [0083] It may be required that coded pictures appear in certain order within an access unit. For example, a coded picture with nuh_layer_id equal to nuhLayerIdA may be required to precede, in decoding order, all coded pictures with nuh_layer_id greater than nuhLayerIdA in the same access unit. An AU typically contains all the coded pictures that represent the same output time and/or capturing time.
[0084] A bitstream may be defined as a sequence of bits, in the form of a NAL unit stream or a byte stream, that forms the representation of coded pictures and associated data forming one or more coded video sequences. A first bitstream may be followed by a second bitstream in the same logical channel, such as in the same file or in the same connection of a communication protocol. An elementary stream (in the context of video coding) may be defined as a sequence of one or more bitstreams. The end of the first bitstream may be indicated by a specific NAL unit, which may be referred to as the end of bitstream (EOB) NAL unit and which is the last NAL unit of the bitstream.
[0085] A byte stream format has been specified in H.264/AVC and HEVC for transmission or storage environments that do not provide framing structures. The byte stream format separates NAL units from each other by attaching a start code in front of each NAL unit. To avoid false detection of NAL unit boundaries, encoders run a byte-oriented start code emulation prevention algorithm, which adds an emulation prevention byte to the NAL unit payload if a start code would have occurred otherwise. In order to, for example, enable straightforward gateway operation between packet- and stream-oriented systems, start code emulation prevention may always be performed regardless of whether the byte stream format is in use or not. The bit order for the byte stream format may be specified to start with the most significant bit (MSB) of the first byte, proceed to the least significant bit (LSB) of the first byte, followed by the MSB of the second byte, etc. The byte stream format may be considered to consist of a sequence of byte stream NAL unit syntax structures. Each byte stream NAL unit syntax structure may be considered to comprise one start code prefix followed by one NAL unit syntax structure, as well as trailing and/or heading padding bits and/or bytes.
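As a minimal, non-normative sketch of such a byte-oriented emulation prevention step, the following Python function (name hypothetical) inserts an emulation prevention byte whenever two zero bytes would otherwise be followed by a byte value of 0x03 or less, so that start code patterns cannot appear inside the payload.

def insert_emulation_prevention(rbsp: bytes) -> bytes:
    # Prevents the byte patterns 0x000000, 0x000001, 0x000002 and 0x000003
    # from occurring in the encapsulated payload by inserting 0x03.
    out = bytearray()
    zero_run = 0
    for b in rbsp:
        if zero_run >= 2 and b <= 0x03:
            out.append(0x03)          # emulation prevention byte
            zero_run = 0
        out.append(b)
        zero_run = zero_run + 1 if b == 0x00 else 0
    return bytes(out)

# Example: four zero bytes become 00 00 03 00 00.
print(insert_emulation_prevention(b'\x00\x00\x00\x00').hex())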
[0086] A motion-constrained tile set (MCTS) is such a set of one or more tiles that the inter prediction process is constrained in encoding such that no sample value outside the motion-constrained tile set, and no sample value at a fractional sample position that is derived using one or more sample values outside the motion-constrained tile set, is used for inter prediction of any sample within the motion-constrained tile set. An MCTS may be required to be rectangular. Additionally, the encoding of an MCTS is constrained in a manner that motion vector candidates are not derived from blocks outside the MCTS. This may be enforced by turning off temporal motion vector prediction of HEVC, or by disallowing the encoder to use the TMVP candidate or any motion vector prediction candidate following the TMVP candidate in the merge or AMVP candidate list for PUs located directly left of the right tile boundary of the MCTS except the last one at the bottom right of the MCTS. [0087] Note that sample locations used in inter prediction may be saturated so that a location that would be outside the picture otherwise is saturated to point to the corresponding boundary sample of the picture. Hence, if a tile boundary is also a picture boundary, motion vectors may effectively cross that boundary or a motion vector may effectively cause fractional sample interpolation that would refer to a location outside that boundary, since the sample locations are saturated onto the boundary. However, if tiles may be re-located in a tile merging operation (see e.g. embodiments of the present invention), encoders generating MCTSs may apply motion constraints to all tile boundaries of the MCTS, including picture boundaries.
[0088] The temporal motion-constrained tile sets SEI message of HEVC can be used to indicate the presence of motion-constrained tile sets in the bitstream.
[0089] A motion-constrained picture is such that the inter prediction process is constrained in encoding such that no sample value outside the picture, and no sample value at a fractional sample position that is derived using one or more sample values outside the picture, would be used for inter prediction of any sample within the picture and/or sample locations used for prediction need not be saturated to be within picture boundaries.
[0090] It may be considered that in stereoscopic or two-view video, one video sequence or view is presented for the left eye while a parallel view is presented for the right eye. More than two parallel views may be needed for applications which enable viewpoint switching or for autostereoscopic displays which may present a large number of views simultaneously and let the viewers observe the content from different viewpoints.
[0091] A view may be defined as a sequence of pictures representing one camera or viewpoint. The pictures representing a view may also be called view components. In other words, a view component may be defined as a coded representation of a view in a single access unit. In multiview video coding, more than one view is coded in a bitstream. Since views are typically intended to be displayed on a stereoscopic or multiview autostereoscopic display or to be used for other 3D arrangements, they typically represent the same scene and are content-wise partly overlapping although representing different viewpoints to the content. Hence, inter-view prediction may be utilized in multiview video coding to take advantage of inter-view correlation and improve compression efficiency. One way to realize inter-view prediction is to include one or more decoded pictures of one or more other views in the reference picture list(s) of a picture being coded or decoded residing within a first view. View scalability may refer to such multiview video coding or multiview video bitstreams, which enable removal or omission of one or more coded views, while the resulting bitstream remains conforming and represents video with a smaller number of views than originally.
[0092] Frame packing may be defined to comprise arranging more than one input picture, which may be referred to as (input) constituent frames, into an output picture. In general, frame packing is not limited to any particular type of constituent frames or the constituent frames need not have a particular relation with each other. In many cases, frame packing is used for arranging constituent frames of a stereoscopic video clip into a single picture sequence, as explained in more details in the next paragraph. The arranging may include placing the input pictures in spatially non-overlapping areas within the output picture. For example, in a side-by-side arrangement, two input pictures are placed within an output picture horizontally adjacently to each other. The arranging may also include partitioning of one or more input pictures into two or more constituent frame partitions and placing the constituent frame partitions in spatially non-overlapping areas within the output picture. The output picture or a sequence of frame-packed output pictures may be encoded into a bitstream e.g. by a video encoder. The bitstream may be decoded e.g. by a video decoder. The decoder or a post-processing operation after decoding may extract the decoded constituent frames from the decoded picture(s) e.g. for displaying.
[0093] In frame-compatible stereoscopic video (a.k.a. frame packing of stereoscopic video), a spatial packing of a stereo pair into a single frame is performed at the encoder side as a pre-processing step for encoding and then the frame-packed frames are encoded with a conventional 2D video coding scheme. The output frames produced by the decoder contain constituent frames of a stereo pair.
[0094] In a typical operation mode, the original frames of each view and the packed single frame have the same spatial resolution. In this case the encoder downsamples the two views of the stereoscopic video before the packing operation. The spatial packing may use for example a side-by-side or top-bottom format, and the downsampling should be performed accordingly.
[0095] A uniform resource identifier (URI) may be defined as a string of characters used to identify a name of a resource. Such identification enables interaction with representations of the resource over a network, using specific protocols. A URI is defined through a scheme specifying a concrete syntax and associated protocol for the URI. The uniform resource locator (URL) and the uniform resource name (URN) are forms of URI. A URL may be defined as a URI that identifies a web resource and specifies the means of acting upon or obtaining the representation of the resource, specifying both its primary access mechanism and network location. A URN may be defined as a URI that identifies a resource by name in a particular namespace. A URN may be used for identifying a resource without implying its location or how to access it. The term requesting locator may be defined as an identifier that can be used to request a resource, such as a file or a segment. A requesting locator may, for example, be a URL or specifically an HTTP URL. A client may use a requesting locator with a communication protocol, such as HTTP, to request a resource from a server or a sender.
[0096] Available media file format standards include ISO base media file format (ISO/IEC 14496-12, which may be abbreviated ISOBMFF), MPEG-4 file format (ISO/IEC 14496-14, also known as the MP4 format), file format for NAL unit structured video (ISO/IEC 14496-15) and 3GPP file format (3GPP TS 26.244, also known as the 3GP format). ISOBMFF is the base for derivation of all the above mentioned file formats (excluding the ISOBMFF itself).
[0097] Some concepts, structures, and specifications of ISOBMFF are described below as an example of a container file format, based on which some embodiments may be implemented. The aspects of the invention are not limited to ISOBMFF, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0098] A basic building block in the ISO base media file format is called a box. Each box has a header and a payload. The box header indicates the type of the box and the size of the box in terms of bytes. A box may enclose other boxes, and the ISO file format specifies which box types are allowed within a box of a certain type. Furthermore, the presence of some boxes may be mandatory in each file, while the presence of other boxes may be optional. Additionally, for some box types, it may be allowable to have more than one box present in a file. Thus, the ISO base media file format may be considered to specify a hierarchical structure of boxes.
[0099] According to the ISO family of file formats, a file includes media data and metadata that are encapsulated into boxes. Each box is identified by a four character code (4CC) and starts with a header which informs about the type and size of the box.
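To illustrate the box header structure mentioned above, the following Python sketch (names hypothetical) walks the top-level boxes of a file and reports the four-character code, size and offset of each box; the compact 32-bit size, the 64-bit largesize and the size-to-end-of-file cases are handled, while other details (e.g. 'uuid' boxes) are omitted.

import struct

def read_box_headers(f):
    # Yields (box_type, size, offset) for each top-level box of an ISOBMFF file object.
    while True:
        offset = f.tell()
        header = f.read(8)
        if len(header) < 8:
            break
        size, box_type = struct.unpack('>I4s', header)
        if size == 1:                      # 64-bit 'largesize' follows the type field
            size = struct.unpack('>Q', f.read(8))[0]
        elif size == 0:                    # box extends to the end of the file
            f.seek(0, 2)
            size = f.tell() - offset
        yield box_type.decode('ascii'), size, offset
        f.seek(offset + size)

# Example usage (hypothetical file name):
# with open('example.mp4', 'rb') as f:
#     for box_type, size, offset in read_box_headers(f):
#         print(box_type, size, offset)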
[0100] In files conforming to the ISO base media file format, the media data may be provided in a media data 'mdat' box and the movie 'moov' box may be used to enclose the metadata. In some cases, for a file to be operable, both of the 'mdat' and 'moov' boxes may be required to be present. The movie 'moov' box may include one or more tracks, and each track may reside in one corresponding track 'trak' box. A track may be one of the many types, including a media track that refers to samples formatted according to a media compression format (and its encapsulation to the ISO base media file format). A track may be regarded as a logical channel.
[0101] Movie fragments may be used e.g. when recording content to ISO files e.g. in order to avoid losing data if a recording application crashes, runs out of memory space, or some other incident occurs. Without movie fragments, data loss may occur because the file format may require that all metadata, e.g., the movie box, be written in one contiguous area of the file. Furthermore, when recording a file, there may not be sufficient amount of memory space (e.g., random access memory RAM) to buffer a movie box for the size of the storage available, and re-computing the contents of a movie box when the movie is closed may be too slow. Moreover, movie fragments may enable simultaneous recording and playback of a file using a regular ISO file parser. Furthermore, a smaller duration of initial buffering may be required for progressive downloading, e.g., simultaneous reception and playback of a file when movie fragments are used and the initial movie box is smaller compared to a file with the same media content but structured without movie fragments.
[0102] The movie fragment feature may enable splitting the metadata that otherwise might reside in the movie box into multiple pieces. Each piece may correspond to a certain period of time of a track. In other words, the movie fragment feature may enable interleaving file metadata and media data. Consequently, the size of the movie box may be limited and the use cases mentioned above be realized.
[0103] In some examples, the media samples for the movie fragments may reside in an mdat box. For the metadata of the movie fragments, however, a moof box may be provided. The moof box may include the information for a certain duration of playback time that would previously have been in the moov box. The moov box may still represent a valid movie on its own, but in addition, it may include an mvex box indicating that movie fragments will follow in the same file. The movie fragments may extend the presentation that is associated to the moov box in time.
[0104] Within the movie fragment there may be a set of track fragments, including anywhere from zero to a plurality per track. The track fragments may in turn include anywhere from zero to a plurality of track runs, each of which documents a contiguous run of samples for that track (and hence they are similar to chunks). Within these structures, many fields are optional and can be defaulted. The metadata that may be included in the moof box may be limited to a subset of the metadata that may be included in a moov box and may be coded differently in some cases. Details regarding the boxes that can be included in a moof box may be found from the ISOBMFF specification. A self-contained movie fragment may be defined to consist of a moof box and an mdat box that are consecutive in the file order and where the mdat box contains the samples of the movie fragment (for which the moof box provides the metadata) and does not contain samples of any other movie fragment (i.e. any other moof box).
[0105] The TrackBox (‘trak’ box) includes in its hierarchy of boxes the SampleDescriptionBox, which gives detailed information about the coding type used, and any initialization information needed for that coding. The SampleDescriptionBox contains an entry-count and as many sample entries as the entry-count indicates. The format of sample entries is track-type specific but derived from generic classes (e.g. VisualSampleEntry, AudioSampleEntry). Which type of sample entry form is used for derivation of the track-type specific sample entry format is determined by the media handler of the track.
[0106] The track reference mechanism can be used to associate tracks with each other. The TrackReferenceBox includes box(es), each of which provides a reference from the containing track to a set of other tracks. These references are labeled through the box type (i.e. the four-character code of the box) of the contained box(es).
[0107] The ISO Base Media File Format contains three mechanisms for timed metadata that can be associated with particular samples: sample groups, timed metadata tracks, and sample auxiliary information. Derived specifications may provide similar functionality with one or more of these three mechanisms.
[0108] A sample grouping in the ISO base media file format and its derivatives, such as the AVC file format and the SVC file format, may be defined as an assignment of each sample in a track to be a member of one sample group, based on a grouping criterion. A sample group in a sample grouping is not limited to being contiguous samples and may contain non-adjacent samples. As there may be more than one sample grouping for the samples in a track, each sample grouping may have a type field to indicate the type of grouping. Sample groupings may be represented by two linked data structures: (1) a SampleToGroupBox (sbgp box) represents the assignment of samples to sample groups; and (2) a SampleGroupDescriptionBox (sgpd box) contains a sample group entry for each sample group describing the properties of the group. There may be multiple instances of the SampleToGroupBox and SampleGroupDescriptionBox based on different grouping criteria. These may be distinguished by a type field used to indicate the type of grouping. SampleToGroupBox may comprise a
grouping_type_parameter field that can be used e.g. to indicate a sub-type of the grouping.
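A minimal sketch of how a reader could resolve the group membership of a given sample is shown below; it assumes (hypothetically) that the SampleToGroupBox run-length entries have already been parsed into (sample_count, group_description_index) pairs, and that the returned index refers to the entries of the matching SampleGroupDescriptionBox.

def group_description_index_for_sample(sbgp_entries, sample_number):
    # sbgp_entries: list of (sample_count, group_description_index) run-length pairs.
    # sample_number: 1-based sample number within the track.
    # Returns the 1-based group description index, or 0 if the sample is in no group.
    remaining = sample_number
    for sample_count, group_description_index in sbgp_entries:
        if remaining <= sample_count:
            return group_description_index
        remaining -= sample_count
    return 0   # samples beyond the documented runs belong to no group of this type

# Example: samples 1-10 map to entry 1, samples 11-15 to no group.
print(group_description_index_for_sample([(10, 1), (5, 0)], 12))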
[0109] In ISOBMFF, a track group enables grouping of tracks that share certain characteristics or that have a particular relationship with each other. Track grouping, however, does not allow any image items in the group.
[0110] The syntax of TrackGroupBox in ISOBMFF is as follows:
aligned(8) class TrackGroupBox extends Box('trgr') {
}
aligned(8) class TrackGroupTypeBox(unsigned int(32) track_group_type) extends
FullBox(track_group_type, version = 0, flags = 0)
{
unsigned int(32) track_group_id;
// the remaining data may be specified for a particular track group type
}
[0111] track_group_type indicates the grouping type and may be set to a value, or a value registered, or a value from a derived specification or registration. Example values include 'msrc', which indicates that this track belongs to a multi-source presentation. The tracks that have the same value of track_group_id within a TrackGroupTypeBox of track_group_type 'msrc' are mapped as being originated from the same source. For example, a recording of a video telephony call may have both audio and video for both participants, and the value of track_group_id associated with the audio track and the video track of one participant differs from the value of track_group_id associated with the tracks of the other participant. The pair of track_group_id and track_group_type identifies a track group within the file. The tracks that contain a particular TrackGroupTypeBox having the same value of track_group_id and track_group_type belong to the same track group.
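The following Python sketch (hypothetical names and input structure) illustrates how a file reader could collect tracks into track groups keyed by the pair of track_group_type and track_group_id, as described above.

from collections import defaultdict

def collect_track_groups(tracks):
    # tracks: list of dicts, each with a 'track_id' and a 'track_groups' list of
    # (track_group_type, track_group_id) pairs parsed from its TrackGroupBox.
    groups = defaultdict(list)
    for track in tracks:
        for group_type, group_id in track['track_groups']:
            groups[(group_type, group_id)].append(track['track_id'])
    return dict(groups)

# Example: two tracks of one participant and two of another in an 'msrc' grouping.
example = [
    {'track_id': 1, 'track_groups': [('msrc', 100)]},
    {'track_id': 2, 'track_groups': [('msrc', 100)]},
    {'track_id': 3, 'track_groups': [('msrc', 200)]},
    {'track_id': 4, 'track_groups': [('msrc', 200)]},
]
print(collect_track_groups(example))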
[0112] The Entity grouping is similar to track grouping but enables grouping of both tracks and image items in the same group. The syntax of EntityToGroupBox in ISOBMFF is as follows:
aligned(8) class EntityToGroupBox(grouping_type, version, flags)
extends FullBox(grouping_type, version, flags) {
unsigned int(32) group_id;
unsigned int(32) num_entities_in_group;
for(i=0; i<num_entities_in_group; i++)
unsigned int(32) entity_id;
}
[0113] group_id is a non-negative integer assigned to the particular grouping that may not be equal to any group_id value of any other EntityToGroupBox, any item_ID value of the hierarchy level (file, movie or track) that contains the GroupsListBox, or any track_ID value (when the GroupsListBox is contained in the file level). num_entities_in_group specifies the number of entity_id values mapped to this entity group. entity_id is resolved to an item, when an item with item_ID equal to entity_id is present in the hierarchy level (file, movie or track) that contains the GroupsListBox, or to a track, when a track with track_ID equal to entity_id is present and the GroupsListBox is contained in the file level.
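As a non-normative illustration of the EntityToGroupBox syntax above, the following Python sketch parses the box body after the FullBox header (version and flags already consumed) into group_id and the list of entity_id values; the function name is hypothetical.

import struct

def parse_entity_to_group(payload: bytes):
    # payload: the bytes of the EntityToGroupBox following its FullBox header.
    group_id, num_entities = struct.unpack_from('>II', payload, 0)
    entity_ids = list(struct.unpack_from('>%dI' % num_entities, payload, 8))
    return group_id, entity_ids

# Example: group_id 10 containing entity_id values 1 and 2.
print(parse_entity_to_group(struct.pack('>IIII', 10, 2, 1, 2)))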
[0114] Files conforming to the ISOBMFF may contain any non-timed objects, referred to as items, meta items, or metadata items, in a meta box (four-character code: 'meta'). While the name of the meta box refers to metadata, items can generally contain metadata or media data. The meta box may reside at the top level of the file, within a movie box (four-character code: 'moov'), and within a track box (four-character code: 'trak'), but at most one meta box may occur at each of the file level, movie level, or track level. The meta box may be required to contain a HandlerBox ('hdlr') box indicating the structure or format of the 'meta' box contents. The meta box may list and characterize any number of items that can be referred to, and each one of them can be associated with a file name and is uniquely identified within the file by an item identifier (item_id), which is an integer value. The metadata items may be for example stored in the Item Data Box ('idat') box of the meta box or in an 'mdat' box or reside in a separate file. If the metadata is located external to the file then its location may be declared by the DataInformationBox (four-character code: 'dinf'). In the specific case that the metadata is formatted using Extensible Markup Language (XML) syntax and is required to be stored directly in the
MetaBox, the metadata may be encapsulated into either the XMLBox (four-character code:‘xml’) or the BinaryXMLBox (four-character code:‘bxml’). An item may be stored as a contiguous byte range, or it may be stored in several extents, each being a contiguous byte range. In other words, items may be stored fragmented into extents, e.g. to enable interleaving. An extent is a contiguous subset of the bytes of the resource. The resource can be formed by concatenating the extents. The
ItemPropertiesBox enables the association of any item with an ordered set of item properties. Item properties may be regarded as small data records. The ItemPropertiesBox consists of two parts:
ItemPropertyContainerBox that contains an implicitly indexed list of item properties, and one or more ItemPropertyAssociationBox(es) that associate items with item properties.
[0115] The restricted video ('resv') sample entry and mechanism has been specified for the ISOBMFF in order to handle situations where the file author requires certain actions on the player or renderer after decoding of a visual track. Players not recognizing or not capable of processing the required actions are stopped from decoding or rendering the restricted video tracks. The 'resv' sample entry mechanism applies to any type of video codec. A RestrictedSchemeInfoBox is present in the sample entry of 'resv' tracks and comprises an OriginalFormatBox, a SchemeTypeBox, and a SchemeInformationBox. The original sample entry type that would have been used, had the 'resv' sample entry type not been used, is contained in the OriginalFormatBox. The SchemeTypeBox provides an indication of which type of processing is required in the player to process the video. The SchemeInformationBox comprises further information on the required processing. The scheme type may impose requirements on the contents of the SchemeInformationBox. For example, the stereo video scheme indicated in the SchemeTypeBox indicates that the decoded frames either contain a representation of two spatially packed constituent frames that form a stereo pair (frame packing) or only one view of a stereo pair (left and right views in different tracks). A StereoVideoBox may be contained in the SchemeInformationBox to provide further information e.g. on which type of frame packing arrangement has been used (e.g. side-by-side or top-bottom).
[0116] The Matroska file format is capable of (but not limited to) storing any of video, audio, picture, or subtitle tracks in one file. Matroska may be used as a basis format for derived file formats, such as WebM. Matroska uses Extensible Binary Meta Language (EBML) as basis. EBML specifies a binary and octet (byte) aligned format inspired by the principle of XML. EBML itself is a generalized description of the technique of binary markup. A Matroska file consists of Elements that make up an EBML "document." Elements incorporate an Element ID, a descriptor for the size of the element, and the binary data itself. Elements can be nested. A Segment Element of Matroska is a container for other top-level (level 1) elements. A Matroska file may comprise (but is not limited to be composed of) one Segment. Multimedia data in Matroska files is organized in Clusters (or Cluster Elements), each containing typically a few seconds of multimedia data. A Cluster comprises BlockGroup elements, which in turn comprise Block Elements. A Cues Element comprises metadata which may assist in random access or seeking and may include file pointers or respective timestamps for seek points.
[0117] Several commercial solutions for adaptive streaming over HTTP, such as Microsoft® Smooth Streaming, Apple® Adaptive HTTP Live Streaming and Adobe® Dynamic Streaming, have been launched, and standardization projects have been carried out. Adaptive HTTP streaming (AHS) was first standardized in Release 9 of the 3rd Generation Partnership Project (3GPP) packet-switched streaming (PSS) service (3GPP TS 26.234 Release 9: "Transparent end-to-end packet-switched streaming service (PSS); protocols and codecs"). MPEG took 3GPP AHS Release 9 as a starting point for the MPEG DASH standard (ISO/IEC 23009-1: "Dynamic adaptive streaming over HTTP (DASH) - Part 1: Media presentation description and segment formats," International Standard, 2nd Edition, 2014). MPEG DASH and 3GP-DASH are technically close to each other and may therefore be collectively referred to as DASH. Some concepts, formats, and operations of DASH are described below as an example of a video streaming system, wherein the embodiments may be implemented. The aspects of the invention are not limited to DASH, but rather the description is given for one possible basis on top of which the invention may be partly or fully realized.
[0118] In DASH, the multimedia content may be stored on an HTTP server and may be delivered using HTTP. The content may be stored on the server in two parts: Media Presentation Description (MPD), which describes a manifest of the available content, its various alternatives, their URL addresses, and other characteristics; and segments, which contain the actual multimedia bitstreams in the form of chunks, in a single or multiple files. The MPD provides the necessary information for clients to establish dynamic adaptive streaming over HTTP. The MPD contains information describing the media presentation, such as an HTTP uniform resource locator (URL) of each Segment for making a GET Segment request. To play the content, the DASH client may obtain the MPD e.g. by using HTTP, email, thumb drive, broadcast, or other transport methods. By parsing the MPD, the DASH client may become aware of the program timing, media-content availability, media types, resolutions, minimum and maximum bandwidths, and the existence of various encoded alternatives of multimedia components, accessibility features and required digital rights management (DRM), media-component locations on the network, and other content characteristics. Using this information, the DASH client may select the appropriate encoded alternative and start streaming the content by fetching the segments using e.g. HTTP GET requests. After appropriate buffering to allow for network throughput variations, the client may continue fetching the subsequent segments and also monitor the network bandwidth fluctuations. The client may decide how to adapt to the available bandwidth by fetching segments of different alternatives (with lower or higher bitrates) to maintain an adequate buffer.
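A highly simplified, non-normative Python sketch of the client behaviour described above is given below. It assumes an MPD with absolute URLs and a SegmentTemplate at Adaptation Set level that uses $RepresentationID$ and $Number$, selects the lowest-bitrate Representation of the first Adaptation Set, and fetches a few Media Segments with HTTP GET; all names are illustrative and a real client needs far more logic (initialization segments, buffering, rate adaptation, error handling).

import urllib.request
import xml.etree.ElementTree as ET

MPD_NS = '{urn:mpeg:dash:schema:mpd:2011}'

def fetch_lowest_bitrate_segments(mpd_url, max_segments=5):
    # Download and parse the MPD, pick the lowest-@bandwidth Representation,
    # then issue HTTP GET requests for its first few Media Segments.
    mpd = ET.fromstring(urllib.request.urlopen(mpd_url).read())
    adaptation = mpd.find('.//%sAdaptationSet' % MPD_NS)
    representations = adaptation.findall('%sRepresentation' % MPD_NS)
    rep = min(representations, key=lambda r: int(r.get('bandwidth', '0')))
    template = adaptation.find('%sSegmentTemplate' % MPD_NS)
    media = template.get('media').replace('$RepresentationID$', rep.get('id'))
    start = int(template.get('startNumber', '1'))
    segments = []
    for number in range(start, start + max_segments):
        url = media.replace('$Number$', str(number))
        segments.append(urllib.request.urlopen(url).read())
    return segments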
[0119] In DASH, a hierarchical data model is used to structure a media presentation as shown in Fig. 7a. A media presentation consists of a sequence of one or more Periods, each Period contains one or more Groups, each Group contains one or more Adaptation Sets, each Adaptation Set contains one or more Representations, and each Representation consists of one or more Segments. A Representation is one of the alternative choices of the media content or a subset thereof, typically differing by the encoding choice, e.g. by bitrate, resolution, language, codec, etc. A Segment contains a certain duration of media data, and metadata to decode and present the included media content. A Segment is identified by a URI and can typically be requested by an HTTP GET request. A Segment may be defined as a unit of data associated with an HTTP-URL and optionally a byte range that are specified by an MPD.
[0120] The DASH MPD complies with Extensible Markup Language (XML) and is therefore specified through elements and attributes as defined in XML. The MPD may be specified using the following conventions: Elements in an XML document may be identified by an upper-case first letter and may appear in bold face as Element. To express that an element Elementl is contained in another element Element2, one may write Element2.Elementl. If an element's name consists of two or more combined words, camel-casing may be used, e.g. ImportantElement. Elements may be present either exactly once, or the minimum and maximum occurrence may be defined by <minOccurs> ...
<maxOccurs>. Attributes in an XML document may be identified by a lower-case first letter, and they may be preceded by a '@' sign, e.g. @attribute. To point to a specific attribute @attribute contained in an element Element, one may write Element@attribute. If an attribute's name consists of two or more combined words, camel-casing may be used after the first word, e.g.
@veryImportantAttribute. Attributes may have assigned a status in the XML as mandatory (M), optional (O), optional with default value (OD) and conditionally mandatory (CM).
[0121] In DASH, an independent representation may be defined as a representation that can be processed independently of any other representations. An independent representation may be understood to comprise an independent bitstream or an independent layer of a bitstream. A dependent representation may be defined as a representation for which Segments from its complementary representations are necessary for presentation and/or decoding of the contained media content components. A dependent representation may be understood to comprise e.g. a predicted layer of a scalable bitstream. A complementary representation may be defined as a representation which complements at least one dependent representation. A complementary representation may be an independent representation or a dependent representation. Dependent Representations may be described by a Representation element that contains a @dependencyId attribute. Dependent
Representations can be regarded as regular Representations except that they depend on a set of complementary Representations for decoding and/or presentation. The @dependencyId contains the values of the @id attribute of all the complementary Representations, i.e. Representations that are necessary to present and/or decode the media content components contained in this dependent Representation.
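A minimal sketch (with a hypothetical data structure) of how a client could derive, from the @dependencyId attribute, the order in which Segments of the complementary Representations are handled ahead of the Segments of a dependent Representation:

def decoding_order_for_dependent(representations, dependent_id):
    # representations: dict mapping a Representation @id to a dict of its attributes,
    # where 'dependencyId' holds the whitespace-separated list of complementary @id values.
    dependency = representations[dependent_id].get('dependencyId', '')
    complementary_ids = dependency.split() if dependency else []
    return complementary_ids + [dependent_id]

# Example: an enhancement-layer Representation depending on a base layer.
reps = {'base': {}, 'enh': {'dependencyId': 'base'}}
print(decoding_order_for_dependent(reps, 'enh'))   # ['base', 'enh']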
[0122] In the context of DASH, the following definitions may be used: A media content component or a media component may be defined as one continuous component of the media content with an assigned media component type that can be encoded individually into a media stream. Media content may be defined as one media content period or a contiguous sequence of media content periods. Media content component type may be defined as a single type of media content such as audio, video, or text. A media stream may be defined as an encoded version of a media content component.
[0123] An Initialization Segment may be defined as a Segment containing metadata that is necessary to present the media streams encapsulated in Media Segments. In ISOBMFF based segment formats, an Initialization Segment may comprise the Movie Box ('moov') which might not include metadata for any samples, i.e. any metadata for samples is provided in 'moof' boxes.
[0124] A Media Segment contains a certain duration of media data for playback at a normal speed; such duration is referred to as Media Segment duration or Segment duration. The content producer or service provider may select the Segment duration according to the desired characteristics of the service. For example, a relatively short Segment duration may be used in a live service to achieve a short end-to-end latency. The reason is that Segment duration is typically a lower bound on the end-to-end latency perceived by a DASH client since a Segment is a discrete unit of generating media data for DASH. Content generation is typically done in such a manner that a whole Segment of media data is made available for a server. Furthermore, many client implementations use a Segment as the unit for GET requests. Thus, in typical arrangements for live services a Segment can be requested by a DASH client only when the whole duration of the Media Segment is available as well as encoded and encapsulated into a Segment. For on-demand services, different strategies of selecting Segment duration may be used.
[0125] DASH supports rate adaptation by dynamically requesting Media Segments from different Representations within an Adaptation Set to match varying network bandwidth. When a DASH client switches up or down between Representations, coding dependencies within a Representation have to be taken into account. A Representation switch may only happen at a random access point (RAP), which is typically used in video coding techniques such as H.264/AVC. In DASH, a more general concept named Stream Access Point (SAP) is introduced to provide a codec-independent solution for accessing a
Representation and switching between Representations. In DASH, a SAP is specified as a position in a Representation that enables playback of a media stream to be started using only the information contained in Representation data starting from that position onwards (preceded by initialising data in the Initialisation Segment, if any). Hence, Representation switching can be performed at a SAP.
[0126] Several types of SAP have been specified, including the following. SAP Type 1 corresponds to what is known in some coding schemes as a "Closed GOP random access point" (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps) and in addition the first picture in decoding order is also the first picture in presentation order. SAP Type 2 corresponds to what is known in some coding schemes as a "Closed GOP random access point" (in which all pictures, in decoding order, can be correctly decoded, resulting in a continuous time sequence of correctly decoded pictures with no gaps), for which the first picture in decoding order may not be the first picture in presentation order. SAP Type 3 corresponds to what is known in some coding schemes as an "Open GOP random access point", in which there may be some pictures in decoding order that cannot be correctly decoded and have presentation times earlier than the intra-coded picture associated with the SAP.
[0127] A stream access point (SAP) sample group as specified in ISOBMFF identifies samples as being of the indicated SAP type. The grouping_type_parameter for the SAP sample group comprises the fields target_layers and layer_id_method_idc. target_layers specifies the target layers for the indicated SAPs. The semantics of target_layers may depend on the value of layer_id_method_idc, which specifies the semantics of target_layers. layer_id_method_idc equal to 0 specifies that the target layers consist of all the layers represented by the track. The sample group description entry for the SAP sample group comprises the fields dependent_flag and sap_type. dependent_flag may be required to be 0 for non-layered media. dependent_flag equal to 1 specifies that the reference layers, if any, for predicting the target layers may have to be decoded for accessing a sample of this sample group. dependent_flag equal to 0 specifies that the reference layers, if any, for predicting the target layers need not be decoded for accessing any SAP of this sample group. sap_type values in the range of 1 to 6, inclusive, specify the SAP type of the associated samples.
[0128] A sync sample may be defined as a sample in a track that is of a SAP of type 1 or 2. Sync samples may be indicated with SyncSampleBox or by sample_is_non_sync_sample equal to 0 in the signaling for track fragments.
[0129] A Segment may further be partitioned into Subsegments e.g. to enable downloading segments in multiple parts. Subsegments may be required to contain complete access units.
Subsegments may be indexed by the Segment Index box, which contains information to map the presentation time range and byte range for each Subsegment. The Segment Index box may also describe subsegments and stream access points in the segment by signaling their durations and byte offsets. A DASH client may use the information obtained from Segment Index box(es) to make an HTTP GET request for a specific Subsegment using a byte range HTTP request. If a relatively long Segment duration is used, then Subsegments may be used to keep the size of HTTP responses reasonable and flexible for bitrate adaptation. The indexing information of a segment may be put in the single box at the beginning of that segment, or spread among many indexing boxes in the segment. Different methods of spreading are possible, such as hierarchical, daisy chain, and hybrid. This technique may avoid adding a large box at the beginning of the segment and therefore may prevent a possible initial download delay.
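As an illustration of using Segment Index information for Subsegment requests, the following Python sketch (hypothetical function and inputs) computes the HTTP Range header for a given Subsegment from the byte offset of the first Subsegment after the Segment Index box and the list of Subsegment sizes signalled in that box:

def byte_range_header(first_subsegment_offset, subsegment_sizes, index):
    # first_subsegment_offset: byte offset of the first Subsegment within the Segment.
    # subsegment_sizes: list of referenced sizes, in bytes, of consecutive Subsegments.
    # index: 0-based index of the requested Subsegment.
    start = first_subsegment_offset + sum(subsegment_sizes[:index])
    end = start + subsegment_sizes[index] - 1       # HTTP byte ranges are inclusive
    return {'Range': 'bytes=%d-%d' % (start, end)}

# Example: request the second Subsegment of three.
print(byte_range_header(852, [100000, 120000, 90000], 1))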
[0130] It may be required that for any dependent Representation X that depends on
complementary Representation Y, the m-th Subsegment of X and the n-th Subsegment of Y shall be non-overlapping whenever m is not equal to n. It may be required that for dependent Representations the concatenation of the Initialization Segment with the sequence of Subsegments of the dependent Representations, each being preceded by the corresponding Subsegment of each of the complementary Representations in order as provided in the @dependencyId attribute shall represent a conforming Subsegment sequence conforming to the media format as specified in the @mimeType attribute for this dependent Representation.
[0131] Track references of ISOBMFF can be reflected in the list of four-character codes in the @associationType attribute of DASH MPD that is mapped to the list of Representation@id values given in the @associationId in a one to one manner. These attributes may be used for linking media Representations with metadata Representations.
[0132] MPEG-DASH defines segment-container formats for both ISOBMFF and MPEG-2 Transport Streams. Other specifications may specify segment formats based on other container formats. For example, a segment format based on the Matroska container file format has been proposed and may be summarized as follows. When Matroska files are carried as DASH segments or the like, the association of DASH units and Matroska units may be specified as follows. A subsegment (of DASH) may be defined as one or more consecutive Clusters of Matroska-encapsulated content. An
Initialization Segment of DASH may be required to comprise the EBML header, Segment header (of Matroska), Segment Information (of Matroska) and Tracks, and may optionally comprise other level 1 elements and padding. A Segment Index of DASH may comprise a Cues Element of Matroska.
[0133] MPEG Omnidirectional Media Format (ISO/IEC 23090-2) is a virtual reality (VR) system standard. OMAF defines a media format (comprising both a file format derived from ISOBMFF and streaming formats for DASH and MPEG Media Transport). OMAF version 1 supports 360° video, images, and audio, as well as the associated timed text, and facilitates three degrees of freedom (3DoF) content consumption, meaning that a viewport can be selected with any azimuth and elevation range and tilt angle that are covered by the omnidirectional content but the content is not adapted to any translational changes of the viewing position. The viewport-dependent streaming scenarios described further below have also been designed for 3DoF, although they could potentially be adapted to a different number of degrees of freedom. [0134] Standardization of OMAF version 2 is ongoing. OMAF v2 is planned to include features like support for multiple viewpoints, overlays, sub-picture compositions, and six degrees of freedom with a viewing space limited roughly to upper-body movements only.
[0135] OMAF is discussed with reference to Fig. 3. A real-world audio-visual scene (A) may be captured 220 by audio sensors as well as a set of cameras or a camera device with multiple lenses and sensors. The acquisition results in a set of digital image/video (Bi) and audio (Ba) signals. The cameras/lenses may cover all directions around the center point of the camera set or camera device, thus the name of 360-degree video.
[0136] Audio can be captured using many different microphone configurations and stored as several different content formats, including channel-based signals, static or dynamic (i.e. moving through the 3D scene) object signals, and scene-based signals (e.g., Higher Order Ambisonics). The channel-based signals may conform to one of the loudspeaker layouts defined in CICP (Coding-Independent Code-Points). In an omnidirectional media application, the loudspeaker layout signals of the rendered immersive audio program may be binauralized for presentation via headphones.
[0137] The images (Bi) of the same time instance are stitched, projected, and mapped 221 onto a packed picture (D).
[0138] For monoscopic 360-degree video, the input images of one time instance may be stitched to generate a projected picture representing one view. An example of image stitching, projection, and region-wise packing process for monoscopic content is illustrated with Fig. 2b. Input images (Bi) are stitched and projected 202 onto a three-dimensional projection structure that may for example be a unit sphere. The projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof. A projection structure may be defined as three-dimensional structure consisting of one or more surface(s) on which the captured VR image/video content is projected, and from which a respective projected picture can be formed. The image data on the projection structure is further arranged onto a two-dimensional projected picture (C) 203. The term projection may be defined as a process by which a set of input images are projected onto a projected picture. There may be a pre-defined set of representation formats of the projected picture, including for example an equirectangular projection (ERP) format and a cube map projection (CMP) format. It may be considered that the projected picture covers the entire sphere.
[0139] Optionally, a region-wise packing 204 is then applied to map the projected picture 203 (C) onto a packed picture 205 (D). If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding 206. Otherwise, regions of the projected picture (C) are mapped onto a packed picture (D) by indicating the location, shape, and size of each region in the packed picture, and the packed picture (D) is given as input to image/video encoding. The term region-wise packing may be defined as a process by which a projected picture is mapped to a packed picture. The term packed picture may be defined as a picture that results from region-wise packing of a projected picture. [0140] In the case of stereoscopic 360-degree video, as shown in an example of Fig. 4a, the input images of one time instance are stitched to generate a projected picture representing two views (CL, CR), one for each eye. Both views (CL, CR) can be mapped onto the same packed picture (D), and encoded by a traditional 2D video encoder. Alternatively, each view of the projected picture can be mapped to its own packed picture, in which case the image stitching, projection, and region-wise packing is performed as illustrated in Fig. 2a. A sequence of packed pictures of either the left view or the right view can be independently coded or, when using a multiview video encoder, predicted from the other view.
[0141] An example of the image stitching, projection, and region-wise packing process for stereoscopic content where both views are mapped onto the same packed picture, as shown in Fig. 3, is described next in a more detailed manner. Input images (Bi) are stitched and projected onto two three-dimensional projection structures, one for each eye. The image data on each projection structure is further arranged onto a two-dimensional projected picture (CL for left eye, CR for right eye), which covers the entire sphere. Frame packing is applied to pack the left view picture and right view picture onto the same projected picture. Optionally, region-wise packing is then applied to map the projected picture onto a packed picture, and the packed picture (D) is given as input to image/video encoding. If the region-wise packing is not applied, the packed picture is identical to the projected picture, and this picture is given as input to image/video encoding.
[0142] The image stitching, projection, and region-wise packing process can be carried out multiple times for the same source images to create different versions of the same content, e.g. for different orientations of the projection structure. Similarly, the region-wise packing process can be performed multiple times from the same projected picture to create more than one sequence of packed pictures to be encoded.
[0143] 360-degree panoramic content (i.e., images and video) covers horizontally the full 360-degree field-of-view around the capturing position of an imaging device. The vertical field-of-view may vary and can be e.g. 180 degrees. A panoramic image covering 360-degree field-of-view horizontally and 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP). In this case, the horizontal coordinate may be considered equivalent to a longitude, and the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied. The process of forming a monoscopic equirectangular panorama picture is illustrated in Fig. 4b. A set of input images, such as fisheye images of a camera array or a camera device with multiple lenses and sensors, is stitched onto a spherical image. The spherical image is further projected onto a cylinder (without the top and bottom faces). The cylinder is unfolded to form a two-dimensional projected picture. In practice one or more of the presented steps may be merged; for example, the input images may be directly projected onto a cylinder without an intermediate projection onto a sphere. The projection structure for equirectangular panorama may be considered to be a cylinder that comprises a single surface. [0144] In general, 360-degree content can be mapped onto different types of solid geometrical structures, such as polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), cylinder (directly without projecting onto a sphere first), cone, etc. and then unwrapped to a two-dimensional image plane.
[0145] In some cases panoramic content with 360-degree horizontal field-of-view but with less than 180-degree vertical field-of-view may be considered a special case of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane. In some cases a panoramic image may have less than 360-degree horizontal field-of-view and up to 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
[0146] Region-wise packing information may be encoded as metadata in or along the bitstream. For example, the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier.
[0147] Rectangular region-wise packing metadata may be described as follows:
[0148] For each region, the metadata defines a rectangle in a projected picture, the respective rectangle in the packed picture, and an optional transformation of rotation by 90, 180, or 270 degrees and/or horizontal and/or vertical mirroring. Rectangles may, for example, be indicated by the locations of the top-left corner and the bottom-right corner. The mapping may comprise resampling. As the sizes of the respective rectangles can differ in the projected and packed pictures, the mechanism infers region-wise resampling.
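The rectangular region-wise packing just described can be illustrated with a short sketch. The following Python fragment is illustrative only and not part of any specification; the names Region and apply_region_wise_packing, the use of nearest-neighbour resampling, and the omission of the optional rotation/mirroring transformations are assumptions made for the example.

from dataclasses import dataclass

@dataclass
class Region:
    # Rectangle in the projected picture (top-left corner and size).
    proj_x: int
    proj_y: int
    proj_w: int
    proj_h: int
    # Respective rectangle in the packed picture.
    pack_x: int
    pack_y: int
    pack_w: int
    pack_h: int

def apply_region_wise_packing(projected, packed, regions):
    # projected and packed are 2D lists of samples; regions lists the mappings.
    for r in regions:
        for y in range(r.pack_h):
            for x in range(r.pack_w):
                # Nearest-neighbour resampling: the projected and packed rectangles
                # may differ in size, which implies region-wise resampling.
                src_x = r.proj_x + x * r.proj_w // r.pack_w
                src_y = r.proj_y + y * r.proj_h // r.pack_h
                packed[r.pack_y + y][r.pack_x + x] = projected[src_y][src_x]
    return packed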
[0149] Among other things, region-wise packing provides signalling for the following usage scenarios:
1) Additional compression for viewport-independent projections is achieved by densifying
sampling of different regions to achieve more uniformity across the sphere. For example, the top and bottom parts of ERP are oversampled, and region-wise packing can be applied to down-sample them horizontally.
2) Arranging the faces of plane-based projection formats, such as cube map projection, in an adaptive manner.
3) Generating viewport-dependent bitstreams that use viewport-independent projection formats.
For example, regions of ERP or faces of CMP can have different sampling densities and the underlying projection structure can have different orientations.
4) Indicating regions of the packed pictures represented by an extractor track. This is needed when an extractor track collects tiles from bitstreams of different resolutions. [0150] A guard band may be defined as an area in a packed picture that is not rendered but may be used to improve the rendered part of the packed picture to avoid or mitigate visual artifacts such as seams.
[0151] Referring again to Fig. 3, the OMAF allows image stitching, projection, and region-wise packing to be omitted and the image/video data to be encoded in its captured format. In this case, images (D) are considered the same as images (Bi) and a limited number of fisheye images per time instance are encoded.
[0152] For audio, the stitching process is not needed, since the captured signals are inherently immersive and omnidirectional.
[0153] The stitched images (D) are encoded 206 as coded images (Ei) or a coded video bitstream (Ev). The captured audio (Ba) is encoded 222 as an audio bitstream (Ea). The coded images, video, and/or audio are then composed 224 into a media file for file playback (F) or a sequence of an initialization segment and media segments for streaming (Fs), according to a particular media container file format. In this specification, the media container file format is the ISO base media file format. The file encapsulator 224 also includes metadata into the file or the segments, such as projection and region-wise packing information assisting in rendering the decoded packed pictures.
[0154] The metadata in the file may include:
- the projection format of the projected picture,
- fisheye video parameters,
- the area of the spherical surface covered by the packed picture,
- the orientation of the projection structure corresponding to the projected picture relative to the global coordinate axes,
- region-wise packing information, and
- region-wise quality ranking (optional).
[0155] Region-wise packing information may be encoded as metadata in or along the bitstream, for example as region-wise packing SEI message(s) and/or as region-wise packing boxes in a file containing the bitstream. For example, the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed picture format, e.g. from a projected picture to a packed picture, as described earlier. The region-wise mapping information may for example comprise for each mapped region a source rectangle (a.k.a. projected region) in the projected picture and a destination rectangle (a.k.a. packed region) in the packed picture, where samples within the source rectangle are mapped to the destination rectangle and rectangles may for example be indicated by the locations of the top-left corner and the bottom-right corner. The mapping may comprise resampling. Additionally or alternatively, the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, and one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring. The semantics of the packing information may be specified in a manner that indicates, for each sample location within packed regions of a decoded picture, the respective spherical coordinate location.
[0156] The segments (Fs) may be delivered 225 using a delivery mechanism to a player.
[0157] The file that the file encapsulator outputs (F) is identical to the file that the file decapsulator inputs (F'). A file decapsulator 226 processes the file (F') or the received segments (F's) and extracts the coded bitstreams (E'a, E'v, and/or E'i) and parses the metadata. The audio, video, and/or images are then decoded 228 into decoded signals (B'a for audio, and D' for images/video). The decoded packed pictures (D') are projected 229 onto the screen of a head-mounted display or any other display device 230 based on the current viewing orientation or viewport and the projection, spherical coverage, projection structure orientation, and region-wise packing metadata parsed from the file. Likewise, decoded audio (B'a) is rendered 229, e.g. through headphones 231, according to the current viewing orientation. The current viewing orientation is determined by the head tracking and possibly also eye tracking functionality 227. Besides being used by the renderer 229 to render the appropriate part of decoded video and audio signals, the current viewing orientation may also be used by the video and audio decoders 228 for decoding optimization.
[0158] The process described above is applicable to both live and on-demand use cases.
[0159] At any point of time, an application on an HMD or on another display device renders a portion of the 360-degree video. This portion may be defined as a viewport. A viewport may be understood as a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display. According to another definition, a viewport may be defined as a part of the spherical video that is currently displayed. A viewport may be characterized by horizontal and vertical fields of view (FOV or FoV).
[0160] A viewpoint may be defined as the point or space from which the user views the scene; it usually corresponds to a camera position. Slight head motion does not imply a different viewpoint. A viewing position may be defined as the position within a viewing space from which the user views the scene. A viewing space may be defined as a 3D space of viewing positions within which rendering of image and video is enabled and VR experience is valid.
[0161] An omnidirectional image (360-degree video) may be divided into several regions called tiles. The tiles may have been encoded as motion constrained tiles with different quality/resolution. A client apparatus may request the regions/tiles corresponding to a current viewport of the user with high resolution/quality. As used herein the term omnidirectional may refer to media content that has greater spatial extent than a field-of-view of a device rendering the content. Omnidirectional content may for example cover substantially 360 degrees in horizontal dimension and substantially 180 degrees in vertical dimension, but omnidirectional may also refer to content covering less than 360 degree view in horizontal direction and/or 180 degree view in vertical direction. [0162] The client (e.g. the player) may request the whole 360-degree video/image either with uniform quality, which means a viewport independent delivery, or such that the quality of the video/image in a viewport of the user is higher than the quality of the video/image in the non-viewport part of the scene, which means a viewport dependent delivery.
[0163] In the viewport-independent streaming the (requested) 360-degree video may be encoded at different bitrates. Each encoded bitstream may be stored with, for example, ISOBMFF and then segmented based on MPEG-DASH. The whole 360-degree video may be delivered to the client/player uniformly at the same quality.
[0164] In the viewport-dependent streaming, the (requested) 360-degree video may be divided into several regions/tiles and encoded as, for example, motion constrained tiles. Each encoded tiled bitstream may be stored with, for example, ISOBMFF and then segmented based on MPEG-DASH. The regions/tiles corresponding to the user's viewport may be delivered at high quality/resolution, whereas other parts of 360-degree video which are not within the user's viewport may be delivered at a lower quality/resolution.
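As a rough illustration of the viewport-dependent selection described above, the following Python sketch assigns a high quality to tiles whose azimuth range overlaps the current viewport and a low quality to the remaining tiles. The tile and viewport model (one row of tiles covering 360 degrees of azimuth, no wrap-around handling) and the function name are assumptions made only for this example.

def select_tile_qualities(num_tiles, viewport_center_deg, viewport_width_deg):
    tile_width = 360.0 / num_tiles
    half = viewport_width_deg / 2.0
    qualities = []
    for i in range(num_tiles):
        tile_start = i * tile_width - 180.0   # tiles laid out over [-180, 180) degrees
        tile_end = tile_start + tile_width
        overlaps = tile_end > viewport_center_deg - half and tile_start < viewport_center_deg + half
        qualities.append("high" if overlaps else "low")
    return qualities

# For 8 tiles and a 100-degree-wide viewport centred at 0 degrees of azimuth:
# ['low', 'low', 'high', 'high', 'high', 'high', 'low', 'low']
print(select_tile_qualities(8, 0.0, 100.0))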
[0165] Fig. 6c illustrates an example of an omnidirectional video/image from an event, for example a football game. The video/image frame is represented by the equirectangular projection, although problems/solutions described herein apply more generally to all the projection formats used for representing omnidirectional videos/images. Tiles which include information of the viewport (illustrated with hashed or cross-hatched blocks in Fig. 6c) are encoded with higher resolution and/or quality and the other tiles are encoded with lower resolution and/or quality (illustrated with solid white blocks in Fig. 6c). The rectangle drawn with solid, thick lines illustrates the current viewport.
[0166] Fig. 7b shows an example of an omnidirectional streaming system 600. Raw video signal may be input 601 and motion constrained encoding 602 may be applied to the raw video data. The motion constrained encoding 602 may form a first motion constrained bitstream 610 with a first quality and/or a first resolution and a second motion constrained bitstream 611 with a second quality and/or a second resolution. The encoded image information may be encapsulated into one or more files and stored 603. In the viewport-dependent streaming the encapsulation stage 603 may take into consideration a user’s viewport information so that those parts of the video/image which are within the user’s viewport may be encapsulated from that motion constrained bitstream which has higher quality and/or resolution (e.g. the first motion constrained bitstream 610) and the other parts may be encapsulated from the other motion constrained bitstream (e.g. the second motion constrained bitstream 611). The file(s) may be segmented 604 e.g. to comply with a segment format of MPEG DASH, and a Media Presentation Description may be formed 605. The content may be delivered 606 via a communication network to a player e.g. as a response to a request from the player. In the player the received segments, which in the viewport-dependent streaming may comprise parts of two (or more) motion constrained bitstreams having different quality and/or resolution, as is illustrated with 612 in Fig. 7b, are parsed and the file(s) decapsulated 607 before decoding 608 and playback 609. [0167] Fig. 7c shows an example of content flow in the DASH delivery function of MPEG omnidirectional media format (OMAF). The following interfaces may be specified. Fs/F's are initialization and media segments. G illustrates DASH Media Presentation Description (MPD), which may include omnidirectional media-specific metadata, such as information on projection and region-wise packing. An MPD (G) may be generated based on the segments (Fs) and other media files representing the same content. The DASH MPD generator includes omnidirectional media-specific descriptors. The descriptors may include projection type, region-wise packing type, content coverage, spherical region-wise quality ranking, 2D region-wise quality ranking, and fisheye omnidirectional video information. This information may be generated on the basis of the equivalent information in the segments.
[0168] The player may be informed of the orientation of the user’s gaze e.g. on the basis of information provided by a head-mounted display 614 that the user is wearing to watch the
omnidirectional video. The parser and file decapsulator 607 may use that information to select and request coding units so that the quality/resolution of different areas of the viewport correspond with desired or recommended quality/resolution.
[0169] A tile track may be defined as a track that contains sequences of one or more motion-constrained tile sets of a coded bitstream. Decoding of a tile track without the other tile tracks of the bitstream may require a specialized decoder, which may be e.g. required to skip absent tiles in the decoding process. An HEVC tile track specified in ISO/IEC 14496-15 enables storage of one or more temporal motion-constrained tile sets as a track. When a tile track contains tiles of an HEVC base layer, the sample entry type 'hvt1' is used. When a tile track contains tiles of a non-base layer, the sample entry type 'lht1' is used. A sample of a tile track consists of one or more complete tiles in one or more complete slice segments. A tile track is independent from any other tile track that includes VCL NAL units of the same layer as this tile track. A tile track has a 'tbas' track reference to a tile base track. The tile base track does not include VCL NAL units. A tile base track indicates the tile ordering using a 'sabt' track reference to the tile tracks. An HEVC coded picture corresponding to a sample in the tile base track can be reconstructed by collecting the coded data from the tile-aligned samples of the tracks indicated by the 'sabt' track reference in the order of the track references.
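The reconstruction rule at the end of the previous paragraph can be sketched as follows. The data model (plain dictionaries with "non_vcl", "sabt_refs" and "samples" keys) is a simplification invented for this example and does not reflect an actual ISOBMFF parser; the point is only the ordering: coded data is collected from the time-aligned samples of the referenced tile tracks in 'sabt' track-reference order.

def reconstruct_coded_picture(tile_base_track, sample_index):
    # Non-VCL data (e.g. parameter sets) could come from the tile base track itself,
    # since the tile base track does not include VCL NAL units.
    nal_units = list(tile_base_track.get("non_vcl", []))
    # Collect the tile-aligned samples of the referenced tile tracks in the order
    # given by the 'sabt' track references.
    for tile_track in tile_base_track["sabt_refs"]:
        nal_units.extend(tile_track["samples"][sample_index])
    return nal_units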
[0170] A constructed tile set track is a tile set track, e.g. a track according to ISOBMFF, containing constructors that, when executed, result into a tile set bitstream.
[0171] A constructor is a set of instructions that, when executed, results into a valid piece of sample data according to the underlying sample format.
[0172] An extractor is a constructor that, when executed, copies the sample data of an indicated byte range of an indicated sample of an indicated track. Inclusion by reference may be defined as an extractor or alike that, when executed, copies the sample data of an indicated byte range of an indicated sample of an indicated track. [0173] A full-picture-compliant tile set {track | bitstream} is a tile set {track | bitstream} that conforms to the full-picture {track | bitstream} format. Here, the notation {optionA | optionB} illustrates alternatives, i.e. either optionA or optionB, which is selected consistently in all selections. A full-picture-compliant tile set track can be played as with any full-picture track using the parsing and decoding process of full-picture tracks. A full-picture-compliant bitstream can be decoded as with any full-picture bitstream using the decoding process of full-picture bitstreams. A full-picture track is a track representing an original bitstream (including all its tiles). A tile set bitstream is a bitstream that contains a tile set of an original bitstream but not representing the entire original bitstream. A tile set track is a track representing a tile set of an original bitstream but not representing the entire original bitstream.
[0174] A full-picture-compliant tile set track may comprise extractors as defined for HEVC. An extractor may, for example, be an in-line constructor including a slice segment header and a sample constructor extracting coded video data for a tile set from a referenced full-picture track.
[0175] An in-line constructor is a constructor that, when executed, returns the sample data that it contains. For example, an in-line constructor may comprise a set of instructions for rewriting a new slice header. The phrase in-line may be used to indicate coded data that is included in the sample of a track.
[0176] A full-picture track is a track representing an original bitstream (including all its tiles).
[0177] A NAL-unit-like structure refers to a structure with the properties of a NAL unit except that start code emulation prevention is not performed.
[0178] A pre-constructed tile set track is a tile set track containing the sample data in-line.
[0179] A tile set bitstream is a bitstream that contains a tile set of an original bitstream but not representing the entire original bitstream.
[0180] A tile set track is a track representing a tile set of an original bitstream but not representing the entire original bitstream.
[0181] A video codec may comprise an encoder that transforms the input video into a compressed representation suited for storage/transmission and a decoder that can uncompress the compressed video representation back into a viewable form. A video encoder and/or a video decoder may also be separate from each other, i.e. need not form a codec. Typically, the encoder discards some information in the original video sequence in order to represent the video in a more compact form (that is, at lower bitrate). A video encoder may be used to encode an image sequence, as defined subsequently, and a video decoder may be used to decode a coded image sequence. A video encoder or an intra coding part of a video encoder or an image encoder may be used to encode an image, and a video decoder or an inter decoding part of a video decoder or an image decoder may be used to decode a coded image.
[0182] Some hybrid video encoders, for example many encoder implementations of ITU-T H.263 and H.264, encode the video information in two phases. Firstly, pixel values in a certain picture area (or "block") are predicted for example by motion compensation means (finding and indicating an area in one of the previously coded video frames that corresponds closely to the block being coded) or by spatial means (using the pixel values around the block to be coded in a specified manner). Secondly, the prediction error, i.e. the difference between the predicted block of pixels and the original block of pixels, is coded. This is typically done by transforming the difference in pixel values using a specified transform (e.g. Discrete Cosine Transform (DCT) or a variant of it), quantizing the coefficients and entropy coding the quantized coefficients. By varying the fidelity of the quantization process, the encoder can control the balance between the accuracy of the pixel representation (picture quality) and the size of the resulting coded video representation (file size or transmission bitrate).
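The two coding phases above can be condensed into a small sketch. The following Python fragment, which assumes NumPy and SciPy are available and uses a single flat quantization step qp_step, only illustrates the predict / transform / quantize idea and is not a description of any particular encoder.

import numpy as np
from scipy.fftpack import dct, idct

def encode_block(block, prediction, qp_step=8.0):
    # Phase 1 has produced 'prediction'; phase 2 codes the prediction error.
    residual = block.astype(float) - prediction.astype(float)
    coeffs = dct(dct(residual, axis=0, norm='ortho'), axis=1, norm='ortho')
    quantized = np.round(coeffs / qp_step)   # lossy step: larger qp_step, lower bitrate
    return quantized                         # these coefficients would be entropy-coded

def decode_block(quantized, prediction, qp_step=8.0):
    coeffs = quantized * qp_step
    residual = idct(idct(coeffs, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.clip(prediction + residual, 0, 255)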
[0183] In temporal prediction, the sources of prediction are previously decoded pictures (a.k.a. reference pictures). In intra block copy (a.k.a. intra-block-copy prediction), prediction is applied similarly to temporal prediction but the reference picture is the current picture and only previously decoded samples can be referred to in the prediction process. Inter-layer or inter-view prediction may be applied similarly to temporal prediction, but the reference picture is a decoded picture from another scalable layer or from another view, respectively. In some cases, inter prediction may refer to temporal prediction only, while in other cases inter prediction may refer collectively to temporal prediction and any of intra block copy, inter-layer prediction, and inter-view prediction provided that they are performed with the same or a similar process as temporal prediction. Inter prediction or temporal prediction may sometimes be referred to as motion compensation or motion-compensated prediction.
[0184] Intra prediction utilizes the fact that adjacent pixels within the same picture are likely to be correlated. Intra prediction can be performed in spatial or transform domain, i.e., either sample values or transform coefficients can be predicted. Intra prediction is typically exploited in intra coding, where no inter prediction is applied.
[0185] There may be different types of intra prediction modes available in a coding scheme, out of which an encoder can select and indicate the used one, e.g. on block or coding unit basis. A decoder may decode the indicated intra prediction mode and reconstruct the prediction block accordingly. For example, several angular intra prediction modes, each for different angular directions, may be available. Angular intra prediction may be considered to extrapolate the border samples of adjacent blocks along a linear prediction direction. Additionally or alternatively, a planar prediction mode may be available. Planar prediction may be considered to essentially form a prediction block, in which each sample of a prediction block may be specified to be an average of the vertically aligned sample in the adjacent sample column on the left of the current block and the horizontally aligned sample in the adjacent sample line above the current block. Additionally or alternatively, a DC prediction mode may be available, in which the prediction block is essentially an average sample value of a neighboring block or blocks.
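Two of the modes mentioned above can be sketched directly from their descriptions. The following Python fragment is a simplified illustration (the function names and the NxN block model with one left reference column and one top reference row are assumptions), not the exact prediction process of any codec.

import numpy as np

def dc_prediction(left_col, top_row):
    # DC mode: the prediction block is essentially the average of neighbouring samples.
    n = len(top_row)
    dc = (np.sum(left_col) + np.sum(top_row)) / (2.0 * n)
    return np.full((n, n), dc)

def planar_prediction(left_col, top_row):
    # Simplified planar mode as described above: each sample is the average of the
    # reference sample above it (same column) and the reference sample to its left
    # (same row).
    n = len(top_row)
    pred = np.empty((n, n))
    for y in range(n):
        for x in range(n):
            pred[y, x] = (top_row[x] + left_col[y]) / 2.0
    return pred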
[0186] One outcome of the coding procedure is a set of coding parameters, such as motion vectors and quantized transform coefficients. Many parameters can be entropy-coded more efficiently if they are predicted first from spatially or temporally neighbouring parameters. For example, a motion vector may be predicted from spatially adjacent motion vectors and only the difference relative to the motion vector predictor may be coded. Prediction of coding parameters and intra prediction may be collectively referred to as in-picture prediction.
[0187| Fig. 8a shows a block diagram of a video encoder suitable for employing embodiments of the invention. Fig. 8a presents an encoder for two layers, but it would be appreciated that presented encoder could be similarly simplified to encode only one layer or extended to encode more than two layers. Fig. 8a illustrates an embodiment of a video encoder comprising a first encoder section 500 for a base layer and a second encoder section 502 for an enhancement layer. Each of the first encoder section 500 and the second encoder section 502 may comprise similar elements for encoding incoming pictures. The encoder sections 500, 502 may comprise a pixel predictor 302, 402, prediction error encoder 303, 403 and prediction error decoder 304, 404. Fig. 8a also shows an embodiment of the pixel predictor 302, 402 as comprising an inter-predictor 306, 406, an intra-predictor 308, 408, a mode selector 310, 410, a filter 316, 416, and a reference frame memory 318, 418. The pixel predictor 302 of the first encoder section 500 receives 300 base layer images of a video stream to be encoded at both the inter-predictor 306 (which determines the difference between the image and a motion compensated reference frame 318) and the intra-predictor 308 (which determines a prediction for an image block based only on the already processed parts of current frame or picture). The output of both the inter predictor and the intra-predictor are passed to the mode selector 310. The intra-predictor 308 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 310. The mode selector 310 also receives a copy of the base layer picture 300. Correspondingly, the pixel predictor 402 of the second encoder section 502 receives 400 enhancement layer images of a video stream to be encoded at both the inter-predictor 406 (which determines the difference between the image and a motion compensated reference frame 418) and the intra-predictor 408 (which determines a prediction for an image block based only on the already processed parts of current frame or picture). The output of both the inter-predictor and the intra-predictor are passed to the mode selector 410. The intra-predictor 408 may have more than one intra-prediction modes. Hence, each mode may perform the intra-prediction and provide the predicted signal to the mode selector 410. The mode selector 410 also receives a copy of the enhancement layer picture 400.
[0188| Depending on which encoding mode is selected to encode the current block, the output of the inter-predictor 306, 406 or the output of one of the optional intra-predictor modes or the output of a surface encoder within the mode selector is passed to the output of the mode selector 310, 410. The output of the mode selector is passed to a first summing device 321, 421. The first summing device may subtract the output of the pixel predictor 302, 402 from the base layer picture 300/enhancement layer picture 400 to produce a first prediction error signal 320, 420 which is input to the prediction error encoder 303, 403. [0189] The pixel predictor 302, 402 further receives from a preliminary reconstructor 339, 439 the combination of the prediction representation of the image block 312, 412 and the output 338, 438 of the prediction error decoder 304, 404. The preliminary reconstructed image 314, 414 may be passed to the intra-predictor 308, 408 and to a filter 316, 416. The filter 316, 416 receiving the preliminary representation may filter the preliminary representation and output a final reconstructed image 340, 440 which may be saved in a reference frame memory 318, 418. The reference frame memory 318 may be connected to the inter-predictor 306 to be used as the reference image against which a future base layer picture 300 is compared in inter-prediction operations. Subject to the base layer being selected and indicated to be source for inter-layer sample prediction and/or inter-layer motion information prediction of the enhancement layer according to some embodiments, the reference frame memory 318 may also be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer pictures 400 is compared in inter-prediction operations. Moreover, the reference frame memory 418 may be connected to the inter-predictor 406 to be used as the reference image against which a future enhancement layer picture 400 is compared in inter prediction operations.
[0190] Filtering parameters from the filter 316 of the first encoder section 500 may be provided to the second encoder section 502 subject to the base layer being selected and indicated to be source for predicting the filtering parameters of the enhancement layer according to some embodiments.
[0191] The prediction error encoder 303, 403 comprises a transform unit 342, 442 and a quantizer 344, 444. The transform unit 342, 442 transforms the first prediction error signal 320, 420 to a transform domain. The transform is, for example, the DCT transform. The quantizer 344, 444 quantizes the transform domain signal, e.g. the DCT coefficients, to form quantized coefficients.
[0192] The prediction error decoder 304, 404 receives the output from the prediction error encoder 303, 403 and performs the opposite processes of the prediction error encoder 303, 403 to produce a decoded prediction error signal 338, 438 which, when combined with the prediction representation of the image block 312, 412 at the second summing device 339, 439, produces the preliminary reconstructed image 314, 414. The prediction error decoder may be considered to comprise a dequantizer 361, 461, which dequantizes the quantized coefficient values, e.g. DCT coefficients, to reconstruct the transform signal and an inverse transformation unit 363, 463, which performs the inverse transformation to the reconstructed transform signal wherein the output of the inverse transformation unit 363, 463 contains reconstructed block(s). The prediction error decoder may also comprise a block filter which may filter the reconstructed block(s) according to further decoded information and filter parameters.
[0193] The entropy encoder 330, 430 receives the output of the prediction error encoder 303, 403 and may perform a suitable entropy encoding/variable length encoding on the signal to provide error detection and correction capability. The outputs of the entropy encoders 330, 430 may be inserted into a bitstream e.g. by a multiplexer 508. [0194] Fig. 8b shows a block diagram of a video decoder suitable for employing embodiments of the invention. Fig. 8b depicts a structure of a two-layer decoder, but it would be appreciated that the decoding operations may similarly be employed in a single-layer decoder.
[0195] The video decoder 550 comprises a first decoder section 552 for base layer pictures and a second decoder section 554 for enhancement layer pictures. Block 556 illustrates a demultiplexer for delivering information regarding base layer pictures to the first decoder section 552 and for delivering information regarding enhancement layer pictures to the second decoder section 554. Reference P’n stands for a predicted representation of an image block. Reference D’n stands for a reconstructed prediction error signal. Blocks 704, 804 illustrate preliminary reconstructed images (I’n). Reference R’n stands for a final reconstructed image. Blocks 703, 803 illustrate inverse transform (T-1). Blocks 702, 802 illustrate inverse quantization (Q-1). Blocks 700, 800 illustrate entropy decoding (E-1). Blocks 706, 806 illustrate a reference frame memory (RFM). Blocks 707, 807 illustrate prediction (P) (either inter prediction or intra prediction). Blocks 708, 808 illustrate filtering (F). Blocks 709, 809 may be used to combine decoded prediction error information with predicted base or enhancement layer pictures to obtain the preliminary reconstructed images (I’n). Preliminary reconstructed and filtered base layer pictures may be output 710 from the first decoder section 552 and preliminary reconstructed and filtered enhancement layer pictures may be output 810 from the second decoder section 554.
[0196] Herein, the decoder could be interpreted to cover any operational unit capable of carrying out the decoding operations, such as a player, a receiver, a gateway, a demultiplexer and/or a decoder.
[0197] The decoder reconstructs the output video by applying prediction means similar to the encoder to form a predicted representation of the pixel blocks (using the motion or spatial information created by the encoder and stored in the compressed representation) and prediction error decoding (inverse operation of the prediction error coding recovering the quantized prediction error signal in spatial pixel domain). After applying prediction and prediction error decoding means the decoder sums up the prediction and prediction error signals (pixel values) to form the output video frame. The decoder (and encoder) can also apply additional filtering means to improve the quality of the output video before passing it for display and/or storing it as prediction reference for the forthcoming frames in the video sequence.
[0198] In typical video codecs the motion information is indicated with motion vectors associated with each motion compensated image block, such as a prediction unit. Each of these motion vectors represents the displacement of the image block in the picture to be coded (in the encoder side) or decoded (in the decoder side) and the prediction source block in one of the previously coded or decoded pictures. In order to represent motion vectors efficiently, they are typically coded differentially with respect to block specific predicted motion vectors. In typical video codecs the predicted motion vectors are created in a predefined way, for example by calculating the median of the encoded or decoded motion vectors of the adjacent blocks. Another way to create motion vector predictions is to generate a list of candidate predictions from adjacent blocks and/or co-located blocks in temporal reference pictures and signaling the chosen candidate as the motion vector predictor. In addition to predicting the motion vector values, it can be predicted which reference picture(s) are used for motion-compensated prediction and this prediction information may be represented for example by a reference index of a previously coded/decoded picture. The reference index is typically predicted from adjacent blocks and/or co-located blocks in a temporal reference picture. Moreover, typical high efficiency video codecs employ an additional motion information coding/decoding mechanism, often called merging/merge mode, where all the motion field information, which includes a motion vector and a corresponding reference picture index for each available reference picture list, is predicted and used without any modification/correction. Similarly, predicting the motion field information is carried out using the motion field information of adjacent blocks and/or co-located blocks in temporal reference pictures, and the used motion field information is signaled among a motion field candidate list filled with motion field information of available adjacent/co-located blocks.
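One of the predictor constructions mentioned above, the component-wise median of the motion vectors of adjacent blocks, can be sketched as follows; only the motion vector difference relative to this predictor would then be coded. The function names and the choice of neighbouring blocks are assumptions made for illustration.

def median_mv_predictor(neighbour_mvs):
    # neighbour_mvs: list of (mvx, mvy) tuples, e.g. from the left, above and
    # above-right neighbouring blocks.
    xs = sorted(mv[0] for mv in neighbour_mvs)
    ys = sorted(mv[1] for mv in neighbour_mvs)
    mid = len(neighbour_mvs) // 2
    return xs[mid], ys[mid]

def mv_difference(mv, neighbour_mvs):
    # Only this difference would be entropy-coded into the bitstream.
    px, py = median_mv_predictor(neighbour_mvs)
    return mv[0] - px, mv[1] - py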
[0199] Typical video codecs enable the use of uni-prediction, where a single prediction block is used for a block being (de)coded, and bi-prediction, where two prediction blocks are combined to form the prediction for a block being (de)coded. Some video codecs enable weighted prediction, where the sample values of the prediction blocks are weighted prior to adding residual information.
For example, a multiplicative weighting factor and an additive offset can be applied. In explicit weighted prediction, enabled by some video codecs, a weighting factor and offset may be coded for example in the slice header for each allowable reference picture index. In implicit weighted prediction, enabled by some video codecs, the weighting factors and/or offsets are not coded but are derived e.g. based on the relative picture order count (POC) distances of the reference pictures.
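A bi-prediction with explicit weights and offsets, as discussed above, can be sketched as below; the exact combination and rounding rules differ between codecs, so the formula, the default parameter values and the function name used here are only illustrative assumptions.

import numpy as np

def weighted_bi_prediction(pred0, pred1, w0=0.5, w1=0.5, offset0=0.0, offset1=0.0):
    # Each prediction block is scaled by a multiplicative weighting factor and
    # shifted by an additive offset before the two are combined.
    p0 = w0 * pred0.astype(float) + offset0
    p1 = w1 * pred1.astype(float) + offset1
    return np.clip(p0 + p1, 0, 255)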
[0200] In typical video codecs the prediction residual after motion compensation is first transformed with a transform kernel (like DCT) and then coded. The reason for this is that often there still exists some correlation within the residual, and the transform can in many cases help reduce this correlation and provide more efficient coding.
[0201] Typical video encoders utilize Lagrangian cost functions to find optimal coding modes, e.g. the desired macroblock mode and associated motion vectors. This kind of cost function uses a weighting factor λ to tie together the (exact or estimated) image distortion due to lossy coding methods and the (exact or estimated) amount of information that is required to represent the pixel values in an image area:
[0202] C = D + λR (1)
[0203] where C is the Lagrangian cost to be minimized, D is the image distortion (e.g. Mean Squared Error) with the mode and motion vectors considered, and R is the number of bits needed to represent the required data to reconstruct the image block in the decoder (including the amount of data to represent the candidate motion vectors). [0204] H.264/AVC and HEVC include a concept of picture order count (POC). A value of POC is derived for each picture and is non-decreasing with increasing picture position in output order. POC therefore indicates the output order of pictures. POC may be used in the decoding process, for example, for implicit scaling of motion vectors in the temporal direct mode of bi-predictive slices, for implicitly derived weights in weighted prediction, and for reference picture list initialization.
Furthermore, POC may be used in the verification of output order conformance.
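Returning to the Lagrangian cost of equation (1) in paragraphs [0201]-[0203], a mode decision based on that cost can be illustrated with the following Python sketch; the candidate values, the value of lambda and the function name are invented solely for the example.

def best_mode(candidates, lam):
    # candidates: list of (mode_name, distortion_D, rate_R_in_bits) tuples.
    # The candidate minimizing C = D + lambda * R is selected.
    return min(candidates, key=lambda c: c[1] + lam * c[2])

# Example: with lambda = 0.1 the inter candidate wins (90 + 45 = 135 < 120 + 30 = 150).
print(best_mode([("intra", 120.0, 300), ("inter", 90.0, 450)], lam=0.1))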
[0205] Video encoders and/or decoders may be able to store multiple reference pictures in a decoded picture buffer (DPB) and use them adaptively for inter prediction. The reference picture management may be defined as a process to determine which reference pictures are maintained in the DPB. Examples of reference picture management are described in the following.
[0206] In HEVC, a reference picture set (RPS) syntax structure and decoding process are used. A reference picture set valid or active for a picture includes all the reference pictures used as reference for the picture and all the reference pictures that are kept marked as “used for reference” for any subsequent pictures in decoding order. There are six subsets of the reference picture set, which are referred to as RefPicSetStCurr0 (a.k.a. RefPicSetStCurrBefore), RefPicSetStCurr1 (a.k.a. RefPicSetStCurrAfter), RefPicSetStFoll0, RefPicSetStFoll1, RefPicSetLtCurr, and RefPicSetLtFoll. RefPicSetStFoll0 and RefPicSetStFoll1 may also be considered to form jointly one subset
RefPicSetStFoll. The notation of the six subsets is as follows. “Curr” refers to reference pictures that are included in the reference picture lists of the current picture and hence may be used as inter prediction reference for the current picture. “Foll” refers to reference pictures that are not included in the reference picture lists of the current picture but may be used in subsequent pictures in decoding order as reference pictures. “St” refers to short-term reference pictures, which may generally be identified through a certain number of least significant bits of their POC value. “Lt” refers to long-term reference pictures, which are specifically identified and generally have a greater difference of POC values relative to the current picture than what can be represented by the mentioned certain number of least significant bits. “0” refers to those reference pictures that have a smaller POC value than that of the current picture. “1” refers to those reference pictures that have a greater POC value than that of the current picture. RefPicSetStCurr0, RefPicSetStCurr1, RefPicSetStFoll0 and
RefPicSetStFoll1 are collectively referred to as the short-term subset of the reference picture set. RefPicSetLtCurr and RefPicSetLtFoll are collectively referred to as the long-term subset of the reference picture set.
[0207] In HEVC, a reference picture set may be specified in a sequence parameter set and taken into use in the slice header through an index to the reference picture set. A reference picture set may also be specified in a slice header. A reference picture set may be coded independently or may be predicted from another reference picture set (known as inter-RPS prediction). In both types of reference picture set coding, a flag (used_by_curr_pic_X_flag) is additionally sent for each reference picture indicating whether the reference picture is used for reference by the current picture (included in a *Curr list) or not (included in a *Foll list). Pictures that are included in the reference picture set used by the current slice are marked as “used for reference”, and pictures that are not in the reference picture set used by the current slice are marked as “unused for reference”. If the current picture is an IDR picture, RefPicSetStCurr0, RefPicSetStCurr1, RefPicSetStFoll0, RefPicSetStFoll1,
RefPicSetLtCurr, and RefPicSetLtFoll are all set to empty.
[0208] In many coding modes of H.264/AVC and HEVC, the reference picture for inter prediction is indicated with an index to a reference picture list. The index may be coded with variable length coding, which usually causes a smaller index to have a shorter value for the corresponding syntax element. In H.264/AVC and HEVC, two reference picture lists (reference picture list 0 and reference picture list 1) are generated for each bi-predictive (B) slice, and one reference picture list (reference picture list 0) is formed for each inter-coded (P) slice.
[0209] A reference picture list, such as reference picture list 0 and reference picture list 1, is typically constructed in two steps: First, an initial reference picture list is generated. The initial reference picture list may be generated for example on the basis of POC, or information on the prediction hierarchy, or any combination thereof. Second, the initial reference picture list may be reordered by reference picture list reordering (RPLR) commands, also known as the reference picture list modification syntax structure, which may be contained in slice headers. If reference picture sets are used, the reference picture list 0 may be initialized to contain RefPicSetStCurr0 first, followed by RefPicSetStCurr1, followed by RefPicSetLtCurr. Reference picture list 1 may be initialized to contain RefPicSetStCurr1 first, followed by RefPicSetStCurr0. In HEVC, the initial reference picture lists may be modified through the reference picture list modification syntax structure, where pictures in the initial reference picture lists may be identified through an entry index to the list. In other words, in HEVC, reference picture list modification is encoded into a syntax structure comprising a loop over each entry in the final reference picture list, where each loop entry is a fixed-length coded index to the initial reference picture list and indicates the picture in ascending position order in the final reference picture list.
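The initialization order described above can be written out as a short sketch; the argument names stand for the pictures of the corresponding reference picture set subsets, and appending the long-term subset at the end of list 1 as well is an assumption made for symmetry rather than something stated above.

def init_ref_pic_lists(st_curr0, st_curr1, lt_curr):
    # Initial list 0: RefPicSetStCurr0, then RefPicSetStCurr1, then RefPicSetLtCurr.
    ref_list0 = list(st_curr0) + list(st_curr1) + list(lt_curr)
    # Initial list 1: RefPicSetStCurr1 first, followed by RefPicSetStCurr0.
    ref_list1 = list(st_curr1) + list(st_curr0) + list(lt_curr)
    return ref_list0, ref_list1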
[0210] Many coding standards, including H.264/AVC and HEVC, may have decoding process to derive a reference picture index to a reference picture list, which may be used to indicate which one of the multiple reference pictures is used for inter prediction for a particular block. A reference picture index may be coded by an encoder into the bitstream in some inter coding modes or it may be derived (by an encoder and a decoder) for example using neighboring blocks in some other inter coding modes.
[0211] Figs. 1a and 1b illustrate an example of a camera having multiple lenses and imaging sensors but also other types of cameras may be used to capture wide view images and/or wide view video.
[0212] In the following, the terms wide view image and wide view video mean an image and a video, respectively, which comprise visual information having a relatively large viewing angle, larger than 100 degrees. Hence, a so-called 360 panorama image/video as well as images/videos captured by using a fish eye lens may also be called a wide view image/video in this specification. More generally, the wide view image/video may mean an image/video in which some kind of projection distortion may occur when a direction of view changes between successive images or frames of the video so that a transform may be needed to find out co-located pixels from a reference image or a reference frame. This will be described in more detail later in this specification.
[0213] The camera 100 of Fig. 1a comprises two or more camera units 102 and is capable of capturing wide view images and/or wide view video. In this example the number of camera units 102 is eight, but may also be less than eight or more than eight. Each camera unit 102 is located at a different location in the multi-camera system and may have a different orientation with respect to other camera units 102. As an example, the camera units 102 may have an omnidirectional constellation so that it has a 360-degree viewing angle in a 3D-space. In other words, such camera 100 may be able to see each direction of a scene so that each spot of the scene around the camera 100 can be viewed by at least one camera unit 102.
[0214] The camera 100 of Fig. 1a may also comprise a processor 104 for controlling the operations of the camera 100. There may also be a memory 106 for storing data and computer code to be executed by the processor 104, and a transceiver 108 for communicating with, for example, a communication network and/or other devices in a wireless and/or wired manner. The camera 100 may further comprise a user interface (UI) 110 for displaying information to the user, for generating audible signals and/or for receiving user input. However, the camera 100 need not comprise each feature mentioned above, or may comprise other features as well. For example, there may be electric and/or mechanical elements for adjusting and/or controlling optics of the camera units 102 (not shown).
[0215] Fig. 1a also illustrates some operational elements which may be implemented, for example, as computer code in the software of the processor, in hardware, or both. A focus control element 114 may perform operations related to adjustment of the optical system of a camera unit or units to obtain focus meeting target specifications or some other predetermined criteria. An optics adjustment element 116 may perform movements of the optical system or one or more parts of it according to instructions provided by the focus control element 114. It should be noted here that the actual adjustment of the optical system need not be performed by the apparatus but it may be performed manually, wherein the focus control element 114 may provide information for the user interface 110 to indicate to a user of the device how to adjust the optical system.
[0216] Fig. 1b shows as a perspective view the camera 100 of Fig. 1a. In Fig. 1b seven camera units 102a-102g can be seen, but the camera 100 may comprise even more camera units which are not visible from this perspective. Fig. 1b also shows two microphones 112a, 112b, but the apparatus may also comprise one or more than two microphones. [0217] It should be noted here that embodiments disclosed in this specification may also be implemented with apparatuses having only one camera unit 102 or less or more than eight camera units 102a-102g.
[0218] In accordance with an embodiment, the camera 100 may be controlled by another device (not shown), wherein the camera 100 and the other device may communicate with each other and a user may use a user interface of the other device for entering commands, parameters, etc. and the user may be provided information from the camera 100 via the user interface of the other device.
[0219] Terms 360-degree video, omnidirectional video, immersive video or virtual reality (VR) video may be used interchangeably. They may generally refer to video content that provides such a large field of view that only a part of the video is displayed at a single point of time in typical displaying arrangements. For example, a virtual reality video may be viewed on a head-mounted display (HMD) that may be capable of displaying e.g. about 100-degree field of view (FOV). The spatial subset of the virtual reality video content to be displayed may be selected based on the orientation of the head-mounted display. In another example, a flat-panel viewing environment is assumed, wherein e.g. up to 40-degree field-of-view may be displayed. When displaying wide field of view content (e.g. fisheye) on such a display, it may be preferred to display a spatial subset rather than the entire picture.
[0220] MPEG omnidirectional media format (OMAF) may be described with Figs. 2a and 2b.
[02211 360-degree image or video content may be acquired and prepared for example as follows. Images or video can be captured by a set of cameras or a camera device with multiple lenses and imaging sensors. The acquisition results in a set of digital image/video signals. The cameras/lenses may cover all directions around the center point of the camera set or camera device. The images of the same time instance are stitched, projected, and mapped onto a packed virtual reality frame, which may alternatively be referred to as a packed picture. The mapping may alternatively be referred to as region- wise mapping or region- wise packing. The breakdown of image stitching, projection, and mapping processes are illustrated with Fig. 2a and described as follows. Input images 201 are stitched and projected 202 onto a three-dimensional projection structure, such as a sphere or a cube. The projection structure may be considered to comprise one or more surfaces, such as plane(s) or part(s) thereof. A projection structure may be defined as a three-dimensional structure consisting of one or more surface(s) on which the captured virtual reality image/video content may be projected, and from which a respective projected frame can be formed. The image data on the projection structure is further arranged onto a two-dimensional projected frame 203. The term projection may be defined as a process by which a set of input images are projected onto a projected frame or a projected picture. There may be a pre-defined set of representation formats of the projected frame, including for example an equirectangular panorama and a cube map representation format.
[0222| Region-wise mapping 204 may be applied to map projected frames 203 onto one or more packed virtual reality frames 205. In some cases, the region-wise mapping may be understood to be equivalent to extracting two or more regions from the projected frame, optionally applying a geometric transformation (such as rotating, mirroring, and/or resampling) to the regions, and placing the transformed regions in spatially non-overlapping areas, a.k.a. constituent frame partitions, within the packed virtual reality frame. If the region-wise mapping is not applied, the packed virtual reality frame 205 may be identical to the projected frame 203. Otherwise, regions of the projected frame are mapped onto a packed virtual reality frame by indicating the location, shape, and size of each region in the packed virtual reality frame. The term mapping may be defined as a process by which a projected frame is mapped to a packed virtual reality frame. The term packed virtual reality frame may be defined as a frame that results from a mapping of a projected frame. In practice, the input images 201 may be converted to packed virtual reality frames 205 in one process without intermediate steps.
[0223] Packing information may be encoded as metadata in or along the bitstream. For example, the packing information may comprise a region-wise mapping from a pre-defined or indicated source format to the packed frame format, e.g. from a projected frame to a packed VR frame, as described earlier. The region-wise mapping information may for example comprise for each mapped region a source rectangle in the projected frame and a destination rectangle in the packed VR frame, where samples within the source rectangle are mapped to the destination rectangle and rectangles may for example be indicated by the locations of the top-left corner and the bottom-right corner. The mapping may comprise resampling. Additionally or alternatively, the packing information may comprise one or more of the following: the orientation of the three-dimensional projection structure relative to a coordinate system, an indication of which omnidirectional projection format is used, region-wise quality ranking indicating the picture quality ranking between regions and/or first and second spatial region sequences, and one or more transformation operations, such as rotation by 90, 180, or 270 degrees, horizontal mirroring, and vertical mirroring. The semantics of the packing information may be specified in a manner that indicates, for each sample location within packed regions of a decoded picture, the respective spherical coordinate location.
[0224] In 360-degree systems, a coordinate system may be defined through orthogonal coordinate axes, such as X (lateral), Y (vertical, pointing upwards), and Z (back-to-front axis, pointing outwards). Rotations around the axes may be defined and may be referred to as yaw, pitch, and roll. Yaw may be defined to rotate around the Y axis, pitch around the X axis, and roll around the Z axis. Rotations may be defined to be extrinsic, i.e., around the X, Y, and Z fixed reference axes. The angles may be defined to increase clockwise when looking from the origin towards the positive end of an axis. The coordinate system specified can be used for defining the sphere coordinates, which may be referred to as azimuth (φ) and elevation (θ).
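The yaw/pitch/roll convention above can be illustrated with elementary rotation matrices; the composition order, the angle signs and the function name below are assumptions chosen only for this example and are not a normative definition.

import numpy as np

def rotation_matrix(yaw_deg, pitch_deg, roll_deg):
    y, p, r = np.radians([yaw_deg, pitch_deg, roll_deg])
    R_yaw = np.array([[ np.cos(y), 0.0, np.sin(y)],
                      [ 0.0,       1.0, 0.0      ],
                      [-np.sin(y), 0.0, np.cos(y)]])   # rotation around the Y axis
    R_pitch = np.array([[1.0, 0.0,        0.0       ],
                        [0.0, np.cos(p), -np.sin(p)],
                        [0.0, np.sin(p),  np.cos(p)]])  # rotation around the X axis
    R_roll = np.array([[np.cos(r), -np.sin(r), 0.0],
                       [np.sin(r),  np.cos(r), 0.0],
                       [0.0,        0.0,       1.0]])   # rotation around the Z axis
    # One possible extrinsic composition around the fixed X, Y and Z axes.
    return R_yaw @ R_pitch @ R_roll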
[0225] Global coordinate axes may be defined as coordinate axes, e.g. according to the coordinate system as discussed above, that are associated with audio, video, and images representing the same acquisition position and intended to be rendered together. The origin of the global coordinate axes is usually the same as the center point of a device or rig used for omnidirectional audio/video acquisition as well as the position of the observer's head in the three-dimensional space in which the audio and video tracks are located. In the absence of the initial viewpoint metadata, the playback may be recommended to be started using the orientation (0, 0) in (azimuth, elevation) relative to the global coordinate axes.
[0226] As mentioned above, the projection structure may be rotated relative to the global coordinate axes. The rotation may be performed for example to achieve better compression performance based on the spatial and temporal activity of the content at certain spherical parts.
Alternatively or additionally, the rotation may be performed to adjust the rendering orientation for already encoded content. For example, if the horizon of the encoded content is not horizontal, it may be adjusted afterwards by indicating that the projection structure is rotated relative to the global coordinate axes. The projection orientation may be indicated as yaw, pitch, and roll angles that define the orientation of the projection structure relative to the global coordinate axes. The projection orientation may be included e.g. in a box in a sample entry of an ISOBMFF track for omnidirectional video.
[0227] 360-degree panoramic content (i.e., images and video) cover horizontally (up to) the full 360-degree field-of-view around the capturing position of an imaging device. The vertical field-of- view may vary and can be e.g. 180 degrees. Panoramic image covering 360-degree field-of-view horizontally and 180-degree field-of-view vertically can be represented by a sphere that has been mapped to a two-dimensional image plane using equirectangular projection (ERP). In this case, the horizontal coordinate may be considered equivalent to a longitude, and the vertical coordinate may be considered equivalent to a latitude, with no transformation or scaling applied. In some cases panoramic content with 360-degree horizontal field-of-view but with less than 180-degree vertical field-of-view may be considered special cases of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane. In some cases panoramic content may have less than 360-degree horizontal field-of-view and up to 180-degree vertical field-of- view, while otherwise have the characteristics of equirectangular projection format.
[0228] In the cube map projection format, spherical video is projected onto the six faces (a.k.a. sides) of a cube. The cube map may be generated e.g. by first rendering the spherical scene six times from a viewpoint, with the views defined by a 90 degree view frustum representing each cube face. The cube sides may be frame-packed into the same frame or each cube side may be treated individually (e.g. in encoding). There are many possible orders of locating cube sides onto a frame and/or cube sides may be rotated or mirrored. The frame width and height for frame-packing may be selected to fit the cube sides "tightly", e.g. at a 3x2 cube side grid, or may include unused constituent frames, e.g. at a 4x3 cube side grid.
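A minimal Python sketch of 3x2 cube-side frame packing follows; the face order and the use of NumPy arrays are illustrative assumptions only, since, as noted above, many orders, rotations, and mirrorings are possible.

import numpy as np

def pack_cubemap_3x2(faces, order=("left", "front", "right", "bottom", "back", "top")):
    # faces: dict mapping face names to equally sized (n, n, 3) arrays.
    n = faces[order[0]].shape[0]
    frame = np.zeros((2 * n, 3 * n, 3), dtype=faces[order[0]].dtype)
    for idx, name in enumerate(order):
        row, col = divmod(idx, 3)  # fill the 3x2 grid row by row
        frame[row * n:(row + 1) * n, col * n:(col + 1) * n] = faces[name]
    return frame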
[0229] A cube map can be stereoscopic. A stereoscopic cube map can e.g. be reached by re-projecting each view of a stereoscopic panorama to the cube map format.
[0230] The process of forming a monoscopic equirectangular panorama picture is illustrated in Fig. 2b, in accordance with an embodiment. A set of input images 211, such as fisheye images of a camera array or a camera device 100 with multiple lenses and sensors 102, is stitched 212 onto a spherical image 213. The spherical image 213 is further projected 214 onto a cylinder 215 (without the top and bottom faces). The cylinder 215 is unfolded 216 to form a two-dimensional projected frame 217. In practice one or more of the presented steps may be merged; for example, the input images 211 may be directly projected onto the projected frame 217 without an intermediate projection onto the sphere 213 and/or the cylinder 215. The projection structure for an equirectangular panorama may be considered to be a cylinder that comprises a single surface.
[0231] The equirectangular projection may be defined as a process that converts any sample location within the projected picture (of the equirectangular projection format) to sphere coordinates of a coordinate system. The sample location within the projected picture may be defined relative to pictureWidth and pictureHeight, which are the width and height, respectively, of the equirectangular panorama picture in samples. In the following, let the center point of a sample location along the horizontal and vertical axes be denoted as i and j, respectively. The sphere coordinates (φ, θ) for the sample location, in degrees, are given by the following equirectangular mapping equations: φ = ( 0.5 − i ÷ pictureWidth ) * 360, θ = ( 0.5 − j ÷ pictureHeight ) * 180. It is noted that depending on the direction of the axes for (φ, θ) different conversion formulas may be derived.
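Written as a small Python helper (the function name is chosen here for illustration only), the mapping above is:

def erp_sample_to_sphere(i, j, picture_width, picture_height):
    # i and j denote the centre point of the sample location along the
    # horizontal and vertical axes of the projected picture.
    azimuth = (0.5 - i / picture_width) * 360.0     # degrees
    elevation = (0.5 - j / picture_height) * 180.0  # degrees
    return azimuth, elevation

# The centre of a 3840x1920 equirectangular picture maps to (0, 0):
# erp_sample_to_sphere(1920.0, 960.0, 3840, 1920) returns (0.0, 0.0)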
[0232] In general, 360-degree content can be mapped onto different types of solid geometrical structures, such as a polyhedron (i.e. a three-dimensional solid object containing flat polygonal faces, straight edges and sharp corners or vertices, e.g., a cube or a pyramid), a cylinder (by projecting a spherical image onto the cylinder, as described above with the equirectangular projection), a cylinder (directly without projecting onto a sphere first), a cone, etc. and then unwrapped to a two-dimensional image plane. The two-dimensional image plane can also be regarded as a geometrical structure. In other words, 360-degree content can be mapped onto a first geometrical structure and further unfolded to a second geometrical structure. However, it may be possible to directly obtain the transformation to the second geometrical structure from the original 360-degree content or from other wide view visual content. In general, an omnidirectional projection format may be defined as a format to represent (up to) 360-degree content on a two-dimensional image plane. Examples of omnidirectional projection formats include the equirectangular projection format and the cubemap projection format.
[0233] In some cases panoramic content with a 360-degree horizontal field-of-view but with less than a 180-degree vertical field-of-view may be considered a special case of equirectangular projection, where the polar areas of the sphere have not been mapped onto the two-dimensional image plane. In some cases a panoramic image may have less than a 360-degree horizontal field-of-view and up to a 180-degree vertical field-of-view, while otherwise having the characteristics of the equirectangular projection format.
[0234] Human eyes are not capable of viewing the whole 360 degrees space, but are limited to maximum horizontal and vertical fields-of-view (HHFoV, HVFoV). Also, an HMD device has technical limitations that allow only viewing a subset of the whole 360 degrees space in the horizontal and vertical directions (DHFoV, DVFoV).
[0235] In many displaying situations only a partial picture needs to be displayed, while the remaining picture is required to be decoded but is not displayed. These displaying situations include:
Typical head-mounted displays (HMDs) display an approximately 100-degree field of view, while often the input video for HMD consumption covers the entire 360 degrees.
Typical flat-panel viewing environments display up to a 40-degree field-of-view. When displaying wide-FOV content (e.g. fisheye) on such a display, it may be preferred to display a spatial subset rather than the entire picture.
[0236] A viewport may be defined as a region of omnidirectional image or video suitable for display and viewing by the user. A current viewport (which may sometimes be referred to simply as a viewport) may be defined as the part of the spherical video that is currently displayed and hence is viewable by the user(s). In many viewing modes, the user consuming the video/image may choose the current viewport freely. For example, when viewing happens with a head-mounted display, the orientation of the head determines the viewing orientation and hence the viewport. At any point of time, a video rendered by an application on an HMD renders a portion of the 360-degree video, which is referred to as a viewport. Likewise, when viewing a spatial part of the 360-degree content on a conventional display, the spatial part that is currently displayed is a viewport. A viewport is a window on the 360-degree world represented in the omnidirectional video displayed via a rendering display.
A viewport may be characterized by a horizontal field-of-view (VHFoV) and a vertical field-of-view (VVFoV). In the following, the horizontal field-of-view of the viewport will be abbreviated with HFoV and, respectively, the vertical field-of-view of the viewport will be abbreviated with VFoV. As used herein the term omnidirectional video or image content may refer to content that has greater spatial extent than the field-of-view of the device rendering the content. Omnidirectional content may cover substantially 360 degrees in the horizontal dimension and substantially 180 degrees in the vertical dimension, but "omnidirectional" may also refer to content covering less than the entire 360-degree view in the horizontal direction and/or the 180-degree view in the vertical direction. A sphere region may be defined as a region on a sphere that may be specified by four great circles or by two azimuth circles and two elevation circles and additionally by a tilt angle indicating rotation along the axis originating from the sphere origin passing through the centre point of the sphere region. A great circle may be defined as an intersection of the sphere and a plane that passes through the centre point of the sphere.
A great circle is also known as an orthodrome or Riemannian circle. An azimuth circle may be defined as a circle on the sphere connecting all points with the same azimuth value. An elevation circle may be defined as a circle on the sphere connecting all points with the same elevation value. [0237] OMAF specifies a generic timed metadata syntax for sphere regions. A purpose for the timed metadata track is indicated by the track sample entry type. The sample format of all specified metadata tracks for sphere regions starts with a common part and may be followed by an extension part that is specific to the sample entry of the metadata track. Each sample specifies a sphere region.
[0238] One of the specific sphere region timed metadata tracks specified in OMAF is known as recommended viewport timed metadata track, which indicates the viewport that should be displayed when the user does not have control of the viewing orientation or has released control of the viewing orientation. This provides a method for users to consume omnidirectional content without head rotation while wearing a head mounted display (HMD). The recommended viewport timed metadata track may be used for indicating a recommended viewport based on a director's cut or based on measurements of viewing statistics. The recommended viewport timed metadata track may also facilitate omnidirectional content consumption over limited field of view (FOV) displays or conventional 2D displays without the need for proactive viewport changes with gestures or interactions. A textual description of the recommended viewport may be provided in the sample entry. The type of the recommended viewport may be indicated in the sample entry and may be among the following:
A recommended viewport per the director's cut, i.e., a viewport suggested according to the creative intent of the content author or content provider.
A recommended viewport selected based on measurements of viewing statistics.
Unspecified (for use by applications or specifications other than OMAF).
[0239] The track sample entry type 'rcvp' shall be used.
[0240] The sample entry of this sample entry type is specified as follows:
class RcvpSampleEntry() extends SphereRegionSampleEntry('rcvp') {
    RcvpInfoBox();  // mandatory
}
class RcvpInfoBox extends FullBox('rvif', 0, 0) {
    unsigned int(8) viewport_type;
    string viewport_description;
}
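A minimal Python sketch of reading the RcvpInfoBox payload is given below; the helper name is hypothetical, the payload is assumed to start right after the FullBox version/flags field, and associating value 0 with the director's-cut type is only an assumption of the example data.

def parse_rcvp_info_payload(payload: bytes):
    # unsigned int(8) viewport_type followed by a null-terminated UTF-8
    # viewport_description string.
    viewport_type = payload[0]
    end = payload.index(b"\x00", 1)
    viewport_description = payload[1:end].decode("utf-8")
    return viewport_type, viewport_description

# parse_rcvp_info_payload(b"\x00director's cut\x00") returns (0, "director's cut")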
[0241] viewport_type specifies the type of the recommended viewport as listed in the table below:
[Table: viewport_type values and their descriptions, covering a recommended viewport per the director's cut, a recommended viewport selected based on measurements of viewing statistics, and values reserved or unspecified for other uses (table not reproduced)]
[0242] The current OMAF v2 specification specifies a recommended viewport but does not take into account the need for consistent quality. Furthermore, there is no signalling in MPEG DASH to enable selection of media representations which have recommended viewport regions in high quality. The currently defined MPEG DASH signalling for the recommended viewport is agnostic to the quality of the media representations. This can lead to a suboptimal experience if the player depends on the currently provided signaling.
[0243] The current OMAF v2 specification does not support selection of the representation which covers the recommended viewports of a timed metadata track with the best quality amongst the multiple associated media tracks. The problem arises from the fact that QR (Quality Ranking) information is static for a given representation, but the recommended viewport timed metadata track may comprise viewports anywhere in the omnidirectional content and the location of the viewport may be time-varying in a recommended viewport timed metadata track. There is no signaling in ISOBMFF or MPEG DASH to indicate which representations in the MPD cover the recommended viewport content with high (or low) quality. The objective is to maintain the best viewing experience for HMDs as well as conventional displays. For large conventional displays the quality variations can adversely impact the user experience.
[0244] A content author may want to encode and/or make available one or more specific versions of the content that are tailor-made for covering a recommended viewport at a high quality while the remaining areas may have a lower quality. Such specific versions of the content are suitable for viewing the content on a 2D display when the user is expected to control the viewport manually only occasionally (i.e., when the player typically displays the recommended viewport, but lets the user have manual control of the viewport too). Embodiments enable content authors to indicate such specific versions of the content and players to conclude which tracks or representations are such specific versions of the content.
[0245] According to an embodiment, a method comprises, with reference to the flow diagram of Fig. 10a:
obtaining 71 a track or a representation wherein a recommended viewport is covered with a higher quality than remaining areas;
including 72, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with a higher quality than remaining areas.
[0246] According to an embodiment, a method comprises, with reference to the flow diagram of Fig. 10b:
receiving 90, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a higher quality than remaining areas;
selecting 91, based on the metadata, to process the track or the representation.
[0247] According to an embodiment, the metadata may comprise but is not limited to one or more of the following:
indication that the track or the representation covers the recommended viewport at a higher quality than remaining areas;
indication that the track or the representation covers the recommended viewport at a consistent quality which is higher quality than remaining areas;
indication that the track or the representation covers the recommended viewport at an
(approximately) uniform quality;
indication of an absolute quality ranking value indicating the quality for the recommended viewport;
indication of an absolute quality ranking value indicating the quality of remaining areas (excluding the recommended viewport);
indication of a quality ranking value difference between the quality ranking values of the recommended viewport and of the remaining areas;
statistics, such as an average, of the proportion of time that the track or the representation covers the recommended viewport at a higher quality than remaining areas
statistics, such as minimum, average, and/or maximum, of quality ranking values of the content covering the recommended viewport;
association of the track or representation with a particular recommended viewport timed metadata track or representation;
indication that the track or the representation covers only the recommended viewport and excludes remaining areas.
[0248] In order to facilitate selection of the correct media representation associated with the recommended viewport, additional DASH signaling mechanisms may be utilized. In the following, some examples of such signaling mechanisms will be described.
[0249] In accordance with an embodiment, an element SupplementalProperty is defined with an attribute called @schemeIdUri so that the value of the @schemeIdUri attribute has an 'rcqr' descriptor. The 'rcqr' descriptor can be referred to as a RcvpQualityRanking descriptor. The 'rcqr' descriptor indicates the identifier of the representation (representation_id) which presents a recommended viewport at maximum quality ranking (QR). This scenario may use the descriptor to enable selection of a track which covers the recommended viewport at maximum quality ranking for the maximum temporal duration.
[0250] In another embodiment, an attribute called quality_representation_sorting is added for the 'rcqr' descriptor of the SupplementalProperty element. The associated media representations can be listed, based on the coverage quality in the recommended viewport, either in descending or ascending order.
This may enable specifying more than one representation at a certain quality. Consequently, the player can make a tradeoff while selecting the representation. For example, the player may choose the second best representation if there is a bandwidth constraint.
[0251] In another embodiment, the above described attribute quality_representation_sorting for the SupplementalProperty is defined to list the associated media representations based on the coverage quality in the recommended viewport, in descending or ascending order. The sorting order may be defined, for example, by a standard.
[0252] In another embodiment, an EssentialProperty or SupplementalProperty element may be defined which is comprised of one or more quality information elements, which can be named rcqrQualityInfo. Each element corresponds to the representation of the media track associated with the recommended viewport track. The element rcqrQualityInfo contains information such as the maximum quality ranking covered or the minimum quality ranking. In an embodiment in which the quality information element rcqrQualityInfo is utilized, the rcqrQualityInfo element has an attribute called quality_ranking which mandates the referred representation to have uniform quality for the recommended viewport sphere regions, when consumed by a 2D display.
[0253] In an embodiment, the enhanced recommended viewport media track selection information can be carried in a media presentation description (MPD) of DASH with a SupplementalProperty element and/or EssentialProperty element comprising a descriptor which has an association or relationship with the recommended viewport track adaptation set.
[0254] In the following, an example of a useful method to enable content selection via DASH signalling (e.g., when more than one track is associated with the recommended viewport) will be described.
[0255] In an example embodiment, the recommended viewport quality ranking descriptor indicates the high quality Representation Sets covering the recommended viewport track. An
EssentialProperty element or a SupplementalProperty element may be used with the @schemeIdUri attribute comprising the RcvpQualityRanking descriptor equal to "urn:mpeg:mpegI:omaf:2018:rcqr".
[0256] In an example embodiment, the @value attribute of the RcvpQualityRanking descriptor is not present.
[0257] The RcvpQualityRanking descriptor may include elements and attributes as specified in the table below.
[Table: elements and attributes of the RcvpQualityRanking descriptor, including max_quality_representation_id and quality_representation_sorting (table not reproduced)]
[0258] In the following, an example of the above descriptor implementation is described:
<Representation id="recommended-viewport" mimeType="application/mp4"
    associationId="video1, video2, video3" associationType="cdsc" codecs="rcvp"
    bandwidth="100">
  <SupplementalProperty schemeIdUri="urn:mpeg:mpegI:omaf:2018:rcqr"
      max_quality_representation_id="video1"
      quality_representation_sorting="video1, video3, video2"/>
</Representation>
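A simple sketch of how a client might read this descriptor and pick a representation is shown below; the snippet reuses the example above, omits MPD namespaces for brevity, and the helper name and the bandwidth fallback policy are illustrative assumptions.

import xml.etree.ElementTree as ET

RCQR_SCHEME = "urn:mpeg:mpegI:omaf:2018:rcqr"

MPD_SNIPPET = """
<Representation id="recommended-viewport" mimeType="application/mp4"
    associationId="video1, video2, video3" associationType="cdsc" codecs="rcvp"
    bandwidth="100">
  <SupplementalProperty schemeIdUri="urn:mpeg:mpegI:omaf:2018:rcqr"
      max_quality_representation_id="video1"
      quality_representation_sorting="video1, video3, video2"/>
</Representation>
"""

def pick_representation(xml_text, bandwidth_limited=False):
    representation = ET.fromstring(xml_text)
    for prop in representation.iter("SupplementalProperty"):
        if prop.get("schemeIdUri") != RCQR_SCHEME:
            continue
        ranked = [r.strip() for r in prop.get("quality_representation_sorting", "").split(",") if r.strip()]
        if bandwidth_limited and len(ranked) > 1:
            return ranked[1]  # trade quality for bandwidth: take the second-best version
        return prop.get("max_quality_representation_id") or (ranked[0] if ranked else None)
    return None

# pick_representation(MPD_SNIPPET) returns "video1";
# pick_representation(MPD_SNIPPET, bandwidth_limited=True) returns "video3".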
[0259] In the following, a useful method to improve content-selection via DASH signalling for recommended viewport rendering on 2D displays is described. This method may be used, for example, when none of the tracks cover the recommended viewport with consistent high quality for the whole duration.
[0260] In this example embodiment the RcvpQualityRanking descriptor can have the
rcqrQualityInfo element to describe the selection criteria with greater granularity.
[Table: elements and attributes of the rcqrQualityInfo element, including representation_id, max_quality_ranking, and min_quality_ranking (table not reproduced)]
[0261] In the following, an example method is described which may be optimal where content for recommended viewport rendering on 2D displays can be created.
[0262] In an embodiment of the implementation, a property is used which lists all representation ids which cover the associated recommended viewport track at high quality uniformly. By using this method adverse impact on user experience may be avoided when the user is watching virtual reality content on conventional displays. In this scenario the
RcvpQualityRanking.rcqrQualityInfo@max_quality_ranking and
RcvpQualityRanking.rcqrQualityInfo@min_quality_ranking can be replaced by
RcvpQualityRanking.rcqrQualityInfo@consistent_quality_ranking to indicate such a consistent quality. This may be advantageous for consuming immersive media over conventional displays. Such a descriptor can be of use while creating new content and to ensure that it is optimal for recommended viewport track based viewing for 2D display. Presence of the descriptor with the recommended viewport descriptor indicates that the media representation is at consistently high quality for recommended viewport sphere regions.
[0263] An example signaling related to this method is provided as follows:
<Representation id="recommended-viewport" mimeType="application/mp4"
    associationId="video1, video2, video3" associationType="cdsc" codecs="rcvp"
    bandwidth="100">
  <EssentialProperty schemeIdUri="urn:mpeg:mpegI:omaf:2018:rcqr">
    <omaf:rcqr>
      <omaf:rcvpQualityInfo
          representation_id="video1"
          consistent_quality_ranking="60"
      />
      <omaf:rcvpQualityInfo
          representation_id="video2"
          consistent_quality_ranking="55"
      />
      <omaf:rcvpQualityInfo
          representation_id="video3"
          consistent_quality_ranking="30"
      />
    </omaf:rcqr>
  </EssentialProperty>
</Representation>
[0264] The above described embodiments may enable selection of a high quality representation from multiple representations that show the recommended viewport track and may provide easy to use indications for the client. Having a consistent high quality experience over a conventional display may improve the immersion of the user when watching the content.
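For illustration, a player could order the candidate representations signalled in the example above by their consistent_quality_ranking values. The assumption that a lower quality ranking value indicates higher quality follows the usual OMAF region-wise quality ranking convention; the sketch and its data are illustrative only.

candidates = [("video1", 60), ("video2", 55), ("video3", 30)]  # (representation id, consistent_quality_ranking)

def order_by_consistent_quality(infos):
    # sort ascending by quality ranking value, i.e. best quality first
    return [rep_id for rep_id, ranking in sorted(infos, key=lambda item: item[1])]

# order_by_consistent_quality(candidates) returns ['video3', 'video2', 'video1']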
[0265] An EssentialProperty or a SupplementalProperty RcvpConsistentQuality element with a @schemeIdUri attribute equal to "urn:mpeg:mpegI:omaf:2018:rcqr" is referred to as an
RcvpConsistentQuality descriptor.
[0266] One RcvpConsistentQuality descriptor may be present for every adaptation set corresponding to the recommended viewport timed metadata track in the MPD.
[0267] The presence of this descriptor indicates that the quality of the associated media representation tracks is consistent for the regions covered by the recommended viewport track. This indicates to the player that selecting a recommended viewport track with this descriptor is an assurance of a good viewing experience.
[0268] As another approach for improving the media track selection, the recommended viewport timed metadata track can be enhanced to also contain the per fragment coverage values. This will assist in requesting the appropriate video tracks which best match the player preferences. The recommended viewport coverage struct per fragment can be included in the fragment header. The structure is presented below:
[0269] aligned(8) class RcvpCoverageStruct() {
    unsigned int(8) num_regions;  // number of sphere regions covering the recommended viewport extent
    for (i = 0; i < num_regions; i++) {
        SphereRegionStruct(1);  // one sphere region, following the OMAF sphere region metadata syntax
    }
}
[0270] The number of regions is minimized so that the maximum recommended viewport coverage extent is signalled with as few regions as possible.
[0271] aligned(8) class RcvpExtentCoverageInformationBox() extends FullBox('ecor', 0, 0) {
    RcvpCoverageStruct();
}
[0272] The RcvpExtentCoverageInformationBox is signaled per movie fragment in order to enable the player to request the best matching visual tracks for rendering per fragment.
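A rough sketch of how a player might use such per-fragment coverage is given below; treating each signalled sphere region as a simple azimuth/elevation box and ignoring the tilt angle are deliberate simplifications of this illustration.

def fragment_covers_viewport(regions, vp_azimuth, vp_elevation):
    # regions: list of (centre_azimuth, centre_elevation, azimuth_range, elevation_range)
    # tuples, in degrees, derived from the RcvpCoverageStruct of a movie fragment.
    for centre_az, centre_el, az_range, el_range in regions:
        d_az = (vp_azimuth - centre_az + 180) % 360 - 180  # wrap-around azimuth difference
        if abs(d_az) <= az_range / 2 and abs(vp_elevation - centre_el) <= el_range / 2:
            return True
    return False

# One region centred at azimuth 30, elevation 0, spanning 90 x 60 degrees:
# fragment_covers_viewport([(30, 0, 90, 60)], 50, 10) returns True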
[0273] According to an embodiment, the metadata expressing the association of the track or representation with a particular recommended viewport timed metadata track or representation may comprise but is not limited to one or more of the following:
A track reference of a particular type (e.g. 'rvpv' - recommended viewport version) from the recommended viewport timed metadata track to the video track(s) (wherein the recommended viewport has higher quality than remaining areas);
A track reference of a particular type (e.g. 'rvpv') from the video track (wherein the recommended viewport has higher quality than remaining areas) to the recommended viewport timed metadata track;
@associationId from a Representation containing the recommended viewport timed metadata track to the Representation(s) containing video track(s) (wherein the recommended viewport has higher quality than remaining areas), and @associationType of a particular type (e.g. 'rvpv');
@associationId from the Representation(s) containing video track(s) (wherein the recommended viewport has higher quality than remaining areas), and @associationType of a particular type (e.g. 'rvpv') to a Representation containing the recommended viewport timed metadata track.
[0274] In the following, some examples of signaling will be described in more detail.
[0275] Fig. 6a illustrates an example of an omnidirectional video/image from an event, for example a dance party. The video/image frame 61 is represented by the equirectangular projection, although problems/solutions described herein apply more generally to all the projection formats used for representing omnidirectional videos/images.
[0276] Furthermore, the omnidirectional video/image may have a region called a director’s viewport also known as a recommended viewport 63, which represents a spatial area in the video/image frame which can, for example, represent one of the following. The director’s viewport may be prescribed by the content provider/author. It may represent the region which was viewed by the user's friend or the region which was selected based on measurements of viewing statistics by a crowd. However, the director’s viewport is not limited to these examples but may represent some other visual information.
[0277] The term current viewport refers to the region that is currently being displayed to the user. An example of this kind of region is illustrated in Fig. 6a as the dotted area 62. This dotted area represents the current viewport 65 shown in Fig. 6b. In many viewing modes, the user consuming the video/image may choose the current viewport freely. For example, when viewing happens with a head-mounted display, the orientation of the head determines the viewing orientation and hence the viewport. Thus, the user views a spatial region/area, which may be the same as or may differ from the director's viewport.
[0278] Fig. 9a shows some elements of a video encoding section 510, in accordance with an embodiment. The video encoding section 510 may be a part of the omnidirectional streaming system 600 or separate from it. A signaling constructor 512 may comprise an input to obtain omnidirectional video/image 511, and a second input to obtain quality rank definitions 513. The signaling constructor 512 forms different kinds of signals and provides them to an encoding element 513. The encoding element 513 may encode the signals as well as the omnidirectional video/image and the signaling information for storing and/or transmission. However, there may be separate encoding elements for signal encoding and visual information encoding. Encoding may refer to compression of video or image data, but it may also comprise generating, encapsulating, or packetizing signalling information associated with the compressed video or image data, for example in a manifest and/or a container file.
[0279] Fig. 9b shows a video decoding section 520, in accordance with an embodiment. Also the video decoding section 520 may be a part of the omnidirectional streaming system 600 or separate from it. The video decoding section 520 may obtain signaling data via a first input 521 and encoded visual information (omnidirectional video/image) via a second input 522. The signaling data and the encoded visual information may be decoded by a decoding element 523. Decoded signaling data may be used by a rendering element 524 to control image reconstruction from the decoded visual information. The rendering element 524 may also receive viewport data 525 to determine the location of a current viewport within the image area of the omnidirectional video/image.
[0280] Several embodiments relate to indicating in a bitstream, a container file, and/or a manifest, or parsing information from a bitstream, a container file, and/or a manifest. The bitstream may, for example, be a video or image bitstream (such as an HEVC bitstream), wherein the indicating may utilize, for example, supplemental enhancement information (SEI) messages. The container file may, for example, comply with the ISO base media file format, the Matroska file format, or the Material Exchange Format (MXF). The manifest may, for example, conform to the Media Presentation Description (MPD) of MPEG-DASH (ISO/IEC 23009-1), the M3U format, or the Composition Playlist (CPL) of the Interoperable Master Format (IMF). It needs to be understood that these formats are provided as examples and that embodiments are not limited to them. Embodiments may be similarly realized with any other similar container or media description formats, such as the Session Description Protocol (SDP). Embodiments may be realized with a suite of bitstream format(s), container file format(s) and manifest format(s), in which the indications may reside. MPEG OMAF is an example of such a suite of formats.
[0281] It needs to be understood that instead of or in addition to a manifest, embodiments similarly apply to a container file format and/or a media bitstream.
[0282] When the metadata is stored in or along a track in a file, the metadata may reside e.g. in one or more of the following container structures or mechanisms:
Track header, such as a box contained directly or indirectly within TrackHeaderBox
Sample entry, such as a particular box within the sample entry
Sample group description entry, from which it is mapped to sample(s) of the track through the SampleToGroupBox(es)
Sample auxiliary information
Track reference(s)
When the metadata is associated with a group of tracks, the TrackGroupBox may be extended to carry the metadata
When the metadata is associated with a group of tracks, the EntityToGroupBox may be extended to carry the metadata
[0283] It needs to be understood that while many embodiments are described using singular forms of nouns, e.g. an encoded bitstream, a viewport, a spatial region, and so on, the embodiments generally apply to plural forms of nouns.
[0284] The above described embodiments may help in enhancing the viewing experience of the user. Furthermore, they may help the content author in guiding the viewer to the author's intended viewing conditions in the omnidirectional video/image.
[0285] In general, indications, conditions, and/or parameters described in different embodiments may be represented with syntax elements in syntax structure(s), such as SEI messages, in a video bitstream, and/or in static or dynamic syntax structures in a container file, and/or in a manifest. An example of a static syntax structure in ISOBMFF is a box in a sample entry of a track. Another example of a static syntax structure in ISOBMFF is an item property for an image item. Examples of dynamic syntax structures in ISOBMFF were described earlier with reference to timed metadata.
[0286] In the above, some embodiments have been described by referring to the term streaming. It needs to be understood that embodiments similarly apply to other forms of video transmission, such as progressive downloading, file delivery, broadcasting, and conversational video communications, such as video telephony.
[0287] The various embodiments of the invention can be implemented with the help of computer program code that resides in a memory and causes the relevant apparatuses to carry out the invention. For example, a device may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the device to carry out the features of an embodiment. Yet further, a network device like a server may comprise circuitry and electronics for handling, receiving and transmitting data, computer program code in a memory, and a processor that, when running the computer program code, causes the network device to carry out the features of an embodiment.
[0288] If desired, the different functions discussed herein may be performed in a different order and/or concurrently with other functions. Furthermore, if desired, one or more of the above-described functions and embodiments may be optional or may be combined.
[0289] Although various aspects of the embodiments are set out in the independent claims, other aspects comprise other combinations of features from the described embodiments and/or the dependent claims with the features of the independent claims, and not solely the combinations explicitly set out in the claims.
[0290] A recent trend in streaming, aiming to reduce the streaming bitrate of virtual reality video, may be known as viewport dependent delivery and can be explained as follows: a subset of 360-degree video content covering a primary viewport (i.e., the current view orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution. There are generally two approaches for viewport-adaptive streaming:
[0291] The first approach is viewport-specific encoding and streaming, a.k.a. viewport-dependent encoding and streaming, a.k.a. asymmetric projection. In this approach, 360-degree image content is packed into the same frame with an emphasis (e.g. greater spatial area) on the primary viewport. The packed VR frames are encoded into a single bitstream. For example, the front face of a cube map may be sampled with a higher resolution compared to other cube faces and the cube faces may be mapped to the same packed VR frame, where the front cube face is sampled with twice the resolution compared to the other cube faces.
[0292] The second approach is tile-based encoding and streaming. In this approach, 360-degree content is encoded and made available in a manner that enables selective streaming of viewports from different encodings.
[0293] An approach of tile-based encoding and streaming, which may be referred to as tile rectangle-based encoding and streaming or sub-picture based encoding and streaming, may be used with any video codec, even if tiles similar to HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder. In tile rectangle-based encoding, the source content may be split into tile rectangle sequences (a.k.a. sub-picture sequences) before encoding. Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format. Each tile rectangle sequence may then be encoded independently from each other as a single-layer bitstream, such as HEVC Main profile bitstream. Several bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates. Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming. At the receiver side the tracks to be streamed may be selected based on the viewing orientation. The client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports. In an example, each track may be decoded with a separate decoder instance.
[0294] In an example of tile rectangle-based encoding and streaming, each cube face may be separately encoded and encapsulated in its own track (and Representation). More than one encoded bitstream for each cube face may be provided, e.g. each with different spatial resolution. Players can choose tracks (or Representations) to be decoded and played based on the current viewing orientation. High-resolution tracks (or Representations) may be selected for the cube faces used for rendering for the present viewing orientation, while the remaining cube faces may be obtained from their low-resolution tracks (or Representations).
[0295] In an approach of tile-based encoding and streaming, encoding is performed in a manner that the resulting bitstream comprises motion-constrained tile sets. Several bitstreams of the same source content are encoded using motion-constrained tile sets.
[0296] In an approach, one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile set track (e.g. an HEVC tile track or a full-picture-compliant tile set track) or a sub-picture track in a file. A tile base track (e.g. an HEVC tile base track or a full picture track comprising extractors to extract data from the tile set tracks) may be generated and stored in a file. The tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile set tracks or by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the tile set tracks. Tile set tracks and the tile base track of each bitstream may be encapsulated in its own file, and the same track identifiers may be used in all files. At the receiver side the tile set tracks to be streamed may be selected based on the viewing orientation. The client may receive tile set tracks covering the entire omnidirectional content. Better quality or higher resolution tile set tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports.
[0297] In an example, equirectangular panorama content is encoded using motion-constrained tile sets. More than one encoded bitstream may be provided, e.g. with different spatial resolution and/or picture quality. Each motion-constrained tile set is made available in its own track (and
Representation). Players can choose tracks (or Representations) to be decoded and played based on the current viewing orientation. High-resolution or high-quality tracks (or Representations) may be selected for tile sets covering the present primary viewport, while the remaining area of the 360-degree content may be obtained from low-resolution or low-quality tracks (or Representations).
[0298] In an approach, each received tile set track is decoded with a separate decoder or decoder instance.
[0299] In another approach, a tile base track is utilized in decoding as follows. If all the received tile tracks originate from bitstreams of the same resolution (or more generally if the tile base tracks of the bitstreams are identical or equivalent, or if the initialization segments or other initialization data, such as parameter sets, of all the bitstreams is the same), a tile base track may be received and used to construct a bitstream. The constructed bitstream may be decoded with a single decoder.
[0300] In yet another approach, a first set of tile rectangle tracks and/or tile set tracks may be merged into a first full-picture-compliant bitstream, and a second set of tile rectangle tracks and/or tile set tracks may be merged into a second full-picture-compliant bitstream. The first full-picture-compliant bitstream may be decoded with a first decoder or decoder instance, and the second full-picture-compliant bitstream may be decoded with a second decoder or decoder instance. In general, this approach is not limited to two sets of tile rectangle tracks and/or tile set tracks, two full-picture-compliant bitstreams, or two decoders or decoder instances, but applies to any number of them. With this approach, the client can control the number of parallel decoders or decoder instances. Moreover, clients that are not capable of decoding tile tracks (e.g. HEVC tile tracks) but only full-picture-compliant bitstreams can perform the merging in a manner that full-picture-compliant bitstreams are obtained. The merging may be solely performed in the client or full-picture-compliant tile set tracks may be generated to assist in the merging performed by the client.
[0301] A motion-constrained coded sub-picture sequence may be defined as a collective term of such a coded sub-picture sequence in which the coded pictures are motion-constrained pictures, as defined earlier, and an MCTS sequence. Depending on the context of using the term motion-constrained coded sub-picture sequence, it may be interpreted to mean either one or both of a coded sub-picture sequence in which the coded pictures are motion-constrained pictures, as defined earlier, and/or an MCTS sequence.
[0302] A collector track may be defined as a track that extracts implicitly or explicitly MCTSs or sub-pictures from other tracks. A collector track may be a full-picture-compliant track. A collector track may for example extract MCTSs or sub-pictures to form a coded picture sequence where MCTSs or sub-pictures are arranged to a grid. For example, when a collector track extracts two MCTSs or sub-pictures, they may be arranged into a 2x1 grid of MCTSs or sub-pictures. A tile base track may be regarded as a collector track, and an extractor track that extracts MCTSs or sub-pictures from other tracks may be regarded as a collector track. A collector track may also be referred to as a collection track. A track that is a source for extracting to a collector track may be referred to as a collection item track.
[0303] The term tile merging (in coded domain) may be defined as a process to merge coded sub-picture sequences and/or coded MCTS sequences, which may have been encapsulated as sub-picture tracks and tile tracks, respectively, into a full-picture-compliant bitstream. The creation of a collector track may be regarded as tile merging that is performed by the file creator. Resolving a collector track into a full-picture-compliant bitstream may be regarded as tile merging, which is assisted by the collector track.
[0304] It is also possible to combine the first approach (viewport-specific encoding and streaming) and the second approach (tile-based encoding and streaming) above.
[0305] It needs to be understood that tile-based encoding and streaming may be realized by splitting a source picture into sub-picture sequences that are partly overlapping. Alternatively or additionally, bitstreams with motion-constrained tile sets may be generated from the same source content with different tile grids or tile set grids. We could then imagine the 360 degrees space divided into a discrete set of viewports, each separated by a given distance (e.g., expressed in degrees), so that the omnidirectional space can be imagined as a map of overlapping viewports, and the primary viewport is switched discretely as the user changes his/her orientation while watching content with a head-mounted display. When the overlapping between viewports is reduced to zero, the viewports could be imagined as adjacent non-overlapping tiles within the 360 degrees space.
[0306] As explained above, in viewport-adaptive streaming the primary viewport (i.e., the current viewing orientation) is transmitted at the best quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution. When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display, another version of the content needs to be streamed, matching the new viewing orientation. In general, the new version can be requested starting from stream access points (SAP), which are typically aligned with
(sub)segments. In single-layer video bitstreams, SAPs are intra-coded and hence costly in terms of rate-distortion performance. Conventionally, relatively long SAP intervals and consequently relatively long (sub)segment durations in the order of seconds are hence used. Thus, the delay (here referred to as the viewport quality update delay) in upgrading the quality after a viewing orientation change (e.g. a head turn) is conventionally in the order of seconds and is therefore clearly noticeable and may be annoying.
[0307] Extractors specified in ISO/IEC 14496-15 for H.264/AVC and HEVC enable compact formation of tracks that extract NAL unit data by reference. An extractor is a NAL-unit-like structure. A NAL-unit-like structure may be specified to comprise a NAL unit header and NAL unit payload like any NAL units, but start code emulation prevention (that is required for a NAL unit) might not be followed in a NAL-unit-like structure. For HEVC, an extractor contains one or more constructors. A sample constructor extracts, by reference, NAL unit data from a sample of another track. An in-line constructor includes NAL unit data. When an extractor is processed by a file reader that requires it, the extractor is logically replaced by the bytes resulting when resolving the contained constructors in their appearance order. Nested extraction may be disallowed, e.g. the bytes referred to by a sample constructor shall not contain extractors; an extractor shall not reference, directly or indirectly, another extractor. An extractor may contain one or more constructors for extracting data from the current track or from another track that is linked to the track in which the extractor resides by means of a track reference of type 'scal'. The bytes of a resolved extractor may represent one or more entire NAL units. A resolved extractor starts with a valid length field and a NAL unit header. The bytes of a sample constructor are copied only from the single identified sample in the track referenced through the indicated 'scal' track reference. The alignment is on decoding time, i.e. using the time-to-sample table only, followed by a counted offset in sample number. Extractors are a media-level concept and hence apply to the destination track before any edit list is considered. (However, one would normally expect that the edit lists in the two tracks would be identical).
[0308] In viewport-dependent streaming, which may be also referred to as viewport-adaptive streaming (VAS) or viewport-specific streaming, a subset of 360-degree video content covering the viewport (i.e., the current view orientation) is transmitted at a better quality and/or higher resolution than the quality and/or resolution for the remaining of 360-degree video. There are several alternatives to achieve viewport-dependent omnidirectional video streaming. In tile-based viewport-dependent streaming, projected pictures are partitioned into tiles that are coded as motion-constrained tile sets (MCTSs) or alike. Several versions of the content are encoded at different bitrates or qualities using the same MCTS partitioning. Each MCTS sequence is made available for streaming as a DASH Representation or alike. The player selects on MCTS basis which bitrate or quality is received.
[0309] H.264/AVC does not include the concept of tiles, but the operation like MCTSs can be achieved by arranging regions vertically as slices and restricting the encoding similarly to encoding of MCTSs. For simplicity, the terms tile and MCTS are used in this document but should be understood to apply to H.264/AVC too in a limited manner. In general, the terms tile and MCTS should be understood to apply to similar concepts in any coding format or specification.
[0310] One possible subdivision of the tile-based viewport-dependent streaming schemes is the following:
- Region-wise mixed quality (RWMQ) 360° video: Several versions of the content are coded with the same resolution, the same tile grid, and different bitrate / picture quality. Players choose high-quality MCTSs for the viewport.
- Viewport + 360° video: One or more bitrate and/or resolution versions of a complete low- resolution/low-quality omnidirectional video are encoded and made available for streaming. In addition, MCTS-based encoding is performed and MCTS sequences are made available for streaming. Players receive a complete low-resolution/low-quality omnidirectional video and select and receive the high-resolution MCTSs covering the viewport.
- Region-wise mixed resolution (RWMR) 360° video: MCTSs are encoded at multiple
resolutions. Players select a combination of high-resolution MCTSs covering the viewport and low-resolution MCTSs for the remaining areas.
[0311] It needs to be understood that there may be other ways to subdivide tile-based viewport-dependent streaming methods into categories than the one described above. Moreover, the above-described subdivision may not be exhaustive, i.e. there may be tile-based viewport-dependent streaming methods that do not belong to any of the described categories.
[0312] In all above-described viewport-dependent streaming approaches, tiles or MCTSs (or guard bands of tiles or MCTSs) may overlap in sphere coverage by an amount selected in the pre-processing or encoding.
[0313] All above-described viewport-dependent streaming approaches may be realized with client-driven bitstream rewriting (a.k.a. late binding) or with author-driven MCTS merging (a.k.a. early binding). In late binding, a player selects MCTS sequences to be received, selectively rewrites portions of the received video data as necessary (e.g. parameter sets and slice segment headers may need to be rewritten) for combining the received MCTSs into a single bitstream, and decodes the single bitstream. Early binding refers to the use of author-driven information for rewriting portions of the received video data as necessary, for merging of MCTSs into a single bitstream to be decoded, and in some cases for selection of MCTS sequences to be received. There may be approaches in between early and late binding: for example, it may be possible to let players select MCTS sequences to be received without author guidance, while an author-driven approach is used for MCTS merging and header rewriting. Early binding approaches include an extractor-driven approach and tile track approach, which are described subsequently.
[0314] In the tile track approach, one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is stored as a tile track (e.g. an HEVC tile track) in a file. A tile base track (e.g. an HEVC tile base track) may be generated and stored in a file. The tile base track represents the bitstream by implicitly collecting motion-constrained tile sets from the tile tracks. At the receiver side the tile tracks to be streamed may be selected based on the viewing orientation. The client may receive tile tracks covering the entire omnidirectional content. Better quality or higher resolution tile tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video. A tile base track may include track references to the tile tracks, and/or tile tracks may include track references to the tile base track. For example, in HEVC, the 'sabt' track reference is used to refer to tile tracks from a tile base track, and the tile ordering is indicated by the order of the tile tracks contained by a 'sabt' track reference. Furthermore, in HEVC, a tile track has a 'tbas' track reference to the tile base track.
[0315] In the extractor-driven approach, one or more motion-constrained tile set sequences are extracted from a bitstream, and each extracted motion-constrained tile set sequence is modified to become a compliant bitstream of its own (e.g. an HEVC bitstream) and stored as a sub-picture track (e.g. with untransformed sample entry type 'hvc1' for HEVC) in a file. One or more extractor tracks (e.g. HEVC extractor tracks) may be generated and stored in a file. The extractor track represents the bitstream by explicitly extracting (e.g. by HEVC extractors) motion-constrained tile sets from the sub-picture tracks. At the receiver side the sub-picture tracks to be streamed may be selected based on the viewing orientation. The client may receive sub-picture tracks covering the entire omnidirectional content. Better quality or higher resolution sub-picture tracks may be received for the current viewport compared to the quality or resolution covering the remaining 360-degree video.
[0316] It needs to be understood that even though the tile track approach and the extractor-driven approach are described in detail, specifically in the context of HEVC, they apply to other codecs and to concepts similar to tile tracks or extractors. Moreover, a combination or a mixture of the tile track and extractor-driven approaches is possible. For example, such a mixture could be based on the tile track approach, but where a tile base track could contain guidance for rewriting operations for the client, e.g. the tile base track could include rewritten slice or tile group headers.
[0317] As an alternative to MCTS-based content encoding, content authoring for tile-based viewport-dependent streaming may be realized with sub-picture-based content authoring, described as follows. The pre-processing (prior to encoding) comprises partitioning uncompressed pictures into sub-pictures. Several sub-picture bitstreams of the same uncompressed sub-picture sequence are encoded, e.g. at the same resolution but different qualities and bitrates. The encoding may be constrained in a manner that merging of coded sub-picture bitstreams into a compliant bitstream representing omnidirectional video is enabled. For example, dependencies on samples outside the decoded picture boundaries may be avoided in the encoding by selecting motion vectors in a manner that sample locations outside the picture would not be referred to in the inter prediction process. Each sub-picture bitstream may be encapsulated as a sub-picture track, and one or more extractor tracks merging the sub-picture tracks of different sub-picture locations may be additionally formed. If a tile track based approach is targeted, each sub-picture bitstream is modified to become an MCTS sequence and stored as a tile track in a file, and one or more tile base tracks are created for the tile tracks.
[0318] Tile-based viewport-dependent streaming approaches may be realized by executing a single decoder instance or one decoder instance per MCTS sequence (or in some cases, something in between, e.g. one decoder instance per MCTSs of the same resolution), e.g. depending on the capability of the device and operating system where the player runs. The use of a single decoder instance may be enabled by late binding or early binding. To facilitate multiple decoder instances, the extractor-driven approach may use sub-picture tracks that are compliant with the coding format or standard without modifications. Other approaches may need either to rewrite image segment headers, parameter sets, and/or alike information on the client side to construct a conforming bitstream or to have a decoder implementation capable of decoding an MCTS sequence without the presence of other coded video data.
[0319] There may be at least two approaches for encapsulating and referencing tile tracks or sub-picture tracks in the tile track approach and the extractor-driven approach, respectively:
- Referencing track identifiers from a tile base track or an extractor track.
- Referencing tile group identifiers from a tile base track or an extractor track, wherein the tile group identified by a tile group identifier contains the collocated tile tracks or the sub-picture tracks that are alternatives for extraction.
[0320] In the RWMQ method, one extractor track per each picture size and each tile grid is sufficient. In 360° + viewport video and RWMR video, one extractor track may be needed for each distinct viewing orientation.
[0321] An approach similar to above-described tile-based viewport-dependent streaming approaches, which may be referred to as tile rectangle based encoding and streaming, is described next. This approach may be used with any video codec, even if tiles similar to HEVC were not available in the codec or even if motion-constrained tile sets or alike were not implemented in an encoder. In tile rectangle based encoding, the source content is split into tile rectangle sequences before encoding. Each tile rectangle sequence covers a subset of the spatial area of the source content, such as full panorama content, which may e.g. be of equirectangular projection format. Each tile rectangle sequence is then encoded independently from each other as a single-layer bitstream. Several bitstreams may be encoded from the same tile rectangle sequence, e.g. for different bitrates. Each tile rectangle bitstream may be encapsulated in a file as its own track (or alike) and made available for streaming. At the receiver side the tracks to be streamed may be selected based on the viewing orientation. The client may receive tracks covering the entire omnidirectional content. Better quality or higher resolution tracks may be received for the current viewport compared to the quality or resolution covering the remaining, currently non-visible viewports. In an example, each track may be decoded with a separate decoder instance.
[0322] In viewport-adaptive streaming, the primary viewport (i.e., the current viewing orientation) is transmitted at a good quality/resolution, while the remainder of the 360-degree video is transmitted at a lower quality/resolution. When the viewing orientation changes, e.g. when the user turns his/her head when viewing the content with a head-mounted display, another version of the content needs to be streamed, matching the new viewing orientation. In general, the new version can be requested starting from stream access points (SAP), which are typically aligned with (Sub)segments. In single-layer video bitstreams, SAPs correspond to random-access pictures, are intra-coded, and are hence costly in terms of rate-distortion performance. Conventionally, relatively long SAP intervals and consequently relatively long (Sub)segment durations in the order of seconds are hence typically used. Thus, the delay (here referred to as the viewport quality update delay) in upgrading the quality after a viewing orientation change (e.g. a head turn) is conventionally in the order of seconds and is therefore clearly noticeable and annoying.
[0323] As explained above, viewport switching in viewport-dependent streaming, which may be compliant with MPEG OMAF, is enabled at stream access points, which involve intra coding and hence a greater bitrate compared to respective inter coded pictures at the same quality. A compromise between the stream access point interval and the rate-distortion performance is hence chosen in an encoding configuration.
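As a rough, non-normative illustration (not part of the described embodiments), the sketch below expresses the worst-case viewport quality update delay as the time to the next stream access point plus the time needed to fetch and buffer the new version, since switching can only occur at a SAP / (Sub)segment boundary; the numbers are hypothetical.

```python
# Sketch: back-of-the-envelope bound on the viewport quality update delay.

def worst_case_update_delay(sap_interval_s, fetch_and_buffer_s):
    """Head turn just after a SAP: wait up to one SAP interval, then fetch."""
    return sap_interval_s + fetch_and_buffer_s

print(worst_case_update_delay(sap_interval_s=2.0, fetch_and_buffer_s=0.5))
# 2.5 seconds -> "in the order of seconds", as noted above
```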
[0324] Viewport-adaptive streaming of equal-resolution HEVC bitstreams with MCTSs is described in the following as an example. Several HEVC bitstreams of the same omnidirectional source content may be encoded at the same resolution but different qualities and bitrates using motion-constrained tile sets. The MCTS grid in all bitstreams is identical. In order to enable the client to use the same tile base track for reconstructing a bitstream from MCTSs received from different original bitstreams, each bitstream is encapsulated in its own file, and the same track identifier is used for each tile track of the same tile grid position in all these files. HEVC tile tracks are formed from each motion-constrained tile set sequence, and a tile base track is additionally formed. The client may parse the tile base track to implicitly reconstruct a bitstream from the tile tracks. The reconstructed bitstream can be decoded with a conforming HEVC decoder.
[0325] Clients can choose which version of each MCTS is received. The same tile base track suffices for combining MCTSs from different bitstreams, since the same track identifiers are used in the respective tile tracks.
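A simplified, non-normative sketch (not part of the described embodiments) of this client-side choice is shown below: for every tile grid position the client picks which encapsulated file, and hence which quality, the MCTS is fetched from, while the single tile base track remains usable for any such combination because the track identifiers coincide across files. The grid size and file labels are illustrative assumptions.

```python
# Sketch: per tile grid position, choose the file (quality version) to fetch.

def choose_versions(grid_positions, viewport_positions):
    """Map each tile grid position to the file it is requested from."""
    return {pos: ("high_quality_file" if pos in viewport_positions
                  else "low_quality_file")
            for pos in grid_positions}

grid = [(row, col) for row in range(2) for col in range(4)]   # 4x2 MCTS grid
viewport = {(0, 0), (0, 1), (1, 0), (1, 1)}                   # 2x2 viewport tiles
print(choose_versions(grid, viewport))
```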
[0326] Fig. 5 presents an example of how extractor tracks can be used for tile-based omnidirectional video streaming. A 4x2 tile grid has been used in forming of the motion-constrained tile sets 81a, 81b. In many viewing orientations, 2x2 tiles out of the 4x2 tile grid are needed to cover a typical field of view of a head-mounted display. In the example, the presented extractor track for high-resolution motion-constrained tile sets 1, 2, 5 and 6 covers certain viewing orientations, while the extractor track for low-resolution motion-constrained tile sets 3, 4, 7, and 8 includes a region assumed to be non-visible for these viewing orientations. Two HEVC decoders are used in this example, one for the high-resolution extractor track and another for the low-resolution extractor track.
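A non-normative sketch (not part of the described embodiments) of a Fig. 5 style selection is given below: for a 4x2 tile grid over an equirectangular panorama, the 2x2 set of high-resolution tiles is chosen from the column containing the current yaw and its neighbour, and the remaining tiles are taken at low resolution. The 90-degree column width, the row-major 1-based tile numbering, and the simplified column choice (ignoring the exact field-of-view extent) are assumptions.

```python
# Sketch: map a viewing orientation (yaw) to the 2x2 high-resolution tile set.

def high_res_tiles_for_yaw(yaw_deg, columns=4, rows=2, fov_columns=2):
    """Return 1-based, row-major tile indices selected at high resolution."""
    first_col = int((yaw_deg % 360) // (360 / columns))
    cols = [(first_col + i) % columns for i in range(fov_columns)]
    return sorted(row * columns + col + 1 for row in range(rows) for col in cols)

print(high_res_tiles_for_yaw(30))    # [1, 2, 5, 6], as in the Fig. 5 example
print(high_res_tiles_for_yaw(200))   # [3, 4, 7, 8]
```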
[0327] While the description above referred to tile tracks, it should be understood that sub-picture tracks can be similarly formed.
[0328] Tile merging in the coded domain is needed or beneficial for the following purposes:
- Enable a number of tiles that is greater than the number of decoder instances, down to one decoder only
- Avoid synchronization challenges of multiple decoder instances
- Reach higher effective spatial and temporal resolutions, e.g. 6k@60fps with 4k@60fps decoding capacity (a capacity check of this kind is sketched after this list)
- Enable specifying interoperability points for standards as well as client APIs that require one decoder only
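The following non-normative sketch (not part of the described embodiments) illustrates the kind of capacity check implied by the third bullet above: the merged arrangement of high-resolution viewport tiles and low-resolution background tiles is verified against the luma sample rate of a 4K@60fps-class decoder. All tile dimensions and counts are hypothetical.

```python
# Sketch: check that a merged tile arrangement fits one decoder's sample-rate budget.

def luma_sample_rate(width, height, fps, count=1):
    """Luma samples per second for `count` tiles of the given size and rate."""
    return width * height * fps * count

budget = luma_sample_rate(3840, 2160, 60)                  # ~498M samples/s (4K@60)

merged = (luma_sample_rate(1280, 1280, 60, count=4)        # 4 high-res viewport tiles
          + luma_sample_rate(640, 640, 60, count=4))       # 4 low-res background tiles
print(merged <= budget)   # True: the merged stream fits a single decoder instance
```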
[0329] Fig. 11 is a graphical representation of an example multimedia communication system within which various embodiments may be implemented. A data source 1510 provides a source signal in an analog, uncompressed digital, or compressed digital format, or any combination of these formats. An encoder 1520 may include or be connected with pre-processing, such as data format conversion and/or filtering of the source signal. The encoder 1520 encodes the source signal into a coded media bitstream. It should be noted that a bitstream to be decoded may be received directly or indirectly from a remote device located within virtually any type of network. Additionally, the bitstream may be received from local hardware or software. The encoder 1520 may be capable of encoding more than one media type, such as audio and video, or more than one encoder 1520 may be required to code different media types of the source signal. The encoder 1520 may also get synthetically produced input, such as graphics and text, or it may be capable of producing coded bitstreams of synthetic media. In the following, only processing of one coded media bitstream of one media type is considered to simplify the description. It should be noted, however, that typically real-time broadcast services comprise several streams (typically at least one audio, video and text sub-titling stream). It should also be noted that the system may include many encoders, but in the figure only one encoder 1520 is represented to simplify the description without a lack of generality. It should be further understood that, although text and examples contained herein may specifically describe an encoding process, one skilled in the art would understand that the same concepts and principles also apply to the corresponding decoding process and vice versa.
[0330] The coded media bitstream may be transferred to a storage 1530. The storage 1530 may comprise any type of mass memory to store the coded media bitstream. The format of the coded media bitstream in the storage 1530 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file, or the coded media bitstream may be encapsulated into a Segment format suitable for DASH (or a similar streaming system) and stored as a sequence of Segments. If one or more media bitstreams are encapsulated in a container file, a file generator (not shown in the figure) may be used to store the one or more media bitstreams in the file and create file format metadata, which may also be stored in the file. The encoder 1520 or the storage 1530 may comprise the file generator, or the file generator may be operationally attached to either the encoder 1520 or the storage 1530. Some systems operate "live", i.e. omit storage and transfer coded media bitstream from the encoder 1520 directly to the sender 1540. The coded media bitstream may then be transferred to the sender 1540, also referred to as the server, on a need basis. The format used in the transmission may be an elementary self-contained bitstream format, a packet stream format, a Segment format suitable for DASH (or a similar streaming system), or one or more coded media bitstreams may be encapsulated into a container file. The encoder 1520, the storage 1530, and the server 1540 may reside in the same physical device or they may be included in separate devices. The encoder 1520 and server 1540 may operate with live real-time content, in which case the coded media bitstream is typically not stored permanently, but rather buffered for small periods of time in the content encoder 1520 and/or in the server 1540 to smooth out variations in processing delay, transfer delay, and coded media bitrate.
[0331] The server 1540 sends the coded media bitstream using a communication protocol stack. The stack may include but is not limited to one or more of Hypertext Transfer Protocol (HTTP), Transmission Control Protocol (TCP), and Internet Protocol (IP). When the communication protocol stack is packet-oriented, the server 1540 encapsulates the coded media bitstream into packets. It should be again noted that a system may contain more than one server 1540, but for the sake of simplicity, the following description only considers one server 1540.
[0332] If the media content is encapsulated in a container file for the storage 1530 or for inputting the data to the sender 1540, the sender 1540 may comprise or be operationally attached to a "sending file parser" (not shown in the figure). In particular, if the container file is not transmitted as such but at least one of the contained coded media bitstreams is encapsulated for transport over a communication protocol, a sending file parser locates appropriate parts of the coded media bitstream to be conveyed over the communication protocol. The sending file parser may also help in creating the correct format for the communication protocol, such as packet headers and payloads. The multimedia container file may contain encapsulation instructions, such as hint tracks in the ISOBMFF, for encapsulation of at least one of the contained media bitstreams on the communication protocol.
[0333] The server 1540 may or may not be connected to a gateway 1550 through a communication network, which may e.g. be a combination of a CDN, the Internet and/or one or more access networks. The gateway may also or alternatively be referred to as a middle-box. For DASH, the gateway may be an edge server (of a CDN) or a web proxy. It is noted that the system may generally comprise any number of gateways or alike, but for the sake of simplicity, the following description only considers one gateway 1550. The gateway 1550 may perform different types of functions, such as translation of a packet stream according to one communication protocol stack to another communication protocol stack, merging and forking of data streams, and manipulation of data streams according to the downlink and/or receiver capabilities, such as controlling the bit rate of the forwarded stream according to prevailing downlink network conditions.
[0334] The system includes one or more receivers 1560, typically capable of receiving, demodulating, and decapsulating the transmitted signal into a coded media bitstream. The coded media bitstream may be transferred to a recording storage 1570. The recording storage 1570 may comprise any type of mass memory to store the coded media bitstream. The recording storage 1570 may alternatively or additionally comprise computation memory, such as random access memory. The format of the coded media bitstream in the recording storage 1570 may be an elementary self-contained bitstream format, or one or more coded media bitstreams may be encapsulated into a container file. If there are multiple coded media bitstreams, such as an audio stream and a video stream, associated with each other, a container file is typically used and the receiver 1560 comprises or is attached to a container file generator producing a container file from input streams. Some systems operate "live," i.e. omit the recording storage 1570 and transfer coded media bitstream from the receiver 1560 directly to the decoder 1580. In some systems, only the most recent part of the recorded stream, e.g., the most recent 10-minute excerpt of the recorded stream, is maintained in the recording storage 1570, while any earlier recorded data is discarded from the recording storage 1570.
[0335] The coded media bitstream may be transferred from the recording storage 1570 to the decoder 1580. If there are many coded media bitstreams, such as an audio stream and a video stream, associated with each other and encapsulated into a container file, or a single media bitstream is encapsulated in a container file e.g. for easier access, a file parser (not shown in the figure) is used to decapsulate each coded media bitstream from the container file. The recording storage 1570 or the decoder 1580 may comprise the file parser, or the file parser is attached to either the recording storage 1570 or the decoder 1580. It should also be noted that the system may include many decoders, but here only one decoder 1580 is discussed to simplify the description without a lack of generality.
[0336] The coded media bitstream may be processed further by the decoder 1580, whose output is one or more uncompressed media streams. Finally, a renderer 1590 may reproduce the uncompressed media streams with a loudspeaker or a display, for example. The receiver 1560, recording storage 1570, decoder 1580, and renderer 1590 may reside in the same physical device or they may be included in separate devices.
[0337] A sender 1540 and/or a gateway 1550 may be configured to perform switching between different representations e.g. for view switching, bitrate adaptation and/or fast start-up, and/or a sender 1540 and/or a gateway 1550 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to respond to requests of the receiver 1560 or prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. A request from the receiver can be, e.g., a request for a Segment or a
Subsegment from a different representation than earlier, a request for a change of transmitted scalability layers and/or sub-layers, or a change of a rendering device having different capabilities compared to the previous one. A request for a Segment may be an HTTP GET request. A request for a Subsegment may be an HTTP GET request with a byte range. Additionally or alternatively, bitrate adjustment or bitrate adaptation may be used for example for providing so-called fast start-up in streaming services, where the bitrate of the transmitted stream is lower than the channel bitrate after starting or random-accessing the streaming in order to start playback immediately and to achieve a buffer occupancy level that tolerates occasional packet delays and/or retransmissions. Bitrate adaptation may include multiple representation or layer up-switching and representation or layer down-switching operations taking place in various orders.
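As a non-normative illustration (not part of the described embodiments), the sketch below issues a Subsegment request as an HTTP GET with a byte range and a whole-Segment request as a plain GET; the URL and byte offsets are hypothetical and would in practice be derived from the MPD and the Segment Index.

```python
# Sketch: Segment request as plain HTTP GET, Subsegment request as byte-range GET.
import urllib.request

SEGMENT_URL = "https://example.com/rep_high/segment_42.m4s"   # hypothetical URL

def get_segment(url):
    with urllib.request.urlopen(url, timeout=10) as resp:
        return resp.read()

def get_subsegment(url, first_byte, last_byte):
    req = urllib.request.Request(
        url, headers={"Range": f"bytes={first_byte}-{last_byte}"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read()

# e.g. fetch only the first Subsegment when switching to a new representation
data = get_subsegment(SEGMENT_URL, 0, 65535)
```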
[0338] A decoder 1580 may be configured to perform switching between different representations e.g. for view switching, bitrate adaptation and/or fast start-up, and/or a decoder 1580 may be configured to select the transmitted representation(s). Switching between different representations may take place for multiple reasons, such as to achieve faster decoding operation or to adapt the transmitted bitstream, e.g. in terms of bitrate, to prevailing conditions, such as throughput, of the network over which the bitstream is conveyed. Faster decoding operation might be needed for example if the device including the decoder 1580 is multi-tasking and uses computing resources for other purposes than decoding the scalable video bitstream. In another example, faster decoding operation might be needed when content is played back at a faster pace than the normal playback speed, e.g. twice or three times faster than conventional real-time playback rate. The speed of decoder operation may be changed during the decoding or playback, for example in response to changing from normal playback rate to fast-forward play or vice versa, and consequently multiple layer up-switching and layer down-switching operations may take place in various orders.
[0339] In the above, many embodiments have been described with reference to the equirectangular projection format. It needs to be understood that embodiments similarly apply to equirectangular pictures where the vertical coverage is less than 180 degrees. For example, the covered elevation range may be from -75° to 75°, or from -60° to 90° (i.e., covering one but not both poles). It also needs to be understood that embodiments similarly cover horizontally segmented equirectangular projection format, where a horizontal segment covers an azimuth range of 360 degrees and may have a resolution potentially differing from the resolution of other horizontal segments. Furthermore, it needs to be understood that embodiments similarly apply to omnidirectional picture formats, where a first sphere region of the content is represented by the equirectangular projection of limited elevation range and a second sphere region of the content is represented by another projection, such as cube map projection. For example, the elevation range -45° to 45° may be represented by a "middle" region of equirectangular projection, and the other sphere regions may be represented by a rectilinear projection, similar to cube faces of a cube map but where the corners overlapping with the middle region on the spherical domain are cut out. In such cases, embodiments can be applied to the middle region represented by the equirectangular projection.
[0340] In the above, some embodiments have been described with reference to terminology of particular codecs, most notably HEVC. It needs to be understood that embodiments can be similarly realized with respective terms of other codecs. For example, rather than tiles or tile sets, embodiments could be realized with rectangular slice groups of H.264/AVC.
[0341] The phrase along the bitstream (e.g. indicating along the bitstream) may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the bitstream. The phrase decoding along the bitstream or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the bitstream.
[0342] The phrase along the track (e.g. including, along a track, a description of a motion-constrained coded sub-picture sequence) may be used in claims and described embodiments to refer to out-of-band transmission, signaling, or storage in a manner that the out-of-band data is associated with the track. In other words, the phrase "a description along the track" may be understood to mean that the description is not stored in the file or segments that carry the track, but within another resource, such as a media presentation description. For example, the description of the motion-constrained coded sub-picture sequence may be included in a media presentation description that includes information of a Representation conveying the track. The phrase decoding along the track or alike may refer to decoding the referred out-of-band data (which may be obtained from out-of-band transmission, signaling, or storage) that is associated with the track.
[0343] In the above, some embodiments have been described with reference to segments, e.g. as defined in MPEG-DASH. It needs to be understood that embodiments may be similarly realized with subsegments, e.g. as defined in MPEG-DASH.
[0344] In the above, some embodiments have been described in relation to DASH or MPEG-DASH. It needs to be understood that embodiments could be similarly realized with any other similar streaming system, and/or any similar protocols as those used in DASH, and/or any similar segment and/or manifest formats as those used in DASH, and/or any similar client operation as that of a DASH client. For example, some embodiments could be realized with the M3U manifest format.
[0345] In the above, some embodiments have been described in relation to ISOBMFF, e.g. when it comes to segment format. It needs to be understood that embodiments could be similarly realized with any other file format, such as Matroska, with similar capability and/or structures as those in ISOBMFF.
[0346] In the above, some embodiments have been described with reference to encoding or including indications or metadata in the bitstream and/or decoding indications or metadata from the bitstream. It needs to be understood that indications or metadata may additionally or alternatively be encoded or included along the bitstream and/or decoded along the bitstream. For example, indications or metadata may be included in or decoded from a container file that encapsulates the bitstream.
[0347] In the above, some embodiments have been described with reference to including metadata or indications in or along a container file and/or parsing or decoding metadata and/or indications from or along a container file. It needs to be understood that indications or metadata may additionally or alternatively be encoded or included in the video bitstream, for example as SEI message(s) or VUI, and/or decoded in the video bitstream, for example from SEI message(s) or VUI.
[0348] The following describes in further detail suitable apparatus and possible mechanisms for implementing the embodiments of the invention. In this regard reference is first made to Fig. 12 which shows a schematic block diagram of an exemplary apparatus or electronic device 50 depicted in Fig. 13, which may incorporate a transmitter according to an embodiment of the invention.
[0349] The electronic device 50 may for example be a mobile terminal or user equipment of a wireless communication system. However, it would be appreciated that embodiments of the invention may be implemented within any electronic device or apparatus which may require transmission of radio frequency signals.
[0350] The apparatus 50 may comprise a housing 30 for incorporating and protecting the device. The apparatus 50 further may comprise a display 32 in the form of a liquid crystal display. In other embodiments of the invention the display may be any suitable display technology suitable to display an image or video. The apparatus 50 may further comprise a keypad 34. In other embodiments of the invention any suitable data or user interface mechanism may be employed. For example, the user interface may be implemented as a virtual keyboard or data entry system as part of a touch-sensitive display. The apparatus may comprise a microphone 36 or any suitable audio input which may be a digital or analogue signal input. The apparatus 50 may further comprise an audio output device which in embodiments of the invention may be any one of: an earpiece 38, speaker, or an analogue audio or digital audio output connection. The apparatus 50 may also comprise a battery 40 (or in other embodiments of the invention the device may be powered by any suitable mobile energy device such as solar cell, fuel cell or clockwork generator). The term battery discussed in connection with the embodiments may also be one of these mobile energy devices. Further, the apparatus 50 may comprise a combination of different kinds of energy devices, for example a rechargeable battery and a solar cell. The apparatus may further comprise an infrared port 41 for short range line of sight communication to other devices. In other embodiments the apparatus 50 may further comprise any suitable short range communication solution such as for example a Bluetooth wireless connection or a USB/FireWire wired connection.
[0351] The apparatus 50 may comprise a controller 56 or processor for controlling the apparatus 50. The controller 56 may be connected to memory 58 which in embodiments of the invention may store both data and/or may also store instructions for implementation on the controller 56. The controller 56 may further be connected to codec circuitry 54 suitable for carrying out coding and decoding of audio and/or video data or assisting in coding and decoding carried out by the controller 56.
[0352] The apparatus 50 may further comprise a card reader 48 and a smart card 46, for example a universal integrated circuit card (UICC) reader and a universal integrated circuit card for providing user information and being suitable for providing authentication information for authentication and authorization of the user at a network.
[0353] The apparatus 50 may comprise radio interface circuitry 52 connected to the controller and suitable for generating wireless communication signals for example for communication with a cellular communications network, a wireless communications system or a wireless local area network. The apparatus 50 may further comprise an antenna 60 connected to the radio interface circuitry 52 for transmitting radio frequency signals generated at the radio interface circuitry 52 to other apparatus(es) and for receiving radio frequency signals from other apparatus(es).
[0354] In some embodiments of the invention, the apparatus 50 comprises a camera 42 capable of recording or detecting imaging.
[0355] With respect to Fig. 14, an example of a system within which embodiments of the present invention can be utilized is shown. The system 10 comprises multiple communication devices which can communicate through one or more networks. The system 10 may comprise any combination of wired and/or wireless networks including, but not limited to a wireless cellular telephone network (such as a global system for mobile communications (GSM), universal mobile telecommunications system (UMTS), long term evolution (LTE) based network, code division multiple access (CDMA) network etc.), a wireless local area network (WLAN) such as defined by any of the IEEE 802.x standards, a Bluetooth personal area network, an Ethernet local area network, a token ring local area network, a wide area network, and the Internet.
[0356] For example, the system shown in Fig. 14 shows a mobile telephone network 11 and a representation of the internet 28. Connectivity to the internet 28 may include, but is not limited to, long range wireless connections, short range wireless connections, and various wired connections including, but not limited to, telephone lines, cable lines, power lines, and similar communication pathways.
[0357] The example communication devices shown in the system 10 may include, but are not limited to, an electronic device or apparatus 50, a combination of a personal digital assistant (PDA) and a mobile telephone 14, a PDA 16, an integrated messaging device (IMD) 18, a desktop computer 20, a notebook computer 22, a tablet computer. The apparatus 50 may be stationary or mobile when carried by an individual who is moving. The apparatus 50 may also be located in a mode of transport including, but not limited to, a car, a truck, a taxi, a bus, a train, a boat, an airplane, a bicycle, a motorcycle or any similar suitable mode of transport.
[0358] Some or further apparatus may send and receive calls and messages and communicate with service providers through a wireless connection 25 to a base station 24. The base station 24 may be connected to a network server 26 that allows communication between the mobile telephone network 11 and the internet 28. The system may include additional communication devices and communication devices of various types.
[0359] The communication devices may communicate using various transmission technologies including, but not limited to, code division multiple access (CDMA), global systems for mobile communications (GSM), universal mobile telecommunications system (UMTS), time division multiple access (TDMA), frequency division multiple access (FDMA), transmission control protocol-internet protocol (TCP-IP), short messaging service (SMS), multimedia messaging service (MMS), email, instant messaging service (IMS), Bluetooth, IEEE 802.11, Long Term Evolution wireless communication technique (LTE) and any similar wireless communication technology. A communications device involved in implementing various embodiments of the present invention may communicate using various media including, but not limited to, radio, infrared, laser, cable connections, and any suitable connection.
[0360] Although the above examples describe embodiments of the invention operating within a wireless communication device, it would be appreciated that the invention as described above may be implemented as a part of any apparatus comprising circuitry in which radio frequency signals are transmitted and received. Thus, for example, embodiments of the invention may be implemented in a mobile phone, in a base station, in a computer such as a desktop computer or a tablet computer comprising radio frequency communication means (e.g. wireless local area network, cellular radio, etc.).
[0361] In general, the various embodiments of the invention may be implemented in hardware or special purpose circuits or any combination thereof. While various aspects of the invention may be illustrated and described as block diagrams or using some other pictorial representation, it is well understood that these blocks, apparatus, systems, techniques or methods described herein may be implemented in, as non-limiting examples, hardware, software, firmware, special purpose circuits or logic, general purpose hardware or controller or other computing devices, or some combination thereof.
[0362] Embodiments of the inventions may be practiced in various components such as integrated circuit modules. The design of integrated circuits is by and large a highly automated process. Complex and powerful software tools are available for converting a logic level design into a semiconductor circuit design ready to be etched and formed on a semiconductor substrate.
[0363] Programs, such as those provided by Synopsys, Inc. of Mountain View, California and Cadence Design, of San Jose, California automatically route conductors and locate components on a semiconductor chip using well established rules of design as well as libraries of pre-stored design modules. Once the design for a semiconductor circuit has been completed, the resultant design, in a standardized electronic format (e.g., Opus, GDSII, or the like) may be transmitted to a semiconductor fabrication facility or "fab" for fabrication.

[0364] The foregoing description has provided by way of exemplary and non-limiting examples a full and informative description of the exemplary embodiment of this invention. However, various modifications and adaptations may become apparent to those skilled in the relevant arts in view of the foregoing description, when read in conjunction with the accompanying drawings and the appended claims. However, all such and similar modifications of the teachings of this invention will still fall within the scope of this invention.

Claims

1. A method comprising:
obtaining a track or a representation wherein a recommended viewport is covered with a specified quality with respect to a quality of the remaining areas;
including, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with the specified quality with respect to the quality of the remaining areas.
2. The method according to claim 1, wherein the metadata comprises one or more of the following:
indication that the track or the representation covers the recommended viewport at a higher quality than remaining areas;
indication that the track or the representation covers the recommended viewport at a uniform quality;
indication of an absolute quality ranking value indicating the quality for the recommended viewport;
indication that the track or the representation covers the recommended viewport at a consistent or uniform quality which is higher quality than remaining areas;
indication of an absolute quality ranking value indicating the quality of remaining areas excluding the recommended viewport;
indication of a quality ranking value difference between the quality ranking values of the recommended viewport and of the remaining areas;
statistics, such as an average, of the proportion of time that the track or the representation covers the recommended viewport at a higher quality than remaining areas;
statistics, such as minimum, average, and/or maximum, of quality ranking values of the content covering the recommended viewport;
association of the track or representation with a particular recommended viewport timed metadata track or representation;
indication that the track or the representation covers only the recommended viewport and excludes remaining areas.
3. The method according to claim 2, wherein the statistics of the proportion of time is an average.
4. The method according to claim 2 or 3, wherein the statistics of quality ranking values of the content covering the recommended viewport is a minimum, an average, and/or a maximum.
5. The method according to any of the claims 1 to 4, said specified quality comprising one of:
higher quality than the quality of the remaining areas;
consistent quality;
uniform quality;
consistent, higher quality than the quality of the remaining areas;
uniform, higher quality than the quality of the remaining areas.
6. The method according to any of the claims 1 to 5, wherein the metadata expressing the association of the track or representation with a particular recommended viewport timed metadata track or representation comprises one or more of the following:
a track reference of a particular type from the recommended viewport timed metadata track to the video track(s), wherein the recommended viewport has higher quality than remaining areas;
a track reference of a particular type from the video track to the recommended viewport timed metadata track, wherein the recommended viewport has higher quality than remaining areas;
@associationId from a Representation containing the recommended viewport timed metadata track to the Representation(s) containing video track(s), wherein the recommended viewport has higher quality than remaining areas, and @associationType of a particular type;
@associationId from the Representation(s) containing video track(s), wherein the recommended viewport has higher quality than remaining areas, and @associationType of a particular type to a Representation containing the recommended viewport timed metadata track.
7. The method according to any of the claims 1 to 6 further comprising:
providing enhanced recommended viewport media track selection information in a media presentation description with a property element comprising a descriptor which has an association or relationship with the recommended viewport track adaptation set.
8. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
obtain a track or a representation wherein a recommended viewport is covered with a higher quality than remaining areas; include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with a higher quality than remaining areas.
9. An apparatus comprising:
a first circuitry configured to obtain a track or a representation wherein a recommended viewport is covered with a higher quality than remaining areas;
a second circuitry configured to include, in or along the track or the representation, metadata indicating that the track or the representation covers the recommended viewport with a higher quality than remaining areas.
10. A method comprising:
receiving, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a specified quality with respect to a quality of the remaining areas;
selecting, based on the metadata, to process the track or the representation.
11. The method according to claim 10, wherein the metadata comprises one or more of the following:
indication that the track or the representation covers the recommended viewport at a higher quality than remaining areas;
indication that the track or the representation covers the recommended viewport at a uniform quality;
indication of an absolute quality ranking value indicating the quality for the recommended viewport;
indication that the track or the representation covers the recommended viewport at a consistent or uniform quality which is higher quality than remaining areas;
indication of an absolute quality ranking value indicating the quality of remaining areas excluding the recommended viewport;
indication of a quality ranking value difference between the quality ranking values of the recommended viewport and of the remaining areas;
statistics, such as an average, of the proportion of time that the track or the representation covers the recommended viewport at a higher quality than remaining areas;
statistics, such as minimum, average, and/or maximum, of quality ranking values of the content covering the recommended viewport;
association of the track or representation with a particular recommended viewport timed metadata track or representation;
indication that the track or the representation covers only the recommended viewport and excludes remaining areas.
12. The method according to claim 11, wherein the statistics of the proportion of time is an average.
13. The method according to claim 11 or 12, wherein the statistics of quality ranking values of the content covering the recommended viewport is a minimum, an average, and/or a maximum.
14. The method according to any of the claims 10 to 13, said specified quality comprising one of:
higher quality than the quality of the remaining areas;
consistent quality;
uniform quality;
consistent, higher quality than the quality of the remaining areas;
uniform, higher quality than the quality of the remaining areas.
15. The method according to any of the claims 10 to 14, wherein the metadata expressing the association of the track or representation with a particular recommended viewport timed metadata track or representation comprises one or more of the following:
a track reference of a particular type from the recommended viewport timed metadata track to the video track(s), wherein the recommended viewport has higher quality than remaining areas;
a track reference of a particular type from the video track to the recommended viewport timed metadata track, wherein the recommended viewport has higher quality than remaining areas;
@associationId from a Representation containing the recommended viewport timed metadata track to the Representation(s) containing video track(s), wherein the recommended viewport has higher quality than remaining areas, and @associationType of a particular type;
@associationId from the Representation(s) containing video track(s), wherein the recommended viewport has higher quality than remaining areas, and @associationType of a particular type to a Representation containing the recommended viewport timed metadata track.
16. The method according to any of the claims 10 to 15 further comprising:
receiving enhanced recommended viewport media track selection information in a media presentation description with a property element comprising a descriptor which has an association or relationship with the recommended viewport track adaptation set.
17. An apparatus comprising at least one processor and at least one memory, said at least one memory stored with code thereon, which when executed by said at least one processor, causes the apparatus to perform at least:
receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a higher quality than remaining areas; select, based on the metadata, to process the track or the representation.
18. An apparatus comprising:
a first circuitry configured to receive, from or along a track or a representation, metadata indicating that the track or the representation covers a recommended viewport with a higher quality than remaining areas;
a second circuitry configured to select, based on the metadata, to process the track or the representation.
PCT/FI2020/050213 2019-04-05 2020-04-01 An apparatus, a method and a computer program for omnidirectional video WO2020201632A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201962829828P 2019-04-05 2019-04-05
US62/829,828 2019-04-05

Publications (1)

Publication Number Publication Date
WO2020201632A1 true WO2020201632A1 (en) 2020-10-08

Family

ID=72666514

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/FI2020/050213 WO2020201632A1 (en) 2019-04-05 2020-04-01 An apparatus, a method and a computer program for omnidirectional video

Country Status (1)

Country Link
WO (1) WO2020201632A1 (en)


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20785012

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20785012

Country of ref document: EP

Kind code of ref document: A1